Semantics-aware Test-time Adaptation for 3D Human Pose Estimation
Accept (poster)
Summary: This paper presents a 3D human pose estimation method that supports test-time optimization with semantics. The authors leverage video understanding and a well-structured motion-text space to adapt the model's motion predictions. In addition, they incorporate missing-2D-pose completion based on motion-text similarity. Experimental results demonstrate the effectiveness of the proposed method.

## Update after rebuttal

The authors address my questions well. I keep my original rating.

Claims And Evidence: yes

Methods And Evaluation Criteria: yes

Theoretical Claims: yes

Experimental Designs Or Analyses: yes

Supplementary Material: yes

Relation To Broader Scientific Literature: It is related to many human-centric computer vision tasks.

Essential References Not Discussed: no

Other Strengths And Weaknesses:

Strengths

As the authors argued, existing TTA methods lack semantics, which can suffer from many failure cases, including the top row of Figure 1. Adding semantics with a vision-language model is a good step toward semantics-aware 3D human pose estimation. The effectiveness of this incorporation is well shown in the experiments section.

Weaknesses

1. It is not clear how robust the system is to wrong outputs from the VLM. It would be interesting if the authors could show some failure cases of the proposed system, including failure cases from the VLM.
2. The results at the bottom of Figure 1 are not appealing. Due to the occlusion, it is not clear which of CycleAdapt's and the proposed method's outputs is correct.
3. The running time should be much slower than CycleAdapt's due to the VLM. What is the exact running time of the TTA?
4. Overall, the qualitative results only include easy poses, such as simply walking or standing. More challenging poses should be included.

Other Comments Or Suggestions: Please see weaknesses.

Questions For Authors: 1. What if we directly optimize the output 3D poses without fine-tuning the 3D HPE model?
I think that should make the TTA much faster. Code Of Conduct: Affirmed. Overall Recommendation: 3
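As a minimal illustration of the direct-optimization alternative raised in Question 1 — a sketch only, assuming an orthographic camera and optimizing the 3D joints themselves rather than SMPL parameters or model weights (the function name `refine_3d_direct` is hypothetical; real direct-optimization methods such as SMPLify-style fitting or ScoreHMR are considerably more involved):

```python
import numpy as np

def refine_3d_direct(joints3d, joints2d, lr=0.1, steps=200):
    """Directly optimize the 3D joints (no model fine-tuning) so that
    their orthographic projection matches the detected 2D keypoints.

    joints3d: (J, 3) initial 3D estimate from the HPE model
    joints2d: (J, 2) detected 2D keypoints
    """
    x = np.asarray(joints3d, dtype=float).copy()
    target = np.asarray(joints2d, dtype=float)
    for _ in range(steps):
        residual = x[:, :2] - target      # orthographic reprojection error
        grad = np.zeros_like(x)
        grad[:, :2] = 2.0 * residual      # gradient of the squared error
        x -= lr * grad                    # depth (z) receives no 2D signal
    return x
```

Note that the depth coordinate receives no gradient from the 2D loss, which is exactly why such direct optimization depends heavily on the quality of the initial estimate.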
Rebuttal 1: Rebuttal: We express our sincere appreciation for the helpful reviews and tackle the concerns below:

**[Weakness 1] Failure cases from the VLM**

**R:** Thank you for your suggestions. We manually examined 5,000 frames from the 3DPW dataset, of which 96.4% are accurate, as the actions are relatively simple and the VLM is reliable. For the remaining 3.6%, the VLM hallucinates actions based on the background, e.g., a person picking up a foil is mistaken for "golfing" as the foil resembles a golf club, or "eating something using both hands" and "cast fishing pole" in scenes with fruit stands and rivers, respectively. Visualizations of these VLM failure cases can be found in Fig.I ([anonymous GitHub link](https://anonymous.4open.science/r/visualization-D59C/anonymous_pdf.pdf)). Despite these VLM failures, our method maintains robustness. As our semantics-aware alignment is a regularizer, its impact is limited if it conflicts with the 2D projection loss. The table below shows that adaptation with the wrong VLM label still reduces the MPJPE, but not as much as when the correct (oracle) action labels are applied.

| Method | Initial | Incorrect VLM action output | Correct (oracle) action output |
| --- | :---: | :---: | :---: |
| MPJPE (mm) | 213.8 | 92.2 | 83.7 |

**[Weakness 2] No GT mesh in Figure 1**

**R:** Thanks for pointing this out. As shown in Fig.IV ([anonymous GitHub link](https://anonymous.4open.science/r/visualization-D59C/anonymous_pdf.pdf)), we include the GT mesh for a better comparison, where our predictions preserve the same motion semantics as the GT.

**[Weakness 3] Runtime analysis**

**R:** Thanks for pointing this out. Yes, our runtime (107.6ms) is longer than CycleAdapt's (74.1ms); the extra overhead is primarily from the VLM (85.4%). Details are given in our response to **[Question 1] (Runtime analysis)** for reviewer **qP6M**.

**[Weakness 4] Include challenging poses**

**R:** Thank you for your suggestions.
We provide more challenging poses, i.e., "open a door and sit", "hugging", "squats series" and "spin around with right foot". Please refer to Fig.II and Fig.III in the [anonymous GitHub link](https://anonymous.4open.science/r/visualization-D59C/anonymous_pdf.pdf).

**[Question 1] Direct optimization without fine-tuning**

**R:** Yes, it is faster since it does not require model fine-tuning. However, the success of direct optimization depends heavily on the accuracy of the initial estimate. We compare with ScoreHMR [3], a state-of-the-art direct-optimization approach, on the 3DPW dataset using OpenPose 2D poses. As shown in Fig.V ([anonymous GitHub link](https://anonymous.4open.science/r/visualization-D59C/anonymous_pdf.pdf)), our method is robust across different initial prediction qualities. ScoreHMR is more sensitive: when the initial MPJPE exceeds 200mm, it reduces the error by only 26.6%, whereas we reduce it by 64.5%.

**References**

[3] Stathopoulos et al. Score-guided diffusion for 3D human recovery. CVPR'2024.
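To make the role of the semantics-aware alignment regularizer discussed above concrete, here is a minimal sketch (illustrative only — the names `semantic_alignment_loss`, `tta_loss`, and `lam_sem` are assumptions, and the plain cosine form stands in for the paper's actual alignment of MotionCLIP motion embeddings with CLIP text embeddings of the VLM-predicted action label):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_alignment_loss(motion_emb, text_emb):
    """Alignment regularizer: 1 - cos(motion, text).

    Minimizing this pulls the predicted motion's embedding toward the
    text embedding of the predicted action label.
    """
    return 1.0 - cosine_similarity(motion_emb, text_emb)

def tta_loss(loss_2d, motion_emb, text_emb, lam_sem=0.1):
    """Total TTA objective: 2D projection loss plus semantic regularizer.

    Because the semantic term is only a weighted regularizer, a wrong
    VLM label has limited impact when it conflicts with the 2D evidence.
    """
    return loss_2d + lam_sem * semantic_alignment_loss(motion_emb, text_emb)
```

The regularizer-style weighting (`lam_sem`) is what the rebuttal appeals to when arguing robustness to incorrect VLM labels.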
Summary: The paper introduces a TTA HMR method for videos that uses semantic information to address challenges caused by a lack of information when a large portion of the body is occluded. The paper suggests using MotionCLIP to align predicted motions with the CLIP representation of the textual description of the actions as extracted by GPT-4o. The 3D HPE model is then adapted during test time to match OpenPose keypoints and achieve semantic alignment while being temporally regularized. Next, the fine-tuned HMR model obtains initial predictions, which are further improved using a semantics-aware EMA operation. The main finding is that incorporating this semantics-aware motion prior significantly improves performance over state-of-the-art TTA methods, especially in cases with occlusions or truncations.

Claims And Evidence: The paper claims improved performance over SOTA, which is supported in the experiments by showing the effectiveness of adding semantic information. The paper also proposes an interesting approach to incorporate the new modality and, through ablations, shows the effectiveness of fill-ins and EMA.

Methods And Evaluation Criteria: The datasets and evaluation metrics are commonly used in this field. Using multiple datasets and ablations is appropriate to show the effectiveness of the proposed method. The experiments might be unfair in Table 1, since none of the other methods use semantic information, but it's acceptable given the context of the paper.

Theoretical Claims: The paper does not have any theoretical proof.

Experimental Designs Or Analyses: The experimental design and experiments seem sound. The paper claims that it follows the same evaluation protocol as prior research and uses recent papers for comparison.

Supplementary Material: The supplementary materials provide more info on prompting GPT-4o and explain the visual captioning methods. They also offer more qualitative comparisons, showing the effectiveness of the approach.
Additionally, they provide an analysis of the fill-in threshold, which may be the most crucial hyperparameter, among others such as the EMA update factor $\alpha$ and the two $\lambda$.

Relation To Broader Scientific Literature: Motion priors trained on large motion capture datasets like AMASS have been used to regularize 3D human pose and motion estimation, with early examples like VPoser for pose regularization and later works like HuMoR employing VAEs for motion priors. Recent Test-Time Adaptation (TTA) approaches have demonstrated that fine-tuning 3D Human Mesh Recovery (HMR) models on test video sequences improves speed and accuracy. This paper builds upon TTA, proposing to incorporate semantic information via a pre-trained motion-language model into the fine-tuning. While the paper leverages existing components from the literature, it presents a well-justified strategy for addressing occlusions through semantic consistency during TTA.

Essential References Not Discussed: The paper understands the related literature well, citing adequate and recent references from the last two years. The paper lacks a reference to diffusion models, like RoHM and MDM, which also use trajectory information. However, this lack of reference is acceptable since that line of research focuses on motion generation.

Other Strengths And Weaknesses:

### Strengths

1. The paper is well-written, clear, and concise. It is easy to follow and provides sufficient detail. Although some components rely heavily on prior works, the paper does a good job summarizing them.
2. The supplementary materials provide adequate information to replicate the experiments.
3. The paper has a good understanding of the literature, citing and comparing with related works when needed.
4. The paper provides a compelling approach to incorporate semantic information into TTA, improving upon previous works.

### Weaknesses

1.
The paper does not explore alternatives beyond those from previous works.
2. As the authors mentioned, this method cannot perfectly capture the correct movements of the subjects while they are occluded. However, it could generate a plausible motion based on semantic consistency. There is no information about long periods of occlusion (whole body) or jitters (due to thresholding).

Other Comments Or Suggestions: I did not find any major issues. Please address my questions below if possible. Thank you. Typo:

- L302, right column, "Egobody" should be "EgoBody"

Questions For Authors:

1. Could you please provide a runtime analysis, like Table 7 of the CycleAdapt paper? It is crucial to understand the computational requirements of training/testing your method compared to others.
2. How did you choose the hyperparameters? Wouldn't the EMA and $\alpha$ cause jitter in motion across different actions?
3. Could you provide the results of using GT 2D human keypoints instead of OpenPose for Table 1? It would be interesting to understand the noise robustness of your approach compared to others when working with GT/OpenPose.

Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We express our sincere appreciation for the helpful reviews and tackle the concerns below:

**[Method 1] Fairness in Table 1**

**R:** Our paper highlights the problem of motion semantics and proposes a method to incorporate semantics as the core contribution. No existing works use semantics, so it is difficult to compare directly. We verify our method by comparing it with the closest components from the literature, as shown in Table 3. Specifically, we use CycleAdapt as a baseline (top row) and then add semantics through a motion discriminator (row 2) or unpaired local poses (row 3). In this way, we enforce the predicted motion to be aligned with the motion generated by [1], which shares the same semantics as the video segment. Table 3 in the paper shows that our method better integrates semantics without additional motion data generation.

**[References 1] Diffusion models**

**R:** Thank you for your suggestion. We will add these references and also discuss diffusion models in our related works section.

**[Weakness 1] Method alternatives**

**R:** Thanks for pointing this out. We explore different strategies for integrating semantic information; two alternatives to our method are shown in Table 3. We will adjust the labels and captions of Table 3 and Sec. 5.4 (‘Analysis on semantics-incorporated techniques’) in the text to clarify this point.

**[Weakness 2] Long periods of occlusion or jitters**

**R:** Thank you for pointing this out. For long periods of full-body occlusion, motion can be generated to maintain semantic consistency with the predicted activity. However, this aligns more with motion generation, which is evaluated by plausibility (e.g., FID [2]), unlike reconstruction, which relies on ground truth. We plan to merge reconstruction and generation into a single framework and develop suitable evaluation metrics in future work. We evaluate jitter on the 3DPW dataset, comparing our method with the ground truth and CycleAdapt.
As shown in the table below, our jitter is significantly lower than that of the image-based DynaBOA but 11.1% higher than CycleAdapt's. However, CycleAdapt often predicts static motion, even in dynamic motion scenes, resulting in low jitter but deviating from the ground truth. Taking the walking motion from Fig. 1 of the paper as an example, we report the jitter values for this case in the last row of the table below, where ours is higher than CycleAdapt's but closely aligns with the ground truth.

| Jitter ($10m/s^3$) | GT | CycleAdapt | DynaBOA | Ours |
| --- | :---: | :---: | :---: | :---: |
| All sequences | 22.3 | 27.9 | 172.1 | 31.0 |
| Walking | 46.7 | 16.8 | - | 48.3 |

**[Other Comments 1] Typo**

**R:** Thank you for pointing this out. We will correct it in the revision.

**[Question 1] Runtime analysis**

**R:** Thank you for your suggestion. We perform a runtime analysis following the environment settings described in Table 7 of CycleAdapt. The table below shows that, compared to CycleAdapt, our method introduces an additional 33.5ms runtime per frame, 85.4% of which is caused by the VLM and 14.6% by our proposed framework components. However, it is still much faster than other TTA methods. We will add the runtime analysis in the revision.

| Method | BOA | DynaBOA | DAPA | CycleAdapt | Ours |
| --- | :---: | :---: | :---: | :---: | :---: |
| Runtime (ms) | 840.3 | 1162.8 | 431.0 | **74.1** | 107.6 (+33.5) |

**[Question 2] Hyperparameters and EMA**

**R:** We choose hyperparameters based on a grid search. The EMA update factor ($\alpha$) values used for the search are provided in the table below. For the fill-in threshold ($\sigma$) of missing 2D detections, please refer to Table 5 in the paper. The EMA and $\alpha$ do not cause jitters. This is because the EMA is performed over 2D poses for each frame, and each frame has one fixed action label.
| $\alpha$ | 0.75 | 0.80 | 0.85 | 0.90 | 0.95 |
| --- | :---: | :---: | :---: | :---: | :---: |
| MPJPE (mm) | 78.6 | 77.9 | 76.8 | **76.4** | 77.5 |

**[Question 3] GT 2D**

**R:** Thank you for your suggestion. We add results in the table below. Our method still performs better than CycleAdapt. The improvement becomes smaller because GT 2D keypoints provide very strong indications for occluded keypoints, which is unrealistic in real-world scenarios. Our method is especially effective when 2D keypoints are noisy or missing.

| 2D source | Method | 3DPW MPJPE | 3DPW PA-MPJPE | 3DHP MPJPE | 3DHP PA-MPJPE |
| --- | --- | :---: | :---: | :---: | :---: |
| OpenPose | CycleAdapt | 87.7 | 53.8 | 110.3 | 74.4 |
| OpenPose | Ours | 76.4 | 47.2 | 101.3 | 65.1 |
| GT | CycleAdapt | 64.7 | 39.9 | 100.9 | 64.6 |
| GT | Ours | **64.1** | **39.4** | **98.5** | **63.7** |

**References**

[1] Jiang et al. MotionGPT: Human motion as a foreign language. NeurIPS'2023.
[2] Guo et al. Generating diverse and natural 3D human motions from text. CVPR'2022.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their comprehensive rebuttal, which has addressed all my concerns. Specifically, I appreciate the detailed runtime analysis confirming the method's efficiency relative to other approaches. Furthermore, the analysis using ground-truth 2D keypoints highlights the method's strength in handling the noisy or missing detections typical in real-world scenarios. The clarifications regarding EMA and the quantitative jitter analysis were also convincing, fully resolving my reservations on those aspects. Therefore, I maintain my score and recommend acceptance of this paper.

---

Reply to Comment 1.1.1: Comment: Thank you for recognizing our responses in the rebuttal! We are glad that the clarifications and analyses addressed your concerns. We truly appreciate your recommendation for acceptance and will incorporate your valuable feedback into the revision.
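The per-frame EMA-and-fill-in mechanism discussed under [Question 2] can be sketched as follows (an illustrative reading, not the authors' exact implementation; `ema_update_2d` and its arguments are assumed names, `sigma` stands in for the fill-in threshold, and joints with detector confidence below it are treated as missing):

```python
import numpy as np

def ema_update_2d(pose_prev, pose_new, conf, alpha=0.90, sigma=0.5):
    """Semantics-aware EMA refinement of per-frame 2D poses (sketch).

    pose_prev: (J, 2) 2D pose kept from the previous adaptation epoch
    pose_new:  (J, 2) reprojected 2D pose from the current 3D prediction
    conf:      (J,)   detector confidence; joints below `sigma` are
               treated as missing and filled in from the reprojection
    alpha:     EMA factor (0.90 performed best in the grid search above)
    """
    pose_prev = np.asarray(pose_prev, dtype=float)
    pose_new = np.asarray(pose_new, dtype=float)
    blended = alpha * pose_prev + (1.0 - alpha) * pose_new
    missing = np.asarray(conf) < sigma
    blended[missing] = pose_new[missing]  # fill in missing detections
    return blended
```

Because the update operates on 2D poses per frame with one fixed action label per frame, it does not itself introduce temporal jitter, which matches the rebuttal's argument.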
Summary: This paper proposes a novel semantics-aware test-time adaptation (TTA) framework for 3D human pose estimation, addressing the issue of overly smoothed or unguided predictions, especially under occlusion or truncation. The key innovation is integrating motion semantics into the TTA process by leveraging MotionCLIP to align pose sequences with textual action labels inferred via a vision-language model (VLM). Additionally, missing 2D keypoints are completed based on motion-text similarity to improve supervision where 2D evidence is lacking. The method significantly outperforms prior approaches such as CycleAdapt on datasets including 3DPW, 3DHP, and EgoBody. Supplementary results show high semantic consistency, effective fill-in mechanisms, and robust improvements in both common and rare motion types.

Claims And Evidence: The authors' claims are well supported:
1. The semantics-aware motion prior reduces depth ambiguity and guides the adaptation process.
2. 2D pose completion using text-aligned motion enhances adaptation under occlusion.

Methods And Evaluation Criteria: The methods are carefully designed and appropriate:
1. MotionCLIP is used effectively as a semantics-aware regularizer.
2. A VLM (GPT-4o) assigns action labels to video segments, with verification via CLIP cosine similarity.
3. EMA and fill-in strategies help refine 2D poses over adaptation epochs.

The evaluation uses standard benchmarks (3DPW, 3DHP, EgoBody) and metrics (MPJPE, PA-MPJPE, MPVPE).

Theoretical Claims: There are no deep theoretical claims in this work.

Experimental Designs Or Analyses: Strong design and analysis. The paper includes:
1. Solid baselines (BOA, DynaBOA, DAPA, CycleAdapt).
2. Ablation studies isolating the contribution of alignment, EMA, and fill-in components.
3. Per-semantic motion improvement analysis.
4. Supplementary evaluations on fill-in threshold impact and text-labeling accuracy.

Supplementary Material: Yes, the supplementary material was thoroughly reviewed. It includes:
1. Details on VLM-based text labeling and prompt examples.
2. Cosine similarity matrices for text-video alignment.
3. More qualitative comparisons across datasets.
4. Trade-off analysis between 2D pose quantity and quality.
5. Fill-in mechanism evaluation across similarity thresholds.

Relation To Broader Scientific Literature: This paper advances the state of the art in semantics-aware adaptation for 3D human pose estimation, bridging two previously disjoint areas: vision-language modeling and test-time pose refinement. While MotionCLIP and CycleAdapt are relevant foundations, this work introduces a novel integration of motion semantics during adaptation. It complements prior efforts in temporal smoothing and motion priors by adding a high-level semantic guidance layer.

Essential References Not Discussed: N.A.

Other Strengths And Weaknesses: N.A.

Other Comments Or Suggestions: N.A.

Questions For Authors: N.A.

Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We express our sincere appreciation for the helpful reviews and welcome further discussion!

---

Rebuttal Comment 1.1: Comment: There are no further issues from my side. I will keep my score at 4: Accept.

---

Reply to Comment 1.1.1: Comment: Thank you for your time and effort in reviewing our paper! We truly appreciate your initial positive score and your decision to maintain it.
☕ Decaf: A Deconfounding Causal Generative Model
Reject
Summary: The paper introduces Decaf, a normalizing-flow-based causal generative model that can sample interventional and counterfactual data when given the graph and trained on observational data. Importantly, as opposed to many prior works, Decaf does not assume causal sufficiency (i.e., unobserved confounding may be present), and the paper also shows the identifiability of the queries of interest. Experiments show that Decaf outperforms many competitors. To summarize my review, I cannot recommend acceptance for the current form of the paper, largely due to the issues I have with the claims and evidence, discussed below. I am open to discussion in case I misunderstood something about the paper.

Claims And Evidence: The claims of the paper are as follows:

**Claim 1:** Decaf is the first causal generative model that accounts for hidden confounders given observational data and the causal graph. This claim seems to be false; see the "Relation to Broader Scientific Literature" section below. There have already been models developed that handle unobserved confounders and are more general than Decaf in many ways.

**Claim 1b:** Decaf is the first normalizing-flow-based causal generative model that accounts for hidden confounders. This would be the natural claim if Claim 1 is false, and it is still a strong claim. Indeed, incorporating hidden confounders is not an easy task in normalizing-flow-based models. However, I find this dubious too. First of all, the form of $\mathbf{Z}$ and the way that the model incorporates the graph are not clear. The way that the proposed model jointly performs the inference on $\mathbf{z}$ seems to be problematic. One example of a graph that is particularly challenging for a normalizing-flow design is the graph $A \rightarrow B \rightarrow C$, with confounding $A \leftrightarrow B$ and $B \leftrightarrow C$, but notably not between $A$ and $C$. In this case, one implied constraint is that $P(c \mid do(a, b)) = P(c \mid do(b))$.
However, given the architecture of Decaf, it seems that both sets of confounding are modeled in $\mathbf{z}$. $T_{\theta}$ would take $a, b, c$ as input, and $T^{-1}_\theta$ would output $a, b, c$. It looks like this constraint is lost, and instead, the problem is modeled with joint confounding between all three variables. The paper does not seem to discuss anything about the implied constraints of the graph, and it seems that this is one case where two different graphs implying different identifiability results are not distinguishable through the Decaf architecture.

Second, the model does not seem to be able to handle all possible counterfactual queries, even when identifiable. For example, one may be interested in the query $P(y_x, z_w)$, where we evaluate $Y$ under an intervention on $X$ while simultaneously evaluating $Z$ under an intervention on $W$. If these cannot be sampled, that would imply a lack of generality of the Decaf architecture when compared to SCMs.

**Claim 2:** Decaf identifies all causal queries and counterfactual queries under certain conditions. This claim is mostly correct but seems to lack generality. See the "Theoretical Claims" section.

**Claim 3:** Empirical results with Decaf outperform existing approaches. This claim is mostly correct. See the "Experimental Designs or Analyses" section.

Methods And Evaluation Criteria: Leveraging normalizing flows for this problem is an interesting concept, but it may have some issues (see above).

Theoretical Claims: The paper makes two claims about the identifiability of queries for Decaf. Prop. 6.1 seems to be sound. However, it is strange that it seems to only prove identifiability for specific queries in specific families of graphs, such as queries that are identifiable through adjustment. There exist queries in graphs that do not satisfy this criterion (e.g., see the napkin graph). Prop.
6.2 is not a property that holds in general, but looking at the proof, it seems that it is specifically about cases where $P(\mathbf{y} \mid do(\mathbf{t}))$ is identifiable from proxy variables. This may be important to state in the proposition. Additionally, it is again strange that only this particular family of counterfactual quantities is shown to be identifiable, as opposed to a more general set of counterfactuals. Is the paper saying that for queries not covered by Prop. 6.1 and 6.2, Decaf makes no claim? I should also point out that it is somewhat misleading that the paper claims that Decaf identifies queries, since it is not Decaf itself that is performing the inference, but rather the proofs of the paper that show that the query is identifiable.

Experimental Designs Or Analyses: The experiments are extensive and show fairly conclusively that Decaf achieves lower estimation error when compared to other models. It is also impressive that Decaf can be applied to such large graphs. The qualitative results surrounding the inference of the confounders are also interesting. I would be curious to see if there is a task that could leverage the generative capabilities of Decaf, as opposed to simply performing an estimation task.

Supplementary Material: I did not evaluate the supplementary material, except to check the validity of the proofs of Sec. 6.

Relation To Broader Scientific Literature: The paper does not acknowledge the work of some key papers that develop causal generative models that already handle unobserved confounding. [Goudet’17] develops one of the earliest causal generative models, called the Causal Graph Neural Network (CGNN), and Sec. 6 of that paper explicitly discusses how the model could handle unobserved confounders. [Xia’21] introduces a causal generative model called the Neural Causal Model (NCM) and also discusses theoretical properties such as expressiveness and causal constraints.
Not only can NCMs handle general graphs with arbitrary confounding, but they can also be used to solve the identification problem directly, and later work [Xia’23] shows that they can handle more general problem settings, such as interventional datasets or querying arbitrary counterfactual quantities (even nested ones). In contrast, the work presented in this paper appears to be less general. Only specific families of queries are discussed, and the model specifically works in the case with observational data. Identifiability appears to be proven in the paper for these specific families of queries, as opposed to providing a method to deduce identifiability.

Sources:
- [Goudet’17] “Learning Functional Causal Models with Generative Neural Networks”, Olivier Goudet, Diviyan Kalainathan, Philippe Caillou, Isabelle Guyon, David Lopez-Paz, Michèle Sebag
- [Xia’21] “The Causal-Neural Connection: Expressiveness, Learnability, and Inference”, Kevin Xia, Kai-Zhan Lee, Yoshua Bengio, Elias Bareinboim
- [Xia’23] “Neural Causal Models for Counterfactual Identification and Estimation”, Kevin Xia, Yushu Pan, Elias Bareinboim

Essential References Not Discussed: See above.

Other Strengths And Weaknesses: I appreciate that the assumptions are stated in the paper. However, there seem to be some assumptions that are missing. For example, some queries are only identifiable given positive probability in the dataset. Also, there seems to be an assumption on which particular identifiable queries Decaf is capable of handling. Another example is that there may be several regularity conditions related to the theoretical results arising from the normalizing-flow architecture.

Other Comments Or Suggestions: It would be helpful to see an example architecture given a specific graph.

Questions For Authors: Please let me know if there were details about the paper that I missed or misunderstood.

Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We appreciate all the feedback and references. Due to the space limit, we only respond to the most critical questions below.

> issues I have with the claims

Claim 1: We thank the reviewers for the references. We will better position our contributions with respect to the related work and relax our claim accordingly. Refer to our response to Rev. fBNE for our comparison to [Xia’21] and [Xia’23]. Regarding [Goudet’17], we will include the reference; yet, as far as we understand, the main focus of said work is on causal discovery.

Claim 1b: We would like to stress that the proposed **Decaf exploits all the information in the causal graph,** including which variables are affected by each hidden confounder (see lines 172-173). For example, in Ecoli70, we model the three hidden confounders and their causal dependencies separately. While, like most existing works, we have not explored _simultaneous interventions in multiple variables_, we do not see any technical limitation that prevents doing so; thus, it is an interesting line for future work. Note that Decaf inherits the do-operator from CNFs, and nothing impedes us from applying a do-operation several times.

Claim 2: Prop. 6.1 requires _one of the three_ conditions to hold in order to identify the interventional query, and thus we can ensure accurate estimations also when proxy variables exist, as in Miao et al. For counterfactual queries, Prop. 6.2 clearly states that they are identifiable if their interventional counterpart is identifiable too (under any of the three assumptions). The proof of Prop. 6.2 refers to Prop. A.2 as this is the most general case, and all the other results in the appendix can be derived from it as special cases (e.g., assuming $t \in ch(z)$). Unfortunately, we fail to understand what the reviewer means by “particular family of counterfactual quantities”.
If the reviewer means, e.g., conditioning only on a subset of factual variables or performing multiple interventions, we will add these as lines for future work. Importantly, as shown in our empirical results, Decaf provides a practical, yet theoretically grounded, approach to accurately estimate a large number of causal queries in large graphs, as highlighted by the reviewer.

The _Napkin graph_ is indeed an interesting case for which we can also prove identifiability by reducing the query to smaller ones, similar to what we do in the frontdoor example of App. A.2.3. We will include the proof in the revised paper, which [we have already empirically corroborated with these figures](https://shorturl.at/UAKPE).

Claim 3: We firmly believe that our empirical evaluation is thorough and sufficient to demonstrate both the capabilities (see, e.g., Section 7) and limitations (see, e.g., App. B.3.2) of Decaf. If the reviewer has specific additional experiments of interest in mind, we will happily consider them for the revised version of the paper.

> misleading that the paper claims that Decaf identifies queries

We thank the reviewer for pointing out this unfortunate wording, which arose from our efforts to summarize our results and particularize them to Decaf. We will rewrite our Proposition 6.1 (and A.2) to clearly state that, indeed, Decaf provides accurate estimates when a causal query is proven to be identifiable through careful causal analysis and appropriate assumptions (which we already stated in lines 305-316). We have carefully revised the manuscript to clarify this important distinction throughout the paper.

> some assumptions that are missing… identifiable given positive probability in the dataset

The positivity assumption is included in Def. 3, and we assume Decaf to perfectly match the observational distribution in line 185. We will make this assumption more explicit with regard to the available dataset used to train Decaf.
We believe no regularity assumptions are missing, but we would appreciate it if the reviewer could be more specific so that, if any, we could properly include them.

> I would be curious to see if there is a task that could leverage the generative capabilities of Decaf, as opposed to simply performing an estimation task.

We (partially) evaluate this by checking observational and interventional MMD in the appendices and in the fairness use-case, which shows how to build fairer classifiers. Further analysis of the generative capabilities of Decaf is deferred to future work.

> It would be helpful to see an example architecture given a specific graph.

We agree it can help and will add some examples in the revised paper. For now, we refer the reviewer to the original CNF paper for details on the architectural design of the networks (minus conditioning). We believe that the clarifications above help improve the clarity of the paper, and thus we will revise the paper accordingly. We believe that the necessary changes in the paper (which we have already implemented) are not substantial and thus hope that the reviewer will reconsider their assessment.

---

Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I appreciate the analysis comparing to the related works, and I think it sheds light on the contributions of Decaf. I also appreciate that the authors are willing to add clarifications regarding assumptions, contributions, and examples in the revised version. That said, I still have some concerns.

> Claim 1b

I don't think this addresses my comment. I understand that the intent is that Decaf should exploit all information in the causal graph, but I believe the example I gave (with $A \rightarrow B \rightarrow C$, $A \leftrightarrow B$, $B \leftrightarrow C$) is a counterexample.

> we have not explored simultaneous interventions in multiple variables
> Unfortunately, we fail to understand what the reviewer means by “particular family of counterfactual quantities”.
> If the reviewer means, e.g., conditioning only on a subset of factual variables or performing multiple interventions, we will add these as lines for future work.

It seems that this is being used as a counterargument regarding the lack of generality of Decaf being able to represent the full set of counterfactual queries. I am OK with this being deferred to future work, but I would then expect to see the contributions revised to say that the model only handles a specific family of counterfactual queries.

> The Napkin graph is indeed an interesting case for which we can also prove identifiability by reducing the query to smaller ones, similar to what we do in the frontdoor example of App. A.2.3. We will include the proof in the revised paper, which we have already empirically corroborated with these figures.

I appreciate the reference to the empirical results. That said, it is unclear to me how Decaf is able to represent this graph, architecturally speaking. There is unobserved confounding between $X \leftrightarrow W$, and there is unobserved confounding between $W \leftrightarrow Y$, but there is no unobserved confounding between $X$ and $Y$ (otherwise it would not be ID). How is this handled?

> We believe no regularity assumptions are missing, but we would appreciate it if the reviewer could be more specific

Since Decaf is designed to handle continuous cases, there are many potential cases where the generating model could be poorly behaved. For example, in any query where a hard intervention is performed, say $do(X = 1)$, it is possible that $do(X = 0.9999)$ and $do(X = 1.0001)$ have completely different behavior. It seems that there may implicitly need to be some kind of smoothness requirement to have any kind of theoretical guarantee.

---

Reply to Comment 1.1.1:

Comment: We appreciate the engagement from the reviewer, and we are certain we can further clarify the reviewer’s questions.

> ... the example I gave [...] is a counterexample.
Let us provide further details on how Decaf models both graphs differently (and thus it is not a counterexample), now that we are not as limited by space constraints as in the previous rebuttal. Denote by CASE 1 the graph where the three variables $A,B,C$ are affected by the same hidden confounder $Z$; and by CASE 2 the graph where $A \leftrightarrow B$ and $B \leftrightarrow C$, i.e., there are two (a priori) independent confounders, $Z_1$ and $Z_2$, affecting respectively $A,B$ and $B,C$. Next, we discuss how Decaf models (and thus distinguishes) both cases at both the encoder and decoder networks:

CASE 1:
- **Encoder**: The posterior of $Z$ depends on all three observed variables, i.e., $p(Z | A, B, C)$ (and thus cannot be factorized). That is, the encoder in Decaf will receive an adjacency matrix that connects all of $A,B,C$ to $Z$. See our response below for further details on the structural constraints of Decaf.
- **Decoder**: As in the original CNF, the decoder needs to model a causally consistent data-generating process. In this case, it means that $p(A, B, C|Z) = p(A|Z) p(B|A, Z) p(C|B, Z)$. Causal relationships are enforced in the decoder using the corresponding adjacency matrix.

CASE 2:
- **Encoder**: For the posterior of the hidden confounders $Z= \{Z_1, Z_2\}$, we may factorize it as $p(Z | A, B, C)= p(Z_1, Z_2 | A, B, C)=p(Z_1 | A, B) p(Z_2 | Z_1, B, C)$, such that the Decaf encoder will connect $A,B$ to $Z_1$ and $B, C, Z_1$ to $Z_2$.
- **Decoder**: In this case, the decoder factorizes as $p(A, B, C|Z_1, Z_2) = p(A|Z_1) p(B|A, Z_1, Z_2) p(C|B, Z_2)$.

We would like to remark that, as discussed with Reviewer fBNE, we have generalized Eq. 5 to capture more general cases. We refer the reviewer to the discussion with fBNE for more details.

> … it is unclear to me how Decaf is able to represent this graph, architecturally speaking... How is this handled?
**Architecturally speaking**, both the Decaf encoder and decoder build upon the structural constraints of the Masked Autoencoder for Distribution Estimation (MADE) [1], already exploited by the original CNF to ensure causal consistency. More specifically, Decaf exploits MADE [1] and the explicit knowledge of the (directed and acyclic) causal graph to impose a specific functional dependency (given by an adjacency matrix) in two conditional normalizing flows. In that way, the decoder (respectively, encoder) generates each endogenous variable (or hidden variable) using only those variables that we know cause it (or that appear in the corresponding posterior of the new Eq. 5 above). For details on the specifics of the implementation and nuances of this masking we refer the reviewer, if interested, to [2]. For completeness, [we provide here](https://anonymous.4open.science/r/rebuttal-decaf/structural_scheme_napkin.pdf) an expanded version of Fig. 2 of the paper for the napkin graph, explicitly showing how each module generates each variable, complementing the explanation given above.

[1] [MADE: Masked Autoencoder for Distribution Estimation](http://arxiv.org/abs/1502.03509)

[2] [Structured Neural Networks for Density Estimation and Causal Inference](http://arxiv.org/abs/2311.02221)

> … implicitly need to be some kind of smoothness requirement to have any kind of theoretical guarantee.

We would like to highlight that, as explicitly stated in our assumptions paragraph (lines 122-125), we assume the true SCM to have $C^1$-diffeomorphic causal equations, which implies that, given $z$, the data-generating process is assumed to be continuous, differentiable, and invertible with respect to the exogenous variables $u$. Such an assumption is also made for Decaf in Section 5, as it relies on a conditional CNF. Thus, all our assumptions have already been explicitly written in the paper.
That said, we will add an assumptions paragraph (similar to the one in lines 122-125) to Section 5 to make sure that the reader does not miss such important information when using Decaf.

> ... I would then expect to see the contributions revised to say that the model only handles a specific family of counterfactual queries.

Despite the lack of explicit confirmation, we understand from the reviewer’s answer that the examples mentioned in our rebuttal are indeed the more general queries they were referring to. We will revise the contributions to clearly specify what (broad) family of queries we have considered in this work, and which ones we defer to future work.

We hope that with the above response the reviewer does not have any outstanding concerns and, as a consequence, will consider updating their score on our paper accordingly. Thanks again for the feedback!
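For concreteness, the CASE 2 decoder masking described in this thread can be sketched in a few lines. This is a minimal illustration with a single masked linear layer and hypothetical random weights, not the actual Decaf implementation:

```python
import math
import random

rng = random.Random(0)

# Inputs ordered as [A, B, C, Z1, Z2]; outputs as [A, B, C].
# adj[i][j] = 1 iff output i may depend on input j, following the CASE 2
# decoder factorization p(A|Z1) p(B|A, Z1, Z2) p(C|B, Z2).
adj = [
    [0, 0, 0, 1, 0],  # A <- Z1
    [1, 0, 0, 1, 1],  # B <- A, Z1, Z2
    [0, 1, 0, 0, 1],  # C <- B, Z2
]

# MADE-style masking: zero out every forbidden weight.
weights = [[rng.gauss(0, 1) * a for a in row] for row in adj]

def masked_layer(inputs):
    """One masked linear layer + tanh: outputs only see permitted inputs."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

# Sanity check: perturbing input j changes output i only if adj[i][j] == 1.
x = [rng.gauss(0, 1) for _ in range(5)]
base = masked_layer(x)
for j in range(5):
    x_pert = list(x)
    x_pert[j] += 1.0
    for i, (b, p) in enumerate(zip(base, masked_layer(x_pert))):
        if abs(b - p) > 1e-12:
            assert adj[i][j] == 1  # change only where the mask allows it
```

The same mask pattern extends to deeper MADE-style networks, where hidden units inherit the connectivity constraints layer by layer, so the end-to-end map still respects the adjacency matrix.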
Summary: To identify interventional and counterfactual queries in the presence of hidden confounders, this work proposes Decaf, an encoder-decoder architecture that combines causal normalizing flow (CNF) (Javaloy et al., 2023) as the decoder with conditional normalizing flow (CdNF) as the encoder. The key idea is to treat hidden confounders $Z$ both as the conditional variables in the CNF and as the latent variables (i.e., the latent code) in the ELBO of variational inference, while exogenous variables $U$ remain the latent variables in the CNF. The identifiability result is derived by combining the backdoor adjustment and the two-proxy approach from Miao et al. (2018). Counterfactual identifiability is established using the twin SCM technique (Balke & Pearl, 1994). Claims And Evidence: **(Critical)** In the informal Proposition 6.1, clause (iii) incorrectly claims that Decaf identifies causal queries using proxy variables. While the formal Proposition A.2 correctly states that causal queries are identifiable, Decaf itself does not achieve this identification. Specifically, *when computing the distribution of $Z$ using the encoder (Equation 5), it does not solve the Fredholm integral equation proposed in Miao et al. (2018)*, as is done in [1,2,3,4]. This is similar to the mistake made in the CEVAE paper [5], which cites identifiability results for proxy variables but proposes an estimator that does not leverage them. To illustrate the issue more starkly, consider an analogy: we have the standard adjustment formula (Equation 9), but instead of using the necessary adjustment variables $\bf{b}$, we simply regress $Y$ on $T$ naively. Furthermore, as is clear in Proposition 6.1 clause (i), the backdoor adjustment does not work in the presence of hidden confounders. Thus, **the method principally fails under hidden confounders**, contrary to the paper’s main highlight. *(Over-stretch the significance)*. 
The contrast that Decaf enables “training once [and] comput[ing] any causal query on demand” while other methods “are tailored to a specific causal graph” and “train[ed] one model per query” is misleading. *Decaf is also “tailored to a specific causal graph” because "the given causal graph" is explicitly embedded into the CNF (Equation 3)*. While Decaf allows querying multiple causal effects within a large graph, other methods can be used as building blocks to deal with large graphs as well. In fact, Decaf’s encoder would likely need to perform similar operations to those in [1,2,3,4]. Additionally, Proposition A.2 does not appear to be qualified as “a generalization of the results previously presented by Miao et al. (2018) and Wang & Blei (2021)”. The combination and modification seem rather straightforward. Methods And Evaluation Criteria: In its current form, there is no clear justification for using a CdNF as the encoder—any neural network would do (if the method really works). This suggests the critical problem mentioned earlier: *arbitrary NNs, including CdNF, won't do*; a more appropriate approach would be to incorporate elements similar to those in [1,2,3,4]. I will revisit this in **Experimental Design and Analysis**. Theoretical Claims: Despite the critical concern mentioned above, the formal theoretical statements are reasonable, though I did not check the proofs. Below are some non-fatal but important points. The assumptions and mathematical formulation regarding the *invertibility of $\bf{f}$* are unclear. The core requirement, I believe, is that for *any* given value of $\bf{z}$, the vectors $\bf{x}$ and $\bf{u}$ must be connected by *an* invertible function. This is a significant assumption that the paper does not clearly state. Moreover, the notation is incorrect. According to the paper’s convention, $\bf{f}$ denotes the vector collecting the functions $f_i$. 
However, this differs from the whole-system function that connects $\bf{x}$ and $\bf{u}$ given $\bf{z}$. Stating that the vector $\bf{f}$ is invertible is, at best, ambiguous: on the one hand, we certainly do not mean that each $f_i$ is invertible (which they cannot be); on the other hand, what could we mean by that, then? Also, (2) should be written as $T_{\theta,z}(\mathbf{x})=\mathbf{u}$ and $\mathbf{x}=T_{\theta,z}^{-1}(\mathbf{u})$ where $T_{\theta,z}(\cdot):=T_{\theta}(\cdot, \mathbf{z}=z)$, because the invertibility is between $\bf{x}$ and $\bf{u}$ *given* $\mathbf{z}=z$. The current notation reads as if $T_{\theta}$ was at the same time an invertible function between $(\mathbf{x, z})$ and $\bf{u}$ and an invertible function between $(\mathbf{u, z})$ and $\bf{x}$, which is very confusing.

Experimental Designs Or Analyses: The experiments themselves appear sound, and I do not suspect bugs. However, I hypothesize that dataset biases contribute to the reported performance. For instance, in Section 7.1, if the artificial variables follow Gaussian (or, more generally, exponential-family) distributions and the functions are invertible, these properties align with the inductive bias of normalizing flows, potentially leading to favorable results. Similar biases—particularly those related to exponential-family distributions—could be present in other datasets as well. A useful sanity check would be to *replace the CdNF encoder with several types of NNs and observe whether the results change significantly*.

Supplementary Material: I reviewed all formal theoretical statements in the Appendix and examined Sections C (Do-operator) and E (Algorithms) in detail.

Relation To Broader Scientific Literature: While I've never recommended removing references, I believe it is justified in this case. The CEVAE paper is best not cited, as this submission itself demonstrates how that work can mislead future research. I suggest the authors have a look at [5].
[5] Rissanen, Severi, and Pekka Marttinen. "A critical look at the consistency of causal estimation with deep latent variable models." Advances in Neural Information Processing Systems 34 (2021): 4207-4217.

Essential References Not Discussed: The following papers are essential for understanding estimation based on Miao et al. (2018):

[1] Shi, Xu, et al. "Multiply robust causal inference with double-negative control adjustment for categorical unmeasured confounding." *Journal of the Royal Statistical Society Series B: Statistical Methodology* 82.2 (2020): 521-540.

[2] Cui, Yifan, et al. "Semiparametric proximal causal inference." *Journal of the American Statistical Association* 119.546 (2024): 1348-1359.

[3] Mastouri, Afsaneh, et al. "Proximal causal learning with kernels: Two-stage estimation and moment restriction." *International Conference on Machine Learning*. PMLR, 2021.

[4] Kompa, Benjamin, et al. "Deep learning methods for proximal inference via maximum moment restriction." *Advances in Neural Information Processing Systems* 35 (2022): 11189-11201.

Other Strengths And Weaknesses: Despite the critical concern I mentioned, the work is solid for the most part. And, setting aside the issues in claims and mathematical formulations, the paper is generally well written. I am willing to raise my score if the authors can show that my critical concern is invalid.

Other Comments Or Suggestions:
- The term “amortized” in the abstract is never explained and appears unnecessary. The approach can be understood simply as variational inference.
- **L267:** “We find that an interventional query is identifiable if…” This is not an original finding, as mentioned earlier.
- **L1540:** Editorial error in the title—it should be “counterfactual”.
- **Algorithm 2, Line 4, and Algorithm 4, Line 6:** These steps resemble Abduction, not Action. In particular, Action should not modify exogenous variables.
Here, the update of the latent $U$ occurs because it differs from the true exogenous variables by an invertible transformation of the whole system (cf. my **Theoretical Claims** section).
- There is another well-cited, but not quite related, paper named Decaf: Van Breugel, Boris, et al. "Decaf: Generating fair synthetic data using causally-aware generative networks." Advances in Neural Information Processing Systems 34 (2021): 22221-22233.

Questions For Authors:
- **Algorithm 4, Line 3:** Is it necessary to “estimate the mean”? This line can be removed; the algorithm can be run multiple times, and the final output averaged. In fact, I believe that taking the average of $Z$ first is wrong: the mean of a function is not equal to the function of a mean.

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough comments. We will revise our work to include the clarifications below and improve its clarity. We believe that the necessary (and already-implemented) changes are not substantial and thus hope the reviewer will reconsider their assessment.

> clause (iii) incorrectly claims that …

We thank the reviewer for pointing out this unfortunate wording. Please refer to the response to reviewer Ui4z for more details.

> does not solve the Fredholm integral equation [...] This is similar to the mistake made in the CEVAE paper [5] …better not to be cited…

We acknowledge that we take an alternative approach and, akin to Wang and Blei (2021), Decaf ***implicitly*** solves the Fredholm integral equation by modeling $p_\theta(x|z)$ consistently with the causal graph (see Eqs. 11-16 in Prop. A.2). Importantly, unlike CEVAE, we do not attempt to recover the _true_ $z$, and our decoder is a CNF, which is identifiable given $z$, as shown in the original CNF article. We only advocate using Decaf under specific conditions: we have been very careful with our assumptions, theory, and the queries we can accurately estimate (see lines 305-316). We understand the reviewer’s concerns about CEVAE, and will extend the discussion in line 1575 to reflect the criticisms in [5].

> …the backdoor adjustment does not work in the presence of hidden confounders.

Note that Decaf can estimate queries for which ***any*** of the conditions in Prop 6.1 hold. We believe the reviewer’s example is similar to that of Fig. 11, which we can estimate if there exist proxy variables. Please correct us if we misunderstood the statement.

> Decaf is also “tailored to a specific causal graph” …

We meant that Decaf is not designed around any particular causal graph, but that it can be applied to any given DAG, as we extensively show in our experiments. We have experimented with diverse DAGs with tens of variables and multiple confounders (see, e.g., Fig.
1, 7 and 8), demonstrating the flexibility and effectiveness of Decaf in accurately estimating causal queries that are or are not affected (see, respectively, Sec. 7 and App. B.3.2) by hidden confounders.

> … other methods can be used as building blocks to deal with large graphs as well.

While we agree with the statement, combining such blocks probably requires expert handcrafting, and we are unaware of packages automating this process. Thus, we believe that the reviewer's statement further reinforces Decaf’s main contribution, i.e., providing a practical and theoretically grounded approach that amortizes parameters and training to estimate any (identifiable) causal query.

> Prop. A.2 does not appear to be qualified as “a generalization of [...]”.

Prop. A.2 generalizes existing results as it accommodates the covariate $c$ and the adjustment set $b$, which is critical for the results that follow in App. A.3 and was absent in previous works. These are novel results, even if not difficult to prove, which support our main contribution.

> there is no clear justification for using a CdNF as the encoder…

Since the encoder has to match $p_\theta(z|x)$ as closely as possible, a conditional NF allows us to approximate any given density. However, we agree that other networks could be used, and thus we run an ablation on Sachs to test different encoder options ([see here](https://shorturl.at/jHUU9)).

> The assumptions and mathematical formulation regarding the invertibility of f are unclear.

We thank the reviewer for pointing out this imprecision. We will clarify in lines 124 and 163 that _invertibility of f is conditioned on $z$_, as we already do for Decaf in lines 157-159. We also agree that the reviewer’s notation is clearer, and will adopt it in the revised paper.

> I hypothesize that dataset biases contribute to the reported performance.

We politely disagree with this statement. First, we generate the data using the code of Chao et al.
(2023), which uses random weights and nonlinear functions ([see here](https://shorturl.at/Kpn9h)). This does not resemble Decaf’s architecture. Second, we can always find a mapping from $x$ to a standard Gaussian using the Knothe-Rosenblatt map, irrespective of whether this was originally the case.

> These steps resemble Abduction, not Action…

As detailed in App. C, our abduction-action steps resemble the ones of the original CNF, where the authors showed the equivalence between the implementation of their do-operator (which modifies the u’s) and the standard do-operator on SCMs.

> Alg. 4: Is it necessary to “estimate the mean”?

For counterfactual estimation, we decided to generate a single sample by taking the average latent representation, which is a common approach in latent-variable modelling [1-2].

[1] Be more active! Understanding the differences between mean and sampled representations of variational autoencoders (2023)

[2] Challenging common assumptions in the unsupervised learning of disentangled representations (2019)

---

Rebuttal Comment 1.1:

Comment: Thank you for the clarifications. However, my main concerns remain.

“Decaf provides accurate estimates when a causal query is proven to be identifiable”

“Decaf implicitly solves the Fredholm integral equation”

I appreciate the intent, but I still cannot see how Decaf implicitly solves the Fredholm integral equation. Neither the CNF nor the CdNF architecture incorporates the necessary moment restrictions or estimation techniques found in works like Miao et al. (2018) or follow-ups [1–4]. Without such mechanisms, Decaf is not a valid estimator based on the identifiability result.

“We meant that Decaf is not designed around any particular causal graph”

But in your experiments, for each dataset, I believe you build a specific causal graph into Decaf—i.e., into the CNF via Equation (3).
So Decaf is tied to a particular graph, and while the graph may be large enough to support multiple queries, the method is still tailored to that structure. This weakens the contrast drawn with other methods.

On Proposition A.2, you seem to agree that “the combination and modification seem rather straightforward”? If so, I suggest making this more transparent in the paper—clearly explaining what is novel and what is inherited or adapted from existing results (e.g., Miao et al., Wang & Blei, etc.).

The use of the Knothe-Rosenblatt map to justify the invertibility of the encoder is reasonable. I recommend adding this explanation—at least in the Appendix—for completeness. Still, the consistent empirical advantage of CdNF over MLP in your experiments deserves further explanation. If CdNF is not theoretically necessary for identifiability, what inductive bias or training behavior explains the performance gain?

---

Reply to Comment 1.1.1:

Comment: We thank the reviewer for engaging with our rebuttal.

> I still cannot see how Decaf implicitly solves the Fredholm integral equation [...]. Without such mechanisms, Decaf is not a valid estimator based on the identifiability result.

We politely disagree with the reviewer's statement that Decaf is not a valid estimator of identifiable causal queries. Both our theoretical results and empirical evidence demonstrate otherwise. Our proof of Prop. A.2 clearly shows that ***the causal query estimated by Decaf is equal to the true causal query*** (Eqs. 17-21). The proof shows that the solution to the integral equation of Decaf (Eq. 10) also solves the equation of the original model (Eqs. 29-32). I.e., modelling the data following the true causal graph does solve the integral equation **implicitly**, meaning that we **do not have access** to the solution of the integral equation, $\tilde h$, but we do not need it, as we can compute $\tilde p(y|do(t), c)$.
Please refer to (Wang and Blei, 2021, Sec 2.2), whose theoretical results follow the same argument. We have carefully revisited both our proof and the one in (Wang and Blei, 2021, Sec 2.2), and could not find any error. Reviewer Ui4z also stated: “I checked the proof of proposition 6.1 (A.2) in the appendix. ***There are no issues with the proof and I can see the rest of the proofs have the same quality so I don’t worry about their validity***.” Therefore, we respectfully invite the reviewer to check our proof, and to provide evidence to support their claim about the invalidity of Decaf.

> I believe you build a specific causal graph into Decaf—i.e., into the CNF via Eq. (3) [...]

We do not fully understand nor share the reviewer's concern, especially the statement “This weakens the contrast drawn with other methods”. First, for all the papers referenced by the reviewer [1-4], the causal graph is always the same, and only one specific causal query can be estimated. Therefore, those methods are limited to a specific causal graph and query. If one wants to estimate another causal query with the same data, they need to train another model, even if the query can be estimated through adjustment. Second, ***Decaf is not designed with a specific causal graph in mind and it can estimate every possible query in that causal graph***. We have clearly stated that we need to train a Decaf model for each specific pair of observed data and causal graph. That said, Decaf is able to: i) model any causal graph as long as it is a DAG; and ii) estimate all queries that are identifiable as long as the true SCM fulfills our assumptions. These queries include those solvable by proximal inference as well as through adjustment. Moreover, the methods in [1-4] do not compute counterfactuals, which is also a main contribution of Decaf.
We find this a significant difference from a practical point of view, which is our main focus, as stated in lines 84-85: “Decaf offers a practical and efficient solution for causal inference in the presence of hidden confounding”.

> On Prop. A.2, you seem to agree that “the combination and modification seem rather straightforward”? If so, I suggest making this more transparent in the paper—clearly explaining what is novel and what is inherited or adapted from existing results (e.g., Miao et al., Wang & Blei, etc.).

We appreciate the suggestion. However, note that this is already stated in the main paper (lines 266-268) as well as in the appendices (lines 827-832), and we clearly state how to recover the existing results in lines 937-940. Although the generalization can be regarded as “simple”, it is crucial to model counterfactuals, and we believe we have already been honest and transparent in our manuscript.

> The use of the Knothe-Rosenblatt map to justify the invertibility of the encoder is reasonable. I recommend adding this explanation—at least in the Appendix [...] If CdNF is not theoretically necessary for identifiability, what inductive bias or training behavior explains the performance gain?

We will add this discussion in full detail to the Appendix. We will also clarify that CdNFs present several advantages for our purposes. First, they allow us to factorize the posterior distribution according to the correct dependencies inferred from the causal graph (we refer to the discussion with reviewer fBNE for more details). This can be observed in [this figure](https://anonymous.4open.science/r/rebuttal-decaf/structural_scheme_napkin.pdf), where we expand Fig. 2 of the paper. There, we observe how the CdNF correctly handles the dependencies of the posterior distribution. Second, a CdNF is a universal density approximator and, more importantly, does not restrict the posterior to a specific parametric form (e.g., Gaussian), unlike an MLP.
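As a quick illustration of the Knothe-Rosenblatt argument mentioned in this thread, the one-dimensional case can be sketched as follows; the Exp(1) source distribution here is an arbitrary choice for illustration:

```python
import math
import random
import statistics

rng = random.Random(0)

# Draw from a decidedly non-Gaussian distribution: Exp(1), with CDF
# F(x) = 1 - exp(-x).
xs = [rng.expovariate(1.0) for _ in range(50_000)]

# One-dimensional Knothe-Rosenblatt map: u = Phi^{-1}(F(x)) pushes any
# continuous distribution onto the standard normal.
phi_inv = statistics.NormalDist().inv_cdf
us = [phi_inv(1.0 - math.exp(-x)) for x in xs]

# The mapped sample is (approximately) standard normal.
assert abs(statistics.fmean(us)) < 0.05
assert abs(statistics.stdev(us) - 1.0) < 0.05
```

Since such a map exists for any continuous distribution (applied coordinate-by-coordinate in higher dimensions), assuming a standard Gaussian base distribution does not by itself restrict which posteriors a flow-based encoder can represent.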
> Note about the name of the model

We will rename our model to DeCaFlow. We hope that this reply answers all the concerns, and that the reviewer updates their assessment accordingly.
Summary: This paper proposes a Causal Generative Model (CGM) that can identify all causal queries under certain conditions. The architecture of the model is an encoder-decoder network where both the encoder and decoder are conditional normalizing flows, constrained by an assumed causal graph. The authors then informally state that the model is able to identify a given interventional query under three possible conditions. Finally, the authors test their model on synthetic and semi-synthetic data.

Claims And Evidence: Partially. What I understand of the proposed approach is a plug-in estimator of structural causal models and everything that derives from them; that is, interventions and counterfactuals. So I believe some of the theoretical statements are not necessarily supported by evidence. For example, in proposition 6.1, the authors write “Decaf is able to identify a given interventional causal query if one of the following exists: i) a valid adjustment set b not containing z, ii) an invalid one where p(b | do(t)) is identifiable, or iii) sufficiently informative proxy and null proxy variables.” But that is true of any estimation method and not particular to the architecture they are proposing. This is also seen in the proof of proposition A.2 in the appendix and the algorithms for identification in the appendix, which, again, don’t depend on the method. This is not to say that the architecture is not valuable but more to say that there might be some statements stronger than the evidence.

Methods And Evaluation Criteria: Yes, they are consistent with the theory they are presenting.

Theoretical Claims: I checked the proof of proposition 6.1 (A.2) in the appendix. There are no issues with the proof and I can see the rest of the proofs have the same quality, so I don’t worry about their validity.

Experimental Designs Or Analyses: Yes. Whatever was in the main paper plus appendix B, where the authors describe the data generation process.
Supplementary Material: See above.

Relation To Broader Scientific Literature: I think the proposed architecture is an interesting addition to causality research, expanding the method by Javaloy et al. (2023) to deal with hidden confounders.

Essential References Not Discussed: I would say yes. Since they draw a lot of inspiration from Miao et al. (2018), Wang and Blei (2019) and Javaloy et al. (2023), I don’t know how they could have missed the discussion of D’Amour (2019). Indeed, D’Amour even proposes using proximal causal methods, like Miao et al. (2016; 2018) and the authors of this paper, to solve some of the problems in Wang and Blei.

Other Strengths And Weaknesses:
Strengths:
- I particularly liked the combination of the work by Miao et al. (2018), Wang and Blei (2019), and Javaloy et al. (2023). In particular, extending the work of Javaloy et al. to include unobserved confounders is important, given this is a common phenomenon in the real world.
- The proofs are of very high quality.

Weaknesses:
- See claims and prior work.

Other Comments Or Suggestions: None.

Questions For Authors: No questions beyond my comment in the claims section.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and appreciation of our paper, especially regarding the quality of our proofs. In the following, we clarify the points raised by the reviewer, which are of great help to further improve the presentation of our contributions.

> I believe some of the theoretical statements are not necessarily supported by evidence. [...] But that is true with any estimation method and not particular to the architecture they are proposing.

We thank the reviewer for pointing out this unfortunate wording, which arose from our efforts to summarize our results and particularize them to Decaf. We will rewrite our Proposition 6.1 (and A.2) to clearly state that, indeed, Decaf provides accurate estimates when a causal query is proven to be identifiable through careful causal analysis and appropriate assumptions (which we already stated in lines 305-316). We have carefully revised the manuscript to clarify this important distinction throughout the paper.

> I don’t know how they could have missed the discussion on D’Amour (2019).

We were indeed aware of D'Amour's work, and therefore apologize for missing the reference. We have already revised our paper to include this relevant reference. D'Amour’s work is a key citation for our discussion in Appendix D, as it recognizes the use of proxy variables as an appropriate way (as an alternative to using only active treatments) to approach causal inference under hidden confounding. In our paper, similar to Wang and Blei (2021), we use this result to build on Miao et al. to propose a practical approach to causal inference under hidden confounding.

We are glad the reviewer liked the combination of the three great works we present here with Decaf, and thank them once again for their feedback. We hope the changes implemented during the rebuttal help improve their assessment of our work as much as they helped improve the quality of the work itself.
Summary: The paper proposes a method called Decaf that learns a causal generative model in the presence of unobserved confounders. After training, the model can perform interventional and counterfactual estimation. Finally, the authors showed an empirical evaluation on the Ecoli70 dataset. Claims And Evidence: * The authors considered a confounded structural causal model where $x_i= f_i(pa(i), u_i, z)$. It's not clear how the authors are doing the abduction step for $u$. Even though the $u_i$ are independent, during the abduction process they become dependent due to conditioning on direct children $x_i$ or some descendant. If the authors are using the deconfounding network for the abduction step of $z$, how are they abducting $u_i$ for a specific $x$? * The authors mentioned that to obtain the posterior of $z$, they obtain each independent hidden confounder $z_k$ by conditioning on its children. However, please consider this graph: 
$x_1 \leftarrow z_1 \rightarrow x_2 \leftarrow z_2 \rightarrow x_3$. We cannot obtain the correct posterior of $z_1$ by only conditioning on $x_1,x_2$ because conditioning on $x_2$ makes $z_1$ dependent on $x_3$. The authors should explain how they address such a case. Methods And Evaluation Criteria: * The authors mentioned that existing works in their relevant area either assume causal sufficiency or are tailored to a specific causal graph. However, there exist multiple works on causal generative models such as [1, 2, 3] which can estimate causal effects in the presence of unobserved confounders for any causal graph when the causal query is identifiable. The authors also claim that they are the first to identify counterfactuals in the presence of hidden confounders. However, to my knowledge [4] can estimate counterfactuals in the presence of unobserved confounders for specific cases. * There are many identifiable causal queries that do not have a valid adjustment set, for example the frontdoor causal graph. Where does the proposed algorithm fail if there does not exist an adjustment set? Please check the Essential References section for the citations. Theoretical Claims: The theoretical claims appear to be correct although not checked in detail. Experimental Designs Or Analyses: I appreciate the authors’ extensive experiments on synthetic, semi-synthetic (Sachs, Ecoli) and real-world datasets. Supplementary Material: Yes, but not in detail. Relation To Broader Scientific Literature: Proper connection has been established although some important citations are missing. Essential References Not Discussed: [1] Rahman, Md Musfiqur, and Murat Kocaoglu. "Modular learning of deep causal generative models for high-dimensional causal inference." arXiv preprint arXiv:2401.01426 (2024).\ [2] Xia, Kevin, Yushu Pan, and Elias Bareinboim. "Neural causal models for counterfactual identification and estimation." arXiv preprint arXiv:2210.00035 (2022).\ [3] Xia, Kevin, et al. 
"The causal-neural connection: Expressiveness, learnability, and inference." Advances in Neural Information Processing Systems 34 (2021): 10823-10836.\ [4] Nasr-Esfahany, Arash, Mohammad Alizadeh, and Devavrat Shah. "Counterfactual identifiability of bijective causal models." International Conference on Machine Learning. PMLR, 2023. Other Strengths And Weaknesses: Strength: The paper is written in a nice and well-read manner. Other Comments Or Suggestions: See above. Questions For Authors: * Should there be any cross entropy term in equation 7? * How do the authors obtain $p_{\theta} (z|x)$ in equation 8? Isnt $z$ a conditional input in the $T_{\theta}$ model? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback on our extensive experiments and questions, whose clarification will help to further improve our paper. > It's not clear how the authors are doing the abduction step for u. Note that, given $z$, the generative network becomes a regular CNF and thus we can perform causal inference as in the original CNF. More in detail, since the network is invertible given $z$, we can simply evaluate $T_\theta(x^\text{factual}, z)$ (Eq. 2). Due to space constraints, these details are omitted from the main paper, but explained at length in Appendix C, as mentioned in line 175. > x1←z1→x2←z2→x3. We cannot obtain the correct posterior of z1 by only conditioning on x1,x2 because conditioning on x2 makes z1 dependent on x3. We appreciate the reviewer's input and agree that indeed, in such cases, our variational approximation of the posterior in Eq. 5 could be further improved by accounting for the dependencies between confounders, in the example via the factorization $q(z_1, z_2) = q(z_2 | x_3, x_2) q(z_1 | x_1, x_2, z_2)$. We will update our Eq. 5 to account for more general cases, like the one in the example, as ***it will only further improve our results***. Note that there is only one variable, LacY, in Ecoli affected by two confounders, and in Sachs both confounders are jointly modeled. > However there exist multiple works on causal generative models such as [1-3] which can estimate causal effects in the presence of unobserved confounders for any causal graph when the causal query is identifiable. We appreciate the suggested references, which we have read and included in the related work. With this new context, we will restate our claims and the key differences that still make Decaf a significant contribution to the field, namely: Decaf amortizes parameters and its structure is much more scalable than that of [2,3]. 
Decaf approximates the posterior of the latent variables, allowing for counterfactual inference, which [1,3] cannot do. [2,3] are restricted to discrete and low-dimensional variables, whereas Decaf handles continuous variables and arbitrarily large graphs. [1] cannot handle confounders that affect more than 2 variables and therefore cannot leverage the presence of proxy variables in the same way we do. > However to my knowledge [4] can estimate counterfactuals in the presence of unobserved confounders for specific cases. We acknowledge that [2,4] can estimate counterfactuals under hidden confounding for some cases, and we will relax our claims accordingly. We still believe that Decaf's contributions are significant as it provides a practical, yet theoretically grounded, approach that amortizes parameters and training to estimate any (identifiable) causal query in large graphs with continuous variables (see review VqSV). In contrast, [2] and [4] focus on discrete variables, and require one network per variable. Furthermore, [2] requires a new model for each query and relies on rejection sampling for the abduction step. > Frontdoor causal graph. Where does the proposed algorithm fail if there does not exist an adjustment set? We believe there is a misunderstanding with the assumptions in Prop. 6.1, as it provides three ***alternative*** sufficient conditions for query identifiability. If there is no valid adjustment set, we can still prove identifiability if either condition ii) ***or*** iii) holds. Moreover, we analyze the usual frontdoor graph in App. A.2.3 and show that, leveraging the frontdoor criterion and the identifiability of the confounded-outcome case (lines 256-261), we prove identifiability for that query too. We are working on generalizing our theoretical result in Prop. 6.1 to also include the frontdoor criterion. > Should there be any cross-entropy term in equation 7? We are afraid we do not fully understand the question. Eq. 
7 expresses the ELBO by taking the prior $p(z)$ term from the KL divergence on $z$ to the first summand in Eq. 6, yielding the joint $p(x, z)$ minus the entropy, $H(q(z|x))$. We use this formulation, as stated in lines 284-288, to _reason_ about the information encouraged to be used within $z$. > How do the authors obtain $p_\theta(z|x)$ in equation 8? Isn't $z$ a conditional input to the $T_\theta$ model? We remark that we optimize the two networks in Fig. 2 using Eq. 6. Instead, Eq. 8 aims to illustrate that optimizing the ELBO in Eq. 6 is equivalent to minimizing those two KLs, i.e., learning the data evidence with the generative network, and matching the intractable posterior with the deconfounding network. Finally, we would like to thank the reviewer again, as the above answers have helped us further improve our paper and clarify its contributions, which we believe lie in **a theoretically grounded, yet practical and efficient, approach for causal inference with hidden confounders**. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their rebuttal. > in such cases, our variational approximation of the posterior in Eq. 5 could be further improved by accounting for the dependencies between confounders, in the example via the factorization $q(z_1, z_2) = q(z_2 | x_3, x_2) q(z_1 | x_1, x_2, z_2)$. We will update our Eq. 5 to account for more general cases. The authors should make it clear how they would address this for more general cases, as there might be a sequence of confounders instead of just two. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for engaging in the discussion on our paper. Below, we answer the reviewer's comment. > The authors should make it clear how they would address this for more general cases as there might be a sequence of confounders instead of just two. In our previous response we had to be concise due to space limitations, but we agree with the reviewer and updated the paper accordingly. 
Specifically, we have generalized the previous factorization in Eq. 5 to reflect these changes. Intuitively, we condition each $z_k$ on its children _and_ the parents of each child (as they form a collider). This equation can be expressed as follows: $q(\mathbf{z} \mid \mathbf{x}) = \prod_k q\left(z_k \mid \text{ch}(z_k) \cup \bigcup_{c \in \text{ch}(z_k)} \left( \text{pa}(c) \setminus \{z_j : j \geq k\} \right) \right)$. Note that this equation assumes a causal ordering between the different hidden confounders to avoid cyclic dependencies by excluding the posterior latents in the conditional factorization, i.e., either $z_i$ affects $z_j$ or $z_j$ affects $z_i$, but not both. That ordering is arbitrary and does not affect the results, since the collider associations have no causal direction. Note also that our encoder is an autoregressive normalizing flow where only the specified connections in the above expression, given by an adjacency matrix representing a DAG, are enabled. In addition, we are happy to share that we have repeated the Ecoli70 experiment with this new factorization for the encoder, and the obtained results have improved as foreseen in the previous response, ***obtaining comparable results to the Oracle model***. We provide [an updated figure in this link](https://anonymous.4open.science/r/rebuttal-decaf/results_ecoli_new_encoder_factorization.pdf). Note that, for the Sachs experiment, both factorizations are equivalent, so results do not change. We hope that with the above response, the reviewer does not have any outstanding concerns and, as a consequence, would consider updating their score on our paper accordingly.
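The conditioning-set rule in the generalized factorization above can be sketched in a few lines (the dict-based graph encoding and function name are ours, purely for illustration):

```python
def conditioning_set(k, children, parents, order):
    """Conditioning set for z_k: ch(z_k) plus, for each child c,
    pa(c) with the later-ordered confounders {z_j : j >= k} removed."""
    later = {f"z{j}" for j in order if j >= k}
    cond = set(children[f"z{k}"])
    for c in children[f"z{k}"]:
        cond |= set(parents[c]) - later
    return cond

# Reviewer's example: x1 <- z1 -> x2 <- z2 -> x3
children = {"z1": ["x1", "x2"], "z2": ["x2", "x3"]}
parents = {"x1": ["z1"], "x2": ["z1", "z2"], "x3": ["z2"]}
order = [1, 2]  # arbitrary causal ordering over the confounders

conditioning_set(1, children, parents, order)  # {'x1', 'x2'}
conditioning_set(2, children, parents, order)  # {'x2', 'x3', 'z1'}
```

On the reviewer's chain graph this yields $q(z_1 \mid x_1, x_2)\, q(z_2 \mid x_2, x_3, z_1)$, which matches the rebuttal's earlier two-confounder factorization up to relabeling of the arbitrary ordering.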
Reasoning Through Execution: Unifying Process and Outcome Rewards for Code Generation
Accept (poster)
Summary: This paper introduces Outcome-Refining Process Supervision (ORPS), a method unifying outcome supervision and process supervision in large language models (LLMs) for code generation tasks. The authors propose a tree-structured, inference-only search framework using a combination of execution-based feedback, self-generated unit tests, and self-critiques to iteratively refine solutions. Empirical evaluations across three code-generation benchmarks (LBPP, HumanEval, MBPP) using different LLMs demonstrate improvements in correctness (accuracy, or Pass@1) and code efficiency metrics. The authors compare against LDB and Reflexion and show that the proposed ORPS achieves higher accuracy and more efficient code implementations. Claims And Evidence: The paper makes several claims, notably: 1. Structured reasoning guided by execution outcomes significantly improves code generation. Essentially, better textual reasoning leads to better code generation output. 2. A tree-structured search provides substantial performance gain due to multiple reasoning trajectories being maintained simultaneously. 3. Combining execution feedback with self-critique mechanisms creates a more reliable verification system than learned reward models. Notably, it showcases that a general model without PRM training is better than a trained PRM model. Claims 1, 2 are well supported by experiments. Claim 3 is not convincing enough IMHO, despite empirical evidence given in Table 4, for the following reasons: - The PRM training procedure is not clearly presented (from L366-370): it's not totally clear whether the line-level reward is collected by purely prompting GPT-4 to give line-level rewards, or whether it requires Monte-Carlo rollouts, as in existing works such as [1] for math reasoning or [2] for code generation, and training another PRM with a modified value head. - The PRM training is done on half of the LBPP dataset only, which contains 162 / 2 = 81 problem entries. 
It seems problematic to me to claim, solely from experiments on such small-scale training, that the learned PRM is worse. Moreover, Claims 1 and 2 are already undermined by existing literature: 1. The central claim that better reasoning improves outcomes has already been well established by Olausson et al. [3], who demonstrated explicitly that the reasoning quality and self-repair capabilities of LLMs correlate strongly with better outcomes in code generation. Notably, they also maintain a tree structure, called a repair tree, and show that higher-quality textual reasoning about previously failing code, e.g. coming from a stronger model or even human-written, further increases subsequent code generation performance. 2. Tree-structured search in LLM-based code refinement has been previously explored by Tang et al. [4] through a probabilistic exploration-exploitation approach for iterative code refinement, in which selecting which node in the tree to expand is heavily discussed. Another minor claim worth revisiting is in L161-163 and L68-73, saying that outcome supervision could lead to inefficient solutions such as brute-force ones: correctness in these benchmarks is determined by unit tests of IO pairs with time and memory constraints, and brute force could be avoided by adding large IO pairs with tight timeouts. [1] OmegaPRM https://arxiv.org/abs/2406.06592 [2] Process Supervision-Guided Policy Optimization for Code Generation https://arxiv.org/abs/2410.17621 [3] Is Self-Repair a Silver Bullet for Code Generation? https://arxiv.org/abs/2306.09896 [4] Code Repair with LLMs gives an Exploration-Exploitation Tradeoff https://arxiv.org/abs/2405.17503 Methods And Evaluation Criteria: Benchmark: - The authors report performance on 3 coding benchmarks, including LBPP, HumanEval, and MBPP. This is a legitimate choice, given the comparison is done primarily against LDB https://arxiv.org/abs/2402.16906, which reports performance on HumanEval and MBPP. 
Still, it remains a question whether the claimed evidence extends to competitive programming benchmarks such as CodeContests and LiveCodeBench. Also, the results would be more solid if reported on enhanced benchmark versions, such as HumanEval+ or MBPP+, given the false-positive rate of the originals. Metrics: - The main metric reported is solely pass@1. Other code-specific metrics are also reported in Figure 3, which looks interesting. However, the pass@1 in the manuscript is closer to what is defined as "accuracy", since it overlooks the number of code samples drawn. This has been discussed in [1] and [2], which both argue that pass@k should account for the number of code samples drawn in an iterative-refinement code generation process; furthermore, a fairer metric could be pass n@k as used in [2, 3, 4] (for example, a generated program followed by 2 self-repair attempts should be reported as pass 1@3 instead of pass@1, since 3 code snippets are generated) or pass@tokens in [1]. [1] Is Self-Repair a Silver Bullet for Code Generation? https://arxiv.org/abs/2306.09896 [2] What Makes Large Language Models Reason in (Multi-Turn) Code Generation? https://arxiv.org/abs/2410.08105 [3] Competition-Level Code Generation with AlphaCode https://arxiv.org/abs/2203.07814 [4] RLEF: Grounding Code LLMs in Execution Feedback with Reinforcement Learning https://arxiv.org/abs/2410.02089 Theoretical Claims: There's no theoretical contribution in the manuscript, as it's a paper with empirical findings. Experimental Designs Or Analyses: The overall experiments look valid and sound, in particular for the main claim that ORPS is better than other variants such as LDB and Reflexion. As I mentioned before, results in 4.4 are not convincing enough due to the lack of details on the PRM and the small scale of the PRM training. The Process Rewarding presented in Sec 3.3 and in L192 is not the same as the Process Reward Model in Table 4, which creates confusion. 
The Process Rewarding proposed by the authors still operates on a full code snippet. Also, the authors mentioned using generated unit tests in L246-247. There is insufficient detail about how the authors generate the unit tests or how many are generated, and there is a lack of references on unit-test generation, which has a rich literature spanning both LLM-generated and mutation-based methods. Supplementary Material: There's no supplementary material. Relation To Broader Scientific Literature: Please see above. Essential References Not Discussed: Please see above. Other Strengths And Weaknesses: Strengths: - A clear and practical inference-only framework that integrates execution-based feedback with LLM-driven critiques. - Empirical results indicating meaningful improvements in both correctness and efficiency. The inclusion of metrics other than pass rate looks interesting. Weaknesses: - There are non-negligible flaws in the claims and methods (please see above). More analysis would be appreciated; for example, the number of LLM calls, the number of tokens generated, or the number of code-evaluation invocations of ORPS compared to LDB & Reflexion, for a fairer comparison rather than just "Accuracy". - Limited conceptual novelty, as the core ideas (reasoning improvement from an LLM for self-repair, tree search) were previously established. Otherwise, clarification from the authors is highly appreciated. - Presentation and writing could have been improved. Some important details are missing, such as unit-test generation. Other Comments Or Suggestions: Notations are not clear; some symbols are never defined: - What is $j$ in L213? - What is $m_k^{j}$ in the equation in L177? - How is the weighted step score $q_t$ (L212) used? The beam size is only presented in Algo 1 and seems not to be mentioned in the main text. Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 2
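For context on the metrics discussion in this review: the standard unbiased pass@k estimator from Chen et al. (2021), which the pass n@k variants build on, can be computed as follows (a standalone sketch, not code from the paper under review):

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k (Chen et al., 2021): probability that at least
    one of k samples drawn without replacement from n generations,
    of which c are correct, passes."""
    if n - c < k:
        return 1.0  # fewer than k incorrect samples: some draw must pass
    return 1.0 - comb(n - c, k) / comb(n, k)

pass_at_k(10, 10, 1)  # 1.0, every sample is correct
pass_at_k(2, 1, 1)    # 0.5
pass_at_k(5, 0, 3)    # 0.0
```

The reviewer's point, phrased in these terms, is that a method generating 3 programs per problem (one draft plus 2 repairs) should be scored against a k=3 sampling budget rather than reported as pass@1; pass n@k further generalizes this to drawing k samples but submitting only n of them.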
Rebuttal 1: Rebuttal: Thank you for your review. We address your concerns point by point:

## Metrics

Unlike methods that report Pass@k, which attempt k solutions on the test set (e.g. LDB and Self-Repair use ground-truth test cases, which can be considered contamination), **our Pass@1 measures attempting only 1 solution on the test set**. We only generate and execute multiple code candidates during reasoning, on self-generated test cases. The final code (one per problem) is evaluated on the actual test set (unless marked "w/ T"). In fact, although baselines like LDB report Pass@k where k solutions are tested against test-set cases, they sample much more code and debug on test cases from the test set to obtain each solution. Thus it is still a fair comparison if we control the attempts on the full test-set cases for all methods. Furthermore, we report metrics beyond Pass@1, as shown in Table 5.

## PRM Training

We report training details in Appendix A.4. While we used 81 problems for PRM training to avoid contamination, we collected a larger dataset by beam search with 20 reasoning steps each. This generated 5414 steps with GPT-4o labels for training. **To further address your concern on label quality, we trained 2 new line-level PRMs:**
1. PRM-GPT: trained on a larger GPT-4o-labeled dataset (13644 steps).
2. PRM-Human: to obtain better reward labels, we sampled 400 reasoning chains with GPT labels and asked 3 authors to spend 12 hrs each classifying whether the chains and rewards are valid. We obtained an average inter-annotator agreement of 0.44 (Cohen's Kappa; AB=0.27, AC=0.61, BC=0.44). We kept 209 valid chains out of the 261 chains the annotators agreed on, and extracted 836 steps for PRM training.

## Cost

We add controlled experiments on LBPP with Qwen7B limiting LLM calls (details in Rebuttal DEgP) with the 2 new PRMs and REx [4]. 
|20 Calls|Pass@1|Tests%|Valid%|Time%|
|-|-|-|-|-|
|Reflexion|37.0|51.7|71.6|119.5|
|LDB|37.0|50.8|66.0|274.7|
|REx|43.2|57.4|72.2|268.4|
|PRM-GPT|44.4|58.1|77.8|**100.1**|
|PRM-Human|40.7|53.1|69.1|124.7|
|ORPS|**48.4**|**64.8**|**84.5**|105.6|

|50 Calls|Pass@1|Tests%|Valid%|Time%|
|-|-|-|-|-|
|Reflexion|40.7|57.6|79.6|130.3|
|LDB|36.4|50.4|66.0|272.6|
|REx|53.7|66.6|84.0|199.9|
|PRM-GPT|37.0|52.9|71.6|**112.7**|
|PRM-Human|38.3|53.5|69.1|137.7|
|ORPS|**55.6**|**72.1**|**89.5**|116.8|

|100 Calls|Pass@1|Tests%|Valid%|Time%|
|-|-|-|-|-|
|Reflexion|39.5|54.4|72.8|113.8|
|LDB|37.0|51.0|66.7|275.7|
|REx|54.3|65.6|79.0|218.4|
|PRM-GPT|35.8|51.1|70.4|127.3|
|PRM-Human|42.0|56.4|76.5|106.6|
|ORPS|**64.2**|**75.4**|**88.9**|**91.0**|

Results show that repair-based methods (LDB, Reflexion) fail to scale effectively. LDB focuses on fixing code blocks in a single solution, which works for simple bugs but fails when the algorithmic approach is suboptimal, while ORPS improves significantly. REx may appear advantaged as it generates code on every call while the others generate code every 2 calls, attempting twice as many solutions given the same number of calls. **Given the same compute budget, ORPS consistently generates better solutions without training or test cases from the test set.**

## Novelty and Contributions

ORPS fundamentally challenges the assumption in process supervision (OmegaPRM, Math-Shepherd) that specially trained PRMs are necessary for reasoning guidance. Although increasing training data quality (which is expensive) could improve trained PRMs, **combining verifiable rewards with existing LLMs without training outperforms trained PRMs as compute increases**. This is a significant conceptual advancement for process supervision that eliminates the overhead of PRM training while improving results. Unlike Self-Repair, REx, and LDB, which focus on repairing code or specific blocks, ORPS reasons about solution strategies at a higher level to avoid local optima. 
Instead of incrementally fixing specific solutions to pass more tests, ORPS explores different algorithms with extensive reasoning, even improving solutions that already pass all generated cases. Compared to REx, which focuses on selecting the best nodes to explore by solving an arm-acquiring bandit problem over execution outcomes, we propose that **LLMs are capable of directly generating high-quality process rewards given some feedback, to select and expand certain reasoning chains**. In fact, REx might be used in conjunction with ORPS process rewards, since REx focuses on selecting better nodes while we try to produce better rewards, but this is out of the scope of this work. Besides, previous methods require test-set cases, which are practically hard to obtain (as pointed out by DEgP and LiveCodeBench), while ORPS relies on self-generated cases (**for unit-test generation, please refer to Rebuttal DEgP**). We will fix the notation issues in the revised version. We will also cite, discuss, and compare with the works you listed that we previously missed, like REx and Self-Repair. **Given these clarifications and new results demonstrating ORPS's superior performance, we respectfully and strongly request reconsideration of your score.** --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response and the clarification of the novelty. My concerns in my original review contained mainly 2 parts: 1. a fair comparison of budgets and reporting performance beyond "Accuracy"; 2. novelty and contributions could have been clearer if the manuscript were better contextualized in the existing literature. The results with controlled LLM calls, and the additional results comparing PRM and REx, strengthen the claim of ORPS's superiority over the other methods and address my concern 1. 
The other flaws, such as notation and wording, are in my eyes "fixable"; I thus encourage the authors to incorporate the suggestions from the discussion, and the manuscript would benefit from enhanced clarity of presentation. In particular, the PRM part already confused me and Reviewer DEgP. For concern 2, I see the novelty as the **combination** of the techniques presented, such as unit-test generation, tree search, and reasoning over previous attempts and executions, rather than any of them individually, as some of them already overlap with findings in the literature (c.f. the refs in my original comment). > Unlike Self-Repair, REx, LDB that focus on repairing code or specific blocks, ORPS reasons about solution strategies at a higher level to avoid local optima. Instead of incrementally fixing specific solutions to pass more tests, ORPS explores different algorithms with extensive reasoning, even on solutions that pass all generated cases for improvement. This claim is very interesting: I agree with the authors that algorithmic reasoning and self-repair (fixing runtime or wrong-answer errors to achieve better performance on HumanEval or MBPP) are different, and the compared methods could be "myopic" in this sense. The authors could have centered the presentation of the manuscript on this, and the claim would benefit further if evaluation were done on some commonly used competitive programming benchmark such as CodeContests, TACO, or LiveCodeBench, along with the existing results to show the difference. Taking these into account, I bump my score. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your thoughtful and constructive comments, which have significantly contributed to improving our work. > the claim could benefit more if eval is done on some commonly-used competitive programming benchmark such as CodeContests, TACO, or LiveCodeBench, along with the given existing results to show the difference. 
Thank you for suggesting additional competitive programming benchmarks to validate our claims. Regarding our original selection of datasets, we selected MBPP and HumanEval as these are the most commonly used code generation benchmarks. We chose LBPP because it explicitly addresses data contamination and ensures difficulty by asking human experts with competitive programming experience to manually curate and verify problems from scratch. **Following your suggestion, we've now added experiments on CodeContests from DeepMind (test split) with all baselines using the same compute budget (100 LLM calls) on Qwen 2.5 Coder 7B.** Note the time metric is not normalized since CodeContests does not provide standard Python solutions for each problem, so we report average running time.

| 100 Calls | Pass@1 | Tests % | Valid % | Time (ms) |
|-------|--------|--------------|----------------|-----------|
| Reflexion | 8.48 | 14.53 | 27.27 | 52318 |
| LDB | 7.88 | 11.79 | 21.21 | 3350 |
| REx | 13.33 | 20.17 | 32.12 | 27791 |
| PRM-GPT | 4.94 | 8.64 | 17.28 | 3084 |
| PRM-Human | 9.88 | 15.04 | 23.46 | **1985** |
| ORPS | **20.61** | **28.36** | **40.61** | 36824 |

These results align with our previous findings, further confirming that our approach scales effectively on complex tasks that require higher-level algorithmic thinking rather than mere code repair. Regarding novelty, we appreciate your recognition of our core contribution. Moreover, one of our core insights, that **"combining verifiable rewards with existing LLMs without training outperforms trained PRMs as compute increases"**, represents a significant advancement. This direction has been validated by several preprints after the ICML deadline (e.g. S∗: Test-Time Scaling for Code Generation [arXiv:2502.14382], mentioned by Reviewer DEgP, and a more recent one, Review, Refine, Repeat: Dynamic Evaluation and Selection [arXiv:2504.01931]), confirming that we identified a promising research direction. 
We also thank you for pointing out presentation issues (the "fixable" flaws). We will definitely improve the clarity (including adding more details and sharpening the focus), and fix all notation issues. We'll also incorporate all the suggestions from the reviewers and our rebuttals. Your thoughtful feedback has been invaluable to us. It has helped us refine our empirical evaluation and pushed us to articulate our contributions more precisely. We are genuinely grateful for the time and expertise you've invested in reviewing our work. Given the additional evidence now supporting our claims and your recognition of our contribution's significance, we respectfully ask for your consideration in further raising your score for our paper.
Summary: The paper proposes outcome-refining process supervision (ORPS), a unified framework to bridge the gap between process supervision and outcome supervision through structured reasoning and execution-based feedback. The experimental results show that concrete feedback signals are pivotal for solving complex programming tasks. Furthermore, the work shows that replacing trained process reward models with hybrid process rewards significantly boosts model performance. Claims And Evidence: Yes, the paper provided evidence to support the claims made in the work. For example, the paper establishes that code generation offers a unique opportunity through concrete, verifiable signals. The paper further shows that eliminating the need for specially trained process reward models helps in improving reasoning quality. Methods And Evaluation Criteria: The proposed method is composed of three steps, namely (1) candidate code generation, (2) executing candidate code and running profiling on unit tests, and (3) self-critique and process rewarding. All three steps make sense for code generation. The evaluation is performed on three popular code generation benchmarks. Theoretical Claims: The paper does not have any proofs or theoretical claims. Experimental Designs Or Analyses: The paper performed experiments on three popular programming problem-solving benchmarks. The evaluation setup is well detailed and looks sound. Quite a few strong models are evaluated with the proposed approach. Supplementary Material: I didn't review the supplementary material. Relation To Broader Scientific Literature: It seems like the contributions made in this paper are not highly novel compared to the existing literature. For example, self-criticism is not a new idea. Using execution feedback to obtain accurate criticism is not new either. However, if we judge the proposed method as a whole, it seems quite interesting and effective. 
Essential References Not Discussed: I didn't check carefully, but there seem to be enough papers in the references. Other Strengths And Weaknesses: The paper is overall well written. The paper presents rigorous experiments, which is a strength of the work. Other Comments Or Suggestions: None. Questions For Authors: 1. Is the proposed approach composed of components that already exist in the literature? If yes, how do the authors see the novelty of the work? 2. I am a bit confused but wanted to confirm: during inference, does the proposed approach perform iterative generation? I.e., does it generate and then refine the generation? 3. If I understand correctly, the proposed method does not require any training, right? It is an inference-only approach? Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive review and recognition of our work's effectiveness. We appreciate your thoughtful questions and address them below. > **Q1:** Is the proposed approach composed of components that already exist in the literature? If yes, how do the authors see the novelty of the work? Regarding novelty, while we integrate some established concepts, we introduce several key innovations rather than merely combining existing components: 1. Prior process supervision approaches (OmegaPRM, Math-Shepherd, etc.) assume that specially trained Process Reward Models are required for effective reasoning guidance. **ORPS challenges this assumption and studies the necessity of training PRMs by combining execution feedback as verifiable rewards with existing LLMs to guide reasoning.** Results show that ORPS, as a pure inference-time method, produces better reasoning guidance than trained PRMs. 2. Unlike repair-based approaches (e.g., Self-Repair, LDB) that incrementally fix specific code blocks, ORPS reasons at a higher abstraction level about solution strategies. This enables exploring different algorithms rather than being trapped in local optima when the initial approach is suboptimal. Moreover, unlike previous works, we also aim to generate code solutions that are correct, efficient, and easy to understand and maintain, instead of focusing solely on correctness. 3. ORPS operates effectively without access to ground-truth test cases from benchmark test sets. ORPS utilizes self-generated unit tests during reasoning - a practical advantage in real-world scenarios where test cases are unavailable. > **Q2:** I am a bit confused but wanted to confirm: during inference, does the proposed approach perform iterative generation? That is, does it generate and then refine the generation? Yes, ORPS is an iterative generation approach. 
It maintains multiple solution trajectories simultaneously through beam search, where each iteration involves reasoning, code generation, execution, and self-critique. This iterative process allows for strategic pivots when initial approaches prove suboptimal. > **Q3:** If I understand correctly, the proposed method does not require any training, right? It is an inference-only approach? Correct, ORPS requires no training whatsoever. It's a fully inference-time framework that leverages an LLM's existing capabilities for reasoning, code generation, and self-critique. Our experiments in Table 4 and additional analyses (in our response to Reviewer 4j6r) confirm that this inference-only approach outperforms methods requiring PRM training, especially as the compute budget increases. Thank you again for your supportive review. We believe ORPS represents a significant step forward in unifying process and outcome supervision for complex code generation tasks. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions.
Summary: - The paper proposes an LLM-based algorithm (outcome-refining process supervision, ORPS) for code generation. The primary contributions are algorithmic and empirical. ORPS leverages LLMs' abilities significantly more than prior work in this area. Specifically, this includes self-reflection, self-critique, and process (per-step) rewards. The key difference from prior LLM-based codegen models is that ORPS eliminates the need for a separately trained process reward model (PRM), instead providing dense execution feedback at each step of the iterative reasoning chain. Combined with tree search, this leads to significant performance improvement on modern codegen benchmark datasets. Claims And Evidence: - Yes. Methods And Evaluation Criteria: - Yes. Theoretical Claims: - Not applicable. Experimental Designs Or Analyses: - Not applicable. Supplementary Material: - Yes. I checked the appendices. Relation To Broader Scientific Literature: - The paper extends the state of the art (to my knowledge) of using LLMs for code generation. It builds on very recent prior work (LDB, Reflexion) and leverages the LLM significantly more. That said, LLMs-for-code-generation is a very fast-moving area, so I can't be certain that I haven't missed relevant prior work or baselines. Essential References Not Discussed: - No (but I'm not certain). Other Strengths And Weaknesses: Strengths - The paper tackles an important and relevant problem. Improvement here is likely to have a large impact and be of large interest to the community. - The paper is well structured and mostly clearly written. The main ideas are easy to follow and nicely illustrated. - The experiments show significant performance improvement compared to strong baselines. Weaknesses - Overall, the paper is quite good. I wasn't able to spot any major technical issues. My biggest concern is around reproducibility. I didn't see the full prompt or prompts used as inputs to the LLM listed in any of the appendices. 
The format of the LLM responses "Programmer Thoughts", "Critic Thoughts", and numerical rewards suggests some prompt engineering was likely involved. If so, the prompts should be included in the appendices to aid reproducibility. (Please note that my current rating of the paper assumes a clear listing of the system prompts, if any, used to interact with the LLM will be included in the author response.) - The paper can be somewhat handwavy with terminology and some claims. A description or definition of concepts and terms before their usage would be preferable. Examples include "reasoning", "reasoning space", "finer reasoning", "intrinsic reasoning", and so on. Other Comments Or Suggestions: - No additional comments. Questions For Authors: - (Q1) What initial LLM inputs (system prompts) are used in each call to the LLM, if any? If system prompts exist, is it possible to include them in the appendix? - (Q2) What's the current SOTA for LBPP? Is it ORPS? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you sincerely for your thoughtful review and constructive feedback. We deeply appreciate your recognition of our work, and we address your questions and concerns point by point: > **Q1:** What initial LLM inputs (system prompts) are used in each call to the LLM, if any? If system prompts exist, is it possible to include them in the appendix? We sincerely appreciate your emphasis on reproducibility. **Reproducibility is our first priority**, and thus we have anonymously open-sourced all implementation details, including code, prompts, setup instructions, and commands to run and reproduce our experiments in our anonymous repository. The repository contains all prompts used in ORPS. We attach our prompts at the end of this response. Due to the space constraints of the rebuttal we cannot paste everything here, but **we will make sure to include all prompts in our revised appendix**. We will also add the new experiments and data involved in this rebuttal to our repository. > **Q2:** What's the current SOTA for LBPP? Is it ORPS? To the best of our knowledge, ORPS currently achieves SOTA performance on the LBPP benchmark. Moreover, this improvement requires no additional model training and can be applied to any sufficiently capable base LLM. As demonstrated in our experiments, ORPS consistently outperforms other baselines across multiple models given the same compute budget. > **W2:** The paper can be somewhat handwavy with terminology and some claims. A description or definition of concepts and terms before their usage would be preferable. Examples include "reasoning", "reasoning space", "finer reasoning", "intrinsic reasoning", and so on. Thank you for pointing out our presentation issues. We acknowledge the need for clearer definitions and will revise the paper to explicitly clarify key terms and provide more explanation of our notation. We will ensure all technical terms are clearly defined before their first use. 
We are very grateful for your expertise and time in evaluating our work. Thank you again for your contributions to improving this work. The system prompt of model for reasoning and generating codes: ```` You are a Python programmer in a code interview. You're solving a coding problem while sharing your thoughts and discussing with an critic. Always follow the format below for each round: 1. First, share your thoughts as comments: - Your current approach (or an analysis of the problem and your plan if you haven't written any code yet) - What you learned from previous feedback (if there is any previous feedback, otherwise think about your plan, what might be missing from your previous plan) - Why you chose this approach (or how you plan to tackle the problem), do you need to shift your approach? - Be clear, concise, detailed and pay attention to the comments given by the critic, no chit-chat, no small talk. - Always use first person, e.g. I, we, our, etc. 2. Then write your solution: - Clean, efficient Python code that follows requirements exactly - No test cases in code, just the solution - Your code will then be tested by the critic, so do not include any test cases in your code, this is very important Format your response as: # === BEGIN PROGRAMMER THOUGHTS === # [Your response to previous feedback] # === END PROGRAMMER THOUGHTS === # === BEGIN SOLUTION === ```python [Your code] ``` # === END SOLUTION === Output strictly as shown above. ```` The system prompt for the self-critic role: ``` You are a technical critic reviewing a programmer's Python code. Provide clear, constructive feedback and suggest improvements or propose a new approach. The programmer's thoughts, code, execution analysis will be provided to you each round and you need to give constructive feedback to the programmer. Here are some rules you need to follow: 1. First, share your thoughts: - How did the code perform in the execution, did any test cases fail? - If any case failed, why it failed? 
If all passed, think about performance improvement. - Key observations on the code, e.g. what's good or bad, what's the most important thing to improve, etc. - Potential improvements: time / space complexity, a different algorithm, strategy, etc. - Think about the programmer's thoughts, propose a new direction if you have any better idea, or give some advice to the plan, or give additional analysis of the problem - Your thought process should be clear and concise, no chit-chat, no small talk. Guiding the programmer to improve their code is your main goal, you do not write any code. - Always use first person, e.g. I, we, our, etc. ... <omitted due to character limitation of rebuttal, please refer to our open-source repository for details> ```
Summary: This paper introduces ORPS, a novel framework that unifies outcome and process supervision to address complex code problems. Notably, this approach does not require training PRMs. ORPS demonstrates significant improvements when utilizing ground-truth test cases. Claims And Evidence: Overall, most of the claims are well supported. Please check my other comments below. Methods And Evaluation Criteria: The method lacks persuasiveness because the salient improvement assumes the availability of ground-truth test cases, which is uncommon in real-world code completion scenarios. Without these ground-truth test cases, the performance advantage is not prominent. Theoretical Claims: N.A. Experimental Designs Or Analyses: This paper should include a more detailed cost analysis of both the dynamic and static analysis processes. Supplementary Material: I have read all the supplementary material. Relation To Broader Scientific Literature: The primary contribution of this paper is the integration of dynamic and static analysis to construct the reward. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: The main weakness of the paper is that the performance improvement heavily depends on the availability of ground-truth test cases, which is uncommon in real-world scenarios. Additionally, the rationale for combining dynamic and static analysis is unclear. For instance, the necessity of static analysis is questionable when dynamic analysis has already been performed. If static analysis is indeed necessary, there should be more consideration regarding the weighting of scores between static and dynamic analysis. Other Comments Or Suggestions: N.A. Questions For Authors: 1. It's somewhat puzzling that the paper begins with Process Reward Models. I initially assumed the designed reward would be used to train the model, but that is not addressed in this paper. Instead, it seems more like an inference-time strategy, similar in spirit to [1]. 
It's important to note that a minor resemblance to [1] does not affect the paper's rating. My question is why the narrative starts with Process Reward Models rather than an inference-time strategy when the paper doesn't involve training. Please note that this affects the understanding of contribution 2, eliminating specially trained PRMs. 2. Additionally, while I understand that a tree-structured exploration space could enhance the pass rate of generation, I'm curious why the instances mentioned in this paper focus solely on efficiency, which is a non-functional requirement, rather than on functional aspects measured by the pass rate. This is evident in the instances mentioned in lines 72-73 and 161-163. [1] S∗: Test Time Scaling for Code Generation Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your valuable feedback.

## Answers to Questions

**Q1**: Currently, there is an assumption in process supervision and test-time scaling research (e.g., OmegaPRM, Math-Shepherd, Let's Verify Step by Step, DeepSeek-Math, etc.) that a specially trained Process Reward Model is required to guide reasoning (during training with RL, or during inference with search algorithms like MCTS). **ORPS fundamentally challenges this assumption and studies the necessity of training PRMs by combining execution feedback as verifiable rewards with existing LLMs to generate high-quality process rewards that guide reasoning.** We compare ORPS with trained PRMs by using different methods to guide LLM reasoning during inference (Table 4), and we now add two new experiments (in the rebuttal for Reviewer 4j6r) to extensively validate our claims. Results indicate that ORPS produces better rewards than trained PRMs for guiding reasoning and consistently performs better given the same compute budget. Regarding S* [1], this work was made public one month after the ICML submission deadline, which further validates our approach.

**Q2**: We agree that certain phrasing suggests a focus on efficiency over correctness. To clarify: our framework prioritizes optimizing correctness as the primary objective, while considering comprehensive metrics involving efficiency, code quality, complexity, etc. In all our experiments we report correctness metrics along with efficiency for comprehensive comparison. **Our motivation here is that a good solution should be correct, efficient (evaluated with dynamic analysis), and easy to read and maintain (static analysis). Furthermore, we study how dynamic and static analysis metrics affect overall performance (Figure 6 in Appendix D), which should also address Weakness 2.**

## Unit Test Cases

Thank you for raising the concern about the scarcity of ground-truth test cases in practical scenarios. 
We agree that the quality of test cases might significantly affect the performance of all approaches that utilize execution feedback to debug or improve the code. Our baseline Reflexion points out that such methods "rely upon ground truth test cases that invalidate pass@1." **However, we have explicitly addressed this concern: the test cases used in ORPS are not the ground truth from the test sets.** Instead, we follow Reflexion in prompting the LLM with the problem and example tests and filtering out invalid cases. These self-generated, weak tests are then used during reasoning, and only one final solution is tested against the actual ground-truth cases. To ensure fairness and reproducibility, we cached the cases generated by Qwen 7B and used the same cases in all our experiments. **While the quality of the generated cases is suboptimal compared to ground truth, ORPS performs better, even compared to the baselines using ground-truth cases.** Notably, we also evaluate ORPS with ground-truth test cases (ORPS (w/T) in Table 1) for comparison. We will include all system prompts in the revised Appendix.

## Computational Cost

The bottleneck in the computational cost of ORPS and the baselines is LLM inference calls rather than dynamic and static analysis, as these calls require GPU computing and introduce the majority of the latency. Although we compared ORPS with BoN in Figure 4 by allocating the same number of samples per problem, we now include a new experiment for the other baselines given the same number of LLM calls. To be precise, we analyze the worst-case cost for each method: ORPS requires $2 \times N \times (K \times T + 1)$ calls, where N = candidate samples, K = beam size, T = reasoning steps. Reflexion requires $2 \times T$ calls, where T = steps. LDB requires $1 + P \times T \times (2 + B \times N)$ calls, where P = pass_at_k, T = max_iterations, B = number of blocks in the code control-flow graph, N = max_trials. For trained PRMs, we swap the self-critic LLM call with the PRM, and thus the call counts are identical. 
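For concreteness, the worst-case call budgets above can be written as a small sketch (the formulas are taken from this analysis; the function names and example parameter values are ours, purely illustrative):

```python
# Worst-case LLM-call counts per method, as stated in the cost analysis above.

def orps_calls(n_samples: int, beam_size: int, steps: int) -> int:
    """ORPS: 2 * N * (K * T + 1) -- each step pairs a programmer call
    with a self-critic call across N candidates and K beams."""
    return 2 * n_samples * (beam_size * steps + 1)

def reflexion_calls(steps: int) -> int:
    """Reflexion: 2 * T -- one generation plus one reflection per step."""
    return 2 * steps

def ldb_calls(pass_at_k: int, max_iters: int, n_blocks: int, max_trials: int) -> int:
    """LDB: 1 + P * T * (2 + B * N) -- block-level debugging over the
    control-flow graph."""
    return 1 + pass_at_k * max_iters * (2 + n_blocks * max_trials)

# Example budget: 5 candidates, beam size 2, 4 reasoning steps.
print(orps_calls(5, 2, 4))  # 2 * 5 * (2*4 + 1) = 90
```

Matching these totals against a fixed budget (20, 50, or 100 calls) is how the methods can be aligned for the comparison tables.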
Please refer to the rebuttal for Reviewer 4j6r for PRM training details.

|20 Calls|Pass@1|Tests%|Valid%|Time%|
|-|-|-|-|-|
|Reflexion|37.0|51.7|71.6|119.5|
|LDB|37.0|50.8|66.0|274.7|
|PRM-GPT|44.4|58.1|77.8|**100.1**|
|PRM-Human|40.7|53.1|69.1|124.7|
|ORPS|**48.4**|**64.8**|**84.5**|105.6|

|50 Calls|Pass@1|Tests%|Valid%|Time%|
|-|-|-|-|-|
|Reflexion|40.7|57.6|79.6|130.3|
|LDB|36.4|50.4|66.0|272.6|
|PRM-GPT|37.0|52.9|71.6|**112.7**|
|PRM-Human|38.3|53.5|69.1|137.7|
|ORPS|**55.6**|**72.1**|**89.5**|116.8|

|100 Calls|Pass@1|Tests%|Valid%|Time%|
|-|-|-|-|-|
|Reflexion|39.5|54.4|72.8|113.8|
|LDB|37.0|51.0|66.7|275.7|
|PRM-GPT|35.8|51.1|70.4|127.3|
|PRM-Human|42.0|56.4|76.5|106.6|
|ORPS|**64.2**|**75.4**|**88.9**|**91.0**|

Results indicate that **even when we use weaker, self-generated test cases, ORPS consistently outperforms baselines that use ground-truth test cases given the same compute budget, and it scales up consistently.** We sincerely appreciate your time and constructive feedback, which has helped us strengthen our work. **Given these clarifications and new results, we respectfully request reconsideration of your score.**
Robust Reward Alignment via Hypothesis Space Batch Cutting
Accept (poster)
Summary: The paper introduces a novel method called Hypothesis Space Batch Cutting (HSBC), which iteratively refines a space of potential reward functions by using batches of human preferences to make "cuts" based on a voting function. To handle potentially erroneous human feedback, HSBC employs a conservative cutting method within each batch, ensuring robustness against false preferences while still learning the desired reward function. Claims And Evidence: Yes Methods And Evaluation Criteria: It would be more informative to test the robustness of the method against the different types of human errors described in B-Pref [1]. [1] Lee, Kimin, et al. "B-Pref: Benchmarking preference-based reinforcement learning." arXiv preprint arXiv:2111.03026 (2021). Theoretical Claims: I have not checked the proofs Experimental Designs Or Analyses: Well presented with good ablations, but lacks comparisons with baselines. Supplementary Material: I have skimmed through the appendix of the submitted work Relation To Broader Scientific Literature: It could present an approach to improve the stability of preference-based RL methods. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The method seems interesting and novel and seems to improve performance. However, the set of baselines used is small. It could benefit from comparing with more baselines. Other Comments Or Suggestions: 1. The function f's intuitive meaning and direct relationship to how a preference (true/false) constrains the hypothesis space could be emphasised better for clarity. Specifically, explaining why the $(1-2y)$ term effectively flips the sign of the difference in rewards based on the preference label would be helpful for a clearer understanding. 2. Below Eq. 8, the Heaviside function seems to be described incorrectly Questions For Authors: 1. The B-Pref paper cited in the paper shows there are multiple ways in which humans can be erroneous. How would HSBC deal with each of these individually? 
What kind of errors can it handle more easily? 2. With regards to the geometric interpretation, how does voting affect the cuts? 3. In practice, how much does the learned reward function (with HSBC) deviate from the ground truth? 4. Wouldn’t SURF (Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning) also be a suitable baseline for comparison? 5. Is it necessary to use a sigmoid function in Eq 20 or would any smooth differentiable function be sufficient? 6. While the authors do a good job with the ablations, I wonder how the size of the ensemble M would affect the results. Line 329 (left) ‘to using’ Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Response to reviewer #GLNZ We sincerely thank you for your comments. Below, we address each of your comments in detail. All our responses will be incorporated into the final paper.

### 1. There are multiple ways... What kind of errors can it handle more easily?

Thanks for your question. Among the human error types in B-Pref, our method cannot handle “Equal” preferences, as we do not consider ties. However, our method can naturally handle “Skip” cases, as these can simply be excluded when constructing preference batches. We added evaluations on “Stoc,” “Mistake,” and “Myopic” teachers in the Cartpole task, following the B-Pref hyperparameters: $\beta = 10.0$ for "Stoc," $\epsilon = 0.2$ for "Mistake," and $\gamma = 0.98$ for "Myopic". The conservativeness level was set to 20%, with other settings following the paper. Results confirm our method's robustness across these feedback types.

|Teacher|Oracle|Ours|Stoc|Mistake|Myopic|
|-|-|-|-|-|-|
|Reward|$148.6$|$130.7 \pm 2.0$|$93.6 \pm 30.7$|$122.6 \pm 14.6$|$127.0 \pm 26.5$|

The results show our method handles “Mistake” and “Myopic” teachers well but struggles with “Stoc” teachers. This could be because, in the late stage of learning, the trajectories become close in reward and the “Stoc” teacher tends to provide very noisy labels, which hinders the convergence of the algorithm.

### 2. How does voting affect the cuts?

Thanks for the question. We will use Figure 3 in the paper for illustration. The voting function $V_i (\boldsymbol{\theta})$ controls the aggressiveness of hypothesis space cuts. Retaining only the maximum-vote region $V_i (\boldsymbol{\theta})=N$ would remove all hypotheses disagreeing with any preference, assuming perfect feedback accuracy. However, human errors could then wrongly eliminate the ground-truth reward (left panel of Figure 3). 
To address this, we set a mild vote threshold $V_i (\boldsymbol{\theta}) \geq \lfloor(1-\gamma) N\rfloor -0.5$, preserving a broader hypothesis space containing $\boldsymbol{\theta}_H$ (Lemma 5.2), as shown in the right panel of Figure 3.

### 3. How much does the learned reward deviate from ground truth?

Thanks for the question. To assess consistency, we compute the Pearson correlation between the learned and ground-truth rewards in Cartpole, Walker, and Humanoid. For each task, we generate five 200-step trajectories and report the mean and standard deviation. Results are below:

|Task|false 0%|false 10%|false 20%|false 30%|
|-|-|-|-|-|
|Cartpole| $0.928 \pm 0.025$ | $0.888 \pm 0.031$ | $0.914 \pm 0.022$ | $0.851 \pm 0.062$ |
| Walker| $0.584 \pm 0.035$ | $0.636 \pm 0.060$ | $0.598 \pm 0.070$ | $0.430 \pm 0.062$ |
|Humanoid| $0.673 \pm 0.070$ | $0.657 \pm 0.066$ | $0.546 \pm 0.134$ | $0.500 \pm 0.079$ |

The above results show that the learned reward and the ground-truth reward correlate well. Also, this correlation weakens as the error rate increases, which is consistent with the performance results.

### 4. Comparison with SURF?

Thank you for the comments. In this rebuttal, following the reviewer's and all other reviewers' comments, we have added comparisons with other robust reward learning methods, including SURF. Please refer to our response to question 1 of reviewer #e9yz.

### 5. Is it necessary to use a sigmoid function in Eq. 20...?

No, any smooth differentiable function with a shape similar to the sigmoid can be used, for example, the tanh function.

### 6. Ablation of the ensemble size M

We added an ablation experiment on the ensemble size $M$ on the walker-walk task with a 20% error rate. The final result is shown below:

|$M$|4|8|16|32|
|-|-|-|-|-|
|Reward|$433.9\pm 12.17$|$446.7\pm 14.3$|$447.0\pm 14.4$|$450.2\pm 9.22$|

The ground-truth result is $472.9$. 
It can be seen from the results that increasing the ensemble size $M$ slightly improves performance, but not significantly.

### 7. Clarity of the function f

Thanks for the comment. The function $f$ represents the signed reward gap between ${\xi}^0$ and ${\xi}^1$, with the sign determined by the human label $y$. Because $y$ is the "index" of the preferred trajectory, we set the sign as $1-2y$ to ensure $+1$ when $y=0$ and $-1$ when $y = 1$. As shown in Eq. (6), one human preference $({\xi}^0_{i,j}, {\xi}^1_{i,j}, y_{i,j})$, whether true or false, imposes a constraint on the hypothesis space through the inequality $f({\theta}, {\xi}^0_{i,j}, {\xi}^1_{i,j}, y_{i,j}) \geq 0$.

### 8. Typo in the Heaviside function

We thank the reviewer for pointing this out. This was a typo. The correct notation is $\mathrm{\mathbf{H}}(x) =1$ if $x \ge 0$ and $\mathrm{\mathbf{H}}(x) = 0$ otherwise. We will fix this in the revised version.

### Reference

[1] Lee, Kimin, et al. “B-pref: Benchmarking preference-based reinforcement learning.” arXiv preprint arXiv:2111.03026 (2021). --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed responses. I have raised my score accordingly. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your thoughtful feedback and for recognizing our revisions; we truly appreciate your time and the improved assessment.
Summary: This study addresses the challenge of reward design in reinforcement learning and proposes a robust and efficient preference-based reward alignment method, particularly for noisy human feedback. The method introduces a novel framework called "hypothesis space batch cutting," which iteratively refines the reward hypothesis space using batches of human preferences. The method employs a conservative cutting mechanism to ensure robustness when facing incorrect preferences. In error-free settings, the framework performs comparably to PEBBLE, while significantly outperforming other methods under high preference error rates. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: see weakness Essential References Not Discussed: see weakness Other Strengths And Weaknesses: Advantages: 1. The paper is very clear, and the logic is easy to understand. 2. I can fully understand the motivation of this paper; performing effective RLHF learning on noisy data is a very important issue. 3. The theoretical analysis is thorough, and the method makes sense. Disadvantages: 1. Does gamma really play an adaptive role? 2. Although the method is highly sophisticated, the experiments are the weak point of this paper. The paper only compares with the PEBBLE algorithm, which is a very basic baseline, while recently there have been methods with better performance. Can HSBC work effectively across most algorithms? It would be better to compare it with more powerful baselines. 3. Why are the experiments conducted in a custom environment from DMC? Most similar papers on online RLHF/PbRL use the B-Pref benchmark environments. Will you release these new environments as benchmarks? 3.5. Similar to the above point, I observed that a possible reason might be that the environment requires large-scale parallelism to support MPPI. 
Would this affect the practicality in other environments or the transferability to real-world tasks? 4. Another issue is the lack of discussion of other reinforcement learning methods with noisy human feedback in the related work and experiments. Although this type of work is novel and important, some papers have conducted preliminary studies, such as: [1] Cheng J, Xiong G, Dai X, et al. RIME: Robust Preference-based Reinforcement Learning with Noisy Human Preferences[J]. [2] Li Y, Das S, Taylor M E. CANDERE-COACH: Reinforcement Learning from Noisy Feedback[J]. arXiv preprint arXiv:2409.15521, 2024. [3] Xue W, An B, Yan S, et al. Reinforcement learning from diverse human preferences[J]. arXiv preprint arXiv:2301.11774, 2023. [4] Yuan Y, Hao J, Ma Y, et al. Uni-rlhf: Universal platform and benchmark suite for reinforcement learning with diverse human feedback[J]. arXiv preprint arXiv:2402.02423, 2024. 5. Have you tried using real human feedback? Real human feedback should inherently contain a certain amount of noise. You may refer to [4]. Other Comments Or Suggestions: I think this is a good paper, but the experiments are somewhat weak, and it lacks discussion and comparison with relevant literature. If the authors can provide reasonable rebuttals, I am willing to raise the score. Questions For Authors: ----- After rebuttal, I raised my score from 2 to 3. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response to Reviewer #sofd We sincerely appreciate your thoughtful feedback and constructive comments on our paper. Below, we address each of your concerns in detail. All our responses will be incorporated into the final paper.

### 1. Does gamma really play an adaptive role?

Thank you for the comments. Currently, $\gamma$ is fixed as a (conservative) estimate of the batch error rate, but our method could be extended to adaptively adjust $\gamma$ if real-time error estimation is available, which we leave for future work.

### 2. It would be better to compare it with more powerful baselines

Thank you for the comments. In this rebuttal, following the reviewer's and all other reviewers' comments, we have added comparisons with other robust reward learning methods. Please refer to the comparison results in our response to question 1 of reviewer #e9yz.

### 3. Why are the experiments conducted in a custom environment from DMC? ...

Thank you for the comments. We chose to implement our approach in new benchmark environments because we need GPU-accelerated simulation to accelerate MPPI policies with parallel sampling. All of our custom testing environments will be released to support future research and standardized benchmarking in this area. Regarding practicality in other simulated environments, the proposed method is directly applicable. Environments without GPU support or parallel simulation may limit the speed of MPPI-based policies. In addition, as we pointed out in Section 6.3, it is possible to use other policies (such as RL) to generate trajectories for comparison. As for real-world transferability, recent works such as [5] have demonstrated that sampling-based control policies with parallelized simulation backends like MJX can be effectively transferred to real-world robotic tasks.

### 4. Lack of discussion of other RL methods with noisy human feedback in related work and experiments.

Thank you for the comments. 
Both RIME [1] and CANDERE-COACH [2] improve learning from noisy labels by filtering human feedback—RIME uses KL-divergence to filter and flip corrupted labels, while CANDERE-COACH trains a neural classifier to predict preferences and filters based on discrepancies with real labels. Xue’s work [3] employs an encoder-decoder structure for reward models, estimating confidence levels from latent distributions and performing a weighted average for better predictions. Unlike these methods, ours does not explicitly assess feedback quality but instead updates the hypothesis space conservatively based on entire preference batches. Uni-RLHF [4] introduces an annotation platform and large-scale feedback dataset, using accuracy thresholds and manual verification, which are impractical for online reward learning due to the absence of ground truth and the high cost of manual inspection. In the revised version, we will include the above discussion. ### 5. Have you tried using real human feedback? Thank you for the comments. We conducted a new experiment evaluating HSBC with real human feedback on CartPole and Walker. Four volunteers provided trajectory preferences, which were used for reward learning. HSBC ($\gamma=0.4$) was compared to PEBBLE under the same settings as in the main paper, with a small amount of simulated feedback for pretraining. The results (sum of reward), presented below, show performance across different numbers of human preferences, with the first table for CartPole and the second for Walker. 
|# human preferences|0|10|20|30|40|50|Oracle|
|-|-|-|-|-|-|-|-|
|Ours (reward)|$45.1 \pm 44.7$|$80.5 \pm 50.8$|$132.8 \pm 19.8$|$147.4 \pm 11.1$|$132.5 \pm 32.8$|$133.2 \pm 31.2$|$148.6$|
|PEBBLE (reward)|$43.3 \pm 20.2$|$49.9 \pm 8.3$|$49.4 \pm 14.6$|$44.5 \pm 17.3$|$63.2 \pm 27.3$|$73.5 \pm 40.2$|$148.6$|

|# human preferences|0|20|40|60|80|100|Oracle|
|-|-|-|-|-|-|-|-|
|Ours (reward)|$181.7 \pm 92.3$|$309.0 \pm 33.4$|$301.9 \pm 38.4$|$310.3 \pm 41.7$|$331.4 \pm 41.2$|$356.2 \pm 66.0$|$472.9$|
|PEBBLE (reward)|$131.3 \pm 85.2$|$154.2 \pm 78.9$|$175.5 \pm 126.2$|$326.5 \pm 15.2$|$200.1 \pm 108.5$|$177.9 \pm 123.6$|$472.9$|

The results show that our method achieves more stable convergence and superior performance when handling real human feedback.

### Reference

[1] Cheng, et al. "RIME: Robust preference-based reinforcement learning with noisy preferences." arXiv preprint arXiv:2402.17257 (2024).
[2] Li, et al. "CANDERE-COACH: Reinforcement learning from noisy feedback." arXiv preprint arXiv:2409.15521 (2024).
[3] Xue, et al. "Reinforcement learning from diverse human preferences." arXiv preprint arXiv:2301.11774 (2023).
[4] Yuan, et al. "Uni-RLHF: Universal platform and benchmark suite for reinforcement learning with diverse human feedback." arXiv preprint arXiv:2402.02423 (2024).
[5] Li, et al. "DROP: Dexterous reorientation via online planning." arXiv preprint arXiv:2409.14562 (2024).

---

Rebuttal Comment 1.1: Comment: I’m very sorry for the late reply. I mistakenly used an 'official comment', which is not visible to the authors; I have now changed it to a rebuttal response. I realized this and hope everything is still in time. Thank you for the response. Most of my concerns have been addressed, but I still have a few minor questions: As a planning and sampling technique, MPPI introduces additional costs. Does the stability of HSBC come from this additional planning?
Methods similar to MPPI are not very effective in environments that do not support large-scale parallelization. Even though I believe that methods based on Isaac Gym represent the direction of the future, none of the other baselines consider a parallel-sampling setting. Would this kind of comparison introduce new unfairness?

---

Reply to Comment 1.1.1: Comment: # Further Response to Reviewer #sofd

Dear Reviewer #sofd, We appreciate the reviewer’s acknowledgement of our response and are glad to see the major concern is addressed. We sincerely thank you for your thoughtful and constructive reply. Here are some further clarifications about your concerns on the usage of MPPI in our paper.

1. For fairness in all comparisons, including those presented in our paper, we replace the original RL policies in all baseline methods with MPPI-based planners. The reward learning components remain consistent with their original implementations to ensure a fair evaluation of planning performance.
2. We adopt MPPI as our planner instead of training an RL policy from scratch, as it avoids the need for extensive policy learning. Since our primary focus is on reward learning, the choice of planner is orthogonal: any controller capable of generating trajectories according to reward functions (e.g., sampling-based MPC like MPPI, or RL) can be used, as we discussed in the paper. Notably, several recent works ([1], [2]) have also employed simulator-based MPC as the planner in reward learning frameworks.
3. As shown in prior works (e.g., [3], [4], [5]), many successful reward learning methods employ non-parallelized simulators as predictive models within MPC frameworks. While our approach could similarly be extended to such settings, the current implementation leverages advanced parallelized environments like MJX for improved efficiency. Our method is also compatible with other GPU-parallel environments such as Isaac Sim, MuJoCo-Warp, and MuJoCo Playground.
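For readers unfamiliar with MPPI, the sampling-based planning step discussed above can be sketched roughly as follows. This is a minimal single-step numpy illustration with generic, user-supplied `dynamics` and `cost` callables; it is our own sketch (vectorized MPPI on MJX would batch the rollouts on GPU), not the implementation used in the paper:

```python
import numpy as np

def mppi_step(dynamics, cost, x0, u_nom, n_samples=256, sigma=0.5, lam=1.0, seed=None):
    """One MPPI update: perturb the nominal control sequence, roll every sample
    out through the dynamics, and exponentially re-weight by trajectory cost."""
    rng = np.random.default_rng(seed)
    horizon, udim = u_nom.shape
    noise = rng.normal(0.0, sigma, size=(n_samples, horizon, udim))
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = np.asarray(x0, dtype=float)
        for t in range(horizon):
            u = u_nom[t] + noise[k, t]
            x = dynamics(x, u)
            costs[k] += cost(x, u)
    # Softmax weights over sampled trajectories (lower cost -> higher weight).
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    # The weighted average of the perturbations updates the nominal controls.
    return u_nom + np.einsum('k,ktu->tu', w, noise)
```

In a reward-learning loop, `cost` would be the negated learned reward, which is exactly why the planner choice is orthogonal to the reward-learning component.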
If the reviewer finds our clarifications and new results satisfactory, we would be grateful if the score could be updated to reflect the improvements and contributions more accurately. ## Reference [1] Yu, Wenhao, et al. "Language to rewards for robotic skill synthesis." arXiv preprint arXiv:2306.08647 (2023). [2] Liang, Jacky, et al. "Learning to learn faster from human feedback with language model predictive control." arXiv preprint arXiv:2402.11450 (2024). [3] Zakka, Kevin, et al. "Robopianist: Dexterous piano playing with deep reinforcement learning." arXiv preprint arXiv:2304.04150 (2023). [4] Li, Albert H., et al. "Drop: Dexterous reorientation via online planning." arXiv preprint arXiv:2409.14562 (2024). [5] Hess, Adrian, et al. "Sampling-Based Model Predictive Control for Dexterous Manipulation on a Biomimetic Tendon-Driven Hand." arXiv preprint arXiv:2411.06183 (2024).
Summary: The paper introduces Hypothesis Space Batch Cutting (HSBC), a framework for robust reward alignment in reinforcement learning (RL). HSBC addresses the challenge of learning reward functions from human preferences, particularly in the presence of false or noisy feedback. The core idea is to iteratively refine a hypothesis space of reward models by "cutting" regions inconsistent with human preferences. Batches of preferences are queried based on disagreement among current hypotheses, and a voting function aggregates these preferences to determine the cuts. To handle errors, a conservative cutting strategy ensures that up to $\gamma N$ false preferences per batch are tolerated. Theoretical guarantees include PAC learning bounds and robustness proofs. Empirical results demonstrate that HSBC outperforms representative methods like PEBBLE under high false preference rates (up to 30%) across diverse tasks (DM-Control, dexterous manipulation, quadruped locomotion). Claims And Evidence: The paper’s claims are generally supported by clear evidence. Methods And Evaluation Criteria: The methods and evaluation criteria are well-aligned with the problem:
- Hypothesis Space Cutting: The geometric interpretation of hypothesis updates is intuitive and addresses the limitations of prior preference-based RL methods (e.g., PEBBLE’s vulnerability to noise).
- Disagreement-Based Queries: Actively selecting trajectory pairs where current hypotheses disagree ensures efficient learning, as shown in the ablation study for $\eta$ (Figure 7b).
- Benchmark Tasks: The use of DM-Control, dexterous manipulation, and locomotion tasks (e.g., Go2-Standup) covers a broad range of control challenges, validating HSBC’s generality.

Theoretical Claims: Theoretical claims are rigorously presented.
Experimental Designs Or Analyses: The experimental design is generally sound, but minor issues exist:
- Baseline Comparison: PEBBLE is used as the primary baseline, but comparisons with other robust Preference-based Reinforcement Learning (PbRL) methods (e.g., RIME [Cheng et al., 2024] or Xue et al. [2023]) would strengthen the evaluation.

Supplementary Material: Yes. Appendix E for understanding Theorem 4.2. Relation To Broader Scientific Literature: The paper situates HSBC within the PbRL literature, contrasting with prior work on hypothesis space reduction (e.g., Sadigh et al. [2017]) and robust learning (e.g., Heo et al. [2025]). Key contributions include:
- Conservative Voting: A novel approach to handle false preferences without prior distribution assumptions, unlike methods like mixup [Heo et al., 2025].

Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths**
- The combination of batch cutting, voting functions, and conservatism provides a unique approach to robust PbRL.
- HSBC’s robustness is critical for real-world applications (e.g., human-robot interaction) where errors are inevitable.

**Weaknesses**
- All experiments are simulation-based. Validation in a practical setting would enhance credibility.

Other Comments Or Suggestions: N/A Questions For Authors: 1. Scalability to Neural Networks: Given that the theoretical bounds assume a finite VC-dimension, how does HSBC generalize to high-dimensional neural reward functions used in experiments? Could the bounds be adapted for neural models, or is this a limitation of the current framework? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response to Reviewer #e9yZ

We sincerely appreciate your thoughtful feedback and comments on our paper. Below, we address each of your concerns in detail. All our responses will be incorporated into the final paper.

### 1. Comparisons with other robust PbRL methods?

Thank you for highlighting this point. Following the reviewer’s and other reviewers’ comments, in this rebuttal we have added a comparison with other robust reward learning methods, including RIME [1], SURF [2], MAE [3] and t-CE [4]. The comparison is performed on the cartpole-swingup and walker-walk tasks, with 20% and 30% error rates. For the comparison with RIME [1] and SURF [2], we used the same settings as the original PEBBLE baseline for collecting trajectory segments. In RIME, the KL-divergence between predicted preference probabilities and labels filters untrustworthy labels and flips them for improved learning. We set the RIME parameters to $\alpha = 0.25$, $\beta_{max} = 3.0$, $\beta_{min} = 1.0$, $\tau_{upper} = -\ln(0.005)$, $k = 1/60$ for the cartpole-swingup task and $\alpha = 0.3$, $\beta_{max} = 2.2$, $\beta_{min} = 1.7$, $\tau_{upper} = -\ln(0.005)$, $k = 1/100$ for the walker-walk task. For SURF, we changed the length of collected segments to 60 and used temporal data augmentation to crop segments of a fixed length of 50. We chose $\tau = 0.95$, $\mu = 1.0$ and $\lambda=1.0$ for SURF in both tasks. All algorithm parameters are chosen for the best performance of the baseline methods. In MAE [3], the original loss function is replaced with $L_{MAE} = \mathbb{E}|\hat{y} - P_{\theta}|$ for robust reward learning. In t-CE [4], the loss is replaced with $L_{t\text{-CE}} = \mathbb{E}\sum_{i=1}^t \frac{(1-\hat{y}^T P_{\theta})^i}{i}$. Here, $\hat{y}$ is the one-hot version of the noisy label and $P_{\theta}$ is the predicted probability of human preference on trajectory pairs. We choose $t=4$ in the t-CE loss for its best performance.
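To make the two robust losses above concrete, here is a minimal numpy sketch. The function names, batch shapes, and example values are our own illustration, not the baselines' code; the t-CE loss is implemented here as the first $t$ terms of the Taylor series $-\log p_y = \sum_i (1-p_y)^i/i$, following Feng et al. [4], which keeps the loss bounded and hence tolerant to mislabeled preferences:

```python
import numpy as np

def mae_loss(p_pred, y_onehot):
    """MAE between predicted preference probabilities and one-hot labels."""
    return float(np.mean(np.abs(np.asarray(y_onehot) - np.asarray(p_pred))))

def tce_loss(p_pred, y_onehot, t=4):
    """Taylor cross entropy: truncated series for -log(p_y).
    Unlike the full cross entropy, the truncated series is bounded by
    sum_{i<=t} 1/i even when p_y -> 0, so a single bad label cannot
    dominate the gradient."""
    p_y = np.sum(np.asarray(y_onehot) * np.asarray(p_pred), axis=-1)
    return float(np.mean(sum((1.0 - p_y) ** i / i for i in range(1, t + 1))))
```

For a confident correct prediction (e.g., $p_y = 0.9$) the truncated loss closely tracks the full cross entropy, while for a fully mislabeled pair ($p_y \to 0$) it saturates at $1 + 1/2 + \dots + 1/t$ instead of diverging.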
The results (sum of reward) are shown in the table below:

| Task|Oracle|Ours|PEBBLE|RIME|SURF|MAE|t-CE|
|-|-|-|-|-|-|-|-|
| Cartpole-Swingup-20% | 148.6 | $130.7 \pm 2.0$ | $52.3 \pm 26.8$ | $75.0 \pm 45.3$ | $98.0 \pm 35.8$ | $98.6 \pm 25.9$ | $73.3 \pm 16.3$ |
| Cartpole-Swingup-30% | 148.6 | $111.3 \pm 16.8$ | $42.8 \pm 23.3$ | $81.0 \pm 37.0$ | $62.3 \pm 42.0$ | $59.9 \pm 30.7$ | $52.0 \pm 30.5$ |
| Walker-Walk-20% | 472.9 | $447.0 \pm 14.4$ | $401.9 \pm 37.6$ | $408.4 \pm 24.8$ | $397.2 \pm 30.7$ | $425.5 \pm 30.2$ | $410.8 \pm 19.9$ |
| Walker-Walk-30% | 472.9 | $417.2 \pm 12.2$ | $277.0 \pm 62.3$ | $310.2 \pm 84.0$ | $292.0 \pm 69.0$ | $288.3 \pm 139.0$ | $345.6 \pm 52.2$ |

The results show that the proposed HSBC method outperforms the baselines in robust learning under high error rates. Among the baselines, RIME excels at handling false preference labels with its label denoising design. In the Walker task, the t-CE loss also achieves robust learning. We will include the above baseline comparison in the revised version of the paper.

### 2. Validation in a practical setting?

We appreciate this observation. To address this, we evaluated HSBC with real human feedback; please refer to our response to question 5 of reviewer #sofd.

### 3. The theoretical bounds assume a finite VC-dimension, how does HSBC generalize to high-dimensional neural reward functions?

We thank the reviewer for the helpful comments. The VC-dimension is finite for certain neural network classes, such as multilayer perceptrons (MLPs) with ReLU activations, where it scales with the number of parameters and layers [5]. For such networks, the upper bound of the sample complexity applies. However, the exact VC-dimension of general neural networks, especially deep architectures with complex connectivity and unbounded weight norms, remains an open problem. In these cases, Theorem 4.2 provides a worst-case upper bound, which may be conservative.
Tighter complexity bounds are an important direction for future work. We appreciate your time and effort in reviewing our work. Your feedback has been invaluable in strengthening our manuscript.

### Reference

[1] Cheng, et al. "RIME: Robust preference-based reinforcement learning with noisy preferences." arXiv preprint arXiv:2402.17257 (2024).
[2] Park, et al. "SURF: Semi-supervised reward learning with data augmentation for feedback-efficient preference-based reinforcement learning." arXiv preprint arXiv:2203.10050 (2022).
[3] Ghosh, et al. "Robust loss functions under label noise for deep neural networks." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 31. No. 1. 2017.
[4] Feng, et al. "Can cross entropy loss be robust to label noise?" Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence. 2021.
[5] Bartlett, et al. "Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks." Journal of Machine Learning Research 20.63 (2019): 1-17.
On the Guidance of Flow Matching
Accept (spotlight poster)
Summary: In this paper, the authors provide a general flow matching framework for conditional generation based on an energy function. The authors propose a unifying framework that constructs guidance for arbitrary source distributions and couplings. They derive various guidance methods based on MC estimation, which they then approximate with a Taylor expansion, further simplify using affine path assumptions, and finally simplify again under stronger uncoupled affine Gaussian path assumptions. Each of these provides one guidance method, all of which are theoretically justified and then empirically analyzed. Then, the authors also provide derivative-free guidance and an inverse problem guidance. Finally, the authors also give training-based exact guidances. These are all examined on toy examples, RL, and image problems. Claims And Evidence: Upon reviewing this paper, I have identified several claims that appear to require further clarification or support. Firstly, the title and abstract may be perceived as somewhat misleading, as they refer to "Guidances of Flow matching" without explicitly indicating that the primary focus is on energy-guided sampling. To avoid potential confusion, I think the authors should change the title and abstract to emphasize this key aspect of the work. In Theorem 3.1, the authors assume that $\mathcal{P}$ equals or can be approximated as 1. Then, the authors say that this is reasonable for independent couplings or mini-batch OT flow matching with small batch size. Doesn't OT with a sufficiently small batch size basically behave almost like independent coupling? The reason I am pointing this out is that later, at the beginning of page 4, the authors put a lot of emphasis on "guidance substantially different from diffusion one", and as one of the examples they give dependent coupling, which does not seem convincing given this assumption on $\mathcal{P}$.
Furthermore, I question the claim that experiments on synthetic datasets demonstrate the effectiveness and correctness of the proposed guidance methods. While some methods do perform well, others struggle to replicate simple base distributions, as illustrated in Figure 2. Specifically, only two methods ($g^{\text{MC}}$ and $g^{\phi}$) achieve satisfactory results; however, the Monte Carlo method requires a nested MC estimation, which may not be practical. Furthermore, similar limitations are observed in the image experiments, where the authors acknowledge that $g^{\text{MC}}$ does not perform well due to high complexity and sample budget constraints. Regarding classifier-free guidance, the authors mention its relevance but do not provide sufficient context or explanation. According to existing literature (e.g., Lipman et al.'s "Flow Matching Guide and Code"), classifier-free guidance can be readily applied using specific transformations between velocity fields and score functions, even though the theoretical derivations in their paper that establish the CFG motivation are limited to Gaussian paths. The authors briefly mention CFG through "These losses open the design space of the training loss for classifier-free guidance of flow matching (Ho & Salimans, 2022; Zhang et al., 2023).", however it is unclear what the authors meant by this. I recommend rephrasing or removing the sentence to accurately reflect this. Methods And Evaluation Criteria: The proposed methods and evaluation criteria appear to be well-reasoned and sound. However, I have some questions regarding the results presented in Figure 2. Specifically, it is intriguing that only two of the proposed methods demonstrate satisfactory performance on several simple and toy examples. This raises concerns about the generalizability and robustness of the other methods. What is really surprising is that some of these methods seem to exhibit significant improvements when applied to more complex problems.
It would be helpful if the authors could provide further insight into this, as it may indicate that there are specific conditions or characteristics of the problems that favor certain methods over others. Theoretical Claims: I did not go through the proofs in the supplementary material, but the theoretical claims in the main paper seem correct. However, the calculation of $g_t^{\text{local}}$ should be referenced in the main paper, as it is not clear how the formulation follows directly. If I am not mistaken, this is calculated in line 1080 in the appendix? If so, shouldn't the first line of Eqn (68) say $g^{\text{local}}$ rather than $g^{\text{cov}}$? Experimental Designs Or Analyses: Please see above. Supplementary Material: I did not review the supplementary material in detail. Relation To Broader Scientific Literature: The contribution seems timely and relevant, particularly considering the importance of and need for guidance methods in flow matching. Essential References Not Discussed: I am not aware of any relevant works that exist but have not been cited. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions:
- Figure 1 is not completely clear. It would be nice if there was an explanation of it in the supplementary material at least.
- Shouldn't the first sentence in Section 2 (Background) say data samples $x_1$ rather than $x_t$?
- Furthermore, in line 060, it should say "it has been proven" rather than "it has been proved".
- Lines 154-156, where it is stated that "the new VF has the conditional probability path as that of the original": it is unclear why. The authors should elaborate further.

Questions For Authors: If the authors could address my concerns above I would be willing to increase my score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review and for acknowledging the contribution and potential impact of our work. We will address your concerns in the following:

> Q1: Misleading expression of "guidance"

We apologize for the confusion. We will switch to 'energy guidance' to further enhance clarity.

> Q2: The assumption of $\mathcal{P}=1$ seems to contradict the claim in the paper that the guidance is substantially different from diffusion guidance.

1. From the empirical perspective, we argue that $\mathcal{P}=1$ is a valid choice for dependent couplings in realistic datasets. As we show in the table below, the learned VFs of OT-CFM and uncoupled CFM on CelebA have a small relative error at all flow time steps, so their guidance VFs are approximately the same, validating our approximation of $\mathcal{P}=1$.

|Flow time|0.05|0.25|0.5|0.75|0.95|
|-|-|-|-|-|-|
|Relative L2|$0.0382$|$0.0297$|$0.0271$|$0.0312$|$0.0717$|

2. Theoretically, with a slow-varying $J$, the approximation $\mathcal{P}=1$ holds for any coupling. A detailed discussion can be found in our response to reviewer YEWM (Q2).
3. We would like to emphasize that in addition to dependent couplings, our framework is also substantially different from diffusion guidance because (a) it extends to any source distribution and conditional probability paths, as is also recognized by reviewers erey and reFm, and (b) a different theoretical framework is provided, as our derivations do not start from score-based models.

> Q3. The Monte Carlo guidance has the limitation of lower sample efficiency, so is it practical?

1. We argue that $g_t^{\text{MC}}$ is practical (at least in some tasks) as it shows satisfactory performance in the widely used offline RL task of D4RL. The dimensionality ($23\times 20$) is already high enough to be representative of many practical generative modeling tasks, including molecular structure generation. 2.
Besides, there are many variance-reducing MC techniques, such as importance sampling. Specifically, we can use another guided VF to sample from an alternative distribution $\tilde{p}$ such that $\frac{e^{-J(x_1)}}{\tilde{p}(z)}$ has lower variance. For the details of the new $g^{\text{MC-IS}}$, please refer to our response to reviewer YEWM (Q1). It should be noted that this estimation is still unbiased. We conduct experiments to validate the effectiveness of this method, and the results in the table below showcase a huge improvement, making it comparable to state-of-the-art methods on the image inverse problem, where the vanilla $g_t^{\text{MC}}$ fails.

|Method|FID$\downarrow$|LPIPS$\downarrow$|PSNR$\uparrow$|SSIM$\uparrow$|
|-|-|-|-|-|
|$g^{\text{MC-IS}}$|7.863|0.1889|23.63|0.8429|
|$g^{\text{MC}}$|22.75|0.5589|8.67|0.3484|
|$\Pi GDM$|15.27|0.1753|25.48|0.8700|

> Q4. Why do many methods perform poorly in toy examples?

Actually, the toy examples are not trivial: sampling from $\frac{1}{Z}p(x_1)e^{-J(x_1)}$ requires theoretically exact guidance. Otherwise, the samples will be distorted or exhibit mode collapse. Gradient-based guidance methods ($g^{\text{cov-A}}$, $g^{\text{cov-G}}$) rely on rough approximations, while approximate guidance ($g^{\text{sim-MC}}$, CEG) produces biased guidance vector fields, leading to distorted sample distributions.

> Q5. Why do these methods exhibit significant improvements when applied to more complex problems?

Not all tasks require exact energy-guided sampling as in the toy example. There are tasks where sampling from a distorted, rather than exact, target distribution $\frac{1}{Z}p(x_1)e^{-J(x_1)}$ may be practically useful, as long as both $e^{-J(x_1)}$ and $p(x_1)$ are high. The image inverse problem in our experiment is one instance where the approximate methods demonstrate improved performance.
On the contrary, the offline RL task is conditioned on different initial conditions, necessitating stable guidance, so theoretically exact energy guidance is more robust.

> Q6. More explanation is required for classifier-free guidance in the current framework (end of Section 3.5, lines 347-349)

In our framework, directly following our definition of the general guidance VF, classifier-free guidance is obtained by simply subtracting the original VF from the conditioned VF (which also extends previous CFG to dependent coupling and arbitrary source distributions). However, as you pointed out, our training-based guidance is actually the flow matching version of diffusion classifier-based guidance, rather than classifier-free guidance. We will revise this paragraph and move it to the discussion section as well.

> Q7. The calculation of $g^{\text{local}}$ is unclear.

You are correct. We will add the derivation of $g^{\text{local}}$ to the main text in the revised version.

> Q8. Other typos.

Thank you. We will fix these. For 4, the conditional probability path is a design choice in flow matching, and we have thus assumed the new conditional probability path to be identical. We will make the assumption more explicit.

---

Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. I strongly hope that you will change the name to "energy guidance" to avoid future confusion. I have thus raised my score.

---

Reply to Comment 1.1.1: Comment: Thank you for your support. We will change the name.
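To make the velocity-space relation mentioned in Q6 above concrete (the guidance term is the conditioned VF minus the original VF), here is a trivial sketch; the function name and the guidance scale `w` are our own illustration, with `w = 1` recovering the conditional VF:

```python
import numpy as np

def guided_velocity(v_cond, v_uncond, w=1.0):
    """Classifier-free-style guidance for flow matching: the guidance term
    (v_cond - v_uncond) is the conditioned VF minus the original VF,
    scaled by a guidance weight w and added back to the original VF."""
    v_cond = np.asarray(v_cond, dtype=float)
    v_uncond = np.asarray(v_uncond, dtype=float)
    return v_uncond + w * (v_cond - v_uncond)
```

Setting `w > 1` extrapolates past the conditional VF, the usual over-guidance regime in diffusion CFG; the point of the rebuttal is that this subtraction is how CFG arises as a special case of the general guidance definition.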
Summary: This paper proposes a general framework for guidance in continuous flow matching that includes arbitrary source distributions, conditional paths, and coupling (to some extent). Guidance is viewed as tackling energy-based sampling or posterior sampling given an existing flow matching model. This general theory is shown to cover past instances of flow matching guidance as approximations and the framework leads to several new guidance methods including an asymptotically exact Monte-Carlo version. The paper considers both training-free and training-based guidance, and also introduces several loss functions for training the guidance based on the framework. The methods are then evaluated and compared on synthetic datasets, image inverse problems, and offline RL settings, where performance differences are understood relative to the approximations being made. ## update after rebuttal The rebuttal addressed performance in high dimensions and showed Monte-Carlo guidance could be useful for image inverses. While the training-based losses' performance in practical tasks seems still lacking, I increased my score as the rebuttal increased my confidence that the results are both theoretically and practically valuable. Claims And Evidence: - Claims are supported throughout. The framework is demonstrated useful through both organizing and understanding past research, as well as understanding relative performance on the experimental tasks Methods And Evaluation Criteria: - The choice of methods, datasets, and evaluation criteria are appropriate for analyzing guidance. Theoretical Claims: - Checked proofs briefly in Appendix which appeared correct, but did not go through in detail. - The guidance in Theorem 3.1 is exact with respect to the true marginal vector field, but not necessarily a trained marginal vector field. This is not an correctness issue in the proofs, but could be emphasized more in the main text. 
- Similarly, the conditional probability path and conditional VF are assumed the same for the guided distribution, and most of the paper considers independent coupling. While the framework is still quite general, the authors could clarify these limitations further.

Experimental Designs Or Analyses:
- Experiment details for the RL, image inverse, and synthetic data settings appear sound.

Supplementary Material:
- Reviewed Sections A, B, C, including the proofs, limitations, and experimental details.

Relation To Broader Scientific Literature: Key contributions:
- Framework to understand flow matching guidance, covering and organizing many past approaches
- General guidance methods that expand the scope of when guidance can be applied (arbitrary sources, conditional probability paths)

Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths:
- Generally well-written with ample details provided; Figure 1 is a helpful reference organizing past research on flow matching guidance
- Code to reproduce experiments is provided
- Highly likely to be used as a reference for future research on guidance with flow matching models

Weaknesses:
- Usefulness of the Monte-Carlo guidance and the training-based losses in practical settings is underdeveloped

Other Comments Or Suggestions:
- Figure 1 contains notation (see general guidance expression) that disagrees with the rest of the text.

Questions For Authors: 1. Do you anticipate that the training-based losses and / or asymptotically exact Monte-Carlo guidance will be useful in high dimensions? To my understanding, the training-based methods were generally unhelpful outside the synthetic dataset example and were not tried in the image inverse setting. Further, the Monte-Carlo guidance was also not useful for image inverses, as discussed in Appendix C.3. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for acknowledging the theoretical and empirical soundness, the presentation, and the ample details of our work, as well as the contributions and potential benefits for future works in the field. We would like to address your concerns in the following:

> Q1: The correctness of the guidance VF assumes the trained VF to be identical to the true VF.

Thank you for raising this question. We indeed made this assumption throughout the paper. We will make it explicit in the revised manuscript.

> Q2: There are assumptions on the specific type of guidance VF and coupling, limiting the framework's generality.

Thank you for your comment, and we appreciate your recognition of the overall generality of our guidance. We will add a more detailed discussion on these assumptions in the revised version, but would like to make the following clarifications: For the conditional VF assumption, it is indeed an interesting research question to explore other possible conditional probability paths that may have different advantages, e.g., enhancing "straightness" of the VF for accelerated sampling. Nevertheless, our assumption of the conditional VF is natural, and it allows us to simplify the form of guidance, covering many existing guidance methods. For the assumption of $\mathcal{P}=1$ in the dependent coupling case:

1. Empirically, we argue that $\mathcal{P}=1$ is a valid approximation for dependent couplings in realistic datasets such as CelebA $256\times256$ in our experiment. As we show in the table below, the learned VFs for mini-batch OT CFM (batch size as large as 128) and uncoupled CFM are close at different time steps, with a relative error of $\sim 10^{-2}$, so their guidance VFs can be approximately the same, which validates our approximation of $\mathcal{P}=1$.

|Flow time|0.05|0.25|0.5|0.75|0.95|
|-|-|-|-|-|-|
|Relative L2|$0.0382\pm0.0076$|$0.0297\pm0.0043$|$0.0271\pm0.0032$|$0.0312\pm0.0033$|$0.0717\pm 0.0096$|

2.
Theoretically, the $\mathcal{P}=1$ assumption holds for any dependent coupling when $J$ is slow-varying. A more detailed discussion can be found in our response to reviewer YEWM (Q2).

> Q3: Inconsistency of the general guidance expression between Figure 1 and the main text.

Thank you for pointing this out. We will fix it in the revised manuscript.

> Q4: Are MC and training-based guidance useful in practice? What do you think of the experimental results of these guidance methods in high dimensions?

1. We believe $g^{\text{MC}}$ is of practical use in high dimensions. First, $g_t^{\text{MC}}$ is practical (at least in some tasks) as it shows satisfactory performance in the widely used offline RL task of D4RL, where the sample dimensionality is not very small ($23\times 20$ per sample). Besides, there are many techniques that can be readily applied to enhance the efficiency of $g_t^{\text{MC}}$. For example, we can adopt importance sampling to reduce the variance of the MC estimation. Specifically, using

$$ g_t^{\text{MC-IS}}(x_t) = \mathbb{E}_{x_1,x_0\sim \tilde{p}(z)} \left[\frac{{p}(z)}{\tilde{p}(z)} \left(\frac{e^{-J(x_1)}}{Z_t} - 1\right) \frac{p_t(x_t|z)}{p_t(x_t)} v_t (x_t|z) \right], $$

$$ Z_t(x_t) = \mathbb{E}_{x_1,x_0\sim \tilde{p}(z)} \left[\frac{{p}(z)}{\tilde{p}(z)} e^{-J(x_1)} \frac{p_t(x_t|z)}{p_t(x_t)} \right], $$

we can select an alternative distribution $\tilde{p}$ such that $\frac{e^{-J(x_1)}}{\tilde{p}(z)}$ has lower variance (i.e., when $e^{-J(x_1)}$ is large, $\tilde{p}(z)$ is also large, and vice versa). This can be achieved by using another guided VF (e.g., one based on $g^{\text{cov-A}}$) to sample from $\tilde{p}$; the ratio $\frac{p(z)}{\tilde p(z)}$ can then be estimated using, for example, the Hutchinson trace estimator to preserve scalability [1]. It should be noted that this estimation is still unbiased.
We conduct experiments to validate the effectiveness of this method, and the results in the table below showcase a huge improvement, making it comparable to state-of-the-art methods on the image inverse problem, where the vanilla $g_t^{\text{MC}}$ fails.

|Method|FID$\downarrow$|LPIPS$\downarrow$|PSNR$\uparrow$|SSIM$\uparrow$|
|-|-|-|-|-|
|$g^{\text{MC-IS}}$|7.863|0.1889|23.63|0.8429|
|$g^{\text{MC}}$|22.75|0.5589|8.67|0.3484|
|$\Pi GDM$|15.27|0.1753|25.48|0.8700|

2. As for the training-based guidance, it has the potential to be widely applicable because it demonstrates theoretical exactness in the synthetic dataset experiments. Although its performance is currently limited, possibly due to high variance induced by the dependency among multiple neural networks in training and inference, this can potentially be addressed by dynamically estimating $Z_t$ rather than training another NN to approximate it, or by fine-tuning the guidance VF on the actual original VF.

[1] Lipman, et al. "Flow Matching for Generative Modeling." ICLR 2023.
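As a toy illustration of the estimators above, the following numpy sketch implements a self-normalized variant in which the importance weights and the $Z_t$ estimate share the same samples; the weight decomposition, the self-normalization, and all names and shapes are our own illustrative simplifications, not the paper's implementation:

```python
import numpy as np

def mc_energy_guidance(v, energy, log_ratio=None, log_q=None):
    """Self-normalized MC estimate of g ~ E[(e^{-J}/Z - 1) * w * v], where
    w_i is proportional to (p(z_i)/p~(z_i)) * p_t(x_t|z_i) over the N sampled
    couplings z_i = (x_0, x_1), and Z is estimated from the same samples.
    v: (N, d) conditional VFs v_t(x_t|z_i); energy: (N,) values J(x1_i);
    log_ratio, log_q: optional (N,) log-weight components."""
    energy = np.asarray(energy, dtype=float)
    n = energy.shape[0]
    lw = np.zeros(n)
    for part in (log_ratio, log_q):
        if part is not None:
            lw = lw + np.asarray(part, dtype=float)
    w = np.exp(lw - lw.max())
    w /= w.sum()                              # self-normalized weights
    ew = np.exp(-(energy - energy.min()))     # e^{-J} up to a constant shift
    factor = ew / np.sum(w * ew) - 1.0        # (e^{-J}/Z_hat - 1); shift cancels
    return np.sum(w[:, None] * factor[:, None] * np.asarray(v, float).reshape(n, -1), axis=0)
```

A useful sanity check: a constant energy yields exactly zero guidance (the factor vanishes), while an energy that rewards larger $x_1$ tilts the estimate toward the corresponding conditional VFs.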
Summary: This paper provides a unified perspective on the guidance of flow matching and proposes a range of guidance methods to make it more general. The results show how different guidance methods suit different tasks.

Claims And Evidence: All the claims are clear and convincing.

Methods And Evaluation Criteria: This paper proposes different guidance methods for different tasks.

Theoretical Claims: NA

Experimental Designs Or Analyses: NA

Supplementary Material: NA

Relation To Broader Scientific Literature: NA

Essential References Not Discussed: NA

Other Strengths And Weaknesses:

Strengths:
- This paper provides a unified framework for flow-matching guidance applicable to various distributions and couplings.
- The training-free ($g^{\text{MC}}$) and training-based ($g_\varphi$) guidance techniques are novel.
- Demonstrates effectiveness across synthetic data, image tasks, and RL, showcasing adaptability.
- Contributions and derivations are well-structured and accessible.

Weakness:
- Does not fully address scenarios where couplings significantly influence outcomes.

Other Comments Or Suggestions: Test more tasks to highlight real-world utility.

Questions For Authors: Can $\mathcal{P}$ be dynamically estimated to relax the $\mathcal{P} \approx 1$ assumption?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Thank you for acknowledging our contributions to the theoretical framework, the novelty of the guidance methods, the soundness of our empirical validation, and the clarity of presentation. We respond to your questions below:

> Q1: How to further address the scenarios where the $\mathcal{P}\approx 1$ approximation does not apply?

First, we would like to emphasize that $\mathcal{P}=1$ is a valid approximation for dependent couplings on realistic datasets such as CelebA $256\times256$ in our experiment. As we show in the table below, the learned VFs of mini-batch OT CFM (batch size as large as 128) and uncoupled CFM are close at different flow time steps, with a relative error of $\sim 10^{-2}$, so their guidance VFs can be approximately the same, which validates our approximation of $\mathcal{P}=1$.

|time|0.05|0.25|0.5|0.75|0.95|
|-|-|-|-|-|-|
|Relative L2|$0.0382\pm0.0076$|$0.0297\pm0.0043$|$0.0271\pm0.0032$|$0.0312\pm0.0033$|$0.0717\pm 0.0096$|

To produce exact guidance for dependent couplings, one approach is to dynamically adapt the guidance VF, as you pointed out. Since $\mathcal{P}\neq 1$ changes high-dimensional integrals, it is costly to compute its influence directly. However, if we make assumptions on the form of $p(x_0|x_1)$ of the original VF, we can derive the corresponding $g^{\text{MC}}$ and approximate guidance. Meanwhile, the approximation error caused by $\mathcal{P}=1$ can also be compensated by adapting the source distribution, as discussed in our response to YEWM (Q2). Therefore, we can also parameterize the source distribution and optimize it to recover the exactness of the energy guidance. We will add a discussion on these future directions in the revised manuscript.

> Q2: Test more tasks to highlight real-world utility.

Thank you for your suggestions. The experiments in our paper include data modalities from time-series data to images, and are realistic and high-dimensional.
Therefore, we believe our empirical evaluation can effectively reveal the utility of different guidance methods on different types of realistic generative modeling tasks. In addition, we provide an additional experiment on a class-label-conditioned image generation task to increase the variety of guidance energy functions. With the CFM model trained in our image inverse problem experiment, we use a gender classifier to produce the objective function $J$, and consider two cases where either male or female is set as the target label. The table below shows the classification accuracy of the samples generated by different guidance methods:

|Guidance \ Target label|Male|Female|
|-|-|-|
|$g^{\text{cov-A}}$|98%|85%|
|$g^{\text{MC}}$|82%|68%|

These results reveal that on image generation tasks, the performance gap between $g^{\text{MC}}$ and $g^{\text{cov-A}}$ narrows when the objective function is less complicated.
Summary: The paper introduces a framework for guiding flow-matching models, which are advanced generative models. It extends the concept of guidance from diffusion models to a more general form. The framework includes training-free, asymptotically exact guidance using Monte Carlo methods, new training-based guidance losses, and approximate guidance methods that cover classical guidance techniques as special cases. The paper validates these innovations through theoretical analysis and experiments on synthetic datasets, offline reinforcement learning, and image inverse problems, demonstrating improved effectiveness and flexibility.

## update after rebuttal

Claims And Evidence: The claims in the paper are supported by thorough theoretical derivations and extensive experimental results. The introduction of the Monte Carlo-based training-free guidance is demonstrated to be asymptotically exact, backed by theoretical analysis and pseudocode.

Methods And Evaluation Criteria: Yes. The proposed methods are well-suited for the addressed generative modeling challenges.

Theoretical Claims: Theoretical claims are carefully proven, such as the equivalence of the general flow matching guidance to classical diffusion guidance under certain assumptions.

Experimental Designs Or Analyses: Yes. The experimental design is sound, employing appropriate baseline comparisons.

Supplementary Material: Yes. Check the code link attached to the paper.

Relation To Broader Scientific Literature: The paper builds on existing work in flow matching and diffusion models.

Essential References Not Discussed: No

Other Strengths And Weaknesses: No

Other Comments Or Suggestions: No

Questions For Authors:
- Could the efficiency of the Monte Carlo sampling method be further improved for high-dimensional image tasks?
- What are the major limitations you're currently addressing in $\mathcal{P}$ estimation for strong coupling?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Thank you for acknowledging the theoretical and empirical soundness of our work, as well as the potential impact in addressing the challenges in the generative modeling field. We answer your questions in the following:

> Q1: How can the MC sampling efficiency be improved?

Many existing techniques can be readily applied to enhance the efficiency of $g_t^{\text{MC}}$. For example, we can adopt importance sampling to reduce the variance of the MC estimation. Specifically, using

$$ g_t^{\text{MC-IS}}(x_t) = \mathbb{E}_{x_1,x_0\sim \tilde{p}(z)} \left[\frac{{p}(z)}{\tilde{p}(z)} (\frac{e^{-J(x_1)}}{Z_t} - 1) \frac{p_t(x_t|z)}{p_t(x_t)} v_t (x_t|z) \right], $$

$$ Z_t(x_t) = \mathbb{E}_{x_1,x_0\sim \tilde{p}(z)} \left[\frac{{p}(z)}{\tilde{p}(z)} e^{-J(x_1)} \frac{p_t(x_t|z)}{p_t(x_t)} \right], $$

we can select an alternative distribution $\tilde{p}$ such that $\frac{e^{-J(x_1)}}{\tilde{p}(z)}$ has lower variance (i.e., when $e^{-J(x_1)}$ is large, $\tilde{p}(z)$ is also large, and vice versa). This can be achieved by using another guided VF to sample from $\tilde{p}$, such as using $g^{\text{cov-A}}$. Then $\frac{p(z)}{\tilde p(z)}$ can be estimated using, for example, the Hutchinson trace estimator to preserve scalability [1]. It should be noted that this estimation is still unbiased.

We conduct experiments to validate the effectiveness of this method, and the results in the table below showcase a performance comparable to the state-of-the-art methods on the image inverse problem, where the vanilla $g_t^{\text{MC}}$ fails.

|Method|FID$\downarrow$|LPIPS$\downarrow$|PSNR$\uparrow$|SSIM$\uparrow$|
|-|-|-|-|-|
|$g^{\text{MC-IS}}$|7.863|0.1889|23.63|0.8429|
|$g^{\text{MC}}$|22.75|0.5589|8.67|0.3484|
|$\Pi GDM$|15.27|0.1753|25.48|0.8700|

> Q2: What are the limitations of our $\mathcal{P}=1$ approximation?

First, we would like to clarify our approximation.
Our framework allows us to choose any $\pi'(x_0|x_1)$ (and hence $\mathcal{P}$) as long as the source distribution is consistent: $p(x_0) = \int\pi'(x_0|x_1)\frac{1}{Z}p(x_1)e^{-J(x_1)}dx_1$. In other words, the error induced by setting $\mathcal{P}=1$ can be characterized by the deviation in either the source distribution or the vector field. In the former case, we assume the guidance VF to be exact, i.e., $\pi'(x_0|x_1) = \pi(x_0|x_1)$. Here the error is induced by the fact that we should have sampled from $\int\pi'(x_0|x_1)\frac{1}{Z}p(x_1)e^{-J(x_1)}dx_1$, rather than the original $p(x_0)$. In the latter case, we assume the source distribution to be unchanged, i.e., we need $\pi'(x_0|x_1)=p(x_0)$ to make the source distributions compatible automatically. In this case, the error is caused by approximating $\mathcal{P}=\frac{p(x_0)}{\pi(x_0|x_1)}$ with $1$.

1. In the case of strongly dependent couplings, $\mathcal{P}\approx 1$ still holds as long as $J$ varies slowly. This is demonstrated by the small deviation in the error of the source distribution (assuming $\pi'(x_0|x_1)=\pi(x_0|x_1)$) as we discussed above. If $J$ is always near its average value, the new source distribution $\int\pi(x_0|x_1)\frac{1}{Z}p(x_1)e^{-J(x_1)}dx_1$ is almost $\int\pi(x_0|x_1)p(x_1)dx_1 = p(x_0)$, which is the original source.

2. Nevertheless, when the coupling is strong and $J$ varies intensively, a more complicated treatment is required for exact guidance. For example, we can try to sample from the new source distribution $\int\pi(x_0|x_1)\frac{1}{Z}p(x_1)e^{-J(x_1)}dx_1$. Although one may argue that this may be equally difficult as sampling exactly from the target distribution $\frac{1}{Z}p(x_1)e^{-J(x_1)}$, it may be learned more easily, as the target distribution is potentially smoothed after being convolved with the "kernel" $\pi(x_0|x_1)$.

We will add this discussion to the revised manuscript.

[1] Lipman, et al., Flow Matching for Generative Modeling, ICLR 2023.
Sample Complexity of Distributionally Robust Off-Dynamics Reinforcement Learning with Online Interaction
Accept (poster)
Summary: Other reviews checked. I keep my score. Thanks for the rebuttal.

This paper investigates two types of robust Markov decision processes (RMDPs) in tabular reinforcement learning: CRMDPs and RRMDPs. It introduces hard instances and visitation ratios, and establishes both regret lower and upper bounds. The authors also present empirical results to support their findings.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: Yes. This paper presents hard instances to illustrate that a well-defined exploration hardness condition is essential for general online learning in robust Markov decision processes (RMDPs). When state visits are exponentially rare, online learning becomes intractable. The authors introduce the supremal visitation ratio as a metric for measuring exploration difficulty in RMDPs. They propose computationally efficient algorithms for CRMDPs and RRMDPs with various uncertainty sets and provide theoretical guarantees for these methods. In addition, the paper establishes regret lower bounds, showing that the supremal visitation ratio is an unavoidable factor in the sample complexity of online RMDP learning.

Experimental Designs Or Analyses: Yes. The authors assess the algorithms in a simulated RMDP and the Frozen Lake environment, demonstrating their effectiveness under significant distribution shifts.

Supplementary Material: The theoretical results seem intuitive, so I only reviewed the supplementary material briefly. It appears solid to me.

Relation To Broader Scientific Literature: Off-dynamics reinforcement learning (RL) has attracted much attention due to scenarios where the transition dynamics of the deployment environment differ from those during the training process. This approach requires exploration strategies that proactively address distributional shifts, ensuring robustness to dynamic changes.
Essential References Not Discussed: n/a

Other Strengths And Weaknesses: In other parts.

Other Comments Or Suggestions: The proposed algorithms follow the robust value iteration template. Assumption 4.5 is interesting. My understanding is that the intuition behind it is that collecting data in the nominal MDP is somewhat analogous to an offline RL problem in the true MDP, requiring adequate data coverage in the true MDP. I'm curious whether this necessity assumption could be relaxed to require coverage of only $d^{\pi^*}$ instead of covering $d^\pi$ for all $\pi$. The KL bound in Theorem 4.19 appears to have an exponential dependence on $H$. Could you provide some intuition for this bound? This paper presents many upper bound results. Do you have any comments or comparisons with other existing upper bound results?

Questions For Authors: Some bonus terms include the $1/K$ component, while others do not. Could you provide some intuition behind the presence of the $1/K$ term in the bonus functions? Additionally, why do some $P$ notations include $w$ while others do not, such as on line 274?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

We thank the reviewer for the positive feedback on our work. We hope our response will fully address all of the reviewer's questions.

---

### 1. Discussion on possible relaxation of Assumption 4.5

We appreciate the reviewer's insightful question and agree that Assumption 4.5 might be relaxed to require coverage only of $d^{\pi^*}$ instead of $d^\pi$ for all $\pi$, similar to the off-policy problem. While this modification is theoretically feasible, it is not straightforward due to the dynamics shift in the deployment environment. It would likely involve revising the way we decompose the regret, leading to non-trivial changes in our current proof structure.

---

### 2. Explanation of the KL bound in Theorem 4.19

As stated in Remark 4.2 of Blanchet et al. (2023), the $e^H$ dependency for KL bounds is commonly observed in the literature. This is due to the logarithmic function used in the definition of the KL divergence.

---

### 3. Do you have any comments or comparisons with other existing upper bound results?

Thank you for the question. We would like to point out that no previous work has explored the exact same setting we are studying. The work of Lu et al. (2024), which most closely resembles ours in the constrained TV setting, relies on a different assumption that cannot be generalized to the other settings considered in our paper. Unlike their approach, we do not concentrate on one specific scenario of uncertainty sets (the constrained TV setting) but instead explore the general online learning of robust MDPs.

---

### 4. Intuition behind the presence of the $1/K$ term in the bonus functions

The $1/K$ terms in the bonus functions are not intrinsic. In the proof, we constructed $\epsilon$-nets to bound one part of the estimation, which leads to an additional term of $\mathcal{O}(\epsilon)$ or $\mathcal{O}(\sqrt{\epsilon})$, where the value of $\epsilon$ can be arbitrary.
We set $\epsilon=\mathcal{O}(1/K)$ or $\epsilon=\mathcal{O}(1/K^2)$, which results in the extra $1/K$ term in the bonus. For a detailed explanation, please refer to Lemmas E.2, E.12, E.16, F.2, F.6, and F.11, respectively, for each specific setting.

---

### 5. Clarification about notations such as on line 274

As defined in Definition 4.3, $P^o$ represents the nominal transition, $P^w$ represents the worst-case transition, and $P^\pi$ is the corresponding visitation measure. These are distinctly different concepts.

---

We hope we have addressed all of your questions/concerns. If you have any further questions, we would be more than happy to answer them.

### References:

[1] Blanchet, J., Lu, M., Zhang, T., & Zhong, H. (2023). Double pessimism is provably efficient for distributionally robust offline reinforcement learning: Generic algorithm and robust partial coverage. Advances in Neural Information Processing Systems, 36, 66845-66859.

[2] Lu, M., Zhong, H., Zhang, T., & Blanchet, J. (2024). Distributionally robust reinforcement learning with interactive data collection: Fundamental hardness and near-optimal algorithm. arXiv preprint arXiv:2404.03578.
Summary: The paper considers two types of robust MDP formulations: one with constraints and one with regularization. A new value update is proposed for both, which is shown to guarantee regret bounds in the tabular case. Lower and upper regret bounds are given as a function of the supremal visitation ratio, which is introduced in the paper. The authors also show cases where the regret is especially high due to the supremal visitation ratio. Finally, the authors provide some simple experiments to support their claims.

Claims And Evidence: The paper is mainly theoretical and its claims are well supported.

Methods And Evaluation Criteria: The authors provide experiments that show general behavior as a function of the supremal visitation ratio, but these experiments do not seem to reflect the behavior of the regret they found ($\sqrt{C_{vr}}$). It would also have been better if they could show that the behavior reflects the other parameters in their bound (e.g. $S$, $A$).

Theoretical Claims: No.

Experimental Designs Or Analyses: No.

Supplementary Material: No.

Relation To Broader Scientific Literature: The key contributions seem to be well related to relevant literature.

Essential References Not Discussed: No, to my best knowledge.

Other Strengths And Weaknesses: The paper is written very clearly; it has a solid theoretical result that can be significant to the community. The weaknesses of the paper in my opinion are:

1. The bounds in 3.2 and 3.3 are not explained enough. What is the difference between all of them? Do the terms we see there make sense?
2. It's nice that the proofs are in the appendix, but it would be good if you shared the general intuition behind them - what was the main idea of the proof. This is, in my view, much more important than showing the bounds for another case.
3. On the third page the text looks cramped. I hope it's an artifact in my viewer, since changing the spaces between lines is prohibited as far as I know.
4.
The experiments are not very important because the result is mainly theoretical - but still. The first environment is too simple ($H=3$ is barely RL). As I wrote before, it would make sense to show that the regret bounds behave as you would expect for all of their parameters ($S$, $A$, $K$, $C_{vr}$, ...). I didn't see a discussion of why, in Figure 1, ORBIT performs worse when there is little or no perturbation.

Other Comments Or Suggestions: None.

Questions For Authors: Can you provide a proof sketch for the main results in the paper (Thm. 4.14 and also 4.17)?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Thank you for your valuable time and effort in providing detailed feedback on our work. We hope our response will fully address all of your questions.

---

### 1. Explanation of the bounds in 3.2 and 3.3

Sections 3.2 and 3.3 present update formulas for different types of robust sets (constrained vs. regularized) and different $f$-divergences (TV, KL, and $\chi^2$). Although these updates all follow a similar pattern (applying an optimism-based principle and a robust Bellman operator), their specific forms differ because the dual formulations vary across robust set types and divergence measures. Hence, the resulting $Q$-function estimates and bonus terms change accordingly. We have outlined the final equations for the updates in Sections 3.2 and 3.3 due to space constraints, but we agree that a more in-depth explanation would help clarify how each term arises from the respective dual formulation. We plan to include further details in the final version of the paper to make the derivations and resulting bounds more transparent.

---

### 2. Proof sketch of this paper

We summarize the core proof idea as follows. For the upper bounds, we decompose the regret as

$$\text{Regret}=\sum_{k=1}^K\big(V_1^{*,\rho}-V_1^{{\pi^k},\rho}\big)=\sum_{k=1}^K\big(V_1^{*,\rho}-V_1^{k,\rho}\big)+\sum_{k=1}^K\big(V_1^{k,\rho}-V_1^{{\pi^k},\rho}\big).$$

We use the dual formulation to estimate $V$ and add an extra bonus term to maintain an optimistic estimate, ensuring that the first term is non-positive. From the definition of $V^k$, it follows that the second term can be controlled by $\sum_{k=1}^K\sum_{h=1}^H\mathbb{E}_{\{P_h^{w,k}\}_{h=1}^H,\pi^k}\big[2\,\text{bonus}_h^k\big]$. The cumulative bonus is of an order no larger than $\sqrt{K}$, as derived from the concentration inequalities.
For the lower bounds, we construct a key state that is difficult to explore in the nominal environment but easier to explore in the perturbed environment. Consequently, insufficient knowledge about this key state can lead to significant regret. We will add the proof sketch in the final version.

---

### 3. The text display on the third page

We believe this might be due to the floating environment in LaTeX.

---

### 4. Further clarifications on experiment setup and observed performance

In the case of $C_{vr}$, we constructed a well-designed MDP environment where the hyperparameter $\beta$ allows for adjustments to the value of $C_{vr}$ to observe performance changes. However, the $\frac{1}{2}$-order dependency on $C_{vr}$ represents merely a worst-case bound. Furthermore, the regret formulations incorporate additional terms, so we do not anticipate results to be strictly proportional to $\sqrt{C_{vr}}$. Here, we merely demonstrate a general positive correlation to validate our intuition, akin to an ablation study.

Regarding $S$, $A$, and other variables, they are not the primary focus within this topic. The crucial parameters here are $K$ (indicating efficient learning of the problem) and $C_{vr}$ (demonstrating that $C_{vr}$ is a precise metric for the difficulty of the environment). Unlike $C_{vr}$, which is relatively easier to manipulate, altering the values of $S$ and $A$ would lead to a complete structural change in the environment. Given that our environment is specifically designed, it is challenging to introduce $S$ and $A$ as variables into the existing problem.

To answer the reviewer's question, we have conducted a new experiment on the Gambler's Problem (inspired by Panaganti et al., 2022, Section 6). We set the probability of heads to $p_h=0.6$ for the nominal environment and $p_h=0.4$ for the perturbed environment.
We applied the constrained TV setting with $H=30$ and varied the size of $S$ from 15 to 55, reporting the results across 10 training runs as shown in the following table. This demonstrates that performance deteriorates as the sizes of $S$ and $A$ increase.

|size of S|15|23|31|39|47|55|
|-|-|-|-|-|-|-|
|mean of reward|0.1294|0.1254|0.1192|0.1240|0.1140|0.0968|
|std of reward|0.0088|0.0134|0.0164|0.0163|0.0219|0.0118|

---

### 5. Explanation of performance under minimal perturbation in Figure 1

As is well known in robust reinforcement learning, algorithms designed to hedge against large uncertainties naturally trade off some performance in near-nominal settings. Therefore, while ORBIT typically outperforms non-robust approaches under significant perturbations, it may appear less effective when perturbations are small or absent, because it prioritizes worst-case scenarios over short-term gains.

---

We hope we have addressed all of your questions/concerns. If you have any further questions, we would be more than happy to answer them.

### References:

[1] Panaganti, K., & Kalathil, D. (2022, May). Sample complexity of robust reinforcement learning with a generative model. In International Conference on Artificial Intelligence and Statistics (pp. 9582-9602). PMLR.
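For reference, the nominal dynamics of the Gambler's Problem experiment mentioned above can be sketched with standard value iteration. This is a minimal non-robust sketch: the goal capital of 15 and the reward-on-goal structure are illustrative assumptions (following the classic formulation), and the paper's robust ORBIT updates are not reproduced here.

```python
import numpy as np

def gambler_value_iteration(goal=15, p_h=0.6, tol=1e-10):
    """Value iteration for the classic Gambler's Problem: states are
    capital 0..goal, the stake a is in 1..min(s, goal - s), the capital
    increases by a with probability p_h (heads) and decreases by a
    otherwise, and reward 1 is earned only upon reaching `goal`."""
    V = np.zeros(goal + 1)
    V[goal] = 1.0
    while True:
        delta = 0.0
        for s in range(1, goal):
            stakes = range(1, min(s, goal - s) + 1)
            best = max(p_h * V[s + a] + (1 - p_h) * V[s - a] for a in stakes)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    return V

V = gambler_value_iteration()
```

With a favorable coin ($p_h = 0.6$), the optimal value from every nonzero capital is strictly positive, and the values are non-decreasing in the capital.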
Summary: This paper explores robust Markov decision processes (RMDPs) in the context of off-dynamics reinforcement learning, where distribution shifts occur between training and deployment environments. The authors investigate two variants: constrained RMDPs (CRMDPs) and regularized RMDPs (RRMDPs). They propose computationally efficient algorithms for both settings and establish sublinear regret guarantees, demonstrating the effectiveness of their approach in mitigating performance degradation due to environmental shifts.

Claims And Evidence: The paper is well-structured, with clear results. Section 3 introduces the ORBIT algorithm (Online Robust Bellman Iteration), which serves as a foundation for addressing both RMDPs and CRMDPs. Section 4 establishes the corresponding regret bounds, demonstrating the theoretical performance of the proposed methods. Finally, Section 5 presents numerical experiments that validate the effectiveness of the approach in practical scenarios.

Methods And Evaluation Criteria: I am not entirely clear on the key insights from Figure 2. If I understand correctly, in the CRMDP setting, the TV-ambiguity set appears to yield the best performance, while in the RRMDP setting, the $\chi^2$-ambiguity set performs better. Can this be theoretically justified, or is it purely an empirical observation? If it is the latter, what makes this result interesting, and how can we be sure it is not merely an artifact of the Frozen Lake problem setting? This also raises a broader question: How should one decide which ambiguity set to use? Can you provide theoretical insights to guide this choice, and do the numerical results align with such theoretical expectations?

Theoretical Claims: I am wondering why the authors decided to choose exactly these 3 ambiguity sets: (1) TV, (2) KL, (3) $\chi^2$? Could you also include the popular Wasserstein or MMD ambiguity set?
Experimental Designs Or Analyses: See points raised above.

Supplementary Material: The paper contains a substantial appendix, which is beyond the scope of what I could verify.

Relation To Broader Scientific Literature: The related literature is adequately summarized. I have no complaints here.

Essential References Not Discussed: -

Other Strengths And Weaknesses: MDPs under environment shifts are a crucial and challenging topic in both the ML and OR communities. This paper explores an interesting direction in addressing these challenges, presenting a well-written and relatively accessible exposition.

Other Comments Or Suggestions: -

Questions For Authors:
1) Can you better motivate the specific choice of ambiguity sets?
2) All uncertainty sets are defined as rectangular ambiguity sets. Uncertainty sets defined from maximum likelihood principles, however, turn out to be non-rectangular. Can you also handle those?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

We thank the reviewer for the positive feedback on our work. We hope our response will fully address all of the reviewer's questions.

---

### 1. The key insights from Figure 2

The primary purpose of Figure 2 is to show that our algorithm converges stably by the end of training in a fixed environment, rather than to compare performance under different ambiguity sets. In the Frozen Lake experiment, our main goal is to highlight the algorithm's robustness, specifically its ability to handle worst-case scenarios effectively.

---

### 2. How should one decide which ambiguity set to use?

As depicted in Figure 3(a), the computational costs vary with different ambiguity settings, where the regularized setting generally proves to be more computationally efficient than the constrained setting. Selecting an appropriate ambiguity set (TV vs. KL vs. $\chi^2$) may be specific to the application and the structure of the problem. For hyperparameters like the radius or the regularization parameters, we can select them via hyperparameter tuning based on empirical performance.

---

### 3. Can you better motivate the specific choice of ambiguity sets?

Our primary goal is to study the general online robust problem. We focus on these three ambiguity sets because they are widely used and extensively investigated in the reinforcement learning literature (Panaganti & Kalathil, 2022; Yang et al., 2022; Xu et al., 2023; Shi et al., 2024), enabling more direct comparisons with existing studies.

---

### 4. Could you also include the popular Wasserstein or MMD ambiguity set?

Thanks for the great suggestion! Our algorithm has the potential to be adapted to these two ambiguity sets since we address the general online robust problem. However, this adaptation might require additional analysis, as these ambiguity sets lack a closed-form dual formulation, posing challenges for direct application of our current algorithm.
This will be a great future direction for online robust MDPs.

---

### 5. Can you also handle those non-rectangular ambiguity sets?

Non-rectangular ambiguity sets differ fundamentally from those discussed in this paper, and our algorithm is not equipped to handle them due to their inherent complexity. To our knowledge, nearly all studies in the robust MDP literature use rectangular ambiguity sets. Moreover, Wiesemann et al. (2013) showed that solving DRMDPs with general uncertainty sets can be NP-hard. We therefore identify this as an open problem for future research.

---

We hope we have addressed all of your questions/concerns. If you have any further questions, we would be more than happy to answer them.

### References:

[1] Panaganti, K., & Kalathil, D. (2022, May). Sample complexity of robust reinforcement learning with a generative model. In International Conference on Artificial Intelligence and Statistics (pp. 9582-9602). PMLR.

[2] Yang, W., Zhang, L., & Zhang, Z. (2022). Toward theoretical understandings of robust Markov decision processes: Sample complexity and asymptotics. The Annals of Statistics, 50(6), 3223-3248.

[3] Xu, Z., Panaganti, K., & Kalathil, D. (2023, April). Improved sample complexity bounds for distributionally robust reinforcement learning. In International Conference on Artificial Intelligence and Statistics (pp. 9728-9754). PMLR.

[4] Shi, L., Li, G., Wei, Y., Chen, Y., Geist, M., & Chi, Y. (2023). The curious price of distributional robustness in reinforcement learning with a generative model. Advances in Neural Information Processing Systems, 36, 79903-79917.

[5] Wiesemann, W., Kuhn, D., & Rustem, B. (2013). Robust Markov decision processes. Mathematics of Operations Research, 38(1), 153-183.
Summary: This paper considers distributionally robust RL with online access to the nominal model, including the constrained robust MDP (CRMDP) and the regularized robust MDP (RRMDP) frameworks. They propose the *supremal visitation ratio* $C_{vr}$ as a hardness measure and show that this measure is unavoidable in the regret lower bounds. They also provide numerical results to validate how $C_{vr}$ affects learning efficiency.

Claims And Evidence: See Question 1.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: See Questions.

Experimental Designs Or Analyses: The experimental setups in Section 5 are clearly described.

Supplementary Material: N/A

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: No.

Other Strengths And Weaknesses:

**Strengths:**
1. This paper introduces the supremal visitation ratio as a hardness measure and shows it is unavoidable in the lower bounds.
2. They present numerical results showing how the hardness measure affects learning performance.

**Weaknesses:** See Questions.

Other Comments Or Suggestions:
1. What are the definitions of $P_h^\pi$ and $d_h^\pi$ in Assumption 4.5?

Questions For Authors:
1. Can you compare ORBIT and the algorithms in (Liu & Xu, 2024a) and (Lu et al., 2024)?
2. In Figure 3(a), why is the average training time for the $\chi^2$ cases significantly longer?
3. Is it correct that the fail-state conditions 4.1 and 4.2 are special cases of a bounded visitation measure ratio?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Thank you for your valuable time and effort in providing detailed feedback on our work. We hope our response will fully address all of your questions.

---

### 1. The definitions of $P_h^\pi$ and $d_h^\pi$ in Assumption 4.5

The definitions are provided in Definition 4.3, where $P_h^\pi$ represents the visitation measure under the worst-case scenario, and $d_h^\pi$ represents the visitation measure in the nominal environment, both induced by the policy $\pi$.

---

### 2. Comparison of ORBIT with the algorithms in Liu and Xu (2024a) and Lu et al. (2024)

All three works investigate the online robust MDP setting using value-iteration-based methods. In particular, Liu and Xu (2024a) focuses on the linear function approximation regime, whereas Lu et al. (2024) and our work consider the tabular setting. However, Liu and Xu (2024a) and Lu et al. (2024) address only the constrained TV-divergence scenario. In contrast, our study addresses a broader range of online robust problems, including robust sets defined by three different $f$-divergences, as well as constrained RMDPs and regularized RMDPs, the latter of which was not considered in their work. Consequently, our algorithm provides distinct update formulations and bonus terms across six settings.

---

### 3. Discussion on the average training time for $\chi^2$ cases in Figure 3(a)

Lemmas E.15 and F.10 reveal that deriving the dual formulation for the $\chi^2$ update requires solving a multi-dimensional optimization problem, which leads to longer training times. In contrast, the constrained KL setting (Lemma E.11) relies only on one-dimensional optimization. Furthermore, for the constrained TV (as illustrated in Algorithm 2), regularized TV (Lemma F.1), and regularized KL settings (Lemma F.5), the optimization problems have closed-form solutions. This is why the average training time for the $\chi^2$ cases is significantly longer.

---

### 4.
Is it correct that the fail-state conditions 4.1 and 4.2 are special cases of bounded visitation measure ratio? No, our assumptions do not imply theirs, nor do theirs imply ours. The fail-states condition was specifically designed for the constrained TV robust sets, as explained in Proposition 4.4. In contrast, our assumption addresses the general online robust problem and is applicable to various other $f$-divergence robust sets. --- We hope we have addressed all of your questions/concerns. If you have any further questions, we would be more than happy to answer them. ### References: [1] Liu, Z., & Xu, P. (2024, April). Distributionally robust off-dynamics reinforcement learning: Provable efficiency with linear function approximation. In International Conference on Artificial Intelligence and Statistics (pp. 2719-2727). PMLR. [2] Lu, M., Zhong, H., Zhang, T., & Blanchet, J. (2024). Distributionally robust reinforcement learning with interactive data collection: Fundamental hardness and near-optimal algorithm. arXiv preprint arXiv:2404.03578.
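As a hedged illustration of the contrast drawn in item 3 of this rebuttal (closed-form solutions for TV, one-dimensional optimization for constrained KL, multi-dimensional optimization for $\chi^2$), the standard one-dimensional dual of a KL-constrained robust expectation can be sketched in plain Python. This is the generic textbook dual, not the paper's actual Lemma E.11, and the supremum is approximated by a simple grid search:

```python
import math

def kl_robust_value(values, probs, delta, lam_grid):
    """Worst-case expectation inf_q E_q[V] over {q : KL(q || p) <= delta},
    via the standard one-dimensional dual
        sup_{lam > 0}  -lam * log E_p[exp(-V / lam)] - lam * delta,
    with the supremum approximated by a grid search over lam."""
    m = min(values)  # shift by min(V) before exponentiating, for numerical stability
    best = -math.inf
    for lam in lam_grid:
        s = sum(p * math.exp(-(v - m) / lam) for v, p in zip(values, probs))
        best = max(best, m - lam * math.log(s) - lam * delta)
    return best

# Toy nominal model: uniform distribution over four outcomes.
vals, probs = [0.0, 1.0, 2.0, 3.0], [0.25] * 4
grid = [0.01 * 1.2 ** k for k in range(60)]
robust = kl_robust_value(vals, probs, delta=0.1, lam_grid=grid)
# robust lies between min(V) = 0 and the nominal mean 1.5
```

Because this dual is a scalar optimization, a one-dimensional search suffices; for the $\chi^2$ set the analogous dual involves several coupled variables, which is consistent with the longer training times explained above.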
Federated Learning for Feature Generalization with Convex Constraints
Accept (poster)
Summary: For generalization in the FL framework, this paper proposes FedCONST, which imposes more updates with larger probabilities on under-learned features via centralization and orthogonal constraints at clients. Various FL algorithms can obtain performance improvements with the proposed constraints.

## update after rebuttal
Thank you for your rebuttal. I have reviewed the authors’ rebuttals to all the reviews. I think most of the concerns are resolved in the rebuttal. Therefore, I want to increase my score to 3. Weak accept.

Claims And Evidence: The positive correlation between weight magnitude and GSNR, gradient variance reduction, and drift diversity are supported by experiments.

Methods And Evaluation Criteria: Evaluating various FL methods with the additional proposed convex constraints under cross-device and cross-silo conditions makes sense.

Theoretical Claims: The theoretical analysis is mainly based on “Understanding Why Neural Networks Generalize Well Through GSNR of Parameters, ICLR’20”.

Experimental Designs Or Analyses: The experiments are somewhat small-scale. Federated domain generalization benchmark datasets such as PACS and OfficeHome would better show the effectiveness of the generalization.

Supplementary Material: I have briefly reviewed the implementation details and additional experiments in the supplementary material.

Relation To Broader Scientific Literature: This paper proposes a federated domain generalization method by introducing convex constraints.

Essential References Not Discussed: Most of the related papers are well cited.

Other Strengths And Weaknesses:
- Strengths
  - It is an interesting idea to include a convex constraint on the local model updates by considering the aggregated global model.
  - Impressive performance improvement can be observed in Table 1.
- Weaknesses
  - Due to the centralization constraint, the aggregated weight exists within the convex constraint area.
However, for the aggregated model to lie within the generalization area, the convex constraint area should be contained within the generalization area. I guess that the generalization area is generally smaller than the convex constraint area.
- Some missing details and typos make the paper difficult to understand, e.g., the formulations of the center gradient and project gradient, and whether ‘i’ denotes a sample or a channel.
- The experiments were conducted on small-scale datasets with limited model architectures (mainly conv networks). It would be better to evaluate Conjecture 1 (global weight magnitude is a reliable proxy for feature strength) with various network architectures.

Other Comments Or Suggestions: $w_i^l$ might be $w_c^l$ below Eq. (8)

Questions For Authors: 1) I am not sure why the convex constraints guarantee that the aggregated global model belongs to the generalization area, as shown in Figure 1. Could you explain this with more intuitive examples? 2) In Algorithm 1, what are the exact formulations of the center gradient $C(g^k_{m,t})$ and the project gradient $P(g^k_{m,t})$? And why are the center gradient and project gradient calculated serially? 3) Is the proposed method applicable to transformer architectures?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Generalization Area We appreciate your insightful concern regarding the relationship between the convex constraint region and the generalization area. You raised an important point: if the generalization area is narrower than the constraint region, then the aggregated model could potentially drift away from the generalizable zone. In response, we offer an intuitive argument suggesting that the generalization area can, in fact, be sufficiently large. Consider the $l$-th layer representation of a neural network: $$ \Phi^l_c = \sigma((\Phi^{l-1})^\top W^l_c) $$ where $\sigma$ is the activation function (e.g., $\tanh$), and $\Phi^l_c$ denotes a representation within the generalization regime. We ask: under what conditions does $$ \sigma((\Phi^{l-1})^\top (W^l_c + \Delta W^l_c)) \approx \Phi^l_c $$ still hold? ### Case 1: Large $\|W^l_c\|$ (saturation regime) If the norm of $W^l_c + \Delta W^l_c$ is large, then the activation function saturates. In this case, $\sigma$ becomes relatively insensitive to $\Delta W^l_c$, so a wide range of perturbations still yield generalizable representations. ### Case 2: Small $\|W^l_c\|$ (linear regime) When weights are small, $\sigma$ behaves almost linearly: $$ \Phi^l_c \approx (\Phi^{l-1})^\top W^l_c $$ We can apply Chebyshev’s inequality: $$ P(|\Phi^l_c - \Phi^l_{c,\text{gen}}| \geq \epsilon) \leq \frac{\text{Var}(\Phi^l_c)}{\epsilon^2} $$ Here, the variance is small, so the output remains close to generalization with high probability. These cases show that the perturbation space $\Delta W^l_c$ that preserves generalization is often broad, especially when orthogonal to $W^l_c$—a behavior encouraged by our convex constraints. Furthermore, generalization often means consistent loss across training and test—even if predictions are wrong—so the region itself is inherently wide. 
Our formulation characterizes the generalization region by high GSNR parameters, which span a broad zone, especially early in training (Liu et al., ICLR 2020). Because our convex constraints encourage high-GSNR directions, they not only prevent drift but also promote convergence within the generalization zone. --- ## Implementation on Constraints We appreciate the reviewer’s observation. Indeed, when both constraints are affine (e.g., linear equalities), it is possible to compute a closed-form projection directly onto their intersection: $$ \mathbf{x}^* = \mathbf{v} - A^\top (A A^\top)^{-1} (A \mathbf{v} - \mathbf{b}) $$ Where $A$ stacks the constraint vectors, and $\mathbf{b}$ contains target values. However, our implementation uses two-step sequential projection for simplicity and modularity: - Centering constraint: $$ C(g_c^l) = g_c^l - \frac{1}{n_c^l} \mathbf{1}^\top g_c^l $$ - Projection constraint: $$ P_{G^l_c}(g_c^l) = (I - G^l_c G^{l\top}_c) g_c^l $$ This modular approach keeps each step interpretable and separable. Nonetheless, we agree a single-step version might be more efficient and plan to explore this in the future. --- ## Transformer Architecture While we didn’t test on Transformers, our method is architecture-agnostic. Our constraints apply at the weight update level. In Transformers, this means we can project updates in attention matrices ($W^Q$, $W^K$, $W^V$) and feedforward layers ($W_1$, $W_2$), just as with CNNs. Because the projection is direction-aware and lightweight, it can be plugged into Transformer training pipelines without architectural changes. We plan to validate this in future work. ### Reference - Liu et al., "Understanding Why Neural Networks Generalize Well Through GSNR", ICLR 2020
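The two projection steps given under "Implementation on Constraints" above can be sketched in plain Python for a single unit-norm global direction `u` (the matrix $G^l_c$ with one column); the function names are illustrative and not taken from the paper's code:

```python
def center(g):
    """Centering constraint C(g): subtract the mean so the components sum to zero."""
    mu = sum(g) / len(g)
    return [x - mu for x in g]

def project_out(g, u):
    """Orthogonal projection (I - u u^T) g: remove the component of g along
    the unit-norm global direction u."""
    dot = sum(a * b for a, b in zip(g, u))
    return [a - dot * b for a, b in zip(g, u)]

def constrained_update(g, u):
    # Two-step sequential projection: centering first, then orthogonal projection.
    return project_out(center(g), u)

g = [1.0, 2.0, 3.0, 4.0]       # toy gradient
u = [1.0, 0.0, 0.0, 0.0]       # unit-norm stand-in for a global weight direction
h = constrained_update(g, u)
```

As a side effect, the toy example also shows why the closed-form joint projection mentioned above can be preferable: the second step re-introduces a nonzero mean (`sum(h)` is no longer zero), so sequential projection only satisfies the last constraint exactly.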
Summary: This paper introduces FedCONST, a novel federated learning (FL) algorithm that addresses the challenges of generalization and overfitting in FL environments with heterogeneous data. By employing convex constraints based on the global model's parameter strengths, FedCONST adaptively modulates update magnitudes to prevent overemphasis on well-learned parameters while reinforcing underdeveloped ones. This approach not only stabilizes local training but also enhances feature transferability and robustness across diverse FL settings. The authors validate their method through extensive experiments on various datasets and model architectures, demonstrating state-of-the-art performance compared to existing FL techniques.

Claims And Evidence: Yes. The authors' claims regarding theoretical and experimental contributions are well supported by concrete content.

Methods And Evaluation Criteria: The proposed method, FedCONST, draws inspiration from Domain Generalization (DG) by leveraging Gradient Signal-to-Noise Ratio (GSNR) insights and applying convex constraints to enhance feature generalization in Federated Learning (FL). This approach is particularly effective in cross-silo settings, where it demonstrates significant improvements in generalization. However, the authors should provide a more detailed analysis of the computational overhead introduced by the method. Specifically:
- Theoretical Overhead: Discuss the additional computational cost of applying centralization and orthogonality constraints during local training.
- Practical Convergence Speed: Include convergence curves (e.g., test accuracy vs. global rounds) to assess whether the added complexity translates into faster or more robust convergence.

Theoretical Claims: While the paper introduces several formulaic expressions and provides a theoretical foundation for the proposed method, it lacks rigorous proofs for some of its core claims, particularly regarding convergence and generalization.
Experimental Designs Or Analyses: The experimental results effectively demonstrate the benefits of FedCONST in enhancing generalization and stability. However, in Figure 14(e), the large fluctuations in the L2 norm and cosine similarity for FedCONST toward the end of training raise concerns about potential instability. Supplementary Material: Yes, more experimental results part. Relation To Broader Scientific Literature: FedCONST adapts the concept of GSNR from DG to FL, leveraging global weight magnitudes as a proxy for feature importance. This innovation aligns with recent efforts in model optimization research to enhance generalization under data heterogeneity, offering a computationally efficient and scalable solution tailored to FL's distributed nature. By stabilizing local training and preserving generalizable features during aggregation, FedCONST addresses key limitations of existing methods, such as overfitting and misalignment. These contributions inspire further exploration into constraint-based optimization strategies, particularly in scenarios involving sparse or imbalanced data distributions, and position the work as a meaningful advancement in both FL and general machine learning optimization paradigms. Essential References Not Discussed: No Other Strengths And Weaknesses: No Other Comments Or Suggestions: It is recommended to carefully review the figures and equations for potential typos. For example, in Figure 2 , the "Client - Side" section should likely be labeled as Client 1, Client 2, ..., Client N instead of the current notation, which appears to be inconsistent with standard representation. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their constructive feedback and thoughtful suggestions. ## Theoretical Overhead: The proposed constraints—centralization and orthogonal projection—are implemented using simple linear operations with negligible computational overhead. Specifically, for a gradient vector of dimension $n$, each constraint introduces only $\mathcal{O}(n)$ additional computation per update during local training. No additional backpropagation steps or network modules are required. As a result, our method maintains computational efficiency and scalability within the federated learning setting. ## Practical Convergence Speed: As shown in Supplementary B.3, we provide convergence plots (test accuracy vs. global rounds) under various experimental settings. These results demonstrate that models trained with our constraint-based approach not only achieve better generalization but also converge more rapidly in terms of test accuracy. This indicates that the added constraints enhance both the learning dynamics and the final performance.
Summary: This paper targets at addressing the generalization challenges in federated learning (FL). The authors propose FedCONST, an approach that adaptively adjusts update magnitudes based on the global model's parameter strength, preventing overemphasis on well-learned parameters and reinforcing underdeveloped ones. FedCONST employs linear convex constraints to maintain training stability and preserve locally learned generalization capabilities during aggregation. FedCONST aligns local and global objectives, mitigating overfitting and enhancing generalization across diverse FL environments, achieving state-of-the-art performance. Claims And Evidence: The claim on line 92 that "this paper provides theoretical and empirical analyses guarantee that the proposed method boosts generalization by imposing more updates with larger probabilities to under-learned features" is misleading, as the theoretical results in Theorem 2.1 appear to be directly adopted from existing research. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes, there are no outstanding issues in the designs and analyses of this paper. Supplementary Material: No. Relation To Broader Scientific Literature: The essential motivation behind the proposed method is discussed in Conjecture 1, which, however, is not verified by either theoretical or empirical evidence in this paper. As a result, the contributions of this work to the relevant community remain unclear. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths:** 1. Extensive experiments on multiple benchmark datasets, including CIFAR-100, demonstrate the superior performance of the proposed method in diverse scenarios. **Weaknesses:** 1. The essential motivation behind the proposed method is discussed in Conjecture 1, which, however, is not verified by either theoretical or empirical evidence in this paper. 
The soundness of this work would be improved if more evidence were provided to demonstrate the correctness of the claim in Conjecture 1. 2. The contributions of this work to the federated learning community are unclear. It appears that the theoretical results in Theorem 2.1 are directly adopted from an existing research paper. If this is the case, it would be more appropriate to title it as a Proposition rather than a Theorem. 3. The soundness of the evaluation section can be improved. For example, more datasets (e.g., EMNIST, ImageNet) should be considered to enhance the evaluation. Other Comments Or Suggestions: Please refer to the weaknesses listed in the previous section. Questions For Authors: Please refer to the weaknesses listed in the previous section. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. Your comments have helped us refine our intuition and better communicate the contributions of this work. ## Justification on Conjecture We appreciate your comment regarding the lack of direct evidence supporting *Conjecture 1*. In our paper, we proposed that the magnitude of global model parameters can serve as a proxy for feature generality, and that adjusting updates based on weight size could improve generalization. We provide further theoretical discussion in our response to Reviewer ETbn. To offer additional empirical support, we present a t-SNE visualization of feature representations from the global model (see [here](https://imgur.com/a/additional-validation-on-conjecture-UQB05mN); fully anonymized). In this analysis, we excluded the top 20% and bottom 20% of weights by magnitude to isolate the role of mid-range weights. Interestingly, removing the bottom 20% (i.e., the smallest-magnitude weights) resulted in *clearer clustering and more semantically aligned* representations. In contrast, removing the top 20% did not produce the same clarity. This observation supports our conjecture that small-magnitude weights may contribute more noise than signal, while mid-to-high magnitude weights are more aligned with generalizable features. This lends empirical credibility to our claim that **weight magnitude encodes meaningful signals about feature generality**, which justifies the use of convex constraints based on parameter norms to improve generalization. ## Contribution of this Work Thank you for the suggestion regarding terminology. We will revise terms like “Theorem 3.1” in the final version to avoid confusion. Our initial choice was to highlight the central theoretical result connecting constrained optimization with generalization. 
The main contribution of our work is to offer a new perspective on Federated Learning: The magnitude of global model parameters can implicitly encode information about the data distribution—information which can be harnessed even when local data is inaccessible. Building upon Theorem 2.1, we demonstrate that *simple convex constraints*, applied solely to global model weights, can influence generalization across clients. This is especially relevant in federated settings, where privacy constraints prevent access to raw data. Moreover, our work reframes the problem from *what* the model learns to *how* it learns—highlighting the role of directional constraints in implicit regularization and representation control in decentralized learning. Finally, we emphasize that our method is *lightweight and architecture-agnostic*: - No additional loss terms - No auxiliary models - No heavy computation This makes it highly practical for federated learning, where efficiency, privacy, and scalability are key concerns.
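The magnitude-based ablation described in this rebuttal (removing the top or bottom 20% of weights by magnitude before inspecting representations) can be sketched with a small helper; `mask_by_magnitude` is a hypothetical function for illustration only, operating here on a toy weight list rather than trained global-model weights:

```python
def mask_by_magnitude(weights, frac, drop="bottom"):
    """Zero out the `frac` fraction of weights with the smallest (drop="bottom")
    or largest (drop="top") absolute value."""
    k = int(len(weights) * frac)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    dropped = set(order[:k] if drop == "bottom" else order[len(order) - k:])
    return [0.0 if i in dropped else w for i, w in enumerate(weights)]

w = [0.10, -0.05, 0.90, -0.80, 0.30]
low_pruned = mask_by_magnitude(w, 0.2, drop="bottom")  # zeroes the smallest-|w| entry
high_pruned = mask_by_magnitude(w, 0.2, drop="top")    # zeroes the largest-|w| entry
```

In the t-SNE analysis above, removing the bottom fraction sharpened clustering while removing the top fraction did not, consistent with small-magnitude weights carrying more noise than signal.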
Summary: This paper proposes FedCONST, a federated learning (FL) framework, to boost generalization under heterogeneous client data distributions. Specifically, the authors adaptively modulate the magnitudes of updates based on the global model's parameter strength by applying convex constraints during client training. This prevents overemphasis on well-learned features while reinforcing underdeveloped ones. The authors further supported their approach with theoretical analyses and a series of experiments on different datasets, which validated its effectiveness in enhancing generalization. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: This paper does not have supplementary material, but I read the final appendix. Relation To Broader Scientific Literature: The paper situates itself well within the federated learning and domain generalization literature. It builds on established methods such as FedAvg, FedProx, and FedSAM while addressing known issues like client drift and feature misalignment. The theoretical inspiration drawn from GSNR-based dropout and domain generalization techniques is articulated, positioning the contribution as a natural yet innovative extension to the FL paradigm. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The paper proposes an innovative and conceptually straightforward method to tackle overfitting and misalignment in FL. 2. It provides theoretical justification and extensive experimental validation for the strategy. 3. The writing is well-organized, with intuitive illustrations. Weakness 1. The experimental evaluation relies solely on the CIFAR-10 and CIFAR-100 datasets. It is recommended that the experiments be supplemented with Tiny-ImageNet, as in some related works. 2. The baseline methods compared in the paper (e.g., FedAvg, FedProx) can validate the fundamental effectiveness of FedCONST. 
However, the selection of baseline models is somewhat outdated, failing to cover more recent federated learning methods, such as FedALA and FedFA. 3. Results in Table 1 report accuracy improvements but omit p-values or confidence intervals, leaving the significance of improvements unclear. 4. The proof assumes Gaussian-distributed weight updates (Equation 9), which may not hold in non-convex neural network optimization. Some discussion may be needed here. 5. The paper cites relatively few works and fails to cover many of the recent advances in the field. More recent relevant works need to be incorporated. Other Comments Or Suggestions: No. Questions For Authors: In Section 3.2.2, while the projection matrix is constructed based on global model parameters, local model parameters are directly used in the derivation of Equation 13. Why does such a symbol substitution occur? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive feedback and valuable suggestions. Below, we address each point in detail: --- ## Dataset diversity We appreciate your suggestion. While we agree that evaluating on larger datasets such as Tiny-ImageNet would further strengthen the experimental scope, we note that several recent works in the federated learning literature—including Wang et al. (CVPR 2024), Li et al. (ICML 2023), Qu et al. (ICML 2022), and Lee \& Yoon (ICML 2024)—also did not include Tiny-ImageNet in their evaluations. Instead, we extended our experiments to the EMNIST dataset. Preliminary results show the improvement using FedCONST, and we will include these results in the final version of the paper. ### EMNIST Accuracy Comparison | Method | Accuracy (%) | |-----------|--------------| | FedAvg | 83.54 | | FedCONST | 85.10 | --- ## Comparison with recent FL methods We agree that comparing with more recent federated learning methods is important. While our main focus was on validating the fundamental effect of convex constraints through widely used baselines such as **FedAvg** and **FedProx**, we have additionally conducted experiments on recent algorithms, including **FedLAW**, **FedWon**, and **FedFA**, to strengthen the empirical comparison. 
The results below demonstrate that our proposed method performs competitively across both cross-device and cross-silo settings: | Algorithm | Cross-Device CIFAR-10 (α=0.5) | Cross-Device CIFAR-10 (α=0.2) | Cross-Silo CIFAR-10 (α=0.5) | Cross-Silo CIFAR-100 (α=0.5) | |----------------|-------------------------------|-------------------------------|------------------------------|-------------------------------| | FedLAW (Wang et al., 2024) | 47.22 | 45.15 | 52.36 | 17.80 | | FedWon (Zhuang & Lyu, 2023) | 35.80 | 31.30 | 39.31 | 2.81 | | FedFA (Qu et al., 2022) | 48.14 | 45.61 | 50.95 | 20.21 | | **FedCONST (ours)** | **54.28** | **54.79** | **59.66** | **26.86** | We will include these updated results in the final version. --- ## Statistical significance (p-values / confidence intervals) Thank you for pointing this out. We are currently computing **95% confidence intervals** based on repeated runs. Early results indicate that the performance improvements remain statistically significant. We will incorporate these confidence intervals in the final version of the results table. --- ## Gaussian assumption in the proof In our setting, local updates result from multiple gradient steps. By the Central Limit Theorem (CLT), their distribution tends to be Gaussian as the number of steps increases. Additionally, momentum-based optimizers accumulate gradients over time, reinforcing this tendency. We also assume the model has sufficiently many parameters per channel, allowing grouped parameters to be approximated as Gaussian. Thus, the Gaussian assumption is a practical and reasonable approximation for theoretical analysis in FL. We thank the reviewer for pointing this out and will include a more rigorous discussion on its validity in the revised version. --- ## Limited citations of recent work We appreciate the feedback and acknowledge that our related work section can be improved. 
We will revise it to incorporate more recent and relevant literature, particularly those that address generalization in FL and representation robustness. --- ## Clarification on Symbol Consistency in Equation 13 In Equation 13, we used the same symbol for local updates and global model parameters since the local model starts from the global one. The weight update is computed as a delta from the global model, so the notational unification was intended for simplicity. However, we admit this may cause confusion, and we will revise the notation for clarity in the final version. --- ### References - Wang, Y., Fu, H., Kanagavelu, R., Wei, Q., Liu, Y., & Goh, R. S. M. (2024). *An aggregation-free federated learning for tackling data heterogeneity*. In CVPR. - Li, Z., Lin, T., Shang, X., & Wu, C. (2023). *Revisiting weighted aggregation in federated learning with neural networks*. In ICML. - Qu, Z., Li, X., Duan, R., Liu, Y., Tang, B., & Lu, Z. (2022). *Generalized federated learning via sharpness aware minimization*. In ICML. - Lee, T., & Yoon, S. W. (2024). *Rethinking the flat minima searching in federated learning*. In ICML. - Zhuang, W., & Lyu, L. (2023). *FedWon: Triumphing multi-domain federated learning without normalization*. arXiv:2306.05879.
The Brain's Bitter Lesson: Scaling Speech Decoding With Self-Supervised Learning
Accept (poster)
Summary: This paper introduces a pre-training strategy for MEG recordings, which consists of neuroscientifically-grounded pretext tasks. It shows scaling laws on two downstream tasks on two different datasets.

## update after rebuttal
Since the authors did not provide additional results or modifications in line with my suggestions, my score remains the same.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes. The authors follow the standard procedure of evaluating their pretrained model on downstream tasks, and provide comparisons with two baselines and two competing methods.

Theoretical Claims: N/A

Experimental Designs Or Analyses: The experimental design makes sense to me.

Supplementary Material: N/A: the supplementary material is small and presents additional experimental details, no additional results.

Relation To Broader Scientific Literature: In the neuroscientific literature, there has been a relatively limited amount of work on pre-training deep learning models compared to other fields, in part due to the difficulty of handling cross-dataset distribution shifts. Existing approaches (e.g. BENDR, EEG2Rep, BIOT) typically involve self-supervised losses, where the input is masked or perturbed in some way and must be reconstructed, and only consider EEG recordings (rather than MEG). This paper takes a novel approach that consists of using neuroscientifically-grounded pretext tasks, and is (to the best of my knowledge) the first to consider MEG recordings.
Essential References Not Discussed: None that come to mind Other Strengths And Weaknesses: Strengths - Novelty: novel approach with the pretext tasks (differs from the usual self-supervised losses), and first to consider MEG - Soundness: the experiments presented are sound and well executed (although not very exhaustive) - The paper is well-written and easy to follow Weaknesses: - Missing results: I may have misunderstood the sentence "the backbone uses randomly initialised and frozen weights", but it looks to me as if the authors do not report anywhere two important points of comparison: (i) results obtained when finetuning the full model (not just a linear layer on top) on the labeled data of the downstream tasks, and (ii) the same but training the full model from scratch (no pretraining). - The results are clean, but not particularly impressive: the downstream tasks chosen are rather easy tasks (binary classification). I appreciate that the authors wanted to display very clear scaling laws, and that has certainly been achieved, but it would be nice to also consider some more challenging tasks such as phoneme classification (even if results are poor). Similarly, the results on CamCAN+MOUS hardly outperform pretraining on CamCAN alone: although the gap might be statistically significant, I'm not sure how relevant it is; this section seems preliminary to me. - Clarity: although the paper is well-written, there are some missing details. (i) After going back and forth several times, I could not clarify which task Table 2 reports results for: is it speech classification, voicing classification, or something else? Similarly for table 3. Please clarify this in the caption of the tables. (ii) I find it confusing that the plots in figures 3 and 4 look very similar, but in fig 3 the colors represent the datasets and in fig 4 they represent in-distribution vs out-of-distribution. 
Please change colors or find a different way of plotting.

Other Comments Or Suggestions: None

Questions For Authors: “The latter dataset is particularly difficult to decode from as there is very little within-subject data and it did not enforce the use of head casts to immobilise participants” --> Why not use classification of global rotation of sensors as another pretext task (similar to random rotation augmentations used in computer vision)? It seems as if this could alleviate the aforementioned issue.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time to provide a review. We are glad that you found the paper easy to follow and that our experiments were sound and well-executed. Please find below, our responses to your question and concerns: > authors do not report anywhere [...] results obtained when finetuning the full model Thanks for pointing this out. You’re right that we have not provided results when fine-tuning the full model. Here, we focused mainly on demonstrating that our self-supervised representation is highly generalizable. The simplest way to do this is through linear probing as it allows us to cleanly separate the contribution of self-supervision to the contribution of supervised learning. Fine-tuning the full model, would, however, be more thorough and likely provide better results. We prioritised demonstrating the SSL method over this given time constraints. We are working on collecting this result and others for posterity. > the downstream tasks chosen are rather easy tasks We appreciate this perspective, but want to clarify that these tasks are surprisingly challenging in MEG. The signal-to-noise ratio in non-invasive neural recordings is exceptionally poor. Recent work in [A] demonstrated that more complex decoding tasks (e.g. transcript decoding) perform no better than chance with current methods. Similarly, so far, no work that we are aware of has demonstrated strong results for speech detection or voicing classification on MEG which we believe highlights the difficulty of the challenge even here. State-of-the-art works in MEG speech decoding have also used similar tasks such as [B] and their segment identification task. Theirs is also less relevant for a speech BCI as it’s based on paired audio and brain data. You are right that ultimately the objective is to reach more complex tasks in order to achieve full speech BCIs. To this end, we are actively working on full phoneme and word classification as follow up work. > [...] 
which task Table 2 reports results for Thank you for noting this. It is for speech and we will clarify this in the captions for Table 2 and 3. > I find it confusing that the plots in figures 3 and 4 look very similar Yes, we agree that this is confusing. We will update figure 4 to use different colours. Thanks for bringing this to our attention. > Why not use classification of global rotation of sensors as another pretext task Indeed, as we discuss in the limitations section, we did not pursue any spatial pretext tasks. Your suggestion of a global sensor rotation task could be particularly useful in learning to account for head position changes. Thank you. Beyond movement compensation, phoneme perception also seems to have a strong spatial signature in brain activity [C, Fig. 4] and could further benefit from your suggestion. We are working on a small-scale experiment to test this. On a related note, some MEG datasets (which use scanners built by Electa) have MaxFilter [D] programs which can be applied to automatically compensate for head movements. However, this is not general enough for scaling as not all datasets will use these scanners. Thank you once again for highlighting some important points that have helped to improve our paper. Do you have any further questions or concerns? [A] Jo, H., Yang, Y., Han, J., Duan, Y., Xiong, H. and Lee, W.H., 2024. Are eeg-to-text models working?. arXiv preprint arXiv:2405.06459. [B] Défossez, A., Caucheteux, C., Rapin, J., Kabeli, O. and King, J.R., 2023. Decoding speech perception from non-invasive brain recordings. Nature Machine Intelligence, 5(10), pp.1097-1107. [C] Joan Orpella, Francesco Mantegna, M. Florencia Assaneo, David Poeppel; Decoding imagined speech reveals speech planning and production mechanisms; bioRxiv 2022.05.30.494046; doi: https://doi.org/10.1101/2022.05.30.494046 [D] https://ohba-analysis.github.io/osl-docs/pages/docs/preprocessing-maxfilter.html
Summary: This paper presents a unified solution through data-efficient, self-supervised pretext tasks to improve the speech detection and voicing classification tasks. The experiments demonstrate significant gains from self-supervised pre-training. The method surpassed the baselines and is comparable to the model trained with surgical data. The data ablation (data size and data source) provides interesting insights to the community.

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: They make sense.

Theoretical Claims: This is not a theoretical paper.

Experimental Designs Or Analyses: Experimental design is valid, and the analyses are good.

Supplementary Material: I have reviewed the appendix with data details and hyperparameters.

Relation To Broader Scientific Literature: This is incremental work that studies more pretext tasks and shows their effectiveness on downstream tasks.

Essential References Not Discussed: The references look good.

Other Strengths And Weaknesses: The strengths of this paper: 1) The paper is well organized and written. 2) The experiments are sufficient to validate the hypothesis. The weaknesses of the paper: 1) The designed pretext tasks might be suitable for simple tasks only. The gain might be limited on transcript tasks. 2) Data scaling might be over-claimed, as 1000h is a relatively small data size compared to speech recognition tasks.

Other Comments Or Suggestions: N/A

Questions For Authors: Several minor questions:

1. In Table 3, it shows that the cleaner data helps more on the fine-tuning tasks (Cam-CAN better than MOUS). The addition of MOUS data to Cam-CAN does not actually seem to improve much. Have the authors considered data filtering methods? Some data in MOUS might have a negative effect on the pretraining tasks.
2. I am not familiar with the surgical data shown in Table 2.
How about applying the proposed method to surgical data, as was done for BrainBERT2+linear?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your efforts in reviewing our work. We are glad that you found the paper well written and organized, and that the experiments validated our hypothesis. Below, we have provided our responses:

> The designed pre-text tasks might be suitable for simple tasks only. The gain might be limited on transcript tasks.

Yes, thank you for highlighting this consideration. It is an important tradeoff. If the pre-training task is overly specific, then it will likely be easier to learn, but the representation's utility will be limited for more general and complex downstream tasks. Ultimately, in the extreme, the most general pre-training task is next-step prediction. However, unlike some of the baselines we compare to (e.g. BrainBERT), we did not pursue this, as the data scale in MEG is limited and, given most of the signal is noise, it is a very difficult pre-training task to learn well. Instead, we aimed for balance by designing pretext tasks that are more specific than next-step prediction, through targeting useful features in brain activity, but general enough that they work well with downstream tasks related to neural speech decoding. This balance led us to achieve better downstream performance than comparable methods while generalising across multiple tasks, datasets, and subjects.

We do share your interest in extending this work to full speech transcription. Jo et al. 2024 [A] demonstrate that current EEG/MEG approaches haven't exceeded chance for full speech decoding, highlighting the fundamental challenges in this space. Our approach specifically addresses prerequisites for more complex decoding by addressing generalisation and data efficiency problems. We're actively developing these extensions in follow-up work to further support our hypothesis that some of these pretext tasks will also enable more complex decoding.

> Data scaling might be over-claimed as 1000h is a relatively small data size as compared to the speech recognition tasks.
While this may seem relatively small in comparison to modalities such as audio, data scaling should be considered within the context of MEG, which is the focus of our work. Here, we have scaled well beyond the volume of data used in prior MEG work, e.g. Défossez et al. 2023 [B]. We are approaching the scale of early work in the adjacent field of deep learning speech recognition from audio, e.g. SpeechStew [C]. Thus, with these efforts we hope to similarly scale up MEG to progress speech recognition from brain activity in the near future. We will make this point clearer to ensure that we are not over-claiming. Thank you for noting this.

> Have the authors considered data filtering methods?

Thank you for the suggestion. So far, we have only explored detecting and rejecting corrupted channels through a variance-based threshold (known as autoreject in neuroimaging [D]). However, you are right that there are perhaps better ways to deal with data filtering, as some artefacts can be global (across all channels), e.g. heart beats, muscle spikes, breathing, etc. One way we could attempt to address this in future work is to use signal-space projection (SSP) [E] if the network is not learning to ignore these types of artefacts.

> How about applying the proposed method to surgical data as was done for BrainBERT2+linear?

This is an interesting suggestion. In our work, we opted to apply BrainBERT+linear to non-invasive data as that is what we are mainly concerned with. While surgical data is out-of-scope for our current work, it is certainly of interest to the community and something we may explore in the future.

Thank you again for your review. You have helped elucidate some critical points in our work. Do you have any further questions or concerns?

[A] Jo, H., Yang, Y., Han, J., Duan, Y., Xiong, H. and Lee, W.H., 2024. Are EEG-to-text models working? arXiv preprint arXiv:2405.06459.
[B] Défossez, A., Caucheteux, C., Rapin, J., Kabeli, O. and King, J.R., 2023.
Decoding speech perception from non-invasive brain recordings. Nature Machine Intelligence, 5(10), pp.1097-1107.
[C] Chan, W., Park, D., Lee, C., Zhang, Y., Le, Q. and Norouzi, M., 2021. SpeechStew: Simply mix all available speech recognition data to train one large neural network. arXiv preprint arXiv:2104.02133.
[D] https://autoreject.github.io/stable/explanation.html
[E] https://mne.tools/stable/auto_tutorials/preprocessing/50_artifact_correction_ssp.html
Summary: This paper proposes a framework for leveraging self-supervised learning (SSL) to improve the decoding of speech from brain activity. The authors propose an approach that utilizes large-scale unlabeled MEG data to train models, thereby addressing challenges posed by individual differences and dataset heterogeneity. The method demonstrates generalization capabilities across multiple datasets.

Claims And Evidence: No.

Methods And Evaluation Criteria: The model architecture used in this study does not differ fundamentally from those in prior work. The paper primarily focuses on unsupervised pretraining for MEG and speech task decoding. However, pretraining before downstream tasks is not a novel concept, and the lack of comprehensive experimental evaluation may undermine the claimed contributions of the proposed method. The downstream tasks in this paper are focused on Speech Detection and Voicing Classification. However, significant progress has already been made in Brain-to-Text tasks by numerous studies [1][2][3]. I believe that the limited scope of downstream tasks somewhat diminishes the contribution of this work.

References
[1] Zheng H, Wang H, Jiang W, et al. Du-IN: Discrete units-guided mask modeling for decoding speech from intracranial neural signals[J]. Advances in Neural Information Processing Systems, 2024, 37: 79996-80033.
[2] Chen X, Wang R, Khalilian-Gourtani A, et al. A neural speech decoding framework leveraging deep learning and speech synthesis[J]. Nature Machine Intelligence, 2024, 6(4): 467-480.
[3] Défossez A, Caucheteux C, Rapin J, et al. Decoding speech perception from non-invasive brain recordings[J]. Nature Machine Intelligence, 2023, 5(10): 1097-1107.

Theoretical Claims: The methodological framework proposed in this paper is relatively straightforward, involving pretraining through the setup of three proxy tasks and fine-tuning specific modules.
However, in terms of performance, it only marginally surpasses the fine-tuning performance of previously proposed general foundational models such as BIOT and BrainBERT. Additionally, it lacks comparisons with neural large-scale models like LaBraM [1] and EEGPT [2].

References
[1] Jiang W, Zhao L, Lu B. Large Brain Model for Learning Generic Representations with Tremendous EEG Data in BCI[C]//The Twelfth International Conference on Learning Representations.
[2] Wang G, Liu W, He Y, et al. EEGPT: Pretrained transformer for universal and reliable representation of EEG signals[J]. Advances in Neural Information Processing Systems, 2024, 37: 39249-39280.

Experimental Designs Or Analyses: See Methods And Evaluation Criteria.

Supplementary Material: I have gone through all the supplementary material.

Relation To Broader Scientific Literature: See Theoretical Claims.

Essential References Not Discussed: See Methods And Evaluation Criteria.

Other Strengths And Weaknesses: The writing in this paper is straightforward and easy to follow, with clear descriptions of figures and tables. However, the lack of robust downstream task experiments and comprehensive baselines weakens the impact and contribution of this work. I must point out that the methodology presented in this paper is overly simplistic, as the use of unlabeled data for pretraining and fine-tuning on downstream tasks is not a novel insight. It is recommended to emphasize the originality of the proposed method and its exceptional performance across a variety of downstream tasks.

Other Comments Or Suggestions: No.

Questions For Authors: No.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for taking the time to provide a detailed review. We are glad you found the paper easy to follow. Please find our responses below:

> significant progress has already been made in Brain-to-Text tasks by numerous studies [1][2][3]

While [1] and [2] are impressive, they address a different context and problem with invasive data, and [3] suffers fundamental limitations to scaling. Specifically, [1] and [2] work on surgical data, which is much easier to decode due to the better signal-to-noise ratio. We focus on non-invasive decoding as it avoids the risks and complexity of surgically implanting BCIs. Additionally, [1] and [2] do not show scaling, do not work on novel subjects, and, for [2], do not leverage pretraining. Thus, although they are important works, they are not relevant to our problem context. [3] is the only milestone work so far in non-invasive speech decoding from MEG. We discuss in lines 58-63 (column 2) that "they do not demonstrate generalisation to novel subjects and retrain their model for new datasets rather than being able to generalise across datasets or pool them. Their method is also unable to incorporate data without corresponding audio labels and so does not scale with other kinds of tasks." As a result, they are fundamentally limited in scaling up data. Thank you for noting these papers as they are still important and interesting works. We will ensure to cite [1] and [2] in our revised PDF.

> I believe that the limited scope of downstream tasks somewhat diminishes the contribution of this work.

Our scope is similar to that of comparable leading work [3], who use a task similar to, but ultimately less practical than, speech detection. Their speech segment identification task matches arbitrary 3-second segments of audio and brain activity and could be picking up on many unknown features (e.g. patterns of silence). Because it requires paired audio and brain data, it is unlikely to be a useful task for future BCIs.
We go beyond a single task by also looking at voicing classification, and in contrast to [3], speech detection and voicing classification are also directly relevant to speech BCIs for segmenting phrases and distinguishing phonemes for word decoding. While we agree that the ultimate goal is full phoneme, word, and sentence decoding, our work provides essential building blocks that non-invasive approaches have failed to achieve so far. [A] demonstrate that current EEG/MEG approaches haven't exceeded chance for full speech decoding. Our approach establishes prerequisite foundations by solving generalisation and data efficiency problems. We are developing more complex tasks in follow-up work.

> it only marginally surpasses the fine-tuning performance of previously proposed general foundational models such as BIOT and BrainBERT

Our improvements are 15-27% over BIOT and BrainBERT. They are also highly statistically significant (p<<.05). We would classify this as substantial, as it is even sufficient to bring non-invasive decoding up to BrainBERT's surgical decoding accuracy. The improvements are also larger than those quoted in BIOT over its own baselines.

> lacks comparisons with neural large-scale models like LaBraM [1] and EEGPT [2]

We have now added this comparison. Thank you. Our method outperforms EEGPT:

| Method | AUROC |
| :---- | :---- |
| EEGPT | .602 +/- .006 |
| Ours | .705 +/- .003 |

As the largest EEGPT model supports up to 54 channels and our dataset uses 269 sensors, we concatenate the embeddings of consecutive chunks of 54 channels so that we can fairly take into account information from all sensors before linearly probing. We will add this result to Table 1. Thank you again for noting this. We have already cited LaBraM and will do the same with EEGPT.

> the methodology presented in this paper is overly simplistic, as the use of unlabeled data for pretraining and fine-tuning on downstream tasks is not a novel insight.
Our primary contribution isn't just applying pre-training and fine-tuning, but developing novel neuroscience-informed objectives (amplitude scaling, phase shifting, and band rejection) that take steps to address three fundamental challenges:

1) Novel- and cross-subject generalisation (historically a major blocker for MEG research [B]);
2) Data-efficient and scalable self-supervised learning from heterogeneous datasets (no other work in MEG has done this); and
3) Generalisation across tasks and datasets.

In practice, our results also show a significant jump over baselines. We will make sure to emphasise this more in the revised PDF.

Thank you again for your efforts in reviewing. You have helped us clarify several important aspects of our paper. Do you have any further concerns?

[A] Jo, H., Yang, Y., Han, J., et al., 2024. Are EEG-to-text models working? arXiv preprint arXiv:2405.06459.
[B] Csaky, R., Van Es, M.W., Jones, O.P. et al., 2023. Group-level brain decoding with deep learning. Human Brain Mapping, 44(17), pp.6105-6119.
Summary: Current speech decoders are generally trained individually per subject and only on task-specific data. The authors propose an MEG-specific self-supervised learning objective to build representations from a vast quantity of unlabeled MEG data from several subjects and tasks. They then built decoders that used these representations to detect phoneme voicing or the presence of speech. The authors claim that these decoders beat previous state-of-the-art self-supervised methods, with the advantage of generalizing to unseen subjects.

Claims And Evidence: The authors were thorough in their evaluation, but I believe it's not always clear which task the decoder is being evaluated on. Most tables mention a single ROC AUC score, whereas Figures 3 and 4 mention separate scores for speech detection and voicing classification. I am assuming that all scores are for speech detection unless otherwise noted (based on "We evaluate our methods primarily on speech detection"), but clarification is needed.

Methods And Evaluation Criteria:

1. The individual terms of the self-supervised loss are well motivated, but I am somewhat confused by one aspect of amplitude scale prediction. Based on the choice of $\rho$, could the correct amplitude not be ambiguous? For example, suppose $\rho=0.5, A=2$. Then either of these could be true:
A. $x_s^A = 2x_s$ for each sensor $s$ that _was_ randomly sampled;
B. equivalently, $x_s^A = 1/2 \cdot x_s$ for each sensor that was _not_ randomly sampled. So it could be that $A = 1/2$.
Does this term strictly require $\rho < 0.5$ to have an unambiguous class? I also wonder if there's a boundary effect near 0.5, and this is why a smaller $\rho$ like 0.2 was chosen (as discussed at the end of Appendix B).
2. The two decoding tasks (speech detection & voicing classification) seem like a good place to start evaluating the quality of the SSL method. But as the authors mention in Sec.
4.6, existing BCIs can decode more useful features like phoneme (or even word) identity. Have the authors looked at decoding higher-level features like these? (I expect that phoneme error will at least be above chance given the phoneme voicing detection results.) Based on Appendix C, it looks like >90% of compute was dedicated to pre-training the network. Since that component is task-agnostic, I don't think adding another decoding task like phoneme classification would be too computationally difficult, but it could make the overall contribution much more substantial.

Theoretical Claims: No theoretical claims were made in the paper.

Experimental Designs Or Analyses: The evaluation method seemed sound -- in particular, it seemed the authors were careful not to contaminate the test set for within- or across-subject evaluations (except for the stimulus as noted in Appendix A, which I think is not a concern for these decoding tasks).

Supplementary Material: I reviewed all of the supplementary material (Sections A-D).

Relation To Broader Scientific Literature: The authors demonstrate that their self-supervised pre-training method outperforms existing methods for MEG by making use of existing publicly-available data. In particular, their non-invasive decoder shows comparable performance to invasive ones, and they show for the first time generalization to unseen subjects. They also show that scaling increases log-linearly, suggesting there is still room for improvement.

Essential References Not Discussed: I am not aware of any missing references.

Other Strengths And Weaknesses: The paper is written well, and the ablations and comparisons to other models are done well.

Other Comments Or Suggestions: Minor, but it may be helpful to visually differentiate Figures 3 and 4 more -- they're easy to mix up at a glance. Changing the line colors in Fig. 4 would likely be sufficient, since currently, the same dark blue means different things: "Gwilliams" in Fig. 3 and OOD in Fig. 4.
Questions For Authors: 1. In Sec. 4.5, the authors write:

> During our experiments, we found that data quality, even among unlabelled data, can have a significant effect as artefacts in recordings disrupt learning.

Does this specifically refer to a difference in quality between Cam-CAN and MOUS? If so, how was data quality judged? What kinds of artefacts were found in MOUS but not in Cam-CAN?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to provide a review. We are glad you found the paper to be well-written and the experiments and comparisons to be sound. Please find below our responses to your questions and concerns:

> Most tables mention a single ROC AUC score [...] I am assuming that all scores are for speech detection

Yes, where it is not specified, the score is for speech detection. Thanks for highlighting that this is unclear. We will add the specific score to the table captions where relevant.

> could the correct amplitude not be ambiguous?

You are absolutely correct. Thank you for identifying this and demonstrating it with an example. There is indeed an ambiguity here, and we must restrict rho to be < 0.5 in order to have an unambiguous class. We conducted our ablation with values between 0 and 0.5 as we were aware of this, but as you have identified, we did not make this clear in the writing. We will add a note about this to the revised PDF. This should similarly apply in the phase shift task. The boundary effect hypothesis is very interesting. Could you please explain why you believe this influences the selection of a smaller rho?

> [...] as the authors mention in Sec. 4.6, existing BCIs can decode more useful features like phoneme (or even word) identity

So far, only existing *invasive* BCIs have been able to decode more complex features convincingly [A, B, C]. Attempts with non-invasive signals for sentence decoding have not yet produced results statistically significant beyond chance level, failing replication studies [D]. Nevertheless, as you point out, our voicing results do imply better-than-chance phoneme recognition results. In our preliminary experiments, while the results were better than chance, they were not much beyond that. We are actively working on improving the decoding of phonemes (focusing on acoustic features) and of words (focusing on semantic features) in a follow-up work.
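As a side note, the amplitude ambiguity acknowledged above can be demonstrated numerically. The sketch below is our own illustration (the sensor count, seed, and values are arbitrary assumptions), not code from the paper:

```python
import numpy as np

# Illustration of the ambiguity with rho = 0.5 and A = 2: scaling the
# sampled half of the sensors by 2 differs from scaling the complementary
# half by 1/2 only by a global factor, so a scale-normalised model cannot
# distinguish A = 2 from A = 1/2.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 100))   # 8 sensors, 100 time steps (arbitrary)
mask = np.zeros(8, dtype=bool)
mask[:4] = True                     # rho = 0.5: half the sensors sampled

scaled_sampled = x.copy()
scaled_sampled[mask] *= 2.0         # hypothesis A: A = 2 on the sampled set
scaled_complement = x.copy()
scaled_complement[~mask] *= 0.5     # hypothesis B: A = 1/2 on the complement

# The two hypotheses agree up to a global scale factor of 2.
assert np.allclose(scaled_sampled, 2.0 * scaled_complement)
```

With rho strictly below 0.5, the sampled set is always the minority, so this global-rescaling degeneracy cannot arise.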
> it may be helpful to visually differentiate Figures 3 and 4

Yes, we agree. Thank you for noting this. We will change the colours in Figure 4.

> Does this specifically refer to a difference in quality between Cam-CAN and MOUS?

Yes. We used a method to automatically detect corrupted channels and remove them using a variance-based threshold (known as autoreject in neuroimaging [E]). The percentage of corrupted channels in sessions from MOUS was higher than in Cam-CAN, suggesting that the quality of the data was not as good in general. Better data filtering is likely to be an important future direction to continue getting improvements from additional datasets.

Thank you again for your review. You have highlighted several important points which will help improve our paper. Do you have any further questions or concerns?

[A] Moses, D.A., Metzger, S.L., Liu, J.R., Anumanchipalli, G.K., Makin, J.G., Sun, P.F., Chartier, J., Dougherty, M.E., Liu, P.M., Abrams, G.M. and Tu-Chan, A., 2021. Neuroprosthesis for decoding speech in a paralyzed person with anarthria. New England Journal of Medicine, 385(3), pp.217-227.
[B] Willett, F.R., Kunz, E.M., Fan, C., Avansino, D.T., Wilson, G.H., Choi, E.Y., Kamdar, F., Glasser, M.F., Hochberg, L.R., Druckmann, S. and Shenoy, K.V., 2023. A high-performance speech neuroprosthesis. Nature, 620(7976), pp.1031-1036.
[C] Card, N.S., Wairagkar, M., Iacobacci, C., Hou, X., Singer-Clark, T., Willett, F.R., Kunz, E.M., Fan, C., Vahdati Nia, M., Deo, D.R. and Srinivasan, A., 2024. An accurate and rapidly calibrating speech neuroprosthesis. New England Journal of Medicine, 391(7), pp.609-618.
[D] Jo, H., Yang, Y., Han, J., Duan, Y., Xiong, H. and Lee, W.H., 2024. Are EEG-to-text models working? arXiv preprint arXiv:2405.06459.
[E] https://autoreject.github.io/stable/explanation.html

---

Rebuttal Comment 1.1: Comment: Thank you for the thorough responses. > The boundary effect hypothesis is very interesting.
Could you please explain why you believe this influences the selection of a smaller rho?

Whatever my reasoning was (I don't exactly remember), I don't agree with the notion now.
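For readers unfamiliar with the variance-based channel rejection mentioned in the rebuttal above, it might be sketched roughly as follows. This is our own minimal illustration in the spirit of autoreject, not the authors' pipeline; the robust z-score and the threshold value are assumptions:

```python
import numpy as np

def flag_bad_channels(data, z_thresh=5.0):
    """Flag channels whose variance is a robust outlier.

    A minimal sketch of variance-threshold bad-channel detection;
    real neuroimaging pipelines are considerably more involved.
    data: array of shape (n_channels, n_times).
    """
    var = data.var(axis=1)
    med = np.median(var)
    mad = np.median(np.abs(var - med)) + 1e-12  # robust spread estimate
    z = (var - med) / (1.4826 * mad)            # MAD scaled to ~std
    return np.abs(z) > z_thresh                 # True = reject channel

# Synthetic example: channel 3 corrupted with 50x the amplitude of the rest
rng = np.random.default_rng(1)
data = rng.standard_normal((10, 1000))
data[3] *= 50.0
bad = flag_bad_channels(data)
```

Using the median and MAD rather than the mean and standard deviation keeps the threshold itself from being dragged by the very channels one is trying to reject.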
$S^2$FGL: Spatial Spectral Federated Graph Learning
Accept (poster)
Summary: This paper identifies and defines two significant limitations in FGL: label signal disruption and spectral client drifts. The proposed framework, $S^2$FGL, addresses these issues by consolidating globally accessible semantic knowledge and aligning the high- and low-frequency spectral domains. Abundant experiments validate its effectiveness.

Claims And Evidence: The claims in this paper do not exhibit any significant flaws.

Methods And Evaluation Criteria: The two methods effectively address the issues outlined in Figures 1 and 2. Additionally, the Louvain partition and the datasets used in the experiments are standard, making them appropriate for evaluating the effectiveness of these methods.

Theoretical Claims: NA

Experimental Designs Or Analyses: The experimental designs presented in this paper are reasonable. Specifically, the authors employ the Louvain partition, a widely used method for simulating real-world subgraph-FL scenarios, and utilize a standard node classification task as the evaluation criterion.

Supplementary Material: I have reviewed the provided supplementary materials.

Relation To Broader Scientific Literature: The two issues identified in this paper are highly relevant and provide new insights for future FGL research.

Essential References Not Discussed: None.

Other Strengths And Weaknesses: Strengths: The problems explored in this paper are intriguing. Drawing inspiration from graph active learning, the authors introduce the structure inertia score to measure the significance of semantic signal strength, highlighting a key challenge in subgraph-FL. This work also investigates the heterogeneity of spectral signal propagation across clients. Furthermore, the paper comprehensively addresses these issues in both domains and proposes reasonable solutions.

Weaknesses:
1. The discussion of current FGL research from lines 26 to 30 is too general.
A more detailed analysis of existing methods and their limitations is required, particularly for the FGL baselines.
2. For the second method proposed in this paper, why does it align only the high-frequency and low-frequency spectral features, rather than other components? A more rational explanation is needed.

Other Comments Or Suggestions: I have no additional comments or suggestions.

Questions For Authors: Please see weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer uVfb: Thank you for your encouraging comments on our work. We hope that our responses below will address your concerns and reinforce your positive evaluation.

## Weakness

**W1: A more detailed analysis of existing methods and their limitations is required, particularly for the FGL baselines.**

A1: In response to your concerns, we further elaborate on the deficiencies of the FGL baseline methods evaluated in our study. Specifically, FedSage+ seeks to reconstruct subgraphs by locally training a neighborhood generator. FedGTA determines aggregation weights using mixed moments of neighbor features and local smoothing confidence. FGSSL aligns similarity matrices between local and global GNNs while employing node-level contrastive learning. Similarly, FGGP applies contrastive learning, integrating clustering prototypes to enhance prediction accuracy. FedPub utilizes similarity-based clustering on random graphs at the client side and employs local masks to incorporate the global model. FedStar introduces a specialized channel for processing graph structure, sharing only this channel's parameters. In summary, existing approaches fail to address two critical issues: spatial label signal disruptions and spectral client drifts. Label signal disruptions weaken the semantic recognition ability of GNNs in FGL, thereby reducing its overall effectiveness. Additionally, spectral client drifts hinder collaboration among clients, making it difficult to establish an efficient spectral signal propagation paradigm. Our work aims to address this gap by focusing on these two underexplored aspects.

**W2: Why does the second method align only the high-frequency and low-frequency spectral features, rather than other components?**

A2: Both low-frequency and high-frequency areas of the graph spectrum are critical and highly representative.
Low-frequency components typically capture the global structural properties of the graph; for instance, the second smallest eigenvalue indicates graph connectivity. Conversely, high-frequency eigenvectors contain detailed local information. Directly matching all eigenvectors is computationally prohibitive. Instead, selecting those associated with the smallest and largest eigenvalues reduces computational complexity while preserving the graph's essential structural characteristics.

We are grateful for your encouraging feedback and hope that our response has effectively addressed your concerns!

---

Rebuttal Comment 1.1: Comment: I appreciate the thorough rebuttal you have provided and will maintain my current score. Regarding the comments from other reviewers, I would like to provide some additional insights. While edge loss is acknowledged in FGL, its semantic impact remains unexplored. This paper examines label signal disruption and quantifies the relationship between client scale and semantic degradation, offering new insights. Moreover, the moderate complexity of the proposed method is reasonable, especially given its strong performance. Overall, I recommend its acceptance.
Summary: The authors propose $S^2$FGL, a framework that simultaneously addresses the spatial and spectral challenges in federated graph learning. Instead of focusing on static graph structures, it provides solutions through the lens of graph signal propagation.

Claims And Evidence: Given the inherent interconnection between the spatial and spectral domains, methods that perform well in the spatial domain are expected to positively impact the spectral domain. However, the authors do not adequately explain the key benefits of utilizing the spectral domain to address spectral client drifts.

Methods And Evaluation Criteria: The solutions provided by $S^2$FGL are targeted, and the datasets used are commonly employed in federated graph learning research.

Theoretical Claims: The definitions are rigorous.

Experimental Designs Or Analyses: The experiments are comprehensive. However, the authors should consider the impact of the number of representative prototypes for each class in the NLIR on performance and conduct a hyperparameter study. Furthermore, the performance comparison experiment should include FedTAD [1].

[1] FedTAD: topology-aware data-free knowledge distillation for subgraph federated learning. IJCAI 2024

Supplementary Material: I have reviewed the code provided by the authors.

Relation To Broader Scientific Literature: The problems explored in this study are meaningful. First, the authors shift the focus from the typical issue of heterogeneity and examine the shortcomings of FGL in comparison to centralized graph learning from a spatial perspective. Second, the spectral domain has received insufficient attention in existing FGL studies, and this paper contributes to filling that gap.

Essential References Not Discussed: None

Other Strengths And Weaknesses: The primary strengths and weaknesses have been discussed above.

Other Comments Or Suggestions: There is a missing punctuation mark after equation (9).

Questions For Authors: 1.
The authors should conduct a hyperparameter study for the NLIR and include a performance comparison with FedTAD.
2. The authors should clarify the key advantages of the spectral solutions over spatial solutions in mitigating spectral client drifts.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer yo13: Thank you for your thoughtful review and for acknowledging the value of our work. We hope the responses provided below will clarify your concerns and contribute to a more favorable evaluation.

## Weakness

**W1: The authors should conduct a hyperparameter study for the NLIR and include the performance comparison with FedTAD.**

A1: Thank you for your thoughtful suggestions. We have added experiments regarding the prototype count hyperparameter in NLIR to Table 1 and included comparison experiments with FedTAD in Table 2. The results in Table 1 demonstrate the stability of the NLIR method under variations in prototype counts. The results in Table 2 show that our method consistently outperforms FedTAD, validating its efficiency. We will include these results in the final version.

*Table 1: Hyper-parameter study of prototype count in NLIR*

| Prototype Count | Cora | Citeseer | Pubmed |
|:-:|:-:|:-:|:-:|
| 4 | 83.4 | 76.0 | 88.6 |
| 8 | 83.6 | 76.3 | 88.5 |
| 16 | 83.5 | 76.4 | 89.0 |

*Table 2: Performance comparison with FedTAD*

| Datasets | Cora | Citeseer | PubMed | Texas | Wisconsin | Minesweeper |
|-|-|-|-|-|-|-|
| FedAvg | 81.9 | 74.3 | 87.3 | 72.8 | 77.6 | 79.6 |
| FedTAD | 82.5 | 75.3 | 87.7 | 73.4 | 77.7 | 79.9 |
| $\bf{S^2FGL}$ | **83.4** | **76.0** | **88.6** | **74.7** | **78.4** | **80.5** |

**W2: The authors should clarify the key advantages of the spectral solutions over spatial solutions in mitigating spectral client drifts.**

A2: Methods that rely on the spatial domain fail to capture signal propagation patterns across different frequencies in the graph spectra. As a result, client drifts caused by GNNs overfitting to local frequency signal propagation paradigms are challenging for them to resolve. Instead, our method performs spectral reconstruction based on the adjacency awareness of different GNNs and promotes the alignment of local spectral signal propagation patterns with the global pattern.
This approach enables the formation of a generalizable and strong spectral signal propagation paradigm, effectively mitigating spectral client drifts.

Thank you again for your efforts in reviewing our work. We hope our rebuttal has clarified and addressed your concerns!
Summary: This paper investigates graph signal propagation in federated graph learning through both the spatial and spectral domains, highlighting the issues of label signal disruption and spectral client drift. In response, it proposes two methods: Node Label Information Reinforcement and Frequency-aware Graph Modeling Alignment, which address these identified challenges. Comprehensive experiments are performed to demonstrate the effectiveness of these methods.

Claims And Evidence: Claims are supported by clear and convincing evidence.

Methods And Evaluation Criteria: The proposed methods are well-designed and appropriately address the identified challenges. The NLIR method creates a global category knowledge repository by enabling clients to upload multiple easily-learnable prototypes as reference points. By evaluating the similarity between each node and these reference points, it achieves precise semantic localization of nodes, thereby integrating globally accessible category information into local GNNs. The FGMA method utilizes the similarity matrix obtained from GNN inference to perform spectral reconstruction, aligning local and global features by projecting them onto the low-frequency and high-frequency spectral domains. The reconstruction allows FGMA to synchronize local and global message passing paradigms, effectively alleviating spectral drift.

Theoretical Claims: The definitions are clear and correct, and there is no theoretical proof provided.

Experimental Designs Or Analyses: This paper presents extensive experimental evidence, encompassing performance comparisons, ablation studies, and hyperparameter studies, among others. Overall, the experimental framework is well-constructed. Notably, in Q4 of the experiments, the correlation between the NLIR method and the structure inertia score is examined, directly demonstrating the method's relevance. However, this raises a concern: under larger partition sizes, SIS does not significantly decrease. In such instances, will NLIR lose its original effectiveness? Consequently, the performance of NLIR with a substantial number of clients still needs to be validated, ideally with the SIS reduction level indicated. Furthermore, the ablation study should incorporate error bars to provide a more comprehensive analysis of the results.

Supplementary Material: I have examined the code of this work.

Relation To Broader Scientific Literature: This paper innovatively identifies the issue of label signal disruption, with a strong motivation that is empirically validated. Moreover, the investigation into spectral drift presents a novel perspective in the context of subgraph-FL, encouraging further exploration of the graph spectral domain in FGL.

Essential References Not Discussed: All essential references have been included, especially those related to the latest methods in subgraph-FL.

Other Strengths And Weaknesses:

Strengths:
1. The motivation is both strong and innovative, with exploration of the issues of label signal disruption and spectral client drift.
2. The proposed solutions are well-suited and specifically targeted, effectively addressing the identified problems in the spatial and spectral domains.
3. The experimental design is generally rigorous and effective. Moreover, the experiment in Q4, which examines the relationship between the performance of the NLIR method and SIS, successfully validates the motivation.

Weaknesses:
1. In the experiment Q4, as the partition size increases, SIS rises, leading to a decline in the performance of the NLIR. The authors should consider conducting experiments with a larger number of clients to ensure that NLIR remains effective in configurations with more clients.
2. The ablation study should incorporate error bars for enhanced clarity and precision in the results.

Other Comments Or Suggestions: Under the Notations section, there is a typo in line 122 where a word is repeated.

Questions For Authors: Please refer to the weaknesses, as no other questions have been posed.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Dear Reviewer yM48:

We sincerely appreciate the time and effort you have invested in reviewing our paper, as well as your favorable assessment of the motivation and design of our method. We hope that our rebuttal has effectively addressed your concerns.

## Weakness

**W1: The authors should consider conducting experiments with a larger number of clients to ensure that NLIR remains effective in configurations with more clients.**

A1: We kindly emphasize that our experimental setup adheres to standards in existing FGL research, incorporating suitably configured numbers of clients. Nonetheless, to address your concerns, we conducted additional experiments with a substantially larger number of clients on the large-scale graph dataset arxiv-year. The results in Table 1 confirm the robustness and effectiveness of our method in large-scale client settings.

*Table 1: NLIR performance under large client scale.*

| Client Scale | 50 | 100 | 150 |
|:-:|:-:|:-:|:-:|
| FedAvg | 32.1 | 34.8 | 35.5 |
| NLIR | 32.9 | 35.7 | 36.2 |

**W2: The ablation study should incorporate error bars.**

A2: In response to your concern, we incorporate error bars in Table 2 as follows. We will incorporate your suggestions in the final version.

*Table 2: Ablation study of $S^2FGL$ with error bars*

| NLIR | FGMA | Cora | Citeseer |
|:-:|:-:|:-:|:-:|
| ✗ | ✗ | 81.9 ± 0.7 | 74.3 ± 0.4 |
| ✓ | ✗ | 83.2 ± 0.4 | 75.6 ± 0.3 |
| ✗ | ✓ | 82.6 ± 0.3 | 75.0 ± 0.2 |
| ✓ | ✓ | **83.4 ± 0.5** | **76.0 ± 0.3** |

## Other Comments Or Suggestions

A3: We are grateful for your feedback regarding the typo. We will implement the revision in the final version.

We sincerely appreciate your thoughtful review comments and hope that our rebuttal has addressed the concerns you raised!
Summary: The paper presents a novel framework called S2FGL (Spatial Spectral Federated Graph Learning) to address two key challenges in subgraph federated learning (FGL): Label Signal Disruption (LSD) and spectral client drifts. LSD occurs when subgraphs lose critical label signals due to edge losses between clients, which hampers the ability of Graph Neural Networks (GNNs) to learn class knowledge. Spectral client drifts arise from inconsistencies in signal frequencies across subgraphs, leading to degraded global generalizability. To solve these issues, the authors propose two strategies: Node Label Information Reinforcement (NLIR), which creates a global repository of class knowledge to restore label signals, and Frequency-aware Graph Modeling Alignment (FGMA), which aligns high- and low-frequency spectral components across clients to mitigate spectral drifts. Extensive experiments on various datasets demonstrate the effectiveness of S2FGL, outperforming existing methods in terms of global generalizability.

Claims And Evidence: The label signal disruption in subgraph-FL has already been recognized and cannot be taken as one contribution of this work.

Methods And Evaluation Criteria: Yes

Theoretical Claims: There are no theoretical proofs.

Experimental Designs Or Analyses: There are only experiments with one strong GNN backbone, ACM-GCN. But many graph federated learning works use other more naive but popular GNNs, e.g., GCN and GPRGNN. The authors should include experiments with other GNNs used in prior works.

Supplementary Material: N/A

Relation To Broader Scientific Literature: This article addresses a common intra-graph federated learning task, identifying two challenges it faces: 1) Label Signal Disruption caused by the absence of cross-client links, and 2) Spectral Client Drift, a distribution shift resulting from inconsistent spectral domain distributions of graph signals across different clients. The first challenge is widely recognized, while the second has received little attention. Overall, the proposed method itself is a combination of existing techniques.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

**Strengths:**
- This paper is well-organized and easy to follow.
- Although the spectral client drift problem is quite intuitive to conceive, this paper appears to be the first to explicitly pinpoint this issue.

**Weaknesses:**
- The problem NLIR aims to address and NLIR itself are not new in federated learning, since there are many works discussing the loss of edges and prototype-based federated learning. The authors should emphasize the innovative aspects of their method in addressing Challenge 1, compared with prior works.
- The cosine similarity matrix requires second-order complexity. And the projection step on p. 6 requires doing EVD on a dense Laplacian.
- The Spectral Client Drift challenge is novel. But the authors do not experimentally verify the existence of the second challenge or whether their proposed method can mitigate the spectral domain distribution shift of graph signals. It would be beneficial for the authors to provide experimental validation for Challenge 2, rather than relying on mere conjecture.
- Complexity analysis is missing.

Other Comments Or Suggestions: See above.

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Dear Reviewer svqb:

We sincerely appreciate your time and effort and hope that our responses will address your concerns and lead to an updated score.

## Experimental Designs Or Analyses

**There are only experiments with ACM-GCN.**

A1: $S^2FGL$ is a backbone-agnostic framework designed to address label signal disruption and spectral client drifts. In light of your thoughtful comment, we have incorporated results using GCN in Table 1, showing that $S^2FGL$ consistently outperforms current methods. The results will be included in the final version.

*Table 1: Performance comparison using **GCN**.*

|Method|Cora|Citeseer|PubMed|Texas|Wisconsin|Minesweeper|
|-|-|-|-|-|-|-|
|FedAvg|80.2|69.5|84.9|63.4|62.1|77.9|
|FedProx|80.3|69.2|85.2|64.0|62.2|78.0|
|FedFA|80.8|70.1|85.4|64.7|63.4|78.3|
|FedSSL|82.6|70.9|85.8|64.2|64.5|78.6|
|FGGP|82.8|70.6|85.5|65.5|63.6|78.7|
|$\bf{S^2FGL}$|**83.7**|**72.2**|**86.8**|**66.8**|**65.5**|**79.1**|

## Weakness

**W1: Why NLIR is novel.**

A2: For the target problem, we innovatively focus on label signal disruption (LSD), which brings significant semantic degradation. Though existing methods mitigate edge loss structurally, they ignore the semantic degradation under LSD and fail to offer targeted solutions. Therefore, current FGL methods inevitably suffer from limited GNN semantic recognition ability. Methodologically, existing prototype-based approaches exhibit two critical limitations under LSD: insufficient semantic richness and semantic deviation under structural biases. Correspondingly, we propose a novel metric $\Lambda^\text{SALC}$ (Eq. 3 and 4, Page 4), which jointly accounts for label influence and structural representativeness to effectively mitigate LSD.

**W2: The similarity matrix requires second-order complexity. The projection requires doing EVD on a dense Laplacian.**

A3: First, the computation of the feature similarity matrix is widely accepted in FGL. Specifically, FGSSL leverages it to align structural consensus, FGGP utilizes it for contrastive learning, and FedGL employs it for structure updates. Distinctively, we align the reconstructed spectrum to enable a highly expressive signal propagation scheme against spectral drifts, achieving the best performance. Second, our method **does not** require an EVD on a dense Laplacian. Since only partial eigenvectors are needed, we leverage SciPy's sparse solver eigsh with $O(r\cdot nnz\cdot I)$, where $r$ denotes the number of eigenvectors needed, $nnz$ is the number of non-zero elements, and $I$ is the number of required iterations.

**W3: The authors need to verify the existence of the spectral challenge and prove the method can mitigate it.**

A4: We kindly note that we have demonstrated the spectral challenge in Fig. 1(b). For further validation, we compute the Pearson correlation between spectral shift (KL divergence of a client's eigenvalue distribution from the global) and local accuracy of the global GNN under FedAvg. After runs under five random seeds, the correlations are -0.39 (Cora), -0.27 (Citeseer), and -0.34 (Pubmed), indicating that greater spectral shift leads to higher inconsistency with global optimization and confirming spectral drifts. FGMA facilitates a generic signal propagation paradigm across clients to mitigate spectral drifts. For validation, Table 2 shows that FGMA is **more** effective under **higher** spectral heterogeneity, providing strong evidence for its specificity and effectiveness. Spectral heterogeneity here is the average KL divergence among clients' eigenvalue distributions.

*Table 2: Correlation between **Spectral Heterogeneity** and **FGMA Improvement** on FedAvg.*

|Client Scale|11|13|15|17|19|21|23|
|-|-|-|-|-|-|-|-|
|Spectral Heterogeneity|0.63|0.88|0.85|2.09|2.23|3.30|3.75|
|FGMA Improvement (%)|0.73|0.97|0.92|1.27|1.29|1.51|1.57|

**W4: Lack of complexity analysis.**

A5: For a $k$-layer GNN with batch size $b$ and feature dimension $f$, the propagated features $X^{(k)}$ have a space complexity of $O((b+k)f)$, while linear regression has $O(f^2)$. Key parameters include $n$, $m$, $c$ (nodes, edges, classes), $s$ (augmented nodes), $g$ (complemented neighbors), $p$ (trainable prototype matrix dimension), $Q$ (query set size for contrastive learning), and $N$ (selected clients per round). The analysis in Table 3 shows that $S^2FGL$ achieves top performance with reasonable overhead.

*Table 3: Complexity analysis. Best in bold and second with underline.*

|Method|Client Mem|Server Mem|Client Time|Server Time|
|-|-|-|-|-|
|FedStar|$\bf{O(2((b+k)f+f^2))}$|$\underline{O(N f^2)}$|$\bf{O(2(kmf+nf^2))}$|$\bf{O(Nf)}$|
|FGSSL|$O(Q(b+k)f+f^2+n^2)$|$\underline{O(Nf^2)}$|$O(Qkmf+Qnf^2+n^2f)$|$\bf{O(Nf)}$|
|FGGP|$O((n+sg)f+f^2+Qcp+n^2)$|$\bf{O(Ncp)}$|$O((m+sg)f+(n+sg)f^2+Qcp^2)$|$\underline{O(N^2(\log(N)+c^2p^2)+Ncp)}$|
|$S^2FGL$|$\underline{O((b+k)f+f^2+n^2)}$|$\underline{O(Nf^2)}$|$\underline{O(kmf+nf^2+n^2f)}$|$\bf{O(Nf)}$|

Thank you for your valuable feedback and hope that our rebuttal adequately addresses your concerns!
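As a side note on A3 and A4 above, the partial eigendecomposition via SciPy's sparse `eigsh` and a KL-based spectral-heterogeneity score can be sketched in a few lines. This is an illustrative reconstruction under assumptions (toy client graphs, histogram binning, the eigenvector count `r`), not the authors' implementation:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from scipy.sparse.csgraph import laplacian

def partial_spectrum(adj, r):
    # Normalized Laplacian of a sparse adjacency matrix; eigsh returns
    # only the r smallest eigenpairs, avoiding a dense EVD.
    L = sp.csr_matrix(laplacian(adj, normed=True)).asfptype()
    vals, _ = eigsh(L, k=r, which="SA")
    return vals

def spectral_heterogeneity(client_adjs, r=3, bins=10):
    # Average pairwise KL divergence between clients' binned eigenvalue
    # distributions, in the spirit of the measure described in A4.
    hists = []
    for adj in client_adjs:
        vals = partial_spectrum(adj, r)
        # Normalized-Laplacian eigenvalues lie in [0, 2].
        h, _ = np.histogram(vals, bins=bins, range=(0.0, 2.0))
        hists.append((h + 1e-9) / (h + 1e-9).sum())  # smoothed probabilities
    kls = [float(np.sum(p * np.log(p / q)))
           for i, p in enumerate(hists)
           for j, q in enumerate(hists) if i != j]
    return float(np.mean(kls))

# Two toy client subgraphs: a path and a star, each on 6 vertices.
path = sp.csr_matrix(np.diag(np.ones(5), 1) + np.diag(np.ones(5), -1))
dense = np.zeros((6, 6)); dense[0, 1:] = 1; dense[1:, 0] = 1
star = sp.csr_matrix(dense)
het = spectral_heterogeneity([path, star], r=3)
print(het >= 0.0)  # prints True (KL divergence is nonnegative)
```

The point mirrored here is the complexity claim: `eigsh` only touches the sparse matrix through matrix-vector products, so no dense Laplacian is ever formed.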
Improved Expressivity of Hypergraph Neural Networks through High-Dimensional Generalized Weisfeiler-Leman Algorithms
Accept (poster)
Summary: The paper presents a higher-order version of the WL test for hypergraphs. It is a conservative extension of the well-known higher-order WL for graphs, in the sense that when run over graphs, both tests have the same expressive power. A hypergraph-GNN architecture is then designed in terms of this higher-order test: it is shown that the expressive power of such hyperedge-GNNs is bounded by that of the higher-order WL test, and that it is always possible to emulate the expressive power of the test over graphs with n vertices by one of these hyperedge-GNN architectures. Experiments analyze the suitability of the approach on real-world datasets.

Claims And Evidence: All theoretical results are supported by carefully written proofs. My only concern refers to the proof that k-folklore and (k+1)-oblivious WL coincide in expressive power (see my comments in the Questions for Authors).

Methods And Evaluation Criteria: Yes, the experimental data has been chosen appropriately.

Theoretical Claims: I have not read all proofs in the appendix, but I know this class of results in depth and I am basically convinced of their correctness (save for the equivalence of k-folklore and (k+1)-oblivious WL, see below).

Experimental Designs Or Analyses: I have not checked this since I do not have enough competence to do it.

Supplementary Material: I have diagonally read the appendix trying to understand the proof techniques.

Relation To Broader Scientific Literature: The paper extends a well-established line of research on graphs and hypergraphs, exploring the expressive power of GNNs in relation to isomorphism tests. The techniques employed are standard within this literature. In this sense, while the paper is solid and well-rounded, it is somewhat incremental and not particularly innovative. For instance, all separability results obtained in the paper are shown by a direct translation into well-known (and sophisticated) results for graphs. This raises some doubts about its suitability for ICML. In my view, it falls slightly below the standard one would expect from a strong ICML paper.

Essential References Not Discussed: I think that the related literature is well-covered in the paper.

Other Strengths And Weaknesses: As mentioned before, I feel lukewarm about this paper. While it is a solid piece of work and may interest some researchers in the community, it lacks truly innovative contributions. Perhaps its main strength lies in the clear presentation of the extension of k-WL from graphs to hypergraphs, but I have doubts about whether this alone is sufficient to justify acceptance at ICML.

Other Comments Or Suggestions: I have no further comments.

Questions For Authors:

Q1. About the removal of singleton hyperedges: Does it mean that you allow no vertex colors in hypergraphs? Do you allow for repetition of vertices in hyperedges? Can you not then encode a singleton hyperedge of the form (v) with (v,v)?

Q2. Regarding your result that k-folklore and (k+1)-oblivious WL are equally expressive: I understand that this is a proper generalization of the result obtained by Grohe & Otto for graphs. But your proof seems considerably simpler. In fact, Grohe & Otto require heavy machinery based on linear algebra and linear programming to obtain this result. Please compare your result against theirs and explain how you managed to obtain the results without extending their machinery.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal:

> As mentioned before, I feel lukewarm about this paper. While it is a solid piece of work and may interest some researchers in the community, it lacks truly innovative contributions. Perhaps its main strength lies in the clear presentation of the extension of k-WL from graphs to hypergraphs, but I have doubts about whether this alone is sufficient to justify acceptance at ICML.

k-WL cannot be directly applied to hypergraphs. The two main challenges in generalizing k-WL to hypergraphs are: (1) how to initialize the features of k-tuples through sub-hypergraph extractions while ensuring degeneration to k-WL, and (2) how to construct k-tuple hypergraphs from the original hypergraphs so that 1-GWL can be applied. Our key findings include: (a) establishing a Generalized WL hierarchy for hypergraphs with increasing expressivity, with the notable difference between 1-GFWL vs 2-GOWL, unlike its graph counterpart; (b) the hierarchy allows us to design hypergraph neural networks with the desired expressivity, improving upon most existing hypergraph neural networks, whose expressive power is upper bounded by 1-GWL.

> Q1. About the removal of singleton hyperedges: Does it mean that you allow no vertex colors in hypergraphs? Do you allow for repetition of vertices in hyperedges? Cannot you then encode a singleton hyperedge of the form (v) with (v,v)?

Thank you for the comments. We allow vertex colors/features in hypergraphs, such as the example of vertex colors 1 and 2 in Figure 3's hard instances. In addition to removing singleton hyperedges, one could also consider adding the singleton hyperedge information as an extra vertex feature. However, the singleton hyperedges still need to be removed, ensuring that k-GWL can degenerate to k-WL for simple graphs, with consistent sub-hypergraphs and subgraphs. We treat hyperedges as sets and do not allow duplicate vertices in hyperedges. But we allow repetitions of vertices in k-tuples, which is discussed in our response to Reviewer eKNU.

> Q2. Regarding your result that k-folklore and (k+1)-oblivious WL are equally expressive: I understand that this is a proper generalization of the result obtained by Grohe & Otto for graphs. But your proof seems considerably simpler. In fact, Grohe & Otto requires heavy machinery based on linear algebra and linear programming to obtain this result. Please compare your result against them and explain how you managed to obtain the results without extending their machinery.

Thank you for the good question. You are right that Grohe & Otto (2015) used heavy algebraic and linear programming methods to establish the equal expressivity between k-folklore and (k+1)-oblivious WL. Later, Grohe in his LICS'21 paper "The Logic of Graph Neural Networks" provided a considerably simpler less-than-one-page proof based on mathematical induction on iterations and analysis of the two variants' computations. (See page 16 of https://arxiv.org/pdf/2104.14624, Proof of Theorem V.6 in Appendix.) We performed a careful adaptation of this simpler proof for hypergraphs, showing the equivalence of k-folklore and (k+1)-oblivious Generalized WL.
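As background for the Q2 discussion above, the base procedure that both WL variants generalize is iterative color refinement. The following is a generic textbook sketch of plain 1-WL on simple graphs (hypothetical helper names, not the paper's algorithm); the simpler proof mentioned in the rebuttal inducts on exactly such refinement iterations:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    # 1-WL: each vertex's color is refined by hashing its own color
    # together with the sorted multiset of its neighbors' colors.
    n = len(adj)
    colors = [0] * n  # uniform initial coloring (no vertex features)
    for _ in range(rounds):
        signatures = [
            (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in range(n)
        ]
        # Relabel signatures with small integers so colors stay canonical.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures)))}
        colors = [palette[sig] for sig in signatures]
    return Counter(colors)

# Classic limitation motivating higher-order tests: a 6-cycle and two
# disjoint triangles are both 2-regular, so 1-WL cannot tell them apart.
cycle6 = {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 0]}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_colors(cycle6) == wl_colors(two_triangles))  # prints True
```

The folklore and oblivious variants differ only in how the analogous refinement step aggregates over k-tuples, which is where the "all elements in one position" vs. "one element in all k positions" distinction arises.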
Summary: The paper defines a k-WL variant for hypergraphs, which did not exist so far. The test relies on a new structure that only relies on the nodes of the hypergraph and follows the idea of k-FWL or k-WL (the oblivious variant), which differ in the order of aggregation. They additionally come up with a GNN variant of it which effectively generalizes k-WL from Morris et al. 2019 (using the same tricks with sets instead of tuples and local aggregation, both making the algorithm theoretically weaker). For most levels, the difference between k-FWL and k-OWL for hypergraphs follows the same pattern as for graphs (with FWL being one level stronger), but for hypergraphs this does not hold on the first level, which is surprising.

Claims And Evidence: All the claims are backed with evidence, typically proofs.

Methods And Evaluation Criteria: Yes, proving theorems about theoretical results indeed makes a lot of sense. Since there are not many hypergraph datasets, testing might be a bit harder. It would be interesting to see how the method works on databases (which are inherently hypergraphs), but those are typically too big to be handled by even 2-WL due to the exponential blowup.

Theoretical Claims: I did not check the proofs, but the results are not surprising (except for the difference between 1-GFWL vs 2-GOWL, which they showed using an example). I would like to mention that the notation could be clearer and more explicit in certain situations, especially when it comes to tuples, multisets, and sets, how they interact, and how duplicates are handled.

Experimental Designs Or Analyses: Looks good (given that there are not many hypergraph datasets available and databases are too big).

Supplementary Material: Briefly looked over it; the proofs could be a bit more verbose in telling what the idea behind them is and how they relate to known results, but otherwise it looked fine.

Relation To Broader Scientific Literature: The paper very clearly states where it stands in terms of hypergraph GNN research and also clearly shows the research gap.

Essential References Not Discussed: None.

Other Strengths And Weaknesses: As mentioned before, I was struggling with lacking formality around sets, multisets, and tuples.

In section 3, please mention explicitly that hyperedges are unordered (which becomes relevant because the tuples of k-WL are ordered and k-WL somewhat expects relations to have an ordering as well). It would be nice if that difference was thoroughly discussed.

p5: it is only once mentioned indirectly how the hyperedges are included in the construction - namely only indirectly through the isomorphism type. This is not directly expected intuitively and could be mentioned explicitly. It took me quite a while to figure out that the whole construction really only depends on the vertex set and none of the introduced hyperedges depended on the actual hyperedges of the hypergraph we are working on. It would also be nice to mention that the runtime is independent of the number of hyperedges in the original hypergraph. The way it is written in sec 4.2 could be quite a bit clearer (as there is a set of hyperedges mentioned, but that does not seem to be the set of hyperedges of the original hypergraph, but rather of the constructed one).

p5: the examples are misleading as they do not consider repetitions, which are (for standard k-WL) allowed.

Generally, I would have liked a very early explicit mention of the 2-GOWL vs 1-GFWL case instead of just writing that this holds for k>=2, because at least I was thinking about what that should mean and what is happening in the abovementioned case (as it is different from graphs).

sec 5.2: to me this section is not clear at all; what is done about permutations, and what is the relation to individualization and refinement (where you also put labels on things, but in a way that does not change anything)? Furthermore, I would expect $\binom{n}{l}^l$ instead of the version without the exponent.

Other Comments Or Suggestions: Please note that the paper did NOT use the official template (but probably last year's template without the line numbers).

- p3 first equation and eq 1 both have an extra pair of unnecessary brackets. Also eq 1 is not nicely indented (please align below the = symbol); same for eq 4 and 5 (and probably any multiline formula in the paper).
- p3 the Babai 2016 citation is really unexpected here. It really has nothing to do with the whole paragraph (at least in my understanding).
- p3 it would be nice to describe the exact difference between FWL and OWL here as it is used later on. Or at least do so in the appendix.
- p3 end: please mention here that this compares tuples against sets and make explicit how duplicate entries in tuples are handled.
- p3 def 4.1: I believe that is still just the atomic type that is known in logic. Or is there any difference (if yes, please mention it here)?
- p4 def of N(s): this looks to be identical to the graph setting, so it would be nice to state that this part is not new but rather standard.
- p5: a good way to describe the difference between OWL and FWL is "all elements in one position" as opposed to "one element in all k positions".
- p3 isomorphiosm -> isomorphism
- p4 usng -> using
- p6 typography in eq 6 and 7: please use \text{FWL} etc. instead of just writing $GOWL$. This should also avoid the - sign having that amount of space (in doubt, move it into the \text{} part).

Questions For Authors: My main questions are in the textbox about Other Strengths And Weaknesses. I am especially interested in the difference between sets, multisets, and tuples and would like the paper to be a bit more formal in that regard. Otherwise I really like the results (but the presentation could still be improved).

sec 7: is it common to exclude original vertex features? And how do results change if you add them?

Expressivity: injectivity and the resulting equality in expressivity should only hold when it is about tuples and not when it is sets. Otherwise, please write more about that difference here.

How small is the average hyperedge degree in Wri-Genre and why is that a problem? Is it really that different from the other datasets?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal:

> As mentioned before, I was struggling with lacking formality around sets, multisets, and tuples. Misleading examples. Injectivity and resulting equality in expressivity should only hold when it's about tuples.

Thank you for the valuable comment. We should formally define tuples, multisets, and sets in k-GWL and will include the following definitions at the beginning of Section 4.1.

"Tuples are ordered and allow repetitive elements; multisets are unordered and allow repetitive elements; and sets are unordered and do not allow repetitive elements. In k-GWL, we consider k-tuples, which allow repetitions of vertices. There are two types of hypergraphs: the input hypergraph and the k-tuple hypergraph (see its construction in Section 4.1.2). Hyperedges in both types of hypergraphs are sets, that is, unordered."

We will add the following discussion on how to handle repetitions of vertices in k-tuples in Section 4.

"Specifically, both k-GWL and k-WL allow repetitions of vertices in k-tuples. In the initialization of k-tuple features, they extract sub-hypergraphs and subgraphs induced by the *set* of vertices in k-tuples and then use the isomorphism type as features, where the set operation before extraction removes vertex repetitions. For the construction of k-tuple hypergraphs and k-tuple graphs, because tuples allow repetitions, the vertex replacement strategies in the oblivious and folklore variants ("all elements in one position" vs. "one element in all k positions") still work with no issues."

For better illustration of the construction of k-tuple hypergraphs, we will plot a new version of Figure 3 based on k-tuples, where repetitions of vertices are considered. For example, the folklore hypergraph now includes five hyperedges instead of only two. The hyperedge with $v_1$ in all three positions of $(v_1,v_2,v_3)$ includes $(v_1,v_2,v_3)$, $(v_1,v_1,v_3)$, and $(v_1,v_2,v_1)$. The oblivious hypergraph still has three hyperedges, but $e_1$ with all vertices in the first position of $(v_1,v_2,v_3)$ includes two extra k-tuples $(v_2,v_2,v_3)$ and $(v_3,v_2,v_3)$.

Last but not least, we will explicitly state that all our theoretical findings on the k-GWL hierarchy and the equivalence of k-HNNs and k-GWL (theorems in Sections 5 and 6) are based on using k-tuples. The only place where we switch from k-tuples to k-sets is in the implementation of k-HNNs. Ignoring the vertex ordering and repetitions of vertices in k-tuples makes k-HNNs considerably more practical.

> p5: it is only once mentioned indirectly how the hyperedges are included in the construction...

Thanks for bringing this issue to our attention. In Section 4, we will emphasize that the construction of the k-tuple hypergraph in k-GWL does not depend on the set of hyperedges in the original hypergraph, which is only used in the isomorphism type. In Section 6, we will point out that, unlike k-GWL, for k-HNNs based on local neighborhoods, the actual hyperedges of the input hypergraph do influence the construction of the k-tuple hypergraph.

> sec 5.2: what is done about permutations...

Thank you for pointing out the issue. The original idea of relational pooling (which inspires $(k,l)$-WL) is to build expressive permutation-invariant models, where permutation invariance is obtained by averaging or summing over representations under all permutations of node IDs. Later on, local relational pooling performs permutation and averaging within a subgraph of size $l$ to reduce computational complexity. Similarly, $(k,l)$-WL assigns extra labels $1,2,\cdots,l$ to $l$ vertices, but runs k-WL on the whole graph with the additional labels only for those $l$ vertices. This is repeated for every possible labeled graph, and then the final representation of the graph is aggregated from the representations of all labeled graphs. The extra vertex labels help boost expressivity. One can apply the same idea in k-GWL to get $(k,l)$-GWL. In the computational complexity, the term for the labelling should be $\binom{n}{l}^l$ (that is, select $l$ vertices from $n$ vertices and then assign $l$ labels to the $l$ vertices).

> Other Comments Or Suggestions

Thank you for your detailed comments. We sincerely appreciate your efforts in helping enhance the presentation of this manuscript and will apply these comments.

> sec 7: is it common to exclude original vertex features? And how do results change if you add them?

Keeping the original vertex features makes it easier to distinguish non-isomorphic hypergraphs, due to the potentially very different vertex features. Hence, it is a common practice to exclude these features and focus on structural information only. We used the datasets from (Feng et al., 2024), which performed the pre-processing. Unfortunately, they have not provided the original vertex features, but we expect the accuracy would be higher with the original vertex features added.

> Analysis of underperformance in Wri-Genre

Please refer to the first response to Reviewer H4JM.
Summary: The paper introduces the k-dimensional Generalized Weisfeiler-Leman (k-GWL) algorithm, an extension of the classical Weisfeiler-Leman (WL) test to hypergraphs. The primary contribution is the formulation of k-GWL, which generalizes k-WL from graphs to hypergraphs, providing a unified theoretical framework for hypergraph isomorphism testing. Building on k-GWL, the authors introduce k-HNN (k-dimensional Hypergraph Neural Networks), which leverage k-GWL's structure to enhance hypergraph representation learning. Empirical evaluations on six real-world hypergraph classification datasets demonstrate that k-HNN achieves state-of-the-art (SOTA) performance. Claims And Evidence: Claims Supported: - The expressivity hierarchy of k-GWL is rigorously proven (Theorems 5.1–5.4). - The reduction of k-GWL to k-WL for graphs is well-supported by theoretical results. - The claim that k-HNN outperforms existing hypergraph neural networks is empirically validated on diverse datasets. Methods And Evaluation Criteria: - The chosen task of hypergraph classification aligns well with the proposed method. - The benchmark datasets (IMDB, Steam-Player, Twitter-Friend) cover diverse domains, ensuring broad applicability. Theoretical Claims: No, I did not check the proofs, but the overall statements make sense. Experimental Designs Or Analyses: Pro: - Strong empirical performance across diverse datasets, confirming the practical benefits of k-HNN. - Ablation studies (removal of singleton hyperedges) provide useful insights. Con: - No training time comparisons—it is unclear how much additional overhead k-HNN introduces compared to existing models. - The underperformance on IMDB-Wri-Genre suggests that certain hypergraph structures may not benefit from k-GWL, requiring further investigation. Supplementary Material: No, I did not review supplementary. 
Relation To Broader Scientific Literature: - The paper builds on the Weisfeiler-Leman hierarchy and extends prior work on hypergraph neural networks. - It contributes to hypergraph isomorphism testing by formalizing k-GWL as a higher-order method.

Essential References Not Discussed: It seems like [1] also discusses the expressivity of HNNs, but it is not cited by this paper. [1] Luo, Zhezheng, et al. "On the expressiveness and generalization of hypergraph neural networks." arXiv preprint arXiv:2303.05490 (2023).

Other Strengths And Weaknesses: Pro: - Establishes a clear theoretical expressivity hierarchy for hypergraphs. - Provides a unified framework that extends existing graph and hypergraph isomorphism methods. Con: - Computational overhead remains a concern, especially for large k. - The method is only tested on classification tasks, limiting its generalizability.

Other Comments Or Suggestions: - Comparison to Approximate WL Methods: How does k-GWL compare to randomized WL tests? - Training Time Comparison: Can you provide training time vs. existing HNNs? How much overhead does k-GWL introduce? - Can you explain more about Definition 4.1? From my understanding, $s$ represents IDs of nodes; then what does $s^1_{i_1}=s^1_{i_2}$ mean?

Questions For Authors: Same as comments.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

>The underperformance on IMDB-Wri-Genre suggests that certain hypergraph structures may not benefit from k-GWL, requiring further investigation.

Thank you very much for your comment. Following the suggestion of Reviewer yrE6, we have run experiments on k-HNNs for $k=3$ based on the vertex sampling strategy with sample size 10 in a GPU cluster with larger memory. As shown in the table below, 3-HNNs with the sampling show comparable performance to 2-HNNs across the datasets, while outperforming the previously best model AllSetTransformer on the Wri-Genre dataset. We will run the full 3-HNNs without the sampling (when there is no rebuttal time limit), expect higher accuracy, and will include the results in the paper. We examined the dataset statistics in Table 5 of the Appendix: there are six classes in Wri-Genre, which is significantly more than in other datasets, making it a challenging dataset where every model seems to struggle. Moreover, in the two co-writer datasets (Wri-Genre and Wri-Type), there is a significantly smaller number of hyperedges, with only about four hyperedges on average per hypergraph, and each vertex participates in only 1.5 hyperedges on average. We suspected that the smaller amount of high-order information in the original hypergraph leads to a more dispersed construction of local neighbors between k-sets, which is not friendly to k-HNNs. However, the results of 3-HNNs with the sampling do not seem to support this hypothesis.
| Method | IMDB_Dir_Form | IMDB_Dir_Genre | IMDB_Wri_Form | IMDB_Wri_Genre | Steam_Player | Twitter_Friend |
|----------------------|------------------|------------------|-----------------|-----------------|-----------------|-----------------|
| AllSetTransformer | 66.76±2.55 | 79.31±0.94 | 52.67±4.77 | 54.09±2.41 | 61.87±1.87 | 64.03±1.34 |
| 2-OHNN | 67.25±2.35 | 79.75±1.14 | 55.35±3.74 | 50.08±1.89 | 65.97±1.37 | 64.12±1.30 |
| 2-FHNN | 68.11±2.46 | 78.52±1.07 | 55.36±3.30 | 45.44±1.24 | 67.53±1.23 | 62.75±2.18 |
| 3-OHNN (Sample Size 10) | 67.66±2.59 | 79.11±1.11 | 52.75±4.12 | 57.50±3.55 | 63.00±2.01 | 62.97±2.04 |
| 3-FHNN (Sample Size 10) | 67.07±2.33 | 78.90±1.1 | 50.56±2.69 | 53.84±2.05 | 61.50±1.59 | 63.05±3.67 |

>It seems like [1] also discuss the expressivity of HNNs, but it is not cited by this paper.

Thank you for sharing the paper. We will ensure the following discussion is included in the final version. "(Luo et al., 2022) studied the expressivity of hypergraph neural networks and constructed hierarchies of arity (the maximum number of vertices in hyperedges) and depth. For example, when the depth is larger than a certain value, a neural logic machine with a larger arity is more expressive. In contrast, this paper generalizes k-WL to k-GWL, unifies graph and hypergraph isomorphism tests, and establishes a clear expressivity hierarchy for hypergraphs."

>Comparison to Approximate WL Methods: How does k-GWL compare to randomized WL tests.

We could not find papers introducing randomized WL tests after a Google search, but we will do our best to discuss them based on our understanding. Our models with vertex sampling become an approximation of the original k-GWL, effectively saving run-time at the expense of a slight decrease in prediction accuracy. Randomized/approximate WL seems orthogonal to our k-GWL and could be combined with it for better computational efficiency.
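To make the trade-off concrete, here is a minimal sketch (our own illustration, not the paper's implementation) of the general idea of approximating WL-style refinement by sampling: a 1-WL color refinement in which each vertex aggregates the colors of at most `sample_size` sampled neighbors. All function and parameter names are hypothetical.

```python
import random
from collections import Counter

def wl_refine_sampled(adj, num_iters=3, sample_size=10, seed=0):
    """1-WL color refinement where each vertex aggregates colors from at most
    `sample_size` sampled neighbors per iteration (exact when degrees are small)."""
    rng = random.Random(seed)
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(num_iters):
        signatures = {}
        for v, nbrs in adj.items():
            sampled = nbrs if len(nbrs) <= sample_size else rng.sample(nbrs, sample_size)
            # signature = own color + multiset of (sampled) neighbor colors
            signatures[v] = (colors[v],
                             tuple(sorted(Counter(colors[u] for u in sampled).items())))
        # compress signatures back to small integer colors
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return colors

# On a path 0-1-2-3, refinement separates endpoints from interior vertices.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(wl_refine_sampled(path))
```

When `sample_size` exceeds every degree, this reduces to exact 1-WL; smaller sample sizes trade accuracy for run-time, in the spirit of the vertex sampling strategy discussed above.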
>No training time comparisons—it is unclear how much additional overhead k-HNN introduces compared to existing models. Training Time Comparison: Can you provide training time vs. existing HNNs? How much overhead does k-GWL introduce?

Sorry for not making the training-time results and their comparison easier to find. In Table 8 of Appendix E.5, we report the average time per epoch for different hypergraph learning models. The run-times of our k-HNN methods are mostly less than double those of the other compared methods. The vertex sampling approach can effectively reduce run-time closer to that of other methods. For both GNNs and HNNs, how to further improve the trade-off between expressive power and run-time is still an open problem.

>Can you explain more on the Definition 4.1? From my understanding, $s$ represents ID of nodes, then what does $s^1_{i_1} = s^1_{i_2}$ mean?

Sorry for the confusion. We use $s$ to represent a k-tuple and $s^1$ to represent a k-tuple in $HG_1$. The $i^{th}$ element of $s^1$ is denoted $s_i^1$. The condition $\forall i_1, i_2 \in [k],\ s^1_{i_1} = s^1_{i_2} \leftrightarrow s^2_{i_1} = s^2_{i_2}$ means that the $i_1^{th}$ and $i_2^{th}$ elements of $s^1$ are identical if and only if the $i_1^{th}$ and $i_2^{th}$ elements of $s^2$ are identical.
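The condition above says that two k-tuples must share the same *equality pattern*, i.e. the same positions coincide in both. A minimal sketch of checking this (function names are ours, for illustration only):

```python
def equality_pattern(s):
    """Map a k-tuple to a canonical pattern of which positions coincide,
    e.g. (v1, v1, v2) and (v7, v7, v9) both give (0, 0, 1)."""
    first_seen = {}
    # setdefault assigns a fresh index the first time each element appears
    return tuple(first_seen.setdefault(x, len(first_seen)) for x in s)

def same_equality_pattern(s1, s2):
    """Check: s1[i1] == s1[i2]  if and only if  s2[i1] == s2[i2], for all i1, i2."""
    return equality_pattern(s1) == equality_pattern(s2)

print(same_equality_pattern((1, 1, 2), (7, 7, 9)))  # same pattern (0, 0, 1)
print(same_equality_pattern((1, 1, 2), (7, 8, 9)))  # patterns differ
```

Here (1, 1, 2) and (7, 7, 9) satisfy the condition because positions 1 and 2 coincide in both, while (1, 1, 2) and (7, 8, 9) do not.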
Summary: This work generalizes the high-order Weisfeiler-Leman test and high-order GNNs from graphs to hypergraphs. Furthermore, it builds an expressivity hierarchy among different orders of the WL test on hypergraphs. Using it instead of hyperGNNs corresponding to 1-WL leads to performance increases on real-world datasets.

Claims And Evidence: Yes. The proposed k-WL on hypergraphs is meaningful for improving hypergraph networks. The proposed method achieves provably high expressivity and better performance on real-world hypergraph datasets.

Methods And Evaluation Criteria: Yes, the tasks and baselines are representative for the hypergraph domain and can show the performance of hyperGNNs clearly.

Theoretical Claims: Yes. I checked the proofs of the main Theorems 5.2 and 5.3. The language and proof sketches are commonly used in expressivity analysis. The proofs are correct.

Experimental Designs Or Analyses: 1. The number of parameters is not compared between the baselines and the proposed models, which may lead to unfair comparison. 2. Models corresponding to k-WL with $k>2$ should also be included.

Supplementary Material: Yes, I read the proofs.

Relation To Broader Scientific Literature: It is related to the high-order WL test and high-order GNNs commonly used on graphs. This work adapts them to hypergraphs. Some details, like the initialization of k-WL on hypergraphs, are meaningful.

Essential References Not Discussed: The related work is detailed and exhaustive, including representative expressive GNNs and representative HNNs.

Other Strengths And Weaknesses: The adaptation of k-WL to hypergraphs is too straightforward and lacks novelty. The increased complexity is also a weakness.

Other Comments Or Suggestions: Text in figures is small and hard to read. Table 3 is too wide.

Questions For Authors: A hypergraph can be bijectively mapped to a bipartite graph, with the nodes and hyperedges of the hypergraph as nodes of the graph. Can we directly apply k-WL to this bipartite graph?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

>The number of parameters is not compared between baselines and the proposed models, which may lead to unfair comparison.

Thank you for the valuable comment. We have collected the number of parameters in all tested models and report them in the table below. It can be observed that our k-HNN models use more than three times the parameters of several models and double the parameters of HNHN, but notably fewer parameters than AllSetTransformer. However, our proposed models and AllSetTransformer clearly outperform the other baselines, as shown in Table 1. For both graph and hypergraph deep learning, how to address the trade-off between expressive power and model complexity is still an open problem. In addition, we report the average time per epoch for all the models in Table 8 of Appendix E.5. The run-times of our methods are mostly less than double those of the compared methods, while the vertex sampling approach can effectively reduce the run-time closer to that of other methods.

| Models | The number of parameters |
|--------------|--------------------------|
| MLP | 20102 |
| HyperGCN | 87942 |
| HNHN | 187014 |
| UniGCNII | 52230 |
| AllSetTransformer | 390790 |
| ED-HNN | 103558 |
| HIC | 54406 |
| k-HNNs | 365382 |

Note: k-HNNs have the same number of parameters for different values of k since they share the same neural network architecture.

>Models corresponding to k-WL with k>2 should also be included.

In our experiments, we had access only to a normal GPU server with a 1080Ti GPU with 11GB memory. Due to limited GPU memory, we considered only the case of $k = 2$. However, we recently obtained access to a GPU cluster with 40GB memory and have run experiments using our k-HNNs for $k=3$ based on the vertex sampling strategy with sample size 10.
As shown in the table in our response to Reviewer H4JM, 3-HNNs with the sampling show comparable performance to 2-HNNs across the datasets, while outperforming the previously best model AllSetTransformer on the IMDB-Wri-Genre dataset. We will run the full 3-HNNs without the sampling (when there is no rebuttal time limit), expect higher accuracy, and will include the results in the paper.

>The adaption of k-WL to hypergraph is too straightforward and lacks novelty.

k-WL cannot be directly applied to hypergraphs. The two main challenges in generalizing k-WL to hypergraphs are: (1) how to initialize the features of k-tuples through sub-hypergraph extraction while ensuring degeneration to k-WL, and (2) how to construct k-tuple hypergraphs from the original hypergraphs so that 1-GWL can be applied. Our key findings include: (a) establishing a Generalized WL hierarchy for hypergraphs with increasing expressivity, with a notable difference between 1-GFWL and 2-GOWL, unlike its graph counterpart; (b) the hierarchy allows us to design hypergraph neural networks with the desired expressivity, improving upon most existing hypergraph neural networks, whose expressive power is upper bounded by 1-GWL.

>Text in figures are small and hard to read. Table 3 is too wide.

Thank you for the comment to help improve the presentation of the paper. We will adjust the figures and table accordingly.

>Hypergraph can be bijectively mapping to a bipartite graph, with nodes and hyperedges in hypergraph as nodes in graph. Can we directly apply k-WL to this bipartite graph?

Thank you for the good question. One could transform hypergraphs to graphs via a bijective mapping, such as star expansion or line expansion [a], and then apply k-WL on the transformed graphs for isomorphism testing. However, even though the transformation is bijective, it does not guarantee that the results of k-WL on the transformed graphs are the same as those of k-GWL on the original hypergraphs.
For instance, we can find two non-isomorphic hypergraphs that k-GWL can distinguish but whose transformed graphs k-WL cannot. Figure 3(a) provides such an example: while 1-WL (equivalent to 2-OWL) fails to distinguish the transformed graphs, 2-GOWL successfully distinguishes the original hypergraphs. Therefore, this indirect method does not achieve the same expressive power as our k-GWL algorithm. We will include this observation in the final version.

Reference: [a] Chaoqi Yang, Ruijie Wang, Shuochao Yao, and Tarek F. Abdelzaher. "Semi-supervised hypergraph node classification on hypergraph line expansion." In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pp. 2352-2361. 2022.
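For readers unfamiliar with star expansion, the bijective hypergraph-to-bipartite-graph mapping discussed in this exchange can be sketched as follows. This is a standard construction; the function name and node encoding are our own.

```python
def star_expansion(num_vertices, hyperedges):
    """Star expansion: one graph node per hypergraph vertex ('v', i) plus one
    per hyperedge ('e', j); connect each hyperedge node to its member vertices.
    Returns an adjacency dict of the resulting bipartite graph."""
    adj = {('v', i): set() for i in range(num_vertices)}
    for j, e in enumerate(hyperedges):
        adj[('e', j)] = set()
        for v in e:
            adj[('e', j)].add(('v', v))
            adj[('v', v)].add(('e', j))
    return adj

# A hypergraph with vertices {0, 1, 2} and hyperedges {0, 1} and {0, 1, 2}
# becomes a bipartite graph with 3 + 2 nodes.
graph = star_expansion(3, [{0, 1}, {0, 1, 2}])
print(graph[('e', 1)])  # hyperedge node connected to all three vertex nodes
```

As the rebuttal points out, even though this mapping is bijective, running k-WL on the expanded graph need not match k-GWL on the original hypergraph.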
DataDecide: How to Predict Best Pretraining Data with Small Experiments
Accept (poster)
Summary: This paper presents DataDos, a suite of experiments examining comparisons of 25 variously processed corpora across scales, pretraining models of up to 1B parameters on up to 100B tokens. It finds that 150M models trained with < 2% of the compute of 1B targets correctly predict 80% of comparisons; spending compute on smaller, more completely trained experiments for ranking is cost-effective for overall decision accuracy, but fitting scaling law predictions can provide additional information for extrapolating to larger scales or extreme quality thresholds. Detailed per-task/metric analyses reveal different levels of prediction difficulty across tasks.

Claims And Evidence: Claims about how to predict the best pretraining data across scales are supported by the results of comprehensive experiments.

Methods And Evaluation Criteria: The methods and evaluation metrics for pretraining models on different corpora are reasonable and comprehensive.

Theoretical Claims: This paper does not make theoretical claims and focuses mostly on the empirical part.

Experimental Designs Or Analyses: The experiment and analysis design is overall sound.

Supplementary Material: Yes. The code looks reasonable to me.

Relation To Broader Scientific Literature: Scaling laws have shown promise in guiding the optimal usage of computation. This paper examines data comparison laws across scales, which can further provide guidance and insights for the future development of pretraining data curation, especially for extrapolating beyond small-scale experiments.

Essential References Not Discussed: No.

Other Strengths And Weaknesses: The strengths of this paper lie in its comprehensive experiments and detailed analyses of predicting pretraining data differences over scale. The main weakness, from my perspective, is the clarity and organization of the paper's writing.
Other Comments Or Suggestions: I would suggest the authors run experiments on a target scale of 7B, because this is likely to be the smallest mainstream LLM scale, but the feasibility definitely depends on the computational resources available to the authors, because pretraining is always costly. The writing can also be polished to make this work more impactful. For example: - Better arrangement of the order of subsections in Section 3, preferably aligned with Section 2. - Polishing all figures to make them more readable.

Questions For Authors: Mainly about clarity issues. 1. In the discussion of Figure 3, "To identify a generally best metric per task we find the metric which achieves the highest decision accuracy in the most compute budgets." What do you mean by that? I suppose the decision accuracy is only measured at the target budget. Also, "average decision accuracy for a metric over its best outcomes" — what do you mean by "best outcomes"? 2. You mention "150 extrapolated comparisons" in 3.3. Why 150? I cannot interpret this number from Figure 5.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed clarifying questions and are happy they found our work has “comprehensive experiments and detailed analyses.” **Expanded results** We appreciate the reviewer’s encouragement to polish our paper. Since submission we have fully revised our paper and extended our suite with more small runs and examined additional benchmarks. Updated figures, which we hope are more readable, can be viewed [here](https://anonymous.4open.science/api/repo/tmp-pdf-4938/file/ICML_responce-2.pdf). | | Initial Submission | Current Revision | |---|---|---| | Total Models | 375 models | 1,050 models | | Model Scales | 5 (from 150M to 1B) | 14 (from 4M to 1B) | | Random Seeds | 3 (<1B: up to 10% of target compute; 1B: 100%) | 3 (<1B: up to 25% of target compute; 1B: 100%) | | Evaluation Benchmarks | OLMES | OLMES + Code (MBPP, HumanEval) + Math (GSM8K, Minerva) | | Scaling Law Methods | 1 variant with 3 windows of model sizes used for compute | 8 variants, with 23 windows of model sizes down to 4 orders of magnitude less than target compute | We find that **the conclusions to our research questions hold over these new results**: 1) Compute trades off with decision accuracy and different tasks get better signal for decisions. 2) 8 scaling law methods fail to outperform single-scale ranking; future work can use DataDos to iterate on these. 3) Continuous metrics over raw likelihood are always better at small compute; differences due to which length normalization is used are smaller. 4) Low run-to-run variance and large spread between recipes explain success; we can also use proxy metrics to get beyond noise floor for tasks like code benchmarks. **Clarity and Organization** Following the reviewer’s suggestion, we will move the discussion of scaling laws ahead of proxy metrics in section 3 (results) to better align with the order of topics in section 2 (methods). 
Also regarding the reviewer’s question 1: In the third figure in our submission, we identified the best metrics based on which metric achieves the highest decision accuracy over all of the small compute sizes used for prediction. Indeed, the *target* budget remains the same (1B 5xC), but we consider which metric gets the best decision accuracy for the most small *experimental* budgets. Regarding the “best outcomes”: this quantity would be better defined as the average decision accuracy over tasks and scales when using the best metric for each task. In our revised figure (#4), we instead use a new, more direct figure to visualize the relationship of compute to decision accuracy for each type of metric with a specific length normalization. Regarding question 2: 150 is the number of combinations visualized in the former Figure 5, since the number of rows is halved to save space. Thanks to the reviewer’s feedback, we’ll be sure to make this clearer.
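The decision accuracy discussed throughout these reviews — the fraction of pairwise data-recipe comparisons whose winner at a small scale matches the winner at the target scale — can be sketched as follows. This is our own illustrative sketch, not the authors' code; the recipe names and scores are hypothetical.

```python
from itertools import combinations

def decision_accuracy(small_scores, target_scores):
    """Fraction of recipe pairs whose winner at the small scale matches the
    winner at the target scale (ties at either scale count as misses)."""
    correct = total = 0
    for a, b in combinations(list(target_scores), 2):
        total += 1
        if small_scores[a] == small_scores[b] or target_scores[a] == target_scores[b]:
            continue  # a tie gives no usable decision
        small_winner = a if small_scores[a] > small_scores[b] else b
        target_winner = a if target_scores[a] > target_scores[b] else b
        if small_winner == target_winner:
            correct += 1
    return correct / total

# Hypothetical benchmark scores for three data recipes at two scales.
small = {'A': 0.30, 'B': 0.20, 'C': 0.25}
target = {'A': 0.60, 'B': 0.55, 'C': 0.50}
print(decision_accuracy(small, target))  # the (B, C) pair flips between scales
```

In this toy example the small scale ranks C above B while the target scale ranks B above C, so 2 of the 3 pairwise decisions are correct.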
Summary: The paper "Data Differences over Scale (DataDos) Suite: How to Predict Best Pretraining Data with Small Experiments" presents an empirical study on the predictability of pretraining data decisions at large scales using small-scale experiments. The authors conduct controlled pretraining experiments across 25 corpora, training models up to 1B parameters and 100B tokens, and introduce the DataDos Suite, which provides open-source models, data, and evaluations. The study finds that single-scale experiments, rather than multi-scale extrapolations, are the most efficient for predicting the best pretraining data, achieving 80% accuracy with only 2% of the compute budget of the target scale. The research also highlights the predictability of certain benchmarks, such as MMLU and ARC, using small-scale experiments, while others, like CSQA, require more compute. The paper positions DataDos as a resource for researchers to explore data selection and scaling trends.

Claims And Evidence: Main claims: - C1: Single-scale experiments are more efficient for performance prediction than multiple scales. Figure 2 clearly shows that single-scale ranking provides better decision accuracy than multi-scale extrapolation across various compute budgets and tasks. I consider the evidence for the claim sufficient. - C2: No single metric gives better signal consistently. I consider the evidence (Sec 3.2) sufficient.

Methods And Evaluation Criteria: **Benchmarks** The OLMES suite of 10 multiple-choice QA benchmarks is used. It contains common and standard benchmarks for LLM evaluation. **Metrics** The study uses both performance prediction errors and decision accuracy to reflect the practical consequences of pretraining data prediction. **Methods** The study effectively isolates the impact of scaling decisions by holding model architecture constant while varying training data and compute.
It presents large-scale experiments consisting of a suite of 375 models covering 25 data recipes, 5 model scales, and 3 random seeds for initialization and data order. I also find using 1B as the target model sufficient for generalization purposes given the compute budget.

Theoretical Claims: None

Experimental Designs Or Analyses: 1. The experimental design appears to be robust to me, covering 25 data recipes, 5 model scales, and 3 random seed variations to control for initialization effects (target model only). 2. Models are trained using fixed architectures while varying datasets and compute budgets to isolate scaling relations and ensure a fair comparison. 3. Insightful findings and analyses include: - mixed results for the choice of proxy metrics; - the limitation of the single ranking method when predicting performance across orders of magnitude; - mixed results for benchmark signals. I think the paper would benefit from more discussion of the roots of these mixed results. For example, what drives the discrepancy among benchmarks as signals for large-model performance? Why are some benchmarks saturated while others are not? Does it mean the single ranking method predicts well simply because of the saturation of the benchmark (too easy?)?

Supplementary Material: I reviewed some additional results.

Relation To Broader Scientific Literature: The study builds on prior work in language model scaling laws and data selection, referencing key papers such as Kaplan et al. (2020) and Hoffmann et al. (2022). It differentiates itself by focusing specifically on data selection rather than general model scaling, bridging a gap in the literature.

Essential References Not Discussed: Unknown.

Other Strengths And Weaknesses: **Strengths**: - Open-source contribution of models, datasets, and evaluations. **Weaknesses**: - Limited in-depth analysis of the causes of scaling law failure and single-ranking success. - Nitpick: lack of sensitivity analysis for hyperparameter choices.
Other Comments Or Suggestions: None Questions For Authors: - When performance is not reported per task, do you report the average of OLMES? - Does the conclusion hold when using the benchmark as a whole, by group? This tests the sensitivity of the conclusion to benchmark choices. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their clear and helpful feedback and supporting that “using 1B as the target model is sufficient for generalization purpose given the compute budget.” **Expanded results** Since our submission we have enriched our analysis and extended our suite with more small runs and examined additional benchmarks. Updated figures can be viewed [here](https://anonymous.4open.science/api/repo/tmp-pdf-4938/file/ICML_responce-2.pdf). | | Initial Submission | Current Revision | |---|---|---| | Total Models | 375 models | 1,050 models | | Model Scales | 5 (from 150M to 1B) | 14 (from 4M to 1B) | | Random Seeds | 3 (<1B: up to 10% of target compute; 1B: 100%) | 3 (<1B: up to 25% of target compute; 1B: 100%) | | Evaluation Benchmarks | OLMES | OLMES + Code (MBPP, HumanEval) + Math (GSM8K, Minerva) | | Scaling Law Methods | 1 variant with 3 windows of model sizes used for compute | 8 variants, with 23 windows of model sizes down to 4 orders of magnitude less than target compute | We find that **the conclusions to our research questions hold over these new results**: 1) Compute trades off with decision accuracy and different tasks get better signal for decisions. 2) 8 scaling law methods fail to outperform single-scale ranking; future work can use DataDos to iterate on these. 3) Continuous metrics over raw likelihood are always better at small compute; differences due to which length normalization is used are smaller. 4) Low run-to-run variance and large spread between recipes explain success; we can also use proxy metrics to get beyond noise floor for tasks like code benchmarks. **Explanations of better decisions** We appreciate the reviewer’s encouragement to provide further discussion of the “roots” of differing decision accuracy. 
In our revised figure (#5) we provide an analysis of how decision accuracy of different tasks and metrics can be explained in part by low run-to-run variance and a wide spread of performance values for different data. Using the correct probability proxy metric sees wider spreads for many tasks, though some instead see reduced noise. The reviewer’s questions about saturation in particular are interesting, as this could either lead to easy decisions (if some recipes saturate at lower performance than others) or to noisy decisions (if all recipes saturate around a similar value by the target scale). However, in our suite we do not observe saturation on any of the benchmarks we examine. Instead we see near trivial performance on Boolq, which leads to it having low decision accuracy as its target rankings are mostly determined by noise. Thus we do not recommend it for use on predictions over the scales we examine. Likewise we appreciate the reviewer’s encouragement to provide further analysis of scaling laws vs. single ranking. Our smaller results let us consider a wider range of compute budgets for scaling law fits, and we elevate our 8 additional scaling law variants (previous appendix line 770) to the body of our paper (see new figure #3). Our finding that these baseline scaling laws do not outperform single scale ranking holds over the breadth of sizes and variants. The latest results show that the single ranking approach is strong at even smaller compute budgets than we previously found, and we believe future work is required to ascertain the specific ways in which scaling laws fail despite getting low prediction error. **Aggregate benchmark analysis** Regarding the reviewer’s question: when figures do not name a specific task for downstream performance, the quantity is the macro average of OLMES (line 202). 
We find that our claims hold when aggregating: intermediate checkpoints decide as well as compute-equivalent final models, single-scale experiments outperform scaling laws, and continuous metrics outperform discrete ones at small scales. As we examine in the new (and former) figure (#2), the slope and range of the positive relationship between compute and decision accuracy depend a lot on which benchmark is used, but the other claims we make hold.
Summary: The paper introduces the DATADOS Suite, an extensive experimental framework designed to guide pretraining data decisions for large language models using small-scale experiments. By systematically exploring 25 data recipes (varying in sources, deduplication, filtering, and mixing) across 5 model scales with a fixed token-to-parameter ratio, and repeating experiments with 3 different random seeds (yielding 375 runs in total), the authors aim to predict which data recipes will yield the best downstream performance when scaled up to a 1B-parameter, 100B-token regime. It further introduces a set of continuous proxy metrics (e.g., normalized correct probability, margin) that improve decision accuracy over standard discrete metrics, reporting that even small models (150M parameters) trained with less than 2% of the target compute can correctly predict 80% of pairwise winners among data recipes. I believe the paper is well-motivated: fitting full scaling laws for data-quality decisions can be very computationally expensive, and benchmark scores are not meaningful if the model is trained on only a small amount of data and its capabilities have not yet emerged.

Claims And Evidence: I think the main claim and contribution of this paper is that small-scale pretraining experiments (e.g., using 150M-parameter models) can reliably predict which data recipes will perform best when scaled up to target levels (1B parameters, 100B tokens). They report an 80% decision accuracy in predicting the winner between data recipe pairs. The paper conducted extensive experiments covering 25 recipes and multiple scales, with detailed reporting of decision accuracy and proxy metric performance. However, while the reported 80% decision accuracy is promising, the reliance on a fixed token-to-parameter ratio and a narrow range of model sizes (150M to 1B) may limit the robustness of these conclusions when applied to other settings.
Methods And Evaluation Criteria: DATADOS is proposed to estimate dataset performance/quality using relatively small models and data while maintaining high accuracy. The authors also provide proxy metrics for different primary benchmarks. The paper is well motivated, and the results are very helpful as guidance or a technical report. However, the methodology is still holistic: one key issue is that if we have a new benchmark (e.g. shopping_mmlu) and a new dataset/recipe (e.g. nemotron-cc), we have to re-run all the experiments to get the estimate, but the estimate is not well explained and may be difficult to extrapolate (compared to a scaling law). For the proxy metrics, I agree the continuous metrics can be a much better indicator for small-scale experiments before the model's capabilities emerge. However, there is a lack of analysis of how robust these proxy metrics are across different data recipes and models: if we use other models/data recipes, will the proxies remain accurate? Otherwise, some theoretical bound/guarantee would also be helpful.

Theoretical Claims: The authors note that only 4.7% of scaling trends "crossover" at the current scales, but warn that as compute increases, more crossovers may occur. It would be great if the authors could provide more theoretical analysis/insights for these observations. The approach remains vulnerable to noise in small-scale experiments. A deeper discussion of how sensitive the extrapolations are to such noise, and of how uncertainty in the scaling law fits is handled, would be valuable.

Experimental Designs Or Analyses: The paper covers 25 data recipes across a fixed range of model sizes, and including multiple seeds offers an extensive dataset for analysis. The clear comparison between ranking and scaling law approaches provides actionable insights.
However, the experimental design is constrained to a single token-to-parameter ratio and a limited range of scales, which might not capture the full spectrum of scaling behavior (for example, suppose we compare two datasets, one of high quality but with limited tokens and another of low quality but with enough tokens; in this case we have to predict performance at a larger data scale). Moreover, the evaluation is restricted to OLMES multiple-choice tasks, raising questions about generalizability to other evaluation paradigms or domains. One key concern is that, while we believe the results are valuable as a summary, for new tasks/data people may be unsure how reliable the analysis results are. Supplementary Material: The authors provide the main codebase for their methods and evaluation. Relation To Broader Scientific Literature: The work is well-situated within the literature on scaling laws, data curation, and pretraining data selection, building on foundational studies as well as recent empirical efforts. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The experiments are extensive and the analysis is detailed. However, besides the empirical study, it would be great if the paper could further elaborate on 1) the reliability/theoretical support of the methods; 2) how well the methods/findings can be transferred to different model families and tasks; 3) if it is difficult to guarantee the former properties, it would be great to make the proposed methods an easy-to-use tool. Other Comments Or Suggestions: Please kindly refer to the previous sections. Questions For Authors: Please kindly refer to the previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful questions and are glad they found our “paper is well motivated and the results are very helpful.” The reviewer writes that “the main contribution of this paper is that small-scale pretraining experiments (e.g., using 150M–parameter models) can reliably predict which data recipes will perform best.” We highlight that our paper has two main contributions: 1) We release an extensible framework, easily updated with new prediction methods, benchmarks, and small models. 2) We provide practical recommendations based on empirical study of decision accuracy that we observe to be reliable over wide ranges of settings, such as model sizes (now from 4M to 1B parameters) and benchmarks (including knowledge QA, commonsense, and reasoning, as well as math and code now). **Expanded results** We have extended our suite with more small runs and evaluations. Updated figures can be viewed [here](https://anonymous.4open.science/api/repo/tmp-pdf-4938/file/ICML_responce-2.pdf). | | Initial Submission | Current Revision | |---|---|---| | Total Models | 375 models | 1,050 models | | Model Scales | 5 (from 150M to 1B) | 14 (from 4M to 1B) | | Random Seeds | 3 (<1B: up to 10% of target compute; 1B: 100%) | 3 (<1B: up to 25% of target compute; 1B: 100%) | | Evaluation Benchmarks | OLMES | OLMES + Code (MBPP, HumanEval) + Math (GSM8K, Minerva) | | Scaling Law Methods | 1 variant with 3 windows of model sizes used for compute | 8 variants, with 23 windows of model sizes down to 4 orders of magnitude less than target compute | We find that **the conclusions to our research questions hold over these new results**: 1) Compute trades off with decision accuracy and different tasks get better signal for decisions. 2) 8 scaling law methods fail to outperform single-scale ranking; future work can use DataDos to iterate on these. 
3) Continuous metrics over raw likelihood are always better at small compute; differences due to which length normalization is used are smaller. 4) Low run-to-run variance and large spread between recipes explain success; we can also use proxy metrics to get beyond noise floor for tasks like code benchmarks. **Ease of use and tooling** The reviewer encourages us to make DataDos easy to use, for instance adding a “new benchmark (e.g. shopping_mmlu).” In our public release of 1K models, 25 corpora, and code for pretraining, evaluation, prediction, and decision accuracy (line 104) we will include step by step documentation for adding new tasks, prediction methods, and small pretraining experiments. We will host all checkpoints as native models on Hugging Face. Running new tasks can be as simple as adding a new argument to the open source evaluation framework that we leverage. Our aim is that researchers with a range of compute can build on our suite: 1) by trying new prediction methods with lightweight manipulations of our results CSV such as smoothing or curve fitting, 2) adding new benchmarks over our released checkpoints for just the cost of inference, 3) or even pretraining additional small models for a fraction of the target model cost (e.g., the 9 additional model sizes we trained since submission cost only ~2% of our total compute). **Reliable and generalizable claims** The reviewer wrote that our experiments covered “a narrow range of models (150M to 1B)”, so we’ve significantly expanded our results to include 9 additional even smaller model sizes as small as 4M, or ~0.3% of the 1B compute. We continue to find consistent trends of compute to decision accuracy that support our claims for our 4 research questions (line 50). 
We agree that evaluation noise is a concern at small model scale, so our work is the first in this area to train multiple random seeds, even for the largest model sizes (line 152). We use the mean of these to make our decision targets more reliable. The reviewer asked how our claims generalize, so we added four new held-out benchmarks and found that our preferred continuous proxy metric also leads to better decision accuracy when target performance is above the noise floor (new Figure #6). **Focus on exploring data differences** Our study investigates dimensions of difference not yet studied extensively; we explore 25 different data recipes, an order of magnitude more than previous suites. We chose to spend our compute budget by overtraining (5x the Chinchilla multiplier) many models (14 different sizes) and evaluating many checkpoints per model to estimate what performance would look like with different token-parameter ratios. Our conclusions are very consistent across intermediate and final checkpoints, suggesting that different ratios might give similar results, but we leave that to future work to confirm.
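As a concrete illustration of the decision-accuracy quantity discussed throughout this exchange (the fraction of recipe pairs whose small-scale winner matches the target-scale winner), here is a minimal sketch. The dict-based data layout and function name are hypothetical stand-ins, not the suite's actual API.

```python
from itertools import combinations

def decision_accuracy(small_scores, target_scores):
    """Fraction of recipe pairs for which small-scale experiments pick the
    same winner as the target-scale runs. The layout (recipe name ->
    mean benchmark score) is a hypothetical stand-in for the suite's data."""
    pairs = list(combinations(small_scores, 2))
    correct = 0
    for a, b in pairs:
        small_winner = a if small_scores[a] > small_scores[b] else b
        target_winner = a if target_scores[a] > target_scores[b] else b
        correct += small_winner == target_winner
    return correct / len(pairs)
```

With 25 recipes there are 300 pairs, so the reported 80% decision accuracy corresponds to calling 240 of them correctly; averaging scores over seeds before comparing, as the rebuttal describes, makes the target winners more reliable.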
Fast and Provable Algorithms for Sparse PCA with Improved Sample Complexity
Accept (poster)
Summary: The paper proposes a two-stage algorithm to obtain the principal component of the single-spiked covariance model (sparse PCA). The first stage, called the thresholding-based algorithm, obtains a first estimate of the principal component and, most importantly, achieves reduced computational cost compared to competing algorithms such as diagonal thresholding. In particular, it reduces the number of data samples required for the computation of the principal component, assuming a restriction on the signal strength of the single-spiked covariance model. To further enhance this estimate, the second stage utilizes truncated power iteration to refine the solution, achieving the minimax optimal rate. UPDATE AFTER REBUTTAL Thanks for replying to all the questions. Most of them were properly addressed, and I appreciate the effort to clarify both theoretical and empirical aspects. However, I would prefer that the answers that mention future inclusion of additional experiments or material in the final version already present that content within the rebuttal. Promising to include something later is not entirely sufficient, as it leaves the possibility that the addition may not be implemented after acceptance. Including this material now would allow reviewers to properly assess its quality and relevance during the review process. In any case, after reading the rest of the comments and the corresponding responses, I would like to update my final score. Claims And Evidence: Overall, the paper mainly provides proper theoretical proofs and derivations that demonstrate the computational complexity of the algorithm and its theoretical error. Additionally, it evaluates the main theoretical claims with synthetic data; one experiment is missing regarding diagonal thresholding, and I also miss experiments regarding the two-stage refinement (see questions for authors).
Methods And Evaluation Criteria: Overall, the paper mainly provides proper theoretical proofs and derivations that demonstrate the computational complexity of the algorithm and its theoretical error. Additionally, it evaluates the main theoretical claims with synthetic data; one experiment is missing regarding diagonal thresholding, and I also miss experiments regarding the two-stage refinement (see questions for authors). Theoretical Claims: Theoretical proofs are included in the supplementary, which I have not reviewed. Experimental Designs Or Analyses: I have checked the experimental analysis. From my perspective, the experiments are properly aligned with the paper, but two main questions arise from two missing experiments: one regarding diagonal thresholding, and another regarding the two-stage refinement (see questions for authors). Supplementary Material: I have not reviewed the supplementary. Relation To Broader Scientific Literature: The key contributions of this paper build upon previous work in sparse PCA, particularly in improving sample complexity and computational efficiency. By introducing a novel thresholding algorithm for principal component initialization and truncated power iteration, the paper provides an efficient solution that achieves optimal statistical guarantees with significantly reduced sample complexity. Essential References Not Discussed: Main references are included. Other Strengths And Weaknesses: Overall, I think that the paper is good, but it requires addressing a few things to clearly be an ICML publication. Please address the two comments in “Other Strengths And Weaknesses” and the three questions in “Questions For Authors”. If they are addressed, I would modify my recommendation to accept: 1) I would rather start with an introductory text explaining the single-spiked covariance model, instead of just starting with the mathematical formulation; the citation (Johnstone, 2001) is also missing.
Moreover, in my view, this introductory text is missing the basic differences between PCA and sparse PCA / the single-spiked covariance model: we assume that the data can be explained with a unique principal component, the coefficients follow a standard normal distribution (rather than being "sampled"), $\epsilon_i$ gathers the inherent noise or the information not explained by the single component, an example or two of applications/problems where this model is useful, etc. 2) Sorry if I missed it in the paper, but I think it would be interesting either to include the proof that diagonal thresholding (Johnstone & Lu, 2009) requires $n = \Omega(k^2 \log p)$ or to include a discussion/counterexample illustrating that diagonal thresholding does not fulfill Theorem 3.2 (maybe the latter would be even more interesting, as the former is included in the original paper). This would showcase that your algorithm is effectively better than diagonal thresholding. It would be fine to prove it in the appendix and just reference it with one sentence. Other Comments Or Suggestions: 3) "reduce the parameter space of the model" --> include here the notation that you use 4) second stage incurs O(np) costs.. --> typo, two dots 5) 2.1. Preliminaries --> explain that you work with zero-mean observations 6) to estimate a single sparse vector v, to identify a sparse unit vector w --> explain better the differences between v and w 7) This approach is based on the observation that the expectation values of the diagonal entries of --> missing a reference to Eq. 2 8) Is the $\tilde{v}$ notation introduced in Equation (8)? 9) When you define the Error in Section 4, you are repeating the same equation as Eq. 10; just reference it. Moreover, this may be just a styling thing, but Error and F-score appear in another font and font size, which does not look very formal for a paper. Questions For Authors: Overall, I think that the paper is good, but it requires addressing a few things to clearly be an ICML publication.
Please address the two comments in “Other Strengths And Weaknesses” and the three questions in “Questions For Authors”. If they are addressed, I would modify my recommendation to accept: 10) You mention "operating under a specific condition on λ"; what is the implication of this restriction? It would be interesting to include this discussion in the paper, either theoretically or experimentally. 11) I understand that the two-stage algorithm theoretically guarantees the minimax optimal rate. However, to what extent is this refinement necessary? In other words, how far does your proposed thresholding algorithm refine the solution on its own? This is missing. For instance, include an experiment showing before and after the refinement using your Error metric. 12) I understood that Diagonal Thresholding (DT) is computationally slower than your proposed algorithm. However, in Figure 2b, we see that it is as fast as yours. Then, if we include the refinement using the truncated power method (Section 3.1.2), could we accomplish the same performance with the same computational efficiency? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > 1. You mention that "operating under a specific condition on $\lambda$", which is the implication of this restriction? It would be interesting to include it this discussion in the paper, either theoretically or experimentally. **Reply:** **Please refer to the response to the first question for Reviewer gGz9 for a detailed discussion on the strength and necessity of this condition.** > 2. Include an experiment showing before and after the refinement using your error metric. **Reply:** Thank you for your question. Our experimental results clearly demonstrate that the refinement stage significantly enhances the performance of the initial estimator. In the table below, you can see that the estimation error is consistently reduced after applying the refinement across various sample sizes: | Sample Size **n** | 100 | 200 | 300 | 400 | 500 | 600 | 700 | 800 | 900 | 1000 | |-------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------| | **Initialization** | 1.0553 | 0.6733 | 0.4183 | 0.2880 | 0.1981 | 0.1401 | 0.1028 | 0.0848 | 0.0750 | 0.0697 | | **Refinement** | 0.7188 | 0.2635 | 0.1347 | 0.1062 | 0.0943 | 0.0863 | 0.0789 | 0.0751 | 0.0701 | 0.0664 | These results indicate that while the initial (thresholding) algorithm provides a solid baseline, the refinement stage significantly reduces the estimation error. In the final version, we will include additional experiments to compare the performance before and after refinement. > 3. I understood that Diagonal Thresholding (DT) is computationally slower than your proposed algorithm. However, in Figure 2(b), we see that it is as fast as yours. Then, if we include the refinement using the Truncated power method (3.1.2), then we could accomplish the same performance with the same computationally efficiency? **Reply:** Thank you for your question. 
As explained in Section 3.2, both Diagonal Thresholding (DT) and our initialization algorithm incur the same order of computational cost, $O(np + nk^2)$, since they involve computing the entries of the sample covariance matrix and performing an eigenvalue decomposition. Our refinement stage utilizes truncated power iterations, each costing $O(np)$. In our experiments (see Tables 1-3), the number of iterations required for convergence is very small: typically no more than 10 iterations. Consequently, because the initialization stage dominates the overall cost, the additional refinement improves accuracy without compromising computational efficiency. This is further supported by Figure 2(b), where our two-stage method (red line) runs as fast as DT (green line). > 4. Start with an introductory text explaining the single-spiked covariance model. It is missing the basic differences between PCA and sparse PCA. **Reply:** Thank you for your valuable suggestion. In the final version of our manuscript, we plan to revise the introductory text in Section 1 to improve clarity and provide a more comprehensive overview of the single-spiked covariance model. Specifically, we will: - Introduce principal component analysis (PCA) by emphasizing its importance, typical applications, and inherent limitations (e.g., the production of dense principal components). - Present sparse PCA as a remedy to these limitations by highlighting its role in generating more interpretable components. - Provide a detailed explanation of the single-spiked covariance model and add the citation (Johnstone, 2001) when presenting the mathematical formulation. - Include one or two examples of applications or problems where this model proves particularly useful. We believe that these revisions will enhance the clarity and accessibility of the introductory section. >5. Why diagonal thresholding does not fulfill Theorem 3.2. **Reply:** Thank you for your comment.
Under the assumption $\lambda = \Omega(\|v\|_\infty^{-1})$, diagonal thresholding (DT) cannot achieve sample complexity $\Omega(k \log p)$. This is based on the following observations and analysis. The sample complexity of DT is governed by the statistical gap defined in Eq.(5) of our paper. In contrast, the sample complexity of Algorithm 1 is governed by the statistical gap in Eq.(7) of our paper. A larger gap leads to a smaller sample complexity. If the infinity norm of $v$ is of constant order, then our gap depends only linearly on $\min_{j \in S} |v_j|$, improving on the gap of DT order-wise and consequently reducing the sample complexity from $\Omega(k^2 \log p)$ to $\Omega(k \log p)$. However, for DT, the constant order of $\|v\|_\infty$ does not improve its gap, suggesting that its sample complexity cannot be improved. In conclusion, Algorithm 1 provides a way to incorporate the properties of $v$ into the estimation, while DT cannot, according to our analysis. We will add this discussion in the final version. > 6. Other Comments: **Reply:** Thank you; we'll fix the typos.
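For readers unfamiliar with the baseline under discussion, the following is a minimal NumPy sketch of diagonal thresholding in the spirit of Johnstone & Lu (2009): pick the $k$ coordinates with the largest sample variance, then take the leading eigenvector of the sample covariance restricted to that support. The top-$k$ selection here is an illustrative stand-in for a variance threshold, and this is not the exact procedure analyzed in the paper.

```python
import numpy as np

def diagonal_thresholding(X, k):
    """Sparse PCA baseline in the spirit of Johnstone & Lu (2009):
    keep the k coordinates with largest sample variance, then take the
    leading eigenvector of the sample covariance restricted to them.
    Assumes zero-mean rows of X; top-k selection is an illustrative
    stand-in for a variance threshold."""
    n, p = X.shape
    S = X.T @ X / n                        # sample covariance
    support = np.argsort(np.diag(S))[-k:]  # k largest diagonal entries
    sub = S[np.ix_(support, support)]
    _, eigvecs = np.linalg.eigh(sub)       # eigenvalues in ascending order
    v_hat = np.zeros(p)
    v_hat[support] = eigvecs[:, -1]        # leading eigenvector on support
    return v_hat
```

In the single-spiked model $x_i = \sqrt{\lambda}\, g_i v + \xi_i$, the spiked coordinates have population variance $1 + \lambda v_j^2$ versus $1$ elsewhere, which is exactly the diagonal gap this baseline exploits; the rebuttal's point is that this gap, unlike Algorithm 1's, does not improve when $\|v\|_\infty$ is of constant order.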
Summary: This paper presents algorithms for Sparse Principal Component Analysis (sparse PCA) under the single-spiked covariance model. With an assumption on signal strength, the authors introduce a thresholding-based algorithm with better (big-Omega) sample complexity and show that it narrows the gap between existing polynomial-time algorithms and information-theoretic lower bounds. Additionally, they propose a two-stage nonconvex optimization algorithm that refines the estimate using truncated power iteration, achieving minimax-optimal statistical error rates. The authors provide theoretical analysis for the proposed algorithms and present numerical experiments that verify their performance claims in terms of estimation accuracy, sample complexity, and computational cost. ## update after rebuttal I have read the authors' response and would like to thank them for their detailed rebuttal. After reading other reviewers' opinions and the rebuttal, I acknowledge that the proposed method demonstrates a reasonable degree of practical robustness, even though the underlying assumptions may be somewhat idealized. Hence, I have decided to maintain my original score of Weak Accept. Claims And Evidence: The reviewer is not from the sparse PCA community. The reviewer is convinced by the claims in general, but some concerns need to be addressed. Intuitively, the 'single-spiked covariance' model may not be suitable for some cases with multiple principal components, for which the analysis will certainly not work. This seems quite strong when combined with the signal-strength assumption. Could the authors explain and verify that this is a common setting in sparse PCA? Besides, the model assumes i.i.d. Gaussian noise; are there cases where this is not satisfied and the analysis then fails? How does it work in real applications? How robust is the method under different SNR regimes?
Methods And Evaluation Criteria: The synthetic experiments are based on a tailor-designed setting; this is fine since it is hard to find ground truth for sparse PCA. However, it may not be practical since the assumption is quite strong. Also, the reviewer would suggest verifying the proposed method on some real applications that heavily rely on sample complexity, computational cost, and accuracy. Theoretical Claims: Under their assumptions, the results look correct and make sense to the reviewer. Experimental Designs Or Analyses: The synthetic experiments are based on a tailor-designed setting; this is fine since it is hard to find ground truth for sparse PCA. However, it may not be practical since the assumption is quite strong. Also, the reviewer would suggest verifying the proposed method on some real applications that heavily rely on sample complexity, computational cost, and accuracy. Supplementary Material: Yes. Relation To Broader Scientific Literature: The reviewer believes this is an interesting work with significant improvement. However, the authors need to justify how general their algorithm is in more practical scenarios across broader areas. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The truncated power method for sparse eigenvalue problems has already been published. Could the authors clarify their unique contribution in this part? Other Comments Or Suggestions: N/A Questions For Authors: See my previous discussion. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > **1. The strength and necessity of the additional assumption on $\lambda$** **Reply:** The single-spiked covariance model is a well-studied framework in high-dimensional statistics. Nevertheless, there remains a substantial gap between the information-theoretic sample complexity, $\Omega(k \log p)$, and the best known sample complexity of existing polynomial-time algorithms, $\Omega(k^2)$. **To bridge this gap under the planted clique conjecture, it is necessary to introduce additional assumptions**. In fact, prior work based on reductions from the planted clique conjecture provides strong evidence that, without extra conditions, no polynomial-time algorithm can recover the spike with sample complexity $\Omega(k \log p)$ [R1-R5]. Our assumption $\lambda = \Omega(\|v\|_\infty^{-1})$ ensures that the nonzero entries of the spike $v$ decay sufficiently fast. In many theoretical analyses, $\lambda$ is taken to be a constant when deriving both the information-theoretic lower bounds and the sample complexities of various algorithms. Moreover, **in models where the nonzero entries of $v$ follow a power-law decay (a scenario well known in compressed sensing), $\lambda$ naturally remains of constant order**. A similar phenomenon was observed in the sparse phase retrieval problem [R6], where signals with power-law decay could be recovered with optimal sample complexity. We will expand upon this discussion in the revised manuscript. Reference: [R1] Statistical and computational trade-offs in estimation of sparse principal components, Annals of Statistics, 2016. [R2] Do semidefinite relaxations solve sparse PCA up to the information limit, Annals of Statistics, 2015. [R3] Optimal detection of sparse principal components in high dimension, Annals of Statistics, 2013. [R4] Sparse CCA: Adaptive estimation and computational barriers, Annals of Statistics, 2017.
[R5] Reducibility and computational lower bounds for problems with planted sparse structure, COLT, 2018. [R6] Sample-efficient algorithms for recovering structured signals from magnitude-only measurements, IEEE Transactions on Information Theory, 2019. > 2. It assumes i.i.d. Gaussian noise; are there cases where this is not satisfied and your analysis fails? How does it work in real applications? **Reply:** Thank you for your question. The single-spiked covariance model is a well-known model in high-dimensional statistical analysis, and it has mainly been studied from a theoretical perspective. In the single-spiked model, for the sample $x_i = \sqrt{\lambda} g_i v + \xi_i$, the noise $\xi_i$ is generally assumed to follow a standard Gaussian distribution $N(0,I)$, or sometimes a sub-Gaussian distribution with variance proxy $O(1)$ [R7]. Therefore, in our paper, we also assume $\xi_i \sim N(0,I)$, the same as in most previous work. In our theoretical analysis, the concentration inequalities (Lines 607-610 in Lemma A.1, Eq.(19) in Lemma A.6, Eq.(22)(23) in Lemma A.8) hold based on this assumption on $\xi_i$. If we assume $\xi_i$ follows a sub-Gaussian distribution with variance proxy $O(1)$ (a generalization of the Gaussian case), we can still derive similar concentration inequalities to complete our analysis. However, if $\xi_i$ is assumed to follow other kinds of distributions, the same or similar concentration inequalities may not hold, and thus the analysis may fail. We will add this discussion in the revised manuscript. Reference: [R7] Sum-of-squares lower bounds for sparse PCA, Advances in Neural Information Processing Systems, 2015. > 3. What is the robustness of different cases of SNR? **Reply:** **Please refer to the response to the fourth question for Reviewer NHCS for a detailed discussion on the robustness of our method under various SNR regimes.** > 4.
The truncated power method for sparse eigenvalue problems has already been published. Could the authors clarify their unique contribution in this part? **Reply:** While the truncated power method was introduced in [R8] and its convergence was shown under a good initialization, that work does not provide a practical procedure for obtaining such an initialization, nor does it establish the optimal sample complexity. Our unique contribution is the development of a novel thresholding algorithm that produces an initialization satisfying the necessary conditions for the truncated power iterations. This practical initialization enables our two-stage algorithm to achieve the optimal sample complexity of $\Omega(k \log p)$, a significant improvement over the $\Omega(k^2)$ samples typically required by existing polynomial-time methods. In short, our work bridges the gap between theoretical conditions and practical implementation, thereby enhancing both the statistical and computational efficiency of sparse PCA. Reference: [R8] Truncated power method for sparse eigenvalue problems, Journal of Machine Learning Research, 2013.
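For reference, a generic sketch of truncated power iteration in the spirit of [R8]: each step multiplies by the covariance estimate, keeps only the $k$ largest-magnitude entries, and renormalizes. The stopping rule and truncation details here are illustrative and need not match the paper's Algorithm 2; `w0` plays the role of the warm start produced by the initialization stage.

```python
import numpy as np

def truncated_power_iteration(S, k, w0, max_iter=100, tol=1e-8):
    """Generic truncated power iteration for a k-sparse leading
    eigenvector, in the spirit of [R8]. `w0` is the warm start from an
    initialization stage; stopping rule and truncation details are
    illustrative, not the paper's exact Algorithm 2."""
    w = w0 / np.linalg.norm(w0)
    for _ in range(max_iter):
        s = S @ w                          # power step
        s[np.argsort(np.abs(s))[:-k]] = 0  # zero all but the top-k magnitudes
        w_new = s / np.linalg.norm(s)      # project back to the unit sphere
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w
```

Each iteration is dominated by one matrix-vector product; computing $S w$ implicitly as $X^\top (X w)/n$ gives the $O(np)$ per-iteration cost cited in the response above.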
Summary: The paper presents efficient algorithms for solving the sparse Principal Component Analysis (PCA) problem, one of the fundamental problems in machine learning. The proposed algorithm significantly reduces the required sample complexity compared to previous polynomial-time methods. Under typical sparsity conditions, prior polynomial-time algorithms required sample complexity on the order of $O(k^2)$, whereas this paper achieves the near-optimal complexity $O(k \log p)$, aligning closely with theoretical lower bounds. The authors propose a thresholding-based algorithm along with a two-stage nonconvex approach, combining thresholding initialization with truncated power iteration to ensure both theoretical rigor and computational efficiency. Rigorous proofs demonstrate that the proposed methods significantly improve upon existing polynomial-time methods in terms of both runtime and required sample size. Experiments on synthetic data illustrate practical effectiveness, validating the theoretical claims regarding improved scalability and accuracy. ## update after rebuttal Thank you for the authors’ responses. I maintain my rating. Claims And Evidence: The claims presented in this submission are generally supported by the provided evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are suitable and aligned with the problem addressed in the paper. Theoretical Claims: I took a glimpse at the proofs; they seem to be correct. Experimental Designs Or Analyses: The authors utilized synthetic data to examine the trade-offs between solution quality (near-optimality) and computational effort, highlighting the advantages of their proposed algorithms over state-of-the-art methods in achieving a balance between these competing factors. Supplementary Material: I read both the proofs and the additional experiments for Algorithm 2.
Relation To Broader Scientific Literature: The paper presents a significant advancement in the design and analysis of sparse PCA algorithms by reducing sample complexity while maintaining computational efficiency. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1) Handling $\ell_0$-constrained sparse PCA is a challenging task. The paper makes a significant contribution by bridging the gap between the optimal information-theoretic sample complexity of $\Omega(k \log p)$ and the higher complexities, typically $\Omega(k^2)$, required by earlier polynomial-time approaches. The authors rigorously support their claims with novel theoretical results (specifically Theorems 3.2 and 3.3) and numerical experiments that establish robust performance guarantees under reasonable assumptions. 2) The innovative integration of thresholding-based initialization with refined truncated power iteration represents a creative approach. This two-stage framework not only enhances statistical performance but also preserves computational efficiency, a crucial factor in high-dimensional settings. A theoretical analysis of the combined algorithm is provided. Weaknesses: 1) The condition on the signal strength, $\lambda = \Omega(\|v\|_\infty^{-1})$, is crucial for the proposed improvements in sample complexity. Nonetheless, the paper could be enhanced by a more thorough examination of how practical this assumption is. It would be particularly useful to investigate whether this condition is frequently met in real-world scenarios or datasets. Furthermore, conducting empirical studies or discussing how robust the method is when this condition is not fully met could significantly enrich the paper. 2) All experiments for the two proposed algorithms are performed using synthetic data.
Although these synthetic experiments are crucial for validating theoretical guarantees, demonstrating the algorithms’ performance on real-world, high-dimensional datasets would greatly enhance the practical relevance of the research. Other Comments Or Suggestions: Some numerical experimental results for Algorithm 2 should be moved to the main paper. Questions For Authors: 1) In Lines 276-279, the authors wrote, “We present a series of numerical experiments designed to verify the theoretical results and validate the efficiency and effectiveness of our proposed two-stage algorithm.” Was Algorithm 1 used in Section 4? 2) In Algorithm 1, when selecting the top $k$ elements, how do you handle cases where some of the last elements are equal? Does this affect the theoretical analysis? 3) What is the stopping criterion for Algorithm 2 in Tables 1-3 in Appendix B? Additionally, how many samples were used for warm-starting (i.e., running the initialization stage)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > 1. Was Algorithm 1 used in Section 4? **Reply:** Thank you for your comment. In Section 4, all experiments are conducted using Algorithm 2. This is because Algorithm 2 is our full two-stage procedure, which first employs Algorithm 1 for initialization and then refines the estimate via truncated power iteration. Since the refinement stage in Algorithm 2 consistently improves upon the initialization provided by Algorithm 1, we report only the results from Algorithm 2 in the main experimental section. > 2. In Algorithm 1, when selecting the top $k$ elements, how do you handle cases where some of the last elements are equal? Does this affect the theoretical analysis? **Reply:** Thank you for your question. We address the issue in two parts: **1. Potential Ties Do Not Affect Our Theoretical Analysis:** In our theoretical analysis (e.g., see Lemma A.8), we establish that with high probability there is a strict separation between the values corresponding to the indices in the true support and those outside of it. Specifically, the minimum value among the selected entries is shown to be strictly greater than the maximum value among the unselected ones. This strict inequality guarantees correct support recovery and renders any ambiguities in the ordering of lower-ranked elements irrelevant. In other words, even if ties were to occur among the lower-ranked entries, they would not compromise the validity of our theoretical guarantees. **2. The Probability That Any Two Entries Are Exactly Equal Is Zero:** The values used in our selection process are computed from continuous random variables (i.e., the entries in the empirical covariance matrix derived from i.i.d. samples). In any continuous distribution, the probability that any two independently drawn real numbers are exactly equal is zero, so the occurrence of two identical values is of zero probability. 
In summary, our support recovery analysis is robust: it relies on a high-probability strict separation that ensures correct selection, and the continuity of the underlying distributions guarantees that exact ties effectively never occur. We will add this discussion to the revised paper. > 3. What is the stopping criterion for Algorithm 2 in Tables 1-3 in Appendix B? Additionally, how many samples were used for warm-starting (i.e., running the initialization stage)? **Reply:** Thank you for your comment. First, the stopping criterion for Algorithm 2 in Tables 1-3 in Appendix B is that the relative error between $v^{t-1}$ and $v^t$ is less than $10^{-8}$. Second, Algorithm 2 employs the same set of samples for both the initialization and refinement stages. Hence, the samples used in the warm-start (initialization stage) are exactly the same as those used in the entire experiment. For example, in the experiments reported in Table 1 of Appendix B, the total sample size is 2500; therefore, 2500 samples are used to warm-start the algorithm. > 4. Conducting empirical studies or discussing how robust the method is when this condition is not fully met could significantly enrich the paper. **Reply:** Thank you for your comment. To assess the robustness of our method when the ideal condition is not fully met, we conducted experiments with various values of $ \lambda $. 
The results below (with $p = 1000$ and $k = 20$) report the estimation error as a function of the sample size:

| Sample size $n$ | 100 | 200 | 300 | 400 | 500 | 600 | 700 | 800 | 900 | 1000 |
|------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| $\lambda = 0.5$ | 1.4003 | 1.3971 | 1.3973 | 1.3996 | 1.3976 | 1.3903 | 1.3939 | 1.3943 | 1.3927 | 1.3897 |
| $\lambda = 2.5$ | 1.3174 | 1.1650 | 0.7896 | 0.5427 | 0.3980 | 0.2459 | 0.1736 | 0.1535 | 0.1331 | 0.1280 |
| $\lambda = 5$ | 0.7188 | 0.2635 | 0.1347 | 0.1062 | 0.0943 | 0.0863 | 0.0789 | 0.0751 | 0.0701 | 0.0664 |
| $\lambda = 7.5$ | 0.2762 | 0.1246 | 0.0961 | 0.0843 | 0.0749 | 0.0683 | 0.0626 | 0.0596 | 0.0556 | 0.0526 |

These results show that for $\lambda \geq 2.5$ the estimation error decreases steadily with increasing sample size, whereas in the extreme case $\lambda = 0.5$ the error stagnates; overall, the method degrades gracefully and remains robust across a range of $\lambda$ values, even under less than ideal conditions. We will add this discussion to the revised paper.
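For illustration, the refinement stage with the $10^{-8}$ relative-error stopping rule mentioned above can be sketched as a truncated power iteration. This is a toy NumPy sketch on an exactly spiked matrix, not the paper's Algorithm 2; the function name, warm start, and parameter values are all assumed for the demo:

```python
import numpy as np

def truncated_power_iteration(Sigma, v0, k, tol=1e-8, max_iter=1000):
    """One possible refinement loop: multiply by Sigma, keep only the k
    largest-magnitude entries, renormalize; stop once the relative change
    between consecutive iterates drops below tol (the criterion quoted above)."""
    v = v0 / np.linalg.norm(v0)
    for _ in range(max_iter):
        w = Sigma @ v
        keep = np.argsort(np.abs(w))[-k:]
        w_new = np.zeros_like(w)
        w_new[keep] = w[keep]
        w_new /= np.linalg.norm(w_new)
        if np.linalg.norm(w_new - v) / np.linalg.norm(w_new) < tol:
            return w_new
        v = w_new
    return v

# Toy check on an exactly spiked matrix Sigma = lam * v v^T + I.
rng = np.random.default_rng(2)
p, k, lam = 100, 5, 5.0
v_true = np.zeros(p)
v_true[:k] = rng.standard_normal(k)
v_true /= np.linalg.norm(v_true)
Sigma = lam * np.outer(v_true, v_true) + np.eye(p)
v_init = np.zeros(p)
v_init[:k] = 1.0  # warm start supported on the true support
v_hat = truncated_power_iteration(Sigma, v_init, k)
err = min(np.linalg.norm(v_hat - v_true), np.linalg.norm(v_hat + v_true))
print(f"error after refinement: {err:.2e}")
```

On the noiseless spiked matrix the iteration contracts geometrically toward the spike (up to sign), so the stopping rule fires after a handful of iterations.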
Summary: This paper looks at sparse PCA with few samples in the spiked Gaussian model, under the assumption that the largest single coordinate of the spike has pretty high variance. ## update after rebuttal I remain unenthusiastic about this paper, but I don't strongly object to it. The main contribution is the proposed parameterization, in terms of the largest coordinate in the spike; given this formulation, getting an algorithm is fairly straightforward. I do agree that identifying that this is a fairly simple formulation with a potential for significantly improved sample complexity is a real contribution. But I'm not excited by *further* restrictions on the spiked covariance model, which is already toy. Claims And Evidence: The paper shows how to achieve $O(k \log p)$ sample complexity, which is in general optimal, under an assumption that the largest single coordinate has fairly large variance. Methods And Evaluation Criteria: Under the assumption given, the result seems like it should be pretty straightforward to achieve. Why not just take $(\hat{\Sigma} e_{j_0})_{\hat{S}}$ as your estimate $v^0$? For any $j \neq j_0$, $(\hat{\Sigma} e_{j_0})_j$ is going to be distributed as $\frac{1}{m} \sum_i \left( \lambda v_{j_0} v_j g_i^2 + N(0,1) \cdot N(0,1) \right)$. The noise term is independent with constant variance, so it sums to basically $N(0, O(m))$, and will be $O(\sqrt{m \log p})$ for all coordinates; the leading term has $\sum_i g_i^2$ being $m \pm \sqrt{m \log p}$ for all coordinates; and by assumption, $\lambda v_{j_0} > 1$. So we get the value $v^0_j = (\lambda v_{j_0}) v_j \pm v_j \sqrt{\log p / m} \pm \sqrt{\log p / m}$. The scaling $\lambda v_{j_0} > 1$ only helps; ignoring it, we have $v_j < 1$, so the error is $|v^0_j - v_j| \lesssim \sqrt{\log p / m}$. For the given $m = (1/\gamma^2) k \log p$, this is coordinatewise error at most $\gamma / \sqrt{k}$. That $L_\infty$ error of a sparse vector, after restricting to the largest $k$ entries, gives $\gamma$ $L_2$ error. Am I missing something?
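For concreteness, the one-column estimator sketched above can be simulated under the spiked model. The NumPy sketch below uses illustrative parameter values (not taken from the paper), and subtracts the $+1$ diagonal bias at $j_0$, since $E[\hat{\Sigma} e_{j_0}] = \lambda v_{j_0} v + e_{j_0}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Spiked model: Sigma = lam * v v^T + I with a k-sparse unit spike v.
p, k, lam, m = 400, 10, 8.0, 600
v = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
v[support] = rng.standard_normal(k)
v /= np.linalg.norm(v)

# Sample X_i = sqrt(lam) * g_i * v + z_i, whose covariance is lam * v v^T + I.
X = np.sqrt(lam) * rng.standard_normal((m, 1)) * v + rng.standard_normal((m, p))
Sigma_hat = X.T @ X / m

# One-column estimator: column j0 of Sigma_hat, bias-corrected at j0,
# restricted to its top-k absolute entries, then normalized.
j0 = int(np.argmax(np.diag(Sigma_hat)))
col = Sigma_hat[:, j0].copy()
col[j0] -= 1.0  # E[Sigma_hat e_j0] = lam * v_j0 * v + e_j0: remove the e_j0 term
S_sel = np.argsort(np.abs(col))[-k:]
v0 = np.zeros(p)
v0[S_sel] = col[S_sel]
v0 /= np.linalg.norm(v0)

err = min(np.linalg.norm(v0 - v), np.linalg.norm(v0 + v))  # error up to sign
print(f"estimation error: {err:.3f}")
```

With these settings the recovered $v^0$ typically lands within a small $L_2$ distance of $\pm v$, consistent with the $\sqrt{\log p / m}$-per-coordinate heuristic above.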
Theoretical Claims: I didn't check the proofs carefully, but the result seems pretty straightforward so I believe them. Experimental Designs Or Analyses: The experiments are just synthetic, presumably because the spiked covariance model is a bit of a toy. Supplementary Material: No Relation To Broader Scientific Literature: There's an annoying gap between what we can achieve computationally and information theoretically for sparse PCA; this paper looks at how to work around that gap by parameterizing in terms of a different value. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: The single-spiked covariance model is already a toy model, so adding another assumption beyond the basics isn't that exciting. Other Comments Or Suggestions: see above Questions For Authors: Please address my question about a simpler algorithm above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: > 1. Under the assumption given, the result seems like it should be pretty straightforward to achieve. Why not just take $(\hat{\Sigma} e _{j _0}) _{\hat{S}}$ as your estimate $v^0$? The scaling $(\lambda v_j) > 1$ only helps; ignoring it, we have $v_j < 1$ so the error is $|v^0 _j - v_j| \leq c \sqrt{\log p / m}$. **Reply:** We thank the reviewer for the interesting suggestion. Our response is organized in three parts. **1. Slight Modification Required for the Reviewer’s Estimator** The reviewer suggested using $ (\hat{\Sigma} e _{j _0}) _{\hat{S}} $ as an estimator for $v$, where $e _{j _0}$ is the canonical basis vector. However, under the spiked covariance model, we have $E[\hat{\Sigma}] = \lambda v v^T + I$, so that $E[\hat{\Sigma}] e _{j _0} = \lambda v _{j _0} v + e _{j _0}$. Thus, while for $j \neq j_0$ the entry is $\lambda v _{j_0} v_j$, the $j_0$-th entry becomes $\lambda v _{j_0}^2 + 1$, introducing a bias of $1$. To remove this bias, we subtract $e _{j_0}$ and define the modified estimator $$ \hat{v} _{\text{new}} = \frac{(\hat{\Sigma} e _{j _0} - e _{j _0}) _{\hat{S}}}{ ||(\hat{\Sigma} e _{j _0} - e _{j _0}) _{\hat{S}} ||_2}, $$ thereby properly centering the estimator. **2. Technical Challenges** Both the estimator $\hat{v} _{\text{new}}$ and our proposed estimator rely on the random index $j_0$ and the estimated support $\hat{S}$. Because of this, establishing the sample complexity of $\hat{v} _{\text{new}}$ is nontrivial and requires extra technical development. First, since $\hat{v} _{\text{new}}$ involves $j_0$ and $\hat{S}$, its analysis requires key results (e.g. Lemmas A.6 and A.8) to control their randomness. In particular, Lemma A.8 (see Lines 880--888) directly leads to the assumption $\lambda = \Omega ( ||v|| _{\infty}^{-1} )$, which ensures the necessary probability bounds in concentration inequalities (as in Eq. (23)). 
Second, since $j_0$ is random, one cannot treat $\hat{\Sigma} _{j,j_0}$ as if $j_0$ were fixed; techniques similar to those in Lemma A.8 are needed to handle its variability. Third, since the true vector $v$ is unknown, $\lambda v _{j _0}$ cannot directly serve as the scaling between $(\hat{\Sigma} e _{j _0} - e _{j _0}) _{\hat{S}}$ and $v$. Instead, one must first accurately estimate the $L_2$ norm $|| (\hat{\Sigma} e _{j _0} - e _{j _0}) _{\hat{S}} ||_2$ to properly normalize the estimator. **3. Comparison: Computational Load and Empirical Performance** The two estimators differ mainly in their final computational steps. Our proposed estimator (via Algorithm 1) computes the leading eigenvector of the $k \times k$ submatrix $\hat{\Sigma} _{\hat{S}}$, which costs $O(nk^2)$, while the modified estimator $\hat{v} _{\text{new}}$ only normalizes $(\hat{\Sigma} e _{j_0} - e _{j_0}) _{\hat{S}}$, costing $O(p)$. Although $\hat{v} _{\text{new}}$ is more efficient, our experiments show that the estimator via Algorithm 1 achieves lower estimation error. For example, Table 1 displays estimation error versus sample size (with dimension $p = 4000$, sparsity $k = 100$ and signal strength $\lambda = 10$), and Table 2 reports the corresponding computational times.

**Table 1. Estimation error versus sample size for two estimators, with $p = 4000$, $k = 100$, $\lambda = 10$.**

| Sample size | 500 | 1000 | 1500 | 2000 | 2500 | 3000 | 3500 | 4000 |
|-------------------------------|--------|--------|--------|--------|--------|--------|--------|--------|
| Estimator via Algorithm 1 | 0.9593 | 0.6111 | 0.4309 | 0.3104 | 0.2274 | 0.1652 | 0.1272 | 0.0895 |
| Estimator $\hat{v}_{\text{new}}$ | 1.0920 | 0.7596 | 0.5534 | 0.4136 | 0.3215 | 0.2578 | 0.2215 | 0.1867 |

**Table 2.
Computational time(s) versus sample size for two estimators, with $p = 4000$, $k = 100$, $\lambda = 10$.** | Sample size | 500 | 1000 | 1500 | 2000 | 2500 | 3000 | 3500 | 4000 | |-------------------------------|--------|--------|--------|--------|--------|--------|--------|--------| | Estimator via Algorithm 1 | 0.0887 | 0.1192 | 0.1479 | 0.1737 | 0.1907 | 0.2145 | 0.2416 | 0.2648 | | Estimator $\hat{v}_{\text{new}}$ | 0.0854 | 0.1168 | 0.1454 | 0.1707 | 0.1875 | 0.2113 | 0.2383 | 0.2622 | We will add further discussion and experimental evaluations of this estimate in the revised manuscript. > 2. The single-spiked covariance model is already a toy model, so adding another assumption beyond the basics isn't that exciting. **Reply:** Although the single-spiked covariance model is a toy model, a significant gap still exists between the information-theoretic sample complexity and what polynomial-time algorithms achieve. Under the planted clique conjecture, it is known that no polynomial-time algorithm can recover the spike at the optimal sample complexity without additional assumptions. **Please refer to the response to the first question for Reviewer gGz9 for a detailed discussion on the necessity of this assumption.** --- Rebuttal Comment 1.1: Comment: Right, you do need to subtract e_{j0}, thanks for the reminder. But other than that I don't think these challenges are significant: - j0 being not fixed doesn't really matter -- the concentration is so good, you could just union bound over possible j0. (So union bound over p^2 not p things -- nbd.) - you don't need to accurately estimate the L2 norm, since you're estimating a unit vector; you'll normalize it anyway. I'll raise my score a bit, but I'm not convinced -- and sure the empirical performance is slightly better, but probably a more thorough local search would further improve it. Empirical performance isn't really the main selling point here. 
--- Reply to Comment 1.1.1: Comment: Thank you for your insightful feedback. Below, we try our best to address your concerns and strengthen our contribution by providing additional clarification of our method. Our proposed thresholding algorithm comprises three steps: 1. Estimate the index corresponding to the largest absolute entry of the true spike by $j_0 = \arg \max_j \hat{\Sigma}_{j,j}$. 2. Recover the full support of the true spike by selecting the indices of the top $k$ absolute entries from the $j_0$-th column of $\hat{\Sigma}$. 3. Estimate the spike's values within the estimated support via eigendecomposition. **The first two steps are critical for achieving the sample complexity of $\Omega(k\log p)$**, as demonstrated in Lemmas A.6 and A.8. In particular, by amplifying the separation between the in-support and out-of-support entries compared with typical diagonal thresholding (see Equations (5) and (7)), these steps make it easier to distinguish the two sets and require fewer samples to recover the support. This enhanced separation is the innovation of our method. Notably, **alternative procedures for the third step—such as the variant you suggested—can be employed without affecting the overall sample complexity**. By contrast, diagonal thresholding directly selects the top $k$ diagonal entries of $\hat{\Sigma}$ for support recovery. Compared with our approach, this method produces a less pronounced separation between the in-support and out-of-support entries, which in turn results in a higher sample complexity of $\Omega(k^2\log p)$, even though the final step for estimating the spike's values within the support is identical. Finally, the necessity of introducing an additional assumption is underscored by reductions from the planted clique conjecture, which strongly suggest that **without extra conditions, no polynomial-time algorithm can achieve $\Omega(k\log p)$ sample complexity**.
We also thank the reviewer for suggesting a variant estimation procedure that offers improved computational efficiency at the expense of some estimation performance. A discussion of this alternative will be included in the final version of the paper.
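The three-step procedure described in the reply above can be sketched in a few lines of NumPy; the synthetic setup and parameter values below are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic spiked data: X_i = sqrt(lam) * g_i * v + z_i, so Cov(X) = lam * v v^T + I.
p, k, lam, n = 500, 10, 10.0, 800
v = np.zeros(p)
S_true = rng.choice(p, size=k, replace=False)
v[S_true] = rng.standard_normal(k)
v /= np.linalg.norm(v)
X = np.sqrt(lam) * rng.standard_normal((n, 1)) * v + rng.standard_normal((n, p))
Sigma_hat = X.T @ X / n

# Step 1: index of the largest diagonal entry.
j0 = int(np.argmax(np.diag(Sigma_hat)))
# Step 2: support = indices of the top-k absolute entries of column j0.
S_hat = np.sort(np.argsort(np.abs(Sigma_hat[:, j0]))[-k:])
# Step 3: leading eigenvector of the k x k submatrix restricted to S_hat.
eigvals, eigvecs = np.linalg.eigh(Sigma_hat[np.ix_(S_hat, S_hat)])
v_hat = np.zeros(p)
v_hat[S_hat] = eigvecs[:, -1]  # eigh returns eigenvalues in ascending order

overlap = len(set(S_hat) & set(S_true))
err = min(np.linalg.norm(v_hat - v), np.linalg.norm(v_hat + v))
print(f"support overlap: {overlap}/{k}, estimation error: {err:.3f}")
```

In this regime the column-based screening recovers (nearly) all of the true support, after which the small eigendecomposition gives an accurate estimate of the spike up to sign.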
Reflection-Window Decoding: Text Generation with Selective Refinement
Accept (poster)
Summary: This paper introduces a built-in mechanism for refinement and correction during LLM generation. The authors provide theoretical analysis characterizing the sub-optimality of purely autoregressive decoding and propose a "reflection-window" decoding approach that allows for selective refinement during generation. The method shows improved empirical results on multiple benchmarks. Claims And Evidence: The potential deviation of auto-regressive generation is supported by theory and demonstrative examples. Methods And Evaluation Criteria: The proposed methods exhibit some deviations from the dependence structure illustrated in Figure 1(a). Theoretical Claims: The proofs are sound. Experimental Designs Or Analyses: Some experimental designs are not convincing: 1. In Figure 5, the authors compare the win rate between Beam Search and Reflection-Window (Greedy). However, since Reflection-Window (Greedy) primarily relies on greedy decoding with occasional (3.5%–5.5%) use of Beam Search, it is essential to include pure greedy decoding as a baseline for comparison. I strongly encourage the authors to add this comparison. 2. In Table 2, the authors compare the performance between Top-K/Top-P and Reflection-Window. The improved performance of Reflection-Window primarily highlights the effectiveness of Beam Search over Top-K/Top-P in this task. I strongly encourage the authors to include a pure Beam Search baseline for comparison. Supplementary Material: Yes. Theory part, ablation study and demonstrative examples. Relation To Broader Scientific Literature: None Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths** Clear motivations supported by theoretical analysis and demonstrative examples. **Weaknesses** I am mostly concerned with the experimental parts. Please refer to Experimental Designs Or Analyses. Besides, in Table 1, Reflection-Window (Greedy) shows no improvement over purely greedy decoding.
Other Comments Or Suggestions: None Questions For Authors: A more rigorous experimental evaluation and positive results would change my score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response to Reviewer ZxMG We thank the reviewer for the comments and questions, as well as the time devoted! Please kindly notice that there might be some **potential misunderstandings**. Below please see our point-by-point responses: --- ### **C1:** "In Figure 5, the authors compare the win rate between Beam Search and Reflection-Window (Greedy). However, since Reflection-Window (Greedy) primarily relies on greedy decoding with occasional (3.5%–5.5%) use of Beam Search, it is essential to include pure greedy decoding as a baseline for comparison. [The reviewer] strongly encourage the authors to add this comparison." **A1:** Thanks for considering our experimental results. There might be some **potential misinterpretation**. In Figure 5, we present the win rate of beam search _against_ greedy decoding, and that of our approach _against_ greedy decoding. In other words, the win/lose is not calculated between beam search and our approach, but compared against greedy decoding, respectively. We think the brief caption (due to space limit) might be the reason behind this misinterpretation, and we have modified it into "Figure 5: Comparison of win rates of beam search and our reflection-window decoding (both against greedy decoding) on MT-Bench across categories." --- ### **C2:** "In Table 2, the authors compare the performance between Top-K/Top-P and Reflection-Window. The improved performance of Reflection-Window primarily highlights the effectiveness of Beam Search over Top-K/Top-P in this task. [The reviewer] strongly encourage the authors to include a pure Beam Search baseline for comparison." **A2:** Thanks for the comment. There might be some **potential overlook** in our specified setting. In Table 2, the "regular decoding" in our approach is Top-$k$/Top-$p$, and the difference between our approach and the vanilla Top-$k$/Top-$p$ baseline includes both the **sliding-window reflection** and the **selective refinement** mechanisms.
Describing Table 2 as primarily highlighting the effectiveness of beam search might oversimplify the results, potentially overlooking the substantial improvements in computational efficiency and performance introduced by these additional mechanisms. Regarding the pure beam search baseline, we included the evaluation results in Table 11, and the pointer was provided on line 344 (Right).
Summary: This paper proposes Reflection-Window Decoding, an attempt at addressing the limitations of autoregressive text generation in large language models (LLMs), which lack built-in mechanisms for refining or correcting generated content. The authors analyze how sequential token-by-token optimization can deviate from a globally optimal response and propose an alternative approach called Reflection-Window Decoding. This method introduces a sliding reflection window and a pausing criterion, enabling selective refinement of text as it is generated. By balancing efficiency and optimality, the proposed framework improves text generation quality, outperforming traditional autoregressive decoding while maintaining efficiency comparable to beam search. Extensive empirical evaluations validate the effectiveness of this approach in mitigating suboptimalities inherent in existing decoding strategies. Claims And Evidence:
- Decoding towards step-wise MAP of the autoregressive model is suboptimal: non-trivially correct, but this is well known in the community; people still follow the current practices just because they are a good-enough approximation.
- Saliency-based reflection: this is a rather interesting point, but I don't think its deeper insight is sufficiently discussed and/or proven.
- Compatibility and versatility of the proposed method: this is mostly right to me.
- Improved efficiency with less sacrifice of quality: true, but the empirical results are kinda weak and the significance is also less consistent given different base models.

Methods And Evaluation Criteria: The evaluation is standard and correctly conducted. Theoretical Claims: There is but one strictly verified theorem, A.1, that is non-trivially correct yet kinda well-known and thus not surprising to the community. The other major result, Theorem 3.6, as admitted by the authors, provides a bound that is too weak to be practically useful.
While the general framework of the proposed method is of great potential and very interesting, the current progress of the algorithm is kind of disappointing and less theoretically supported than it might seem. Experimental Designs Or Analyses: The experiment is well formulated and correctly conducted. Supplementary Material: Yes, the supplementary material includes the necessary proofs and empirical details to show the soundness of the proposed method. Relation To Broader Scientific Literature: There are a few more recent works/commercial practices (such as OpenAI O1/3 and DeepSeek R1) that use a combination of reflection-aware chain-of-thought SFT and RL to achieve a computational goal similar to that of the proposed method. While I agree with the authors that this proposed method has the merit of minimising the additional overhead in achieving better results, unfortunately (yes, yet another bitter lesson), it is less extensible and scalable than those rather direct methods that introduce the reflection mechanism through non-architectural/algorithmic means. Essential References Not Discussed: I am not an expert in this domain, but to the best of my knowledge, apart from the aforementioned direct approaches, there are no essential references omitted by the authors. Other Strengths And Weaknesses: This paper, while not very empirically strong, provides an interesting point of view for the community to rethink the choice between "computation through tokens" and "computation through logits". In this particular case, it is "reflection as tokens" and "reflection as logits" (change in values). This might reveal a deeper, unified story about how reflection in LLMs works that could facilitate future research. Other Comments Or Suggestions: Using the fluctuation of likelihood to determine potential reflection is an interesting point. I wonder if the authors could combine this with contrastive methods (e.g.
before computation of saliency first baselining the likelihood by a smaller proxy model) to achieve more compelling results. Questions For Authors: What's your understanding about the scalability of your proposed method, i.e. to what extent, it could enable the models to do complex tasks that it couldn't without the approach? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: # Response to Reviewer Fghj We are very grateful for the thoughtful comments, as well as the time and effort devoted! Below please see our point-by-point responses: --- ### **C1:** "There are a few more recent works/commercial practices (such as OpenAI O1/3 and DeepSeek R1) that uses a combination of reflection-aware chain-of-thought SFT and RL to achieve a similar computational goal of the proposed method." **A1:** Thanks for the comment. We completely agree that recent works/commercial practices have explored various methods to enhance generated content (Sections 1 and 2). However, the fundamental limitation of autoregressive decoding itself remains under-explored. This gap represents a distinct perspective, different from high-level model behaviors or inference efficiency. Furthermore, as you kindly pointed out, our method is versatile and provides practitioners additional flexibilities to incorporate different strategies (without the need to retrain or finetune). --- ### **C2:** "[The paper] provides an interesting point of view for the community to rethink between the choice of 'computation through tokens' and 'computation through logits.' In this particular case, it is 'reflection as tokens' and 'reflection as logits' (change in values). This might reveal a deeper, unified story about how reflection in LLM works that could facilitate future research." **A2:** Thanks for sharing the insight and the perspective! In light of your comment, we would very much like to include such discussion in the paper, and we sincerely hope that our work can facilitate future research. It would be greatly appreciated if you can kindly share pointers to where these terms were discussed. --- ### **C3:** "[The reviewer] wonder if the authors could combine this with contrastive methods (e.g. before computation of saliency first baselining the likelihood by a smaller proxy model) to achieve more compelling results." **A3:** Thanks for the thoughtful question. 
If we understood the comment correctly, by "contrastive methods" we are discussing the compatibility of our framework with contrastive decoding (CD), e.g., Li et al. (2023), O'Brien and Lewis (2023). CD utilizes a search-based decoding approach that contrasts LMs of different scales, for instance, an expert (larger LM) and an amateur (smaller LM). In principle, our approach is versatile and compatible with contrastive methods, and can be applied in various ways. For instance, one can apply our reflection-window decoding on expert and amateur LLMs in parallel, and then continue with CD's method of factoring out undesired behaviors of smaller LMs while retaining good behaviors of larger ones. Alternatively, one can also apply CD first, and then design appropriate pausing criterion to incorporate our selective refinement framework. We leave these as interesting directions for future works. Please kindly let us know if we accidentally misunderstood your comment. --- ### **Q4:** "What's your understanding about the scalability of your proposed method, i.e. to what extent, it could enable the models to do complex tasks that it couldn't without the approach?" **A4:** Thanks for the thoughtful question and for trying to go further. As the primary goal of this paper is to address the pitfall of the purely autoregressive way of decoding, we do not claim that our approach can enable high-level behaviors (e.g., complex new tasks) that are otherwise unattainable. However, as you kindly pointed out in **C2**, we sincerely hope our perspective can provide a distinct point of view for community to rethink about related issues, and facilitate future research. --- ### References Li, Xiang Lisa, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, and Mike Lewis. "Contrastive Decoding: Open-ended Text Generation as Optimization." _The 61st Annual Meeting Of The Association For Computational Linguistics_. 2023. O'Brien, Sean, and Mike Lewis. 
"Contrastive decoding improves reasoning in large language models." arXiv preprint arXiv:2309.09117 (2023).
Summary: The paper makes the observation that, given an autoregressive language model $p_{LM}$, the token sequences $\mathbf{\hat{x}}$ generated via greedy decoding does not always correspond to the MAP state $\mathbf{x}^* = argmax_{\mathbf{x}} p\_{LM}(\mathbf{x})$. Theoretical analysis shows that under mild assumptions, at step $L < T$ of the autoregressive generation, if $p\_{LM}(\mathbf{\hat{x}}\_{\leq L})$ falls below $p\_{LM} ({\mathbf{x}^*}\_{\leq L})$, then (1) the next-token probability $p\_{LM}(\mathbf{\hat{x}}\_L | \mathbf{\hat{x}}\_{< L})$ falls under some threshold and (2) there exists some $K < L$ s.t. $\mathbf{\hat{x}}\_{\leq K}$ differ from $\mathbf{x}^*\_{\leq K}$ (if I interpret Theorem 3.6 correctly). To overcome this sub-optimality of greedy decoding, as an alternative to beam search, the authors propose an approach where: (1) we can pause autoregressive generation depending on certain criteria (e.g. the entropy of the next-token distribution as suggested by Theorem 3.6) and (2) then regenerate the last $d$ tokens via, e.g. beam search, to maximize the probability. Empirical evaluations on benchmarks such as MMLU and MT-bench show that the proposed approach consistently outperforms greedy decoding and beam search, as well as top-p and top-k sampling. Claims And Evidence: Theorem 3.6 needs further clarification. In particular, the probabilistic semantics of the threshold $\epsilon\_{L}$ is unclear, i.e., what does the ratio mean or how does it correlate with the discrepancy between the sequence obtained via greedy decoding and the MAP state? Without such explanation, it's kind of hard to interpret why we want to regenerate the last $d$ tokens when $\epsilon\_{L}$ is "small". Further, is it possible to analyze when the proposed approach is guaranteed to generate sequences with probability higher than beam search? 
Methods And Evaluation Criteria: Though I agree that the ultimate goal of new decoding algorithms is to improve model performance on downstream applications, to support the argument made by this paper, it would be more helpful if the authors could also directly compare the probability of the sequences generated by beam search, greedy decoding and the proposed approach. For example, Table 1 suggests that beam search performs worse than greedy decoding, which is kind of counter-intuitive as the sequences generated from beam search should always have higher probability than those from greedy decoding (correct me if I'm wrong). This is probably suggesting that higher probability does not always imply better accuracy/fluency etc. However, this is not a huge problem: as long as the proposed approach can effectively boost the probability of the generations, the main argument made by this paper should already be well-supported. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: The contributions of this paper are very relevant to language generation in general; the proposed approach can be viewed as a generalization of beam search, which has been commonly used for LLM decoding. Essential References Not Discussed: Perhaps discuss the relationship between this work and *Shih, Andy, Dorsa Sadigh, and Stefano Ermon. "Long horizon temperature scaling." International Conference on Machine Learning. PMLR, 2023.* Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: The statement of Theorem 3.6, as well as the whole Sec. 3, should be improved. The notations used are overly complicated. For example, I don't see why $\mathbf{v}$ and $T$ need to be carried everywhere throughout the section: $T$ can be assumed to be some constant and omitted, and $\mathbf{v}$ seems to serve the same functionality as $w$ and $X_t$.
More specifically, Defn 3.1 is really just defining the sequence obtained from greedy decoding and Defn 3.2 is just defining the sequence that maximizes the joint probability. I don't see why something like $\mathbf{\hat{x}}$ and $\mathbf{x}^*$ are not sufficient. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response to Reviewer Lq2Q Thanks for the thoughtful and detailed comments, as well as the time and effort devoted! Below please see our responses to specific comments and questions: --- ### **Q1:** "[Theorem 3.6] what does the ratio $\epsilon_L$ mean or how does it correlate with the discrepancy between the sequence obtained via greedy decoding and the MAP state? why we want to regenerate the last $d$ tokens when $\epsilon_L$ is 'small'" **A1:** Thanks for asking about the $\epsilon_L$ term in our theoretical result. The denominator is the ground-truth joint probability of the length-$(L-1)$ stepwise-optimal response (obtained by greedy decoding), and the numerator is that of the length-$L$ globally-optimal response (obtained by MAP). Theorem 3.6 roughly states that if the model is very uncertain when trying to generate the $L$-th token, then there is an error in the generation history at the $K$-th token, and that $K < L$. Therefore, we need to go back to token $K$ to start the revision. In practice, we look back $d$ tokens and regenerate them. We also provide additional discussions on the choice of $d$ in Section 5.4 and Appendix B.1. --- ### **Q2:** "Is it possible to analyze when the proposed approach is guaranteed to generate sequences with probability higher than beam search?" **A2:** Thanks for the thoughtful question and for trying to go further. If there is no limit on computation and storage, unconstrained-beam-width beam search could yield the globally optimal output (through actual MAP), i.e., guaranteed to be of the highest probability and to outperform (or at least match) any other approach (including the proposed approach). With a fixed-beam-width beam search, the theoretical characterization of the generated length-$L$ sequence is highly nontrivial, since the frontier depends on the pruning at all previous steps. Please feel free to let us know if you would like to suggest a way to perform such theoretical analysis.
--- ### **C3:** "Table 1 [...] the sequences generated from beam search should always have higher probability than that from greedy decoding (correct [the reviewer] if [they were] wrong)" **A3:** Thanks for carefully considering our results. The metric in Table 1 is accuracy instead of joint probability. In benchmark evaluations, it is difficult to precisely control the output length of different decoding methods. Directly setting a hard cutoff on the number of tokens may yield incomplete or nonsensical responses. Therefore, in Table 1 we present accuracies on MMLU (consistent with previous works). --- ### **C4:** "Discuss the relation with Shih et al. (2023)" **A4:** Thanks for providing the pointer to a related work! Shih et al. (2023) proposed _Long Horizon Temperature Scaling_ (LHTS), which samples from temperature-scaled joint distributions, to address the myopic temperature scaling in autoregressive models. LHTS optimizes for the long-horizon likelihood of samples, and can enable a model to generate with a controllable long-horizon temperature parameter through finetuning. In comparison, our work aims to address the pitfall of the purely autoregressive generation itself, and our approach is versatile and compatible with LHTS (Shih et al., 2023). In light of your comment, we will incorporate the above discussion in the revised paper. Thanks again for providing the pointer. --- ### **C5:** "Sec. 3 the notations used are overly complicated. For example, [the reviewer] don't see why $\mathbf{v}$ and $T$ need to be carried everywhere [..., the reviewer] don't see why something like $\widehat{\mathbf{x}}$ and $\mathbf{x}^*$ are not sufficient" **A5:** Thanks for the comment on notation. When presenting the theoretical result, we explicitly keep length indices since lengths play an important role when evaluating the joint probability of a response.
The longer the length, the lower the probability tends to be, and this is the case for both $\widehat{\mathbf{x}}$'s and $\mathbf{x}^*$'s. For instance (we provided this example in lines 165--177), if we were to use 10 words to distinguish between joint and conditional densities, one might say "joint density combines all variables; conditional adjusts for known variables." However, if we can use 15 words, one might say "joint density reflects combined probabilities of all variables; conditional density adjusts probabilities given known variables." A fair comparison between $\widehat{\mathbf{x}}$ and $\mathbf{x}^*$ should be length-specific. Therefore, we think $\widehat{\mathbf{x}}$ and $\mathbf{x}^*$ together with length indices help make this subtlety more transparent. --- ### Reference Shih, Andy, Dorsa Sadigh, and Stefano Ermon. "Long horizon temperature scaling." _International Conference on Machine Learning_. PMLR, 2023.
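As a numerical illustration of the length effect discussed in A5 (the per-token probabilities below are hypothetical, purely for exposition): a longer response can have a higher average per-token probability and still a lower joint probability, so comparisons between responses should be made at matched lengths.

```python
import math

def joint_logprob(token_logprobs):
    # log of the joint probability = sum of per-token log-probabilities
    return sum(token_logprobs)

# hypothetical per-token probabilities for a 10-word and a 15-word answer
short_answer = [math.log(0.80)] * 10
long_answer = [math.log(0.85)] * 15

avg_short = joint_logprob(short_answer) / len(short_answer)
avg_long = joint_logprob(long_answer) / len(long_answer)
```

The 15-word answer is better token-by-token, yet its joint probability is lower simply because it is longer.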
Summary: The authors describe, theoretically and empirically, how greedy sampling is suboptimal for generating the sentence with maximum likelihood. They then propose an alternative algorithm, which pauses the generation when a specific criterion is triggered, and regenerates a small portion of the text. The proposed method is evaluated on several models and datasets, which shows its superiority over standard decoding strategies. Claims And Evidence: The main claims of the paper are regarding the suboptimality of greedy decoding. The authors do a good job of convincing the reader that this is indeed the case. The necessity to change the generation algorithm in order to generate a globally optimal sentence seems natural. Methods And Evaluation Criteria: The main algorithm is interesting and flexible. The choice of entropy as a pausing criterion is well motivated. However, the criterion requires the entropy of all past $d$ tokens to be above the threshold, which seems less natural to me. The criterion would not trigger if the LLM is very uncertain about only one token. Is that a desirable behavior? It may be interesting to evaluate and compare different choices for this pausing criterion. How come the proposed algorithm performs better than beam search, when it is supposed to be a cheaper approximation? It seems there are some things going on that are not aligned with the theoretical explanation and motivations. Theoretical Claims: The paper starts with a theoretical analysis to characterize the behavior of greedy decoding compared to globally optimal decoding. The claims seem mathematically founded and are quite intuitive. However, Assumption 3.3 about the LLM being an oracle (i.e. computes the exact conditional probabilities) seems farfetched. The paper could benefit from more discussion about this, for instance by comparing results for different sizes of models (smaller models are even less likely to be oracles).
An indicator that this assumption may be wrong is when we notice that beam search seems to perform noticeably worse than greedy decoding. Experimental Designs Or Analyses: Overall the authors evaluate their method on a few different models and datasets. It would still be more convincing to see more diverse evaluation, especially when the results seem to be noisy, with sometimes marginal gains. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper addresses the important problem of the suboptimality of greedy decoding. It describes well the context, the motivation, and the related literature. Essential References Not Discussed: None to the best of my knowledge. Other Strengths And Weaknesses: The writing is not always very clear. Other Comments Or Suggestions: Since it seems the end algorithm is a faster approximation of beam search, it would be great to measure the actual speedup. Questions For Authors: What model is used in the synthetic setting? The setup for these experiments needs to be more detailed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response to Reviewer KVvm We are very grateful for the insightful questions and constructive comments! Below please see our point-by-point response: --- ### **Q1:** "the criterion requires the entropy of all past $d$ tokens to be above the threshold [..., but] would not trigger if the LLM is very uncertain about only one token. Is that a desirable behavior?" **A1:** Thanks for carefully considering the pausing criterion. Yes, this is indeed a desirable behavior. Uncertainty can occur when the model does not know how to proceed (due to a previous error) or when there are multiple valid ways to proceed. Therefore, we aim to capture _the trend of_ increasing uncertainty, reducing false-positive triggers while maintaining a low computational overhead. --- ### **C2:** "It may be interesting to evaluate and compare different choices for this pausing criterion." **A2:** We totally agree, and this is exactly why we present Table 4 (varying entropy threshold $\sigma$) and Table 5 (varying window size $d$) in Section 5.4, and provide additional discussions and analyses (due to space limit) in Appendix B.1 - B.3, on window size, entropy threshold, and modification rate, respectively. --- ### **Q3:** "How come the proposed algorithm performs better than beam search, when it is supposed to be a cheaper approximation?" **A3:** Thanks for the thoughtful question. If there is no limit on computation and storage, unconstrained-beam-width beam search will yield the globally optimal output through brute force. In practice, however, maintaining a full frontier quickly becomes intractable, so a fixed beam width is often introduced as a hyperparameter. Our algorithm can perform better since it tackles errors as the generation goes on, while vanilla beam search needs a larger beam width to be able to enclose all possible sequences that can be generated by our approach. --- ### **C4:** "Assumption 3.3 about the LLM being an oracle (i.e.
computes the exact conditional probabilities) seems farfetched" **A4:** Thanks for carefully thinking about our theoretical results. There might be potential misunderstandings; please allow us to clarify two points: (1) Together with Assumption 3.3, our theoretical results indicate that, **even** with an oracle LLM, there is still suboptimality in the purely autoregressive way of decoding. In other words, even if the LLM itself perfectly decomposes the (conditional) probabilities (which, as you pointed out, is a farfetched benefit to assume in practice), there is still no guarantee of obtaining the globally optimal sequence with purely autoregressive decoding. (2) The purpose of Assumption 3.3 is to facilitate clear theoretical results, and our empirical evaluation does not rely on or employ this assumption. In light of your comment, we have included the above clarifications in our revised draft. --- ### **C5:** "Overall the authors evaluate their method on a few different models and datasets. It would still be more convincing to see more diverse evaluation." **A5:** Thanks for the comment. In our empirical evaluations: - for **models**, we utilize models from different families, including Llama-3.1-8B-Instruct, Phi-3-Medium-128K-Instruct, Qwen2.5-14B-Instruct, Qwen2.5-7B-Instruct, Mistral-Nemo-Instruct-2407; - for **benchmarks**, we consider MMLU (which includes 57 diverse subjects, e.g., humanities, STEM, and social sciences, at varying difficulty levels) for evaluating reasoning performance and factual knowledge, and also MT-Bench for a fine-grained evaluation through multi-turn conversational tasks, including correctness, coherence, and fluency. We provide pointers to our empirical results in the List of Tables on Page 11. Please kindly let us know if you have a specific evaluation task in mind. --- ### **C6:** "Since it seems the end algorithm is a faster approximation of beam search, it would be great to measure the actual speedup."
**A6:** We totally agree, and that's why in Section 5.4, paragraph "Efficiency of Reflection-Window Decoding", we provide regeneration metrics by MMLU categories (humanities, STEM, social sciences, others), and also present additional analysis in Appendix B.3. --- ### **Q7:** "What model is used in the synthetic setting? The setup for these experiments needs to be more detailed." **A7:** Thanks for asking about the details of our synthetic setting. We use Llama-3.1-8B-Instruct. For each prompt, together with a generation history of length in $\{0, 20, 50, 200\}$ ($0$ means only the prompt is given), we evaluate whether the joint probability of the sequence generated with greedy decoding is greater than or equal to that produced by (fixed-beam-width, set to $10$) beam search (as a proxy of the global optimum). This comparison indicates the extent to which greedy decoding deviates from the globally optimal response. In light of your comment, we have included the above details in the revised draft.
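The phenomenon probed by this synthetic comparison can be reproduced exactly on a toy two-step autoregressive model; the probabilities below are made up for exposition and have nothing to do with the Llama-based setup above:

```python
from itertools import product

# toy two-step model: first token from {"A", "B"}, second from {"C", "D"}
step1 = {"A": 0.6, "B": 0.4}
step2 = {"A": {"C": 0.55, "D": 0.45}, "B": {"C": 0.9, "D": 0.1}}

def joint_prob(seq):
    first, second = seq
    return step1[first] * step2[first][second]

# greedy decoding: pick the locally most likely token at each step
g1 = max(step1, key=step1.get)
greedy_seq = (g1, max(step2[g1], key=step2[g1].get))

# MAP: enumerate all length-2 sequences and take the joint maximizer
map_seq = max(product("AB", "CD"), key=joint_prob)
```

Greedy commits to "A" for its higher first-step probability, yet the globally most likely sequence starts with "B"; this is the myopia that the theoretical analysis formalizes.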
Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective
Accept (poster)
Summary: This paper explores a different scheme for rehearsal training in continual learning. It also provides a theoretical framework for the forgetting and generalization error of concurrent and sequential rehearsal. The authors show that when the difference between the sequential tasks is large, sequential rehearsal is provably better than concurrent rehearsal. This paper includes experimental results that verify its theory. Claims And Evidence: The claims are easy to understand and richly illustrated. Methods And Evaluation Criteria: About the sequential rehearsal, did you try training with the rehearsal samples for several rounds? For example, in figure 1, after training $M_{t, t-1}$, go back to train on $M_{t,1}$. What would it be? Theoretical Claims: The theoretical claims are supported with abundant proof. Experimental Designs Or Analyses: The experiments and analyses are adequate. Supplementary Material: The author provides abundant proof in the supplementary material, which seems good. Relation To Broader Scientific Literature: No Essential References Not Discussed: No Other Strengths And Weaknesses: No Other Comments Or Suggestions: No Questions For Authors: Are there any risks of overfitting in sequential rehearsal? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for providing the valuable review. Please note that all our new experiment results (i.e., the tables we quote below) and our codes can be accessed via the link https://anonymous.4open.science/r/repo-c14014 **Q:** Are there any risks of overfitting in sequential rehearsal? **A:** We appreciate the reviewer’s question. During training, the risk of overfitting indeed exists due to the limited size of memory data. To reduce the overfitting risk, our experiment has adopted a more conservative learning rate if a ‘dissimilar task’ is learned separately, as presented in Table 3 of the paper. **Q:** About the sequential rehearsal, did you try training with the rehearsal samples for several rounds? For example, in figure 1, after training $M_{t,t-1}$, go back to train on $M_{t,1}$. What would it be? **A:** We appreciate the reviewer’s insightful question. Despite the limited time, we made every effort to expand our experiments to train dissimilar tasks for multiple rounds, as presented in Table R3 in the anonymous link. The Average Final Accuracy (Acc) of multiple-round training does not change significantly, while the Forgetting (Fgt) is slightly improved compared to single-round training. It could be because the same memory data is repeatedly learned when sequential training is conducted over multiple rounds, which may help recover more previous knowledge while introducing an unavoidable risk of overfitting.
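To make the compared schedules concrete, here is a schematic sketch of the data-presentation orders (buffer contents are hypothetical, and this is an illustration rather than the code released at the link above):

```python
def concurrent_schedule(current, memory_buffers):
    """Concurrent rehearsal: a single phase mixing the current task's
    data with all memory data."""
    return [list(current) + [x for buf in memory_buffers for x in buf]]

def sequential_schedule(current, memory_buffers, rounds=1):
    """Sequential rehearsal: the current task first, then each memory
    buffer M_{t,1}, ..., M_{t,t-1} in turn; rounds > 1 repeats the
    memory sweep (the multi-round variant discussed above)."""
    return [list(current)] + [list(buf) for _ in range(rounds) for buf in memory_buffers]

current_task = ["t3_a", "t3_b"]
memories = [["t1_a"], ["t2_a"]]   # M_{t,1}, M_{t,2}
one_round = sequential_schedule(current_task, memories)
two_rounds = sequential_schedule(current_task, memories, rounds=2)
```

Concurrent rehearsal yields one mixed phase, while sequential rehearsal yields one phase per buffer; multi-round training simply repeats the memory sweep.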
Summary: In the context of Continual Learning, the authors propose a new rehearsal method, which is sequential. Then the authors present a theoretical analysis of both sequential and concurrent rehearsal methods. The authors derive a closed-form expression of generalisation and forgetting for both methods. The main takeaway is that sequential rehearsal outperforms concurrent rehearsal when tasks are dissimilar. Based on these findings, they propose a hybrid rehearsal CL algorithm. For every new task, the memory dataset is split into similar and dissimilar samples. The similar samples are merged with the current task and the dissimilar points are finetuned afterwards. Claims And Evidence: - "our hybrid approach can perform better than concurrent rehearsal and the advantage is more apparent when tasks are more dissimilar" One question I have is whether the samples are seen the same number of times in the final benchmarks, when comparing the hybrid vs the concurrent rehearsal. (i.e. Are the experiments set with training steps or epochs as a hyperparameter?) The following two claims are supported theoretically: - The first explicit closed-form expressions for the expected value of forgetting and generalization error for both the concurrent rehearsal strategy and the sequential rehearsal strategy under an overparameterized linear regression setting. - Sequential rehearsal outperforms concurrent rehearsal if tasks in CL are dissimilar, and the performance improvement is larger when the tasks are more dissimilar. Methods And Evaluation Criteria: The benchmarks and metrics are the standard ones used across most CL literature. Theoretical Claims: I didn't check the proofs for the theoretical claims. - Could you clarify the optimisation objective in L 186? Is it equivalent to $\arg\min_\omega (X\omega - y)^2$ plus some regularisation on the weights? Experimental Designs Or Analyses: I reviewed all the experimental designs and analyses in the main paper.
Some questions below: - Are the samples seen the same number of times in the final benchmarks, when comparing the hybrid vs the concurrent rehearsal? (i.e. Are the experiments set with training steps or epochs as a hyperparameter?) - In Table 2, could you add the std to conclude on the significance of the improvement? - Is the method effective if the corruption is applied to the labels? - L 370: Actually I think that the task ordering for the buffer division is within scope and it would be valuable to see the improvement wrt the number of splits and the ordering of the dissimilar samples. Especially since the improvements are not very significant in the current minimal setup. It would be valuable to see if it's because of this simplification, or if it's still the case for a more optimised buffer division and ordering. - Figure 2: Could you clarify the unit of the y-axis? Supplementary Material: I didn't review the supplementary material. Relation To Broader Scientific Literature: This work relates to the theoretical Continual Learning literature. Several works quantify the impact of task similarity on CF under different task, model and data assumptions: [2], [3], [4], [5], [6]. Another set of related works adapt training using rehearsal samples, in order to avoid interference: [8], [9]. Additionally, the closest work to this paper is [1], which investigates rehearsal-based CL in linear models with concurrent rehearsal. - [1] Banayeeanzade, Mohammadamin et al. “Theoretical Insights into Overparameterized Models in Multi-Task and Replay-Based Continual Learning.” ArXiv abs/2408.16939 (2024): n. pag. - [2] Bennani, Mehdi et al. “Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent.” ArXiv abs/2006.11942 (2020): n. pag. - [3] Doan, Thang Van et al. “A Theoretical Analysis of Catastrophic Forgetting through the NTK Overlap Matrix.” International Conference on Artificial Intelligence and Statistics (2020).
- [4] Lee, Sebastian et al. “Continual Learning in the Teacher-Student Setup: Impact of Task Similarity.” International Conference on Machine Learning (2021). - [5] Evron, Itay et al. “How catastrophic can catastrophic forgetting be in linear regression?” ArXiv abs/2205.09588 (2022): n. pag. - [6] Evron, Itay et al. “The Joint Effect of Task Similarity and Overparameterization on Catastrophic Forgetting - An Analytical Model.” ArXiv abs/2401.12617 (2024): n. pag. - [7] Hiratani, N. (2024). Disentangling and Mitigating the Impact of Task Similarity for Continual Learning. ArXiv, abs/2405.20236. - [8] Chaudhry, A., Ranzato, M., Rohrbach, M., & Elhoseiny, M. (2018). Efficient Lifelong Learning with A-GEM. ArXiv, abs/1812.00420. - [9] Lopez-Paz, D., & Ranzato, M. (2017). Gradient Episodic Memory for Continual Learning. Neural Information Processing Systems. Essential References Not Discussed: The proposed sequential rehearsal method may share some similarities with other curriculum learning methods. I am not familiar with the literature, but it would be worth mentioning the related works in Curriculum Learning. Other Strengths And Weaknesses: Strengths: - Clarity of the presentation and ease of following the paper - Clear motivation and positioning wrt the literature - Informative analysis which investigates rehearsal methods which are widely used - Clear experiments and results Weaknesses: - The linear model is very simplistic and it's unclear to what extent it transfers to more complex data. - The results look promising on the small-scale MNIST and CIFAR-10 datasets, but the improvement is marginal on the larger CIFAR-100 benchmark. - The proposed method is evaluated with a single split of the memory buffer. It is unclear to what extent the improvement evolves as a function of the number of splits. - Other comments in the other sections.
Other Comments Or Suggestions: Suggestions: - Theorem 5.1: I think it would add clarity to share an intuition about the coefficients in the interpretation below the theorem. - L 223 to 237: "By letting M = 0", "When p→∞": neither quantity appears in the form of the theorem presented in the main paper. Questions For Authors: - Could the proposed method outperform the multitask learning baseline, provided the permutations of data are optimised and all the dataset is accessible at once? - In Theorem 5.1, it looks like the only way the task ordering may influence forgetting is through the coefficients. Could you share an intuition about how the task ordering impacts forgetting from the Theorem? - Table 1: On CIFAR-100, the improvement in CF is marginal compared to MNIST and CIFAR-10. Could it be because the data and model are more complex and further from the theoretical assumptions than the other two benchmarks? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Please note that all our new experiment results (i.e., the tables we quote below) and our codes can be accessed via the link https://anonymous.4open.science/r/repo-c14014 **==Questions about Experiments==** **Effectiveness of the method under label corruption:** We conduct experiments under label corruption (see Table R2 in the anonymous link). Under varying levels of label corruption, our hybrid rehearsal also outperforms concurrent rehearsal, demonstrating the effectiveness of our method under label corruption. **Experiments on Tiny-Imagenet200:** Results are in Table R1 via the anonymous link. For this more complex dataset, hybrid rehearsal substantially outperforms concurrent replay. On the original dataset, hybrid rehearsal improves accuracy by 2.19% and reduces forgetting by 3.66%. On the corrupted dataset, it improves accuracy by 2.38% and reduces forgetting by 13.32%. **Add std in Table 2:** We marked standard deviations in Table R4 via the anonymous link. As can be observed, the improvements of most results are well beyond the error bars. **Impact of number of memory splits and ordering of dissimilar samples:** Thanks for the suggestion. We are currently exploring these issues with full effort. **Improvement on CIFAR-100 is marginal compared to MNIST and CIFAR-10:** This may not be due to the more complex dataset, as our new experiment on Tiny-Imagenet200 shows significant improvement. It could be because the random selection of classes in our experiments did not have high dissimilarity among tasks. **Other questions:** In our experiments, each sample is used the **same** number of times in both hybrid and concurrent rehearsal for a fair comparison. The y-axis is unitless (a scalar).
**==Questions about Other Issues==** **Whether the objective in L186 is equivalent to $\text{argmin} (Xw - y )^2$ plus some regularization:** The optimization objective in Line 186 comes from the convergence point of SGD on the linear model with the starting point of $w_{t-1}$ **without any regularization** [2,3,5]. This optimization is not equivalent to $\text{argmin}_w (X^\top w-y)^2+\lambda \|w\|$ because we force $X^\top w-y=0$ (Line 188), while $\text{argmin}_w (X^\top w-y)^2+\lambda \|w\|$ usually leads to a non-zero $(X^\top w-y)^2$ due to the need to balance $(X^\top w-y)^2$ and $\|w\|$. **Extension of linear model to more complex data:** A natural next step is to extend our analysis to neural networks in the NTK regime, which is effectively a linearized model. Further, our analysis of rehearsal-based CL can also integrate with recent advances in analyzing over-parameterized neural networks and attention models, to study CL under these more complex models. **Comparison to multitask learning:** Previous studies [1,4] have shown that conventional concurrent rehearsal outperforms multitask learning. We expect our method to outperform the multitask learning baseline, as it subsumes concurrent replay as a special case. **Intuition about coefficients in Theorem 5.1:** The coefficients in Theorem 5.1, which are given in (32) and (35) in the appendix, suggest the following intuitions about how task ordering impacts forgetting. When the memory size $M$ is small, introducing dissimilar tasks early reduces forgetting by encouraging broader feature exploration, aligning with [3] (which has $M=0$). As $M$ increases, delaying dissimilar tasks helps, since early introduction leads to frequent rehearsal, and their conflicting nature can disrupt learning of later tasks. **L 223 to 237:** $M$ and $p$ affect the theorem via the coefficients $c_i$, $c_{ijk}$, $d_{0T}$ and $d_{ijkT}$ (see Proposition B.2 in the appendix).
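The convergence point discussed above (the feasible point closest to the starting point $w_{t-1}$) has a simple closed form in the overparameterized case, which can be checked numerically; a minimal numpy sketch with synthetic Gaussian data, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 50, 10                      # overparameterized: dimension p > samples n
X = rng.standard_normal((p, n))    # columns are training inputs
y = rng.standard_normal(n)
w_prev = rng.standard_normal(p)    # starting point w_{t-1}

# closest interpolator to w_prev: move w_prev by a correction in col(X)
w = w_prev + X @ np.linalg.solve(X.T @ X, y - X.T @ w_prev)

feasible = np.allclose(X.T @ w, y)   # the constraint X^T w = y holds exactly
# any other feasible point is w + v with X^T v = 0 (v orthogonal to col(X)),
# and is therefore at least as far from w_prev
z = rng.standard_normal(p)
v = z - X @ np.linalg.solve(X.T @ X, X.T @ z)
closer = np.linalg.norm(w - w_prev) <= np.linalg.norm(w + v - w_prev)
```

Since $w - w_{t-1}$ lies in the column space of $X$ and every feasible perturbation $v$ is orthogonal to it, the distances add in Pythagorean fashion, confirming minimality.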
**Similarity of sequential rehearsal with Curriculum Learning:** Although they appear similar, their underlying training objectives differ significantly. In **curriculum learning**, the model is presented with data in a structured progression, typically organized by increasing difficulty, with the goal of optimizing overall learning performance. In contrast, **sequential rehearsal** involves data drawn from different (possibly conflicting) tasks, where the model is trained by rehearsing over these tasks with the goal of mitigating forgetting. The forgetting issue is not addressed in curriculum learning. Thank you for your insightful comments. We hope our responses addressed your concerns and would greatly appreciate your kind consideration in increasing your score. Reference: [1] Goodfellow, et al. "An empirical investigation of catastrophic forgetting in gradient-based neural networks." arXiv. [2] Gunasekar, et al. "Characterizing Implicit Bias in Terms of Optimization Geometry." ICML 2018. [3] Lin, et al. "Theory on forgetting and generalization of continual learning." ICML 2023. [4] Wu, Zihao, et al. "Is multi-task learning an upper bound for continual learning?" ICASSP 2023. [5] Zhang, et al. "Understanding deep learning requires rethinking generalization." arXiv 2016. --- Rebuttal Comment 1.1: Comment: Apologies for my late response, I appreciate your time and effort running the additional experiments and clarifying my questions. Some follow-up comments: - In Table R1, I find it surprising that the improvement on AAC is +2.38, while the improvement in forgetting is -13. Does it imply that Hybrid rehearsal somewhat leads to lower accuracies overall? How could it be explained? - Sorry, where does [1] show that conventional concurrent rehearsal outperforms multitask learning? - L 223 to 237: I think that it would be helpful to introduce M and p in the main paper because they are stated in it. Many thanks!
--- Reply to Comment 1.1.1: Comment: We thank the reviewer very much for providing further comments. Q: In Table R1, I find it surprising that the improvement on AAC is +2.38, while the improvement in forgetting is -13. Does it imply that Hybrid rehearsal somewhat leads to lower accuracies overall? How could it be explained? A: Thank you for the insightful observation and question. Below, we first clarify our definitions of Acc and Forgetting, as well as how to interpret our results under these metrics. We then provide our response based on two possible interpretations of the reviewer’s question — we apologize for any misunderstanding, as we are not entirely certain which interpretation aligns with the reviewer’s intended meaning. We note that Final Average Accuracy (Acc) evaluates the average testing accuracy **across all tasks** after learning the last task, where a higher value is better. As presented in Table R1, our hybrid method improves Acc by 2.38%, demonstrating that the **overall** testing accuracy of hybrid rehearsal is 2.38% higher than that of concurrent rehearsal. We also note that forgetting evaluates the average **accuracy drop** of old tasks due to learning new tasks, where a lower value is better. The detailed definition can be found at L362 in the paper. In Table R1, a 13.32% lower forgetting in hybrid rehearsal compared to concurrent rehearsal indicates that our method effectively reduces the accuracy degradation of earlier tasks. If the reviewer is asking whether Hybrid rehearsal leads to lower accuracy for current task learning compared to Concurrent rehearsal, this can be true because the large amount of current task data dominates the multi-task model training in Concurrent rehearsal, leading to a model that favors the current task.
However, the overall objective of CL is to strike the right balance between model stability and plasticity, and the Final Average Accuracy (Acc) is the widely used metric in the CL community to characterize how well an algorithm can handle this balance. In order to achieve a higher Acc, Hybrid rehearsal sacrifices the performance on current task learning to review the knowledge of old tasks (via sequential replay), which can be further verified by the improvement in forgetting. In contrast, Concurrent rehearsal focuses too much on current task learning and cannot retain the knowledge of old tasks. If the reviewer is asking whether the accuracy improvement of Hybrid rehearsal can be further improved by sacrificing a certain level of forgetting (given that there is a large improvement in forgetting), this can be possible. Again, how to strike the right balance between model stability and plasticity is the fundamental challenge in CL. The design of Hybrid rehearsal is by no means the optimal scheme to achieve this. However, our purpose here is to demonstrate that Hybrid rehearsal could potentially be a better choice than the widely used Concurrent rehearsal in CL, where the focus is slightly shifted to how to remember old tasks. How to further improve the performance of Hybrid rehearsal deserves an independent and more comprehensive study, which we will explore in future work. Q: Reference showing that conventional concurrent rehearsal outperforms multitask learning. A: We apologize for the mistake about the provided references. In [4], they investigate the relationship between multitask learning (MTL) and continual learning (CL), concluding that CL can provide superior performance compared to MTL when tasks are conflicting (i.e., dissimilar). We thank the reviewer for suggesting the introduction of $M$ and $p$ in the main paper, and we will revise the paper accordingly. Thank you again for your insightful comments.
We hope our responses have addressed your questions satisfactorily, and we would greatly appreciate your kind consideration in increasing your score.
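For concreteness, the two metrics discussed in this thread can be computed from a task-accuracy matrix; a minimal sketch using one common set of definitions consistent with the descriptions above (the numbers are hypothetical):

```python
import numpy as np

# acc[i, j]: test accuracy on task j after training on task i (hypothetical values)
acc = np.array([
    [0.90, 0.00, 0.00],
    [0.70, 0.85, 0.00],
    [0.60, 0.75, 0.88],
])
T = acc.shape[0]

# Final Average Accuracy: mean accuracy over all tasks after the last task
final_avg_acc = acc[-1].mean()

# Forgetting: average drop on each old task from its best accuracy at any
# checkpoint between when it was learned and before the final task
forgetting = np.mean([acc[j:-1, j].max() - acc[-1, j] for j in range(T - 1)])
```

In this example, Acc averages the last row, while Forgetting averages the per-task drops (0.90 to 0.60 on task 1, 0.85 to 0.75 on task 2), showing how a method can trade some current-task accuracy for much lower forgetting.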
Summary: This paper theoretically and numerically investigates the effects of concurrent and sequential rehearsal in the context of continual learning. The authors analytically derive that, in a linear regression model, sequential rehearsal leads to better performance than concurrent rehearsal when tasks are more dissimilar. To validate these findings, they propose a hybrid approach in which memories with low dissimilarity are rehearsed concurrently, while those with high dissimilarity are rehearsed sequentially. This hybrid strategy is then applied and tested within a deep neural network model across multiple tasks, showing minimal to modest improvements over a purely concurrent approach. Claims And Evidence: The authors claim that in a linear regression model, where labels are generated from a noisy teacher network, and under the mean-squared error (MSE) criterion, in the over-parameterized case (i.e., when the input dimension exceeds the number of training examples), the following holds: 1. Sequential rehearsal outperforms concurrent rehearsal when tasks are more dissimilar. 2. This principle extends to deep neural networks. However, as I will argue below, I have concerns that the authors' performance metric may not be fully justified, and as a result, the theoretical claims may not be valid. Additionally, throughout their proofs, the authors rely heavily on the statement: "It is known that the convergence point of stochastic gradient descent (SGD) for MSE is the feasible point closest to the initial point with respect to the $\ell_2$-norm, i.e., the minimum-norm solution." Unfortunately, they do not provide a reference for this claim. To my knowledge, there is no theoretical work that confirms this result in the over-parameterized case. As a consequence, the minimization procedures presented in lines 187 and 212 may not accurately reflect the outcome of running SGD.
Methods And Evaluation Criteria: The loss over task $i$ for current parameters $w$ is evaluated as $$\mathcal{L}_i(\mathbf{w}) = ||\mathbf{w} - \mathbf{w}_i^*||.$$ However, I believe this measure is not appropriate in the underdetermined case, where the input dimension $p$ exceeds the number of current training examples $n$. In such situations, the current weight vector $\mathbf{w}$ can differ from the target weights $\mathbf{w}_i^*$ while still yielding an MSE of zero. This occurs because, in an underdetermined system, there are multiple weight configurations that can perfectly fit the data, making the comparison between $\mathbf{w}$ and $\mathbf{w}_i^*$ potentially misleading. For example, let's assume the first task has one input $\mathbf{x} = [1, 0]$ with $\mathbf{w}_1^* = [1, 1]$; then $\mathbf{w} = [1, 0]$ yields an MSE of $0$ but $\mathcal{L}_i(\mathbf{w}) \neq 0$! In fact, $\mathcal{L}_i(\mathbf{w})$ can be arbitrarily large or small. Further, the authors state: "To simplify our theoretical analysis, we focus on the situation in which the memory data are all fresh and have not been used in previous training." However, if the input-output pairs in the memory are resampled after each task, then in the underdetermined case, this would alter the set of $\mathbf{w}_i^*$ that perfectly solve the corresponding task. In summary, the theoretical setup does not appear to be fully consistent with the derivations and conclusions drawn. Numerically, the hybrid rehearsal training framework leads to minimal to modest improvements over a purely concurrent approach. However, improvements are often within the margin of error and thus may simply be a result of noise. Theoretical Claims: All theoretical claims are based on the loss function $\mathcal{L}_i(\mathbf{w})$. However, as discussed in the Methods and Evaluation Criteria section, I have concerns about the validity of this measure.
Specifically, I believe it may not be appropriate, which raises doubts about the validity of the theoretical claims that rely on it. Experimental Designs Or Analyses: The authors validate their analytical results numerically (albeit without label noise) in Figure 2. Interestingly, the theory and simulation appear to align perfectly. One possibility is that my concern about $\mathcal{L}_i(\mathbf{w})$ being unjustified is mistaken. Another possibility is that resampling items in the memory artificially increases the effective dimensionality of the training data, thereby shifting the model into the underparameterized regime. Or perhaps there is another explanation? Unfortunately, the authors do not provide their simulation code, making it difficult to verify these possibilities. In principle, the numerical setup (datasets, evaluation criteria, etc.) in Section 6 appears well-designed to test the hybrid rehearsal training framework. However, the authors state that they "adopt a straightforward relaxation: only one task with the lowest similarity characterization in memory is designated as the ‘dissimilar task.’" This raises a significant concern: if only one task is treated as dissimilar, doesn’t that essentially bypass the role of the threshold $\tau$? If $\tau$ is not properly explored or validated, Algorithm 1 has not actually been implemented and validated, and it becomes unclear how the simulation results relate to the theoretical claims. This should also affect how we interpret the trend described in Table 2, which is intended to reflect the theoretical result. Supplementary Material: The authors provide no supplementary material. I did not work through the appendix. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: Robins (1995) was one of the first to study catastrophic forgetting, rehearsal and pseudorehearsal and thus should be included. 
Similarly, McCloskey & Cohen (1989) and Ratcliff (1990) were among the first to explore and describe the phenomenon of catastrophic forgetting and thus should be referenced. In my view, it would be important to include Prabhu (2020) to provide a more balanced perspective on the statement: "A large amount of studies have been proposed to address this issue, among which rehearsal-based approaches (Rolnick et al., 2019) have demonstrated state-of-the-art performance." Including this reference would offer a more nuanced view of rehearsal-based methods and their relative performance. Further, Lee et al. (2021) ("Continual learning in the teacher-student setup: Impact of task similarity.") is closely related to the presented work but not cited. They also study continual learning using a teacher-student paradigm as a function of similarity between teachers. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: 1. Could you clarify why the choice of $\mathcal{L}_i(\mathbf{w})$, and consequently the chosen performance metrics, is justified in the overparameterized setting? 2. Do you plan to make the code (particularly for Figure 2) available (to reviewers)? 3. Could you explain why the influence of $\tau$ was not tested in your experiments? 4. Why do you think the improvements of the numerical results are minimal (often within the margin of error)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for providing the valuable review. Please note that all our new experiment results (i.e., the tables we quote below) and our codes (including code for Fig. 2) can be accessed via the link https://anonymous.4open.science/r/repo-c14014 **Q:** Clarify the choice of $\mathcal{L}_i(w)$, and justify the performance metric in the overparameterized setting **A:** We first clarify that $\mathcal{L}_i(w) = \|w - w^*_i\|^2$ is equivalent to the test error $\mathbb{E}\|y - Xw\|^2$. To see this, we derive $\mathbb{E}\|y - Xw\|^2 = \mathbb{E}\|X(w - w^*_i)\|^2 + \sigma^2 = \mathbb{E}[(w - w^*_i)^\top X^\top X (w - w^*_i)] + \sigma^2 = \|w - w^*_i\|^2 + \sigma^2$, where the last equality follows because $X$ has i.i.d. standard Gaussian entries, and $\sigma$ is the noise level. Such a loss has been commonly adopted in recent theoretical studies of CL [1,2]. The example given by the reviewer is not valid, because the loss function (i.e., the test error) is evaluated in expectation over the distribution of $X$, not at a specific value of $X$. Further note that the model parameter $w$ depends on the input data $X$, because the model is trained based on data. Thus, the performance of forgetting and generalization defined in (4) and (5) is evaluated using the **expected value** of model errors, which reflects the overall performance across a set of inputs. **Q:** Provide reference for the statement "the convergence point of SGD for MSE is the feasible point closest to the initial point in $\ell_2$-norm, i.e., minimum-norm solution." **A:** In [3,4], it has been mathematically shown that in overparameterized linear models, SGD/GD converges to the minimum-norm solution, i.e., the feasible point closest to the initial point with respect to the $\ell_2$-norm. Further, such a property has been widely used to simplify the theoretical model in CL [1,2]. **Q:** Numerical results are within error margin? 
**A:** In Table 1 of the paper, the improvement of both generalization and forgetting on **corrupted** datasets (with a higher level of task dissimilarity) is substantial and well outside the error margin, which validates our theoretical observation. The improvement is marginal on the original datasets (Split-MNIST, Split-CIFAR-10, Split-CIFAR-100) because their task dissimilarity level is not significant. We further conducted **new experiments on Tiny-Imagenet200** (see Table R1 via the anonymous link). For this more complex dataset, where task dissimilarity is much higher, on the original dataset, hybrid rehearsal improves the Averaged Final Accuracy by 2.19% and reduces forgetting by 3.66%. On the corrupted dataset, it improves the Averaged Final Accuracy by 2.38% and reduces forgetting by 13.32%. These results clearly demonstrate the benefits of our hybrid rehearsal. **Q:** Will resampled input-output pairs in memory alter $w_i^*$ that perfectly solves the corresponding task? **A:** We clarify that in this paper, $w_i^*$ represents the ground-truth model parameters, which are **fixed** for each task. Then based on such a ground-truth $w_i^*$, data are generated by $Y_i = X^\top_i w^*_i + z_i$. Clearly, resampling of input-output pairs will not change the ground-truth parameters. **Q:** Explain influence of $\tau$. Why was it not tested in your experiments? If only one task is treated as dissimilar, doesn’t that bypass the role of $\tau$? **A:** Thank you for highlighting this point. Our experiment is a simplified implementation of Algorithm 1, where **at most** one task is designated as the ‘dissimilar task’ for sequential rehearsal. $\tau$ still serves the role of a threshold as follows. If multiple tasks have scores below $\tau$, then the most dissimilar task is set for sequential rehearsal. Otherwise, if all tasks have positive scores, then no task is chosen for sequential rehearsal. Note that this is a reasonable experiment setup in CL [5]. 
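The relaxed selection rule described in this answer can be sketched in a few lines; a minimal illustration with made-up 2-D gradient vectors (the function name, dimensions, and values are ours, not the authors'):

```python
import numpy as np

def pick_sequential_task(prev_grads, cur_grad, tau=0.0):
    """Cosine similarity between each previous task's gradient and the
    current task's gradient; at most one task (the most dissimilar one,
    and only if its score falls below tau) is rehearsed sequentially."""
    scores = [
        float(np.dot(g, cur_grad) / (np.linalg.norm(g) * np.linalg.norm(cur_grad)))
        for g in prev_grads
    ]
    worst = int(np.argmin(scores))
    return worst if scores[worst] < tau else None  # None: rehearse all concurrently

# Made-up gradient vectors for illustration
cur = np.array([1.0, 0.0])
aligned = np.array([0.9, 0.1])       # cosine score ~ 0.99
conflicting = np.array([-0.8, 0.2])  # cosine score ~ -0.97
print(pick_sequential_task([aligned, conflicting], cur))  # -> 1
```

With the default `tau=0.0`, a positive score (alignment) never triggers sequential rehearsal, matching the answer above.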
**Further note on choice of $\tau$:** Since the task similarity score is defined as the cosine similarity between the gradient of each previous task and that of the current task, a positive score indicates alignment, and a negative score indicates conflict. Hence, in our experiment, we naturally set the threshold $\tau$ to $0$. In practice, $\tau$ can be set negative to ensure the task dissimilarity is high enough. Thank you again for your insightful comments. We hope our responses addressed your concerns and would greatly appreciate your kind consideration in increasing your score. References: [1] Evron, et al. "How catastrophic can catastrophic forgetting be in linear regression?." COLT 2022. [2] Lin, et al. "Theory on forgetting and generalization of continual learning." ICML 2023. [3] Gunasekar, et al. "Characterizing Implicit Bias in Terms of Optimization Geometry." ICML 2018. [4] Zhang, et al. "Understanding deep learning requires rethinking generalization." arXiv 2016. [5] Lin, et al. "Trgp: Trust region gradient projection for continual learning." ICLR 2022. --- Rebuttal Comment 1.1: Comment: I sincerely appreciate the authors’ detailed response and their efforts in addressing the concerns I raised. In particular, the clarification regarding $\mathcal{L}_i$ was very helpful. It may be beneficial to explicitly state in the manuscript that $\mathcal{L}_i$ represents the test error over the distribution of inputs, rather than the training error over a fixed set of samples. Additionally, I strongly recommend including references [3,4] to support the equivalence of the constrained optimisation problem and the convergence of SGD. Thank you for addressing my question regarding resampling data in memory. I understand that $\mathbf{w^*}$ is fixed, but I would like to clarify my concern. In the main text, you state: "we focus on the situation in which the memory data are all fresh and have not been used in previous training." 
Since $w_t$​ explicitly depends on both the training data and the memory data used for optimization, resampling the memory after each task means that more information about the corresponding $\mathbf{w^*}$ accumulates for older tasks compared to more recent ones. Each newly sampled underconstrained i.i.d. $\mathbf{X}$ further constrains learning (which is conceptually similar to performing SGD in a teacher-student model with an increasing number of samples). How does this assumption influence your results? I would encourage verifying numerically (as I understand that this assumption is necessary to simplify the analytical treatment) that using the same data for training and the memory does not change the conclusions of the paper (i.e., Figure 2). I remain of the opinion that the numerical insights from the deep-learning simulations are limited, often falling within error margins. Furthermore, the proposed algorithm, as described, does not appear to be fully implemented (ignoring $\tau$) or tested in its intended form. I suggest either revising the presentation of the algorithm accordingly or providing additional empirical validation. Given the clarification regarding $\mathcal{L}_i$​​, I have updated my overall recommendation accordingly. --- Reply to Comment 1.1.1: Comment: We thank the reviewer very much for the prompt response and for increasing the score. We will clarify in our paper that $\mathcal L_i$​ represents the test error over the distribution of inputs, not the training error over a fixed set of samples. We will also include references [3,4] to support the equivalence of the constraint optimisation problem and the convergence of SGD. Thank you for the suggestions. **Regarding resampling data in memory**, we appreciate the reviewer’s further explanation and we now get the point, which is quite insightful. 
As suggested by the reviewer, we conducted a new numerical simulation where the memory data is selected from the training data of previous tasks (not the fresh resampled data). All other experiment parameters are set to be the same as in the paper (i.e., Figure 2 of the paper). As can be observed in Figure R1 via https://anonymous.4open.science/r/repo-c14014/rebuttal_table.pdf, our conclusion still holds. Namely, sequential rehearsal has a smaller forgetting value and test error (and hence is more advantageous) than concurrent rehearsal when task dissimilarity becomes large, and such an advantage of sequential rehearsal becomes more obvious as task dissimilarity grows. Regarding **insights from the deep-learning simulations**, we thank the reviewer for re-iterating this issue. The improvements within error margins primarily occur in the small-scale datasets studied in our original submission, where task dissimilarity may not be substantial enough to fully showcase the benefits of our hybrid algorithm. In our new experiment on a larger dataset **Tiny-ImageNet200**, as can be observed in Table R1 via the anonymous link: https://anonymous.4open.science/r/repo-c14014/rebuttal_table.pdf, the hybrid rehearsal method achieves an accuracy of **$63.29\ (\pm 0.47)\%$**, compared to **$61.10\ (\pm 0.28)\%$** for concurrent rehearsal. This represents a $2.19\%$ improvement, which is around three times the error margin. We hope this additional evidence helps demonstrate the potential of our approach in scenarios where task dissimilarity is more significant. We also thank the reviewer for the suggestion regarding Algorithm 1. We will clarify our implementation in the experiments and revise our presentation of the algorithm accordingly. Meanwhile, we are actively working on incorporating more dissimilar tasks based on $\tau$ in sequential rehearsal in our experiments.
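The minimum-norm convergence property debated earlier in this thread (the rebuttal's references [3,4]) is easy to check numerically; below is a minimal sketch with arbitrary dimensions of our choosing, showing that full-batch gradient descent on an overparameterized least-squares problem, started from zero, recovers the minimum-$\ell_2$-norm interpolating solution:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 50, 10          # overparameterized: input dimension p > sample count n
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Full-batch gradient descent on the MSE, started from w = 0;
# every update lies in the row space of X, which is why the limit
# is the minimum-norm interpolator.
w = np.zeros(p)
for _ in range(20000):
    w -= 0.1 * X.T @ (X @ w - y) / n

# Minimum-l2-norm interpolating solution, via the pseudoinverse
w_min = np.linalg.pinv(X) @ y

assert np.allclose(X @ w, y, atol=1e-6)   # zero training error (interpolation)
assert np.allclose(w, w_min, atol=1e-6)   # GD found the minimum-norm solution
```

Starting from a nonzero $w_0$ instead yields the feasible point closest to $w_0$, which is the form of the statement quoted in the review.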
Summary: The paper studies rehearsal in continual learning for overparameterized linear models. Next to concurrent rehearsal, which is the commonly used setting, they also look into sequential rehearsal, where different task data is revisited sequentially. From the theoretical analysis, they conclude that for highly dissimilar tasks, sequential rehearsal may be better. Interestingly, they next turn this into a practical algorithm, where the most dissimilar task is learnt sequentially while the other tasks are learnt concurrently. Although the effect is minor, a positive effect is observed on standard small continual learning benchmarks such as MNIST, CIFAR-10 and CIFAR-100. Claims And Evidence: I did not find any problematic claims. Methods And Evaluation Criteria: The proposed method is simple, but this is mostly a theoretical paper. The strategy for the proofs in the paper makes sense. Evaluation criteria are the ones commonly used in this context. Theoretical Claims: I did not check all proofs completely, but could not find any issues in the proof outlines and other parts I checked. Experimental Designs Or Analyses: Results with more / larger datasets (e.g. (mini)ImageNet), and under different settings could have strengthened the paper further, for instance going beyond the task-incremental setting, to class or domain incremental ones. Supplementary Material: I scanned over the supplemental material, looking into some parts, but not reviewing it rigorously. Relation To Broader Scientific Literature: The authors did a good job in contextualizing their work in a broader context, including both the standard continual learning literature as well as papers focusing on a more theoretical analysis. Essential References Not Discussed: I can't think of any references that should have been discussed but are missing. 
Other Strengths And Weaknesses: The key finding of the paper, that sequential rehearsal can in some cases outperform concurrent rehearsal, is interesting and somewhat surprising. The paper's strength is that this is not only observed empirically, but also analyzed theoretically, albeit on an overparameterized linear model only. Neither of the two parts of the paper (theoretical analysis, empirical study) would have been sufficient on their own, but combined I think there's sufficient evidence. I found the paper well structured and clear. I doubt whether the observed differences, which have only been shown in a task-incremental setting, are worth the extra complexity in a practical setting, and concurrent rehearsal may remain the go-to solution. Nevertheless, the paper shows that concurrent rehearsal may not be the optimal setting, and I think this is worth sharing. Other Comments Or Suggestions: / Questions For Authors: 1. Did you try experiments beyond the task-incremental setting? Is there a reason why you did not? Ethical Review Concerns: / Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for providing the valuable review. Please note that all our new experiment results (i.e., the tables we quote below) and our codes can be accessed via the link https://anonymous.4open.science/r/repo-c14014 **Q1:** Results with more / larger datasets (e.g. (mini)ImageNet), and under different settings could have strengthened the paper further. **A1:** Thank you for your constructive suggestion. Despite the limited time, we made every effort to expand our experiments to Tiny-Imagenet200, as presented in Table R1 via the anonymous link. Our experiments on Tiny-ImageNet demonstrate that hybrid rehearsal achieves a substantial improvement over conventional concurrent rehearsal. On the original dataset, hybrid rehearsal improves the Averaged Final Accuracy by 2.19% and reduces forgetting by 3.66%. On the corrupted dataset, it improves the Averaged Final Accuracy by 2.38% and reduces forgetting by 13.32%, which also aligns with our theoretical observation that benefits of our hybrid rehearsal increase as task dissimilarity rises. Furthermore, we observe that the advantage of our method is even more obvious in Tiny-Imagenet200 compared to MNIST, CIFAR-10 and CIFAR-100. This is because Tiny-Imagenet200 is a more complex dataset and exhibits a higher level of task dissimilarity, and hence benefits of our hybrid rehearsal are more salient. **Q2.** I doubt whether the observed differences, which have only been shown in a task-incremental setting, are worth the extra complexity in a practical setting, and concurrent rehearsal may remain the go-to solution. **A2:** We sincerely appreciate the reviewer’s insightful observation. We agree that hybrid rehearsal can introduce additional complexity in practical deployments, and that conventional concurrent rehearsal may offer greater simplicity and convenience. 
We also appreciate the reviewer’s recognition that one goal of our paper is to highlight, from a scientific perspective, that concurrent rehearsal, while practical, can be suboptimal in terms of performance, as we have demonstrated in task-incremental settings. Our findings suggest that alternative strategies, such as hybrid rehearsal, warrant further exploration for their potential benefits. We thank the reviewer again for the inspiring comments. We hope that our responses resolved your concerns. If so, we wonder if the reviewer could kindly consider increasing the score. Certainly, we are more than happy to answer your further questions.
Stealix: Model Stealing via Prompt Evolution
Accept (poster)
Summary: This paper introduces a method for model stealing attacks that do not require manually crafted prompts. Unlike prior approaches, which rely on predefined class names or expert knowledge to generate synthetic data, Stealix employs a genetic algorithm to iteratively refine prompts based on a victim model’s responses. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: all good. Experimental Designs Or Analyses: see Other Strengths And Weaknesses part. Supplementary Material: No Supplementary Material. Relation To Broader Scientific Literature: This paper builds upon and extends prior research in model stealing, generative adversarial techniques, and automated prompt optimization. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: Eliminates the need for manually crafted prompts or class names, making it more accessible and scalable for attackers with limited expertise. This paper proposes a more realistic threat model. The proposed proxy metric shows a strong correlation with the feature distance to the victim data. Weaknesses: As the authors mentioned, this approach relies heavily on the quality of open-source generative models. While Stealix is tested on various victim model architectures (e.g., ResNet, VGG, MobileNet), the paper does not extensively explore more complex or state-of-the-art architectures (e.g., Transformers). This paper assumes that the victim model only provides hard-label outputs as a defense mechanism. However, the authors do not explore how Stealix would perform against more sophisticated defenses. The paper would benefit from a more detailed computational cost analysis. Other Comments Or Suggestions: None. Questions For Authors: see Other Strengths And Weaknesses part. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the realism of our threat model, the scalability of our approach, and the effectiveness of our proposed proxy metric. We aim to address their concerns below. > W1. While Stealix is tested on various victim model architectures (e.g., ResNet, VGG, MobileNet), the paper does not extensively explore more complex or state-of-the-art architectures (e.g., Transformers). We do address this in Section 5.3: Stealing Model Based on Proprietary Data, where we apply Stealix to a real-world Vision Transformer (ViT) model trained on proprietary data and demonstrate better performance than other methods. > W2. This paper assumes that the victim model only provides hard-label outputs as a defense mechanism. However, the authors does not explore how Stealix would perform against more sophisticated defenses. Thanks for the suggestion. Most defenses such as [1, 2] perturb the posterior prediction to reduce the utility of stolen models, while keeping the predicted class (argmax) unchanged to preserve original performance for benign users. This pushes attackers to rely on hard labels, which are less informative but immune to such perturbations. Our work directly targets this setting, where attackers proactively use hard labels to circumvent the defenses. We view exploring additional defenses as complementary and will add this discussion in the revision. We are open to evaluating Stealix against defenses that the reviewer has in mind. [1] Taesung Lee, Benjamin Edwards, Ian Molloy, and Dong Su. "Defending against machine learning model stealing attacks using deceptive perturbations." IEEE Security and Privacy Workshops (SPW) 2019. [2] Mantas Mazeika, Bo Li, and David Forsyth. "How to steer your adversary: Targeted and efficient model stealing defenses with gradient redirection." ICML 2022. > W3. The paper would benefit from a more detailed computational cost analysis. We appreciate the reviewer's suggestion. 
We have reported the runtime comparison across methods in Appendix C: Comparison of Computation Time. The results show that Stealix maintains competitive computational efficiency while outperforming other baselines. --- Rebuttal Comment 1.1: Comment: Thank you for the response. After reading the other reviews and rebuttals, I have decided to keep my current score.
Summary: This paper proposes a model stealing attack method, named Stealix, to steal the functionality of an image classification victim model. Stealix generates synthetic images through a diffusion model, and fine-tunes the image-generation prompt based on the victim model's responses. An iterative prompt refinement and reproduction process is employed to capture the features of the training data distribution, so that the synthetic dataset is closer to the training distribution, leading to higher accuracy of the attacker's rebuilt model. Stealix enables automatic prompt choice and does not require knowledge of the class name, given that a few seed images are available. Comprehensive experiments are conducted, and Stealix consistently outperformed baseline methods. Claims And Evidence: Yes. The paper is well-organized and well-written. The experiment results seem to be convincing. Methods And Evaluation Criteria: Yes. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: The authors conduct experiments on four representative datasets and compare their proposed method to six other methods, which looks great to me. Also, an experiment on stealing a model trained on a private dataset is conducted, further enhancing the reliability of the authors' findings. Nevertheless, why is the comparison with the PEZ method deferred to the appendix and conducted under only one setting? It will be great if PEZ is also included in the comparison. Supplementary Material: I checked the missing algorithms and Appendix C, D, F, G, H, J, K. They all look sensible. Relation To Broader Scientific Literature: N/A. Essential References Not Discussed: The authors mention that the proposed method will be degraded to PEZ (Wen et al., 2024) if the image triplet contains the seed image only. It seems the work of PEZ is strongly related to this paper. Could you please clearly clarify the contribution and difference of PEZ and this work? Other Strengths And Weaknesses: **Strengths** 1. 
The proposed method, as the authors highlighted, avoids the need for pre-defined prompts or class names to generate synthetic images. This direction of soliciting queries is promising. 2. This paper is highly complete with sufficient experiments. **Weakness** It will be great if the authors can clarify a couple of questions: 1. While the attacker aims to steal the entire model, Alg 1 requires a specific target class $c$. Can you clarify which setting is used and correct any inconsistency? 2. How large is the seed image set needed, and used in the experiments? 3. Line 129-131, it is somewhat incorrect to claim that other methods do not utilize the victim model's outputs. My understanding is that PEZ also optimizes the prompt using the victim's responses, doesn't it? 4. Does the proposed method apply to stealing regression models? Other Comments Or Suggestions: Typos: Line 218, Section 4 should be Figure 3, I guess. Questions For Authors: Please find in the weakness section above. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's recognition of this direction of soliciting queries as promising, and we're glad that the completeness and thoroughness of our experiments came through clearly. We aim to answer the questions below. > Q1. Why the comparison with PEZ method is deferred to appendix and is conducted under only one setting? It will be great if PEZ is also included in the comparison. We placed the PEZ comparison in the appendix due to page limits. Since PEZ is not originally a model stealing method, but a prompt tuning technique, we include it as part of an ablation study (not a baseline comparison) to isolate the impact of our proposed components: prompt refinement with victim feedback, prompt consistency, and prompt reproduction. Please see our answer to Q2 for a detailed comparison. > Q2. Could you please clearly clarify the contribution and difference of PEZ and this work? The key difference is that PEZ optimizes prompts using only the seed image $x_c^s$, while our prompt refinement reformulates prompt optimization as a contrastive loss over a triplet $(x_c^s, x_c^+, x_c^-)$, guided by the victim model’s predictions. This enables Stealix to capture class-relevant features more effectively. Stealix further introduces prompt consistency as a proxy for evaluation, and prompt reproduction using genetic algorithms, forming a complete and victim-aware model stealing framework. > W1. While the attacker aims to steal the entire model, Alg 1 requires a specific target class. Can you clarify which setting is used and correct any inconsistency? We apologize for the inconsistency and we will correct it in the paper. The target class is not required as we steal the entire model. Stealix iterates over all classes to collect synthetic images. 
We will revise Algorithm 1 to wrap Lines 3–25 in a `for each class` loop, and move Line 26 outside the loop with an updated description: "Train model $A$ using **all class image sets $\lbrace\mathcal{X}_c^s, \mathcal{X}_c^+, \mathcal{X}_c^-\rbrace _{c=1}^K$**", where $K$ denotes the total number of classes. We will also update the algorithm input accordingly to reflect that it processes all classes, rather than requiring a specified target class $c$. > W2. How large is the seed image set needed, and used in the experiments? We use only a single seed image per class in all our experiments, as noted in Line 205 (left column). > W3. Line 129-131, it is somewhat incorrect to claim that other methods do not utilize the victim model's outputs. My understanding is that PEZ also optimizes the prompt using the victim's responses, doesn't it? We are sorry for the confusion: we do not claim that PEZ and other methods do not utilize the victim model's outputs. More precisely, they use the victim's predictions only during **attacker model training**; in contrast, Stealix additionally uses them for **optimizing prompts** (refinement, consistency check and reproduction). PEZ, as detailed in our response to Q2, does not use victim responses to optimize the prompt, as the class of the seed image is known. We will update the paper to clarify this point. > W4. Does the proposed method apply to stealing regression models? Stealix can potentially be extended to regression tasks. For example, during prompt refinement, low regression error, such as a low mean square error (MSE), could be interpreted as “positive” feedback and high error as “negative,” similar to our classification setting. This would allow the triplet-based optimization and prompt consistency metric to operate analogously. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The authors have addressed most of my concerns. I decided to keep my score but lean toward acceptance.
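The triplet-based prompt objective described in the answer to Q2 can be illustrated with a standard triplet-margin loss; the function, embeddings, and margin below are generic toy stand-ins of ours, not the paper's exact formulation:

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    # Pull the anchor (seed-image/prompt embedding) toward images the victim
    # labels as the target class, push it away from those it does not.
    d_pos = float(np.linalg.norm(anchor - positive))
    d_neg = float(np.linalg.norm(anchor - negative))
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings standing in for CLIP-style image features
seed = np.array([1.0, 0.0])
pos = np.array([0.9, 0.1])    # victim predicted the target class
neg = np.array([-1.0, 0.0])   # victim predicted another class
print(triplet_margin_loss(seed, pos, neg))  # -> 0.0 (already well separated)
```

When the positive or negative element of a triplet is missing (as in the early iterations discussed in the answer to Q2 of the other review), the corresponding distance term simply drops out, which is the degenerate single-image case.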
Summary: This paper introduces Stealix, a new model stealing method that leverages images synthesized from diffusion models to steal victim models. Compared with existing diffusion model-based model stealing attacks, the key improvement is that Stealix can automatically construct attack prompts for the stealing-image generation, thus eliminating the need for human-crafted prompts. Experiments demonstrate that Stealix enhances both query efficiency and stolen model performance in black-box query scenarios. ## **update after rebuttal** After reading the rebuttal, I think this paper has novel results, but the authors need to improve their presentation (**especially Algorithm 1**) to make the paper clearer. So I decided to keep my current score but lean toward acceptance. The authors should update their paper according to my and the other reviewers' reviews. Claims And Evidence: There are many technical details in this paper that need to be further clarified. (See **Weaknesses & Suggestions & Questions**) Methods And Evaluation Criteria: The query budget comparison in Table 1 might not be fair. (See **Weaknesses & Suggestions & Questions**) Theoretical Claims: N/A Experimental Designs Or Analyses: See **Weaknesses & Suggestions & Questions**. Supplementary Material: I have checked part of the supplementary material to find some experimental details but unfortunately could not find them. (See **Weaknesses & Suggestions & Questions**) Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths:** 1. The attack is conducted in a very strict black-box setting, which I appreciate. 2. The idea of automatically constructing prompts for generating model stealing images is promising. **Weaknesses & Suggestions & Questions:** 1. In Algorithm 1, are you performing Stealix with only a single class $c$? Is that really effective? 
I would be interested in seeing the performance when stealing the model with samples from multiple classes. 2. In Algorithm 1, Line#5 constructs the initial sample set $\mathcal{S}^0$ from $\mathcal{X}^s_c$, $\mathcal{X}^+_c$, and $\mathcal{X}^-_c$. However, according to Line#3, both $\mathcal{X}^+_c$ and $\mathcal{X}^-_c$ are initialized as empty sets. Wouldn't this result in $\mathcal{S}^0$ being an empty set, so that Algorithm 1 as a whole could not continue (because the for-loop in Line#8 could never start)? Please clarify. 3. The query budget comparison in Table 1 may not be fair. Unlike other methods listed, the proposed Stealix method requires further querying the victim model during the prompt-constructing stage, as it needs to repeatedly query the victim model with newly synthesized images (see Lines#19-20 in Algorithm 1). I suggest the authors provide the exact equation for calculating the overall query budgets and list all related hyperparameters in a single table for clarity. 4. The experiments only consider a single victim model backbone (i.e., ResNet-34), which I think is insufficient to demonstrate the effectiveness of Stealix. I suggest including additional experiments on ResNet-like/CLIP-like victim backbones. 5. In Algorithm 1, Lines#9-11 are redundant and can be removed. Other Comments Or Suggestions: Please note that while I give a score of 3 (Weak Accept), it actually means that I think this paper is a borderline paper. As such, my final score will be based on the response of the authors. If my concerns are not addressed, I will decrease the score accordingly. Questions For Authors: See **Weaknesses & Suggestions & Questions**. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our strict black-box threat model and recognizing the novelty of our automatic prompt construction approach for model stealing. We answer their questions below. > Q1. In Algorithm 1, are you performing Stealix with only a single class? Is that really effective? I would be interested in seeing the performance when stealing the model with samples from multiple classes. We apologize for the mistake in Algorithm 1 and we will fix it in the revised paper. To clarify, Stealix considers all classes simultaneously. Algorithm 1 illustrates the process (Lines 3-25) for a single class, but in practice, it is applied to all classes in parallel. After processing all classes, the generated images are collected and used to train the attacker model A, as outlined in the method overview (Lines 201–203, left column). We will revise Algorithm 1 to wrap Lines 3–25 in a `for each class c` loop, and move Line 26 outside the loop with an updated description: "Train model $A$ using **all class image sets** $\lbrace\mathcal{X}_c^s, \mathcal{X}_c^+, \mathcal{X}_c^-\rbrace _{c=1}^K$", where $K$ denotes the total number of classes. We will also update the algorithm input accordingly to reflect that it processes all classes, rather than requiring a specified target class $c$. > Q2. Wouldn't initial empty $\mathcal{X}_c^+$ and $\mathcal{X}_c^-$ result in empty initial $\mathcal{S}^0$? We will revise the algorithm to fix the notation. While $\mathcal{X}_c^+$ and $\mathcal{X}_c^-$ are initially empty, the seed set $\mathcal{X}_c^s$ is not, so the initial $\mathcal{S}^0$ is populated using seed images: $\mathcal{S}^0 = \lbrace(x_c^s)_i^0\rbrace _{i=1}^N$. For generality, we denote $\mathcal{S}^t = \lbrace(x_c^s, x_c^+, x_c^-)_i^t\rbrace _{i=1}^N$ and allow any of the elements in the triplet to be null.
Our prompt optimization supports learning from as little as a single image (Equation 3), ensuring the process works in the early stages where positive and/or negative samples are unavailable. > Q3. The query budget comparison in Table 1 may not be fair. Unlike other methods listed, the proposed Stealix method requires further querying the victim model during the prompt-constructing stage, as it needs to repeatedly query the victim model with newly synthesized images (see Lines#19-20 in Algorithm 1). I suggest the authors provide the exact equation for calculating the overall query budgets and list all related hyperparameters in a single table for clarity. Thanks for the suggestion. We clarify that Table 1 presents a fair comparison, because all synthesized images in Lines#19-20 of Algorithm 1 are included in training the attacker model, with each prompt synthesizing $M$ images (Line 15 in Algorithm 1). The full query budget corresponds exactly to the queries made during prompt construction, as noted in Line 200 (left column). Taking CIFAR-10 in Table 1 as an example, with a total budget per class of $B=500$, Stealix uses $M=10$ queries per prompt (Line 15 of Algorithm 1), resulting in 50 prompts per class. Across 10 classes, this leads to a total query budget of $10\times 500=5000$ queries, all of which are used to train the attacker model; this is the same budget used for the other methods. > Q4. The experiments only consider a single victim model backbone (i.e., ResNet-34), which I think is insufficient to demonstrate the effectiveness of Stealix. I suggest including additional experiments on ResNet-like/CLIP-like victim backbones. We clarify that our paper does include comparisons across multiple victim model architectures. Specifically, Appendix H provides results with different victim backbones, and Appendix G covers variations in attacker model architectures. For both setups, we study two ResNet variations, one VGG, and one MobileNet.
Additionally, in Section 5.3, we demonstrate Stealix's effectiveness against a Vision Transformer (ViT)-based victim model trained on proprietary data. > Q5. In Algorithm 1, Lines#9-11 are redundant and can be removed. Removing Lines 9–11 would allow the consumed budget $b$ to exceed the total allowed budget $B$, since $b$ is updated within the inner loop (Line 17). Nevertheless, we appreciate the reviewer's suggestion and will revise the algorithm to test the budget constraint only once. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their rebuttal. After reading the rebuttal, I decided to keep my current score (but lean toward accept). Please update your paper according to mine and the other reviewers' reviews.
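The query-budget accounting from the Q3 answer above can be double-checked in a few lines. This is a hypothetical helper reproducing the rebuttal's CIFAR-10 arithmetic, not the authors' code:

```python
def stealix_query_budget(num_classes, budget_per_class, images_per_prompt):
    """Every synthesized image is both a victim query and a training sample,
    so prompt construction consumes exactly the per-class budget B."""
    prompts_per_class = budget_per_class // images_per_prompt
    total_queries = num_classes * budget_per_class
    return prompts_per_class, total_queries

# CIFAR-10 example from the rebuttal: B=500 per class, M=10 images per prompt,
# 10 classes -> 50 prompts per class and 5000 total queries, the same budget
# given to the baseline attacks.
prompts_per_class, total_queries = stealix_query_budget(10, 500, 10)
```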
Summary: The paper proposes a new method for model stealing attacks for computer vision classification models. Specifically, they find that prior work uses a pretrained text-to-image generator model to synthesize images similar to the victim data. However, this step requires an attacker to have knowledge to craft useful prompts, an assumption the paper claims is often not met for more specialized domains. Hence, they introduce Stealix, which uses genetic algorithms to find the right prompt to synthesize useful images for model stealing. Specifically, they optimize a useful prompt under a contrastive loss using features extracted by a vision-language model from the prompt itself, further improving the prompt with a genetic algorithm that uses a proxy metric as the fitness function. They compare their method to different methods from the literature, all of which are based on stronger assumptions for the attacker. Across 4 datasets, they still find Stealix to work better (better accuracy on the test set of the victim model). They also provide qualitative results, showing that images synthesized using Stealix are more similar to the original, real data. Claims And Evidence: Yes. They claim their method removes an assumption made by previous work on model stealing, and convincingly show that their attack still works on a variety of setups, and even improves upon other attacks. Methods And Evaluation Criteria: Yes Theoretical Claims: NA Experimental Designs Or Analyses: Yes, the experiments seem to make sense. Supplementary Material: No. Relation To Broader Scientific Literature: They specifically focus on model stealing for image classification models. They identify that many methods rely on certain expertise/knowledge of the attacker to craft useful prompts to generate useful images to train the proxy model.
They remove this assumption, providing a way to craft useful prompts, and find model stealing to work better than previous work - especially for more specialized datasets. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: - The paper identifies an assumption which prior methods make (an attacker being able to identify a useful prompt to synthesize images to train the proxy model), argues that this assumption is not always met in practice, and offers an effective method as a solution. - While complicated, the method is explained very carefully and formalized well. - The method outperforms previous methods although these are based on stronger attacker assumptions. - A substantial amount of ablations give confidence in the method and the quality of the work. Weaknesses: - Limited interpretability insights into why this method works so well (for suggestions, see questions). Other Comments Or Suggestions: NA Questions For Authors: - Can authors provide some examples of optimized prompts? For instance, it would be nice to have the optimal prompt for each class label determined by Stealix such as in Table 7. Additionally, can you give some examples of how the prompts change during the evolution. This would shed more light into why Stealix works so well compared to human-crafted prompts. - The main argument for the proposed method is when the task is highly specialized and requires specific prompts. What is the author's intuition why the attack works so much better than the baselines for a simple dataset like CIFAR-10? What do the prompts look like in this case? - Do you think the same attack method/philosophy would work for other kinds of models? (e.g. image segmentation, text classification). - How would Stealix perform on datasets with even more classes? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our contribution in addressing a key limitation of prior work, and for their appreciation of our method's clarity, effectiveness, and thorough evaluation. We answer the questions below. The reviewer can try the provided prompts at [Stable Diffusion 2.1 demo](https://huggingface.co/spaces/stabilityai/stable-diffusion). Note that the demo may use a different generation setup from ours in the experiments. > Q1. Can authors provide some examples of optimized prompts? We provide examples of optimized prompts in the following table and will include them in the revised paper. The table showcases prompts corresponding to the high Prompt Consistency (PC) values as described in Appendix E. One immediate observation is that the optimized prompts are not always interpretable to humans, echoing our motivation that human-crafted prompts may be suboptimal for model performance. Moreover, our approach often supplements class-specific details that may be overlooked by humans. For example, **gps crop** emphasizes geospatial context for AnnualCrop, **jungle** suggests dense vegetation for Forest, and **floodsaved**, **port**, and **bahamas** convey water-related cues for River and SeaLake. These examples illustrate how Stealix uncovers latent features that the victim learns. | Class| Prompt with high PC | | --------| -------- | | AnnualCrop | sdc ngc icular **gps crop** scaled **farming** pivot plane ⃣@ seen piszurich t colton 2 | |Forest |colombian seva **jungle** spectral रrgb visible sp detected slicresolution ��ि xxl sdk | | River | nxt nav nasa ifclearer ouk **floodsaved** immensalzburg **port** overlooking salzburg deplo_ thumbs | |SeaLake |fiawec apurwreck **bahamas** visible sli(!) rh sd usaf calf y infront nearby visible usaf | > Q2. Additionally, can you give some examples of how the prompts change during the evolution.
This would shed more light into why stealix works so well compared to human-crafted prompts We provide an example to illustrate prompt evolution. In Figure 5, the seed image for the "Person" class includes a prominent dog, leading to the first-generation prompt — "chilean vaw breton cecilia hands console redux woodpecker northwestern **beagle** sytracker **collie** relaxing celticsped" — which generates dog images and results in prompt consistency (PC) of 0. Stealix then uses the misclassified image as a negative example and refines the prompt to — "syrian helene pasquspock hands thumbcuddling sheffield stuck smritihouseholds vulnerable kerswednesday humormindy intestin" — removing dog-related features and achieving PC = 1. This example shows how Stealix evolves prompts by filtering out misleading features using victim feedback. We will revise the paper to include this example for better clarity. > Q3. What is the author's intuition why the attack works so much better than the baselines for a simple dataset like Cifar-10? What do the prompts look like in this case? Thank you for the question. As discussed in the **Diversity comparison** (Line 377, right column) and shown in Table 3, the better performance stems from the greater diversity in our synthetic data, enabled by prompt evolution. E.g., one optimized prompt for the cat class — "punisher desktop **kittens siamese** beef **twins** personality bosnicorgi schnautuxedo 일tuxedo satellite consecutive **desktop**" — includes fine-grained categories such as "siamese" and relational cues like "twins" to encourage multiple distinct instances. Additionally, terms like "desktop" provide varied contextual environments. In contrast, simple prompts like "a photo of a cat" tend to produce less diverse images. These diverse prompts are generated automatically by Stealix, without requiring a human in the loop. > Q4. Do you think the same attack method/philosophy would work for other kinds of models? (e.g. 
image segmentation, text classification). It might be possible to generalize to other tasks. For image segmentation, recent work [1] shows that prompts can guide image generation toward specific segmentation layouts, suggesting that prompt refinement based on mask consistency could be a feasible direction. For text classification, the image generator could be replaced with a language model, with prompts optimized to generate inputs aligned with what the classifier has learned. [1] Yumeng Li, Margret Keuper, Dan Zhang, and Anna Khoreva. "Adversarial supervision makes layout-to-image diffusion models thrive." ICLR 2024. > Q5. How would stealix perform on datasets with even more classes? Stealix is expected to scale to more classes. We optimize prompts per class and treat all non-target classes as negatives, ensuring focus on class-specific features. Unlike baselines that generate images without considering predicted classes, Stealix actively steers synthesis toward the intended class.
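The victim-feedback loop discussed in this rebuttal (prompt consistency as the fitness signal, misclassified images fed back as negatives) can be sketched as a generic genetic search. The `synthesize` and `victim_predict` callables are stand-ins for the text-to-image generator and the black-box victim, and random token mutation replaces Stealix's actual prompt optimization, so this is only an illustrative skeleton:

```python
import random

def prompt_consistency(images, victim_predict, target_class):
    """Fitness: fraction of synthesized images the victim assigns to the target class."""
    return sum(victim_predict(img) == target_class for img in images) / len(images)

def mutate(prompt_tokens, vocab, rate=0.2):
    """Toy mutation: randomly swap tokens (a stand-in for guided prompt updates)."""
    return [random.choice(vocab) if random.random() < rate else t
            for t in prompt_tokens]

def evolve_prompts(seed_prompt, vocab, synthesize, victim_predict,
                   target_class, generations=5, pop_size=4):
    population = [mutate(seed_prompt, vocab) for _ in range(pop_size)]
    best, best_fit = list(seed_prompt), -1.0
    for _ in range(generations):
        scored = []
        for prompt in population:
            images = synthesize(prompt)               # query the generator
            fit = prompt_consistency(images, victim_predict, target_class)
            scored.append((fit, prompt))              # each image = 1 victim query
        scored.sort(key=lambda s: s[0], reverse=True)
        if scored[0][0] > best_fit:
            best_fit, best = scored[0][0], scored[0][1]
        parents = [p for _, p in scored[:2]]          # keep the fittest prompts
        population = [mutate(random.choice(parents), vocab)
                      for _ in range(pop_size)]
    return best, best_fit
```

In the real attack the fitness queries count against the stealing budget, since every synthesized image is also a training sample.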
Grammar-Forced Translation of Natural Language to Temporal Logic using LLMs
Accept (poster)
Summary: The paper introduces Grammar Forced Translation (GraFT), a framework for translating natural language into temporal logic. GraFT simplifies the translation process by restricting the output tokens to a limited set, using the unique properties of each task and exploiting the known grammar of temporal logic during training and inference. This approach improves end-to-end translation accuracy and out-of-domain accuracy compared to state-of-the-art methods. ## update after rebuttal I have no further comments and maintain the initial assessment of the paper. Claims And Evidence: Intuitively, the benefits of focusing on valid grammar states for a more focused gradient are readily apparent. The shortcomings of a Causal Language Modelling (CLM)-based approach in AP Grounding with respect to a Masked Language Modeling (MLM)-based approach are sensible and results support the hypothesis. In other literature, Temporal Logic (TL) describes a more general logic over time and subsumes Linear Temporal Logic (LTL). It is unclear how the logic operators defined in Sec. 2.1 differ from standard LTL and the paper could replace all references to TL with LTL (or $\texttt{LTL}_f$). Methods And Evaluation Criteria: The datasets used are off-the-shelf and provided by previous work. Comparisons made for performance as well as performance on training with a limited subset of the data are reasonable and provide suitable evidence that GraFT is more data efficient and robust than other CLM-based models. Theoretical Claims: The theory provides an intuition for the effectiveness of grammar constraints and appears valid. Experimental Designs Or Analyses: Table 4 compares NL2TL with the proposed method (GraFT) which does not consider time intervals in the temporal logic. Are the comparisons fair given that NL2TL contains the added ability to handle these time intervals (as a part of Signal Temporal Logic)?
Supplementary Material: I glanced at the appendix to better understand the datasets used. Relation To Broader Scientific Literature: The proposed method makes advances in generating Temporal Logic specs from Natural Language which can be used as a step in various pipelines that require a formal language as input [1, 2]. A high accuracy in this translation could generalize various methods to accepting natural language as input (and not just an expression in formal logic). The method does not consider grounding predicates in other modalities like vision/images [3]. Essential References Not Discussed: References using Temporal Logic: [1] Robust Counterexample-guided Optimization for Planning from Differentiable Temporal Logic, Dawson & Fan, IROS 2022 [2] Co-learning Planning and Control Policies Constrained by Differentiable Logic Specifications, Xiong et al, ICRA 2024 [3] Lang2LTL-2: Grounding Spatiotemporal Navigation Commands Using Large Language and Vision-Language Models, Liu et al, 2024 Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: - The double quotes should be fixed for LaTeX e.g. P2L67C2 “go to the red room → Use `` instead of " - Minor typos: - P6L287C1 - “We know turn” Questions For Authors: 1. In Algorithm 1, how is the temporal logic grammar function (`grammar.get_valid()`) defined or implemented? 2. Can the approach be extended to handle time intervals as in NL2TL? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Remarks for all reviewers:** We would like to thank all reviewers for their time and detailed feedback. With two weak accepts and two weak rejects (with one weak reject indicating that they would consider raising their score if provided additional experimental results), the paper is on the borderline to be accepted. In this overall response, we summarize the strengths of the paper and how we have addressed the reviewers' concerns. 1. **Writing and organization**: Clear and well-structured writing with easy-to-follow organization [72U5] 2. **Ease of implementation**: The framework is conceptually simple yet effective, making it likely to be adopted by others [n686] 3. **Advantage of Encoder only vs. CLM**: Notable achievement in outperforming GPT-4o using only a BERT+T5 combination [72U5, JKCj] 4. **Limited number of APs during grounding** [VmqZ]: The AP grounding with BERT used a maximum of 5 APs per input sequence. To address this, we conducted an additional evaluation of BERT and GPT-4 on the AP grounding task (presented in our response to reviewer 72U5). Our new evaluation shows results on sequences with 6-10 APs and 11-15 APs, in addition to the original 1-5 AP examples used in the original evaluation. We believe that the original benchmarks only contained up to 5 APs, as humans typically struggle to reason about a larger number of entities simultaneously. 5. **Limited number of CLMs used in the evaluation** [n686]. Our original experimental evaluation only compared with one state-of-the-art CLM, GPT-4o-mini. To address this limitation, we performed multiple additional evaluations. In particular, we conducted two evaluations of AP grounding using GPT-4 and GPT-4o. In addition, we conducted an evaluation of end-to-end translation with NL2TL using GPT-4-grounded inputs. The results of these evaluations are presented in our response to reviewer n686. 6.
**How does our work distinguish itself from existing work in the program synthesis community** [VmqZ, n686]: To the best of our knowledge, this is the first time that several of the concepts in our submission have been applied to NL-to-TL translation, which is a contribution in itself. Moreover, we do not simply present an off-the-shelf implementation of existing techniques (i.e., grammar-constrained decoding). We contribute a novel approach to training seq2seq models for the NL-to-TL task, outlined in Section 3.2. Lines 278-288 of our submission describe a key difference in our training approach when compared against existing grammar-based techniques. Additionally, the use of BERT for AP grounding appears to be an entirely novel application that yields significant benefits over the standard CLM-based approach. **Beginning of direct response to JKCj**: Q4: In Algorithm 1, how is the temporal logic grammar function (grammar.get_valid()) defined or implemented? A4: The grammar.get\_valid(state) function takes the current parser state for the sequence, which is maintained per sequence and is updated when each new token is processed using the grammar.update\_state(state, token\_id) function, where state is the current state and token\_id is the token that we've just parsed. The grammar state holds a list of tasks that need to be completed before parsing ends. When a "(" token is parsed, we push a "(" element to the state stack for the sequence. For example, the state of a sequence may be ["formula", "(", "prop"] and the token we are parsing is "\_". When we call grammar.update\_state(state, "\_"), the new state is ["formula", "(", "prop\_"]. Now we must parse the next token. We will call grammar.get\_valid(state), which will return a set of valid token IDs {1, 2, 3, 4, 5}, because we know that "prop\_" can only be followed by a digit 1-5 (based on our dataset; if we expected more props, we would have increased this to include all potential prop IDs).
The full implementation of our grammar is provided in the reproducibility zip file in /TIME/TemporalLogicGrammar.py. Q5: Can the approach be extended to handle time intervals as in NL2TL? A5: Yes, this is absolutely possible. The only requirement for extending GraFT to other temporal logic formalisms is the adaptation of the chosen formalism from terminal-based into token-based. We decided to limit the considered temporal logics to LTL in order to keep the paper more streamlined. However, we will consider extensions to time-bounded LTL and STL in our future work. Additionally, we will address the minor typos pointed out in the comments and suggestions.
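To complement the stack-based description of `grammar.get_valid` and `grammar.update_state` above, here is a self-contained toy version of that interface. For brevity it assumes prefix notation, so the parser state collapses to a single counter of still-needed subformulas instead of the parenthesized stack described in the rebuttal; the operator set and prop IDs are illustrative, not the actual implementation:

```python
class PrefixLTLGrammar:
    """Toy grammar state for prefix-notation LTL; 'need' counts how many
    subformulas must still be produced before the sequence may end."""
    UNARY = {"G", "F", "X", "!"}                 # consume one slot, open one
    BINARY = {"&", "|", "U"}                     # consume one slot, open two
    PROPS = {f"prop_{i}" for i in range(1, 6)}   # terminals close a slot

    def initial_state(self):
        return 1                                 # exactly one formula expected

    def get_valid(self, need):
        if need == 0:
            return {"<eos>"}                     # formula complete: only EOS is legal
        return self.UNARY | self.BINARY | self.PROPS

    def update_state(self, need, token):
        if token in self.BINARY:
            return need + 1
        if token in self.UNARY:
            return need
        if token in self.PROPS:
            return need - 1
        if token == "<eos>" and need == 0:
            return need
        raise ValueError(f"token {token!r} invalid in state {need}")
```

During constrained decoding, every logit outside `get_valid(state)` would be masked before sampling, so only grammatically valid continuations can be produced.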
Summary: This paper introduces Grammar-Forced Translation (GraFT), a framework to translate natural language (NL) into temporal logic (TL) using large language models (LLMs). Claims And Evidence: The claims regarding the performance improvements and complexity reduction of GraFT are convincingly supported by empirical evidence. Methods And Evaluation Criteria: Seems to make sense Theoretical Claims: I checked Section 3.2.2, and it seems to make sense Experimental Designs Or Analyses: I think some implementation details (e.g., hyperparameter selection, number of runs per experiment) could be better documented to ensure reproducibility. Supplementary Material: I checked, they seem to make sense Relation To Broader Scientific Literature: The paper builds upon and extends prior work in NL-to-TL translation in several ways. The work connects to broader research on structured prediction, grammar-guided generation, and efficient fine-tuning of language models. Essential References Not Discussed: n/a Other Strengths And Weaknesses: 1. The framework is conceptually simple yet effective, making it likely to be adopted by others 2. The approach is data-efficient, performing well even with limited training examples 3. The focus is primarily on improving accuracy rather than computational efficiency; it's unclear if GraFT introduces any additional computational overhead 4. While the approach reduces the need for domain-specific training data, it still requires a modest amount of in-domain data for optimal performance I would like to see more empirical results with LLMs other than GPT-4o-mini. If the author can show this, I will raise my score. Other Comments Or Suggestions: The figures could be improved for better readability, particularly Figures 3 and 4 Questions For Authors: How does it differ from top-k sampling? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Before proceeding, please read our response addressed to all reviewers found in our rebuttal for reviewer JKCj.** Q1: I think some implementation details (e.g., hyperparameter selection, number of runs per experiment) could be better documented to ensure reproducibility. A1: We have included training scripts for the models that should allow for our results to be reproduced. This was provided in the supplementary materials zip file. For the sake of clarity, we are happy to provide the exact hyperparameters used for training. BERT and T5 were both trained for 3 epochs with an LR of 2e-5. T5 was given |training data|/3 * 0.075 warm-up steps. Identical hyperparameters were used for the T5 models in NL2TL and GraFT. Q2: I would like to see more empirical results with LLMs other than GPT-4o-mini. If the author can show this, I will raise my score. A2: We have performed an evaluation of NL2TL with GPT-4o and GPT-4 as the grounding model, an end-to-end evaluation that uses GPT-4 as the grounding model, and an end-to-end evaluation of a T5 without an AP grounding model.
These new results are presented below: **End-to-End Evaluation of NL2TL with more models (Updated with 4 new entries: ungrounded seq2seq trained on 500 and 2000 examples, NL2TL with GPT-4 each trained on 500 and 2000 examples):** | Approach | Data Quantity | AP Grounding Model | Translation Model | CW | GLTL | Navi | |--------------------|---------------|--------------------|-------------------|-------|-------|-------| | Ungrounded Seq2Seq | 500 | - | T5 | 59.60 | 46.80 | 43.40 | | NL2TL | 500 | GPT-4o-mini | T5 | 93.00 | 83.80 | 80.40 | | NL2TL | 500 | GPT-4 | T5 | 91.50 | 82.60 | 83.70 | | GraFT | 500 | BERT | T5 | 97.70 | 91.50 | 85.00 | | Ungrounded Seq2Seq | 2000 | - | T5 | 68.10 | 55.90 | 56.30 | | NL2TL | 2000 | GPT-4o-mini | T5 | 98.20 | 97.40 | 86.70 | | NL2TL | 2000 | GPT-4 | T5 | 96.70 | 96.20 | 90.00 | | GraFT | 2000 | BERT | T5 | 99.90 | 99.80 | 99.10 | Q3: How does it differ from top-k sampling? A3: In top-k sampling, we direct generation using the K highest-scoring tokens at each step. In Grammar-forced Training (GraFT), we first outright eliminate all of the grammatically incorrect tokens (based on the previous LABEL token, not the previous PREDICTED token) before computing the loss. Likewise, during inference with GraFT, we perform this same operation, but based on the previous predicted token, because labels are not available during evaluation. In top-k sampling, the model may still be informed by a list of tokens that includes grammatically invalid options. We eliminate that possibility altogether using grammar-forcing. Q4: The focus is primarily on improving accuracy rather than computational efficiency; it's unclear if GraFT introduces any additional computational overhead A4: Experimentally, we have observed that GraFT training requires 15-20% more time than ordinary training with T5.
We will provide a figure that displays the computational overhead for GraFT vs. T5 during training and inference, to be placed in the appendix of our final submission. Q6: The figures could be improved for better readability, particularly Figures 3 and 4 A6: We can improve the readability by increasing the font size and reducing the presence of less-relevant details of the T5 architecture. These changes should bring the relevant details of our framework into focus. We will address these concerns in the final version of the paper, but we are unable to make changes to our submission PDF during the review period.
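To make the loss masking in A3 concrete, here is a minimal framework-free sketch: invalid tokens are removed before the softmax, and the grammar state is advanced with the label token at each step (teacher forcing on the grammar), as described above. The `initial_state`/`valid_ids`/`update_state` interface is a simplified stand-in for the grammar object described in the rebuttal, not the actual implementation:

```python
import math

def masked_nll(logits, label_id, valid_ids):
    """Negative log-likelihood over only the grammatically valid tokens:
    invalid logits are dropped before the softmax normalization."""
    kept = {i: logits[i] for i in valid_ids}
    z = max(kept.values())                       # stable log-sum-exp
    log_denom = z + math.log(sum(math.exp(v - z) for v in kept.values()))
    return log_denom - kept[label_id]

def grammar_forced_loss(step_logits, labels, grammar):
    """Masking at step t is driven by the LABEL token at t-1, not the
    predicted token, so the grammar state tracks the ground-truth sequence."""
    state, total = grammar.initial_state(), 0.0
    for logits, label in zip(step_logits, labels):
        total += masked_nll(logits, label, grammar.valid_ids(state))
        state = grammar.update_state(state, label)
    return total / len(labels)
```

Because grammatically invalid tokens never enter the normalization, a high-scoring but invalid token contributes no loss (and hence no gradient), which is the "more focused gradient" intuition.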
Summary: The authors propose GraFT, an innovative framework that employs masks to ensure the syntax correctness of generated LTL programs. The approach first utilizes BERT to extract atomic propositions (APs) and map them into a predefined set of equivalent classes for co-references. Then, it leverages T5 to learn the translation step. GraFT improves end-to-end translation accuracy by 5.49% and out-of-domain translation accuracy by 14.06% on average across three benchmarks. Claims And Evidence: The authors list four contributions; however, they primarily describe the framework rather than explicitly stating the key contributions. Methods And Evaluation Criteria: I have several questions regarding the methodology: **Q1.** Why is the vocabulary size for atomic propositions (APs) limited to only 6? If the narration spans an entire movie lasting over two hours, wouldn't it require significantly more atomic propositions? **Q2.** How are the grounded nouns utilized in the **LTL** formula? Could you provide an example? **Q3.** The dataset descriptions are insufficiently explained. Regarding the **GW** dataset, I could not find details on its context, size, input, and output in either the main text or the supplementary material. Theoretical Claims: I have checked the algorithm and it makes sense. Experimental Designs Or Analyses: See Methods And Evaluation Criteria Supplementary Material: Yes, I've been looking for the dataset description, but so far, I've only found the code. Relation To Broader Scientific Literature: This work is highly relevant to the syntax-guided program synthesis community. In fact, the masking strategy used to ensure program syntax correctness has been around for quite some time. See Essential References Not Discussed for further reference. Essential References Not Discussed: [1] Bunel, Rudy, et al. "Leveraging grammar and reinforcement learning for neural program synthesis." arXiv preprint arXiv:1805.04276 (2018).
[2] Netz, Lukas, Jan Reimer, and Bernhard Rumpe. "Using grammar masking to ensure syntactic validity in llm-based modeling tasks." Proceedings of the ACM/IEEE 27th International Conference on Model Driven Engineering Languages and Systems. 2024. Other Strengths And Weaknesses: My main concern with this work is how it differentiates itself from existing literature in the program synthesis community. Other Comments Or Suggestions: For data efficiency and accuracy results, it would be nice to compare against the NL2TL baseline. Questions For Authors: See Methods And Evaluation Criteria, and Other Strengths And Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Before proceeding, please read our response addressed to all reviewers found in our rebuttal for reviewer JKCj.** Q1. Why is the vocabulary size for atomic propositions (APs) limited to only 6? If the narration spans an entire movie lasting over two hours, wouldn't it require significantly more atomic propositions? A1. This limitation was introduced by the existing datasets rather than by the authors. We choose a prop id space of 6 tokens because the sentences in our dataset have a maximum of 5 APs. Each token that is not part of these is assigned the 0-class, and any tokens which are part of an AP are assigned an integer ID corresponding to which AP it is a part of. However, our proposed approach can certainly be extended to handle more than 5 APs. **Please see our response to reviewer 72U5, where we have provided results for up to 15 APs.** For any number of APs, our approach of performing grounding using an encoder-only model outperforms using CLMs. We would also like to point out that in the "2 hour long narration" example, it seems that the reviewer is referring to a *trace* with a large number of APs, rather than a temporal logic expression. The difference is that a *trace* is a time-ordered sequence of states over which some specification can be checked (the specification being a temporal logic expression). For example, we may specify in natural language that "The identity of the culprit is not revealed until the film is in the third act.", and we may check that specification against a trace (i.e., an extended document that holds the dialogue). Moreover, we believe that the 5 AP maximum observed in the dataset reflects the general expectation that the input NL sentences to our framework are roughly equivalent to a single human query, which typically do not contain hundreds of APs. Q2: How are the grounded nouns utilized in the LTL formula? Could you provide an example?
A2: The grounded APs represent the parts of the sentence which will serve as atomic predicates in the resulting TL expression. An example NL input may be: "When the apple falls from the tree, pick it up and eventually put it in the basket." The grounding system would label every token in the input with the AP it relates to. For the purposes of this example, I will just label the words rather than the true subword tokens used by BERT. The output sequence from BERT may look like: [0,1,1,1,1,1,1,0,2,2,2,0,0,3,3,3,3,3], and we construct a new sentence based on this sequence: "When the prop_1, prop_2 and eventually prop_3." And we store the corresponding tokens in a dictionary: {prop_1: "the apple falls from the tree", prop_2: "pick it up", prop_3: "put it in the basket"}. The purpose of this is to simplify the NL prior to translation without sacrificing altogether the semantic nuance and complexity of the original natural language; the relevant segments are stored in a dictionary so they can be used downstream to determine their truth values. Q3: The dataset descriptions are insufficiently explained. Regarding the GW dataset, I could not find details on its context, size, input, and output in either the main text or the supplementary material. A3: We do not reference any GW dataset. Is this question perhaps about CW or GLTL? If so, there is a table in the appendix (A1) with information on the datasets. The table contains the number of unique NL sentences, number of unique LTL sentences, and the number of unique words that appear in each dataset. Q4: For data efficiency and accuracy results, it would be nice to compare against the NL2TL baseline. A4: We have performed the requested evaluation and the results can be viewed using the link (https://docs.google.com/document/d/e/2PACX-1vTYbY_0G_5EUXs1sumgRB7q3zz_3gCQRU0g8OCFXGFvxH2thZ2NqFNzxXWujADzOTj1uBICDQxRCcFi/pub). The results show that our approach outperforms NL2TL by an even larger margin than the T5 baseline.
This stems from the fact that our method of grounding the APs is superior to the method in NL2TL. We had originally used T5 to isolate the improvement from the training. In regard to the missing essential reference: This does appear to be an essential reference which we should include in our discussion of the background work. While the authors' approach to pruning the space of target programs appears similar to our proposed approach (GraFT), there is a key difference relating to how we obtain the grammatical state (or as the authors describe it on page 7, the “current context”). In the authors' approach, the syntax checker functions the same during both training and inference, establishing the current context using previously generated tokens. However, in GraFT, the syntactic validity of tokens at position t is dependent on the label at position t-1, rather than the generated token at position t-1, as would be used in the authors' proposed method. --- Rebuttal Comment 1.1: Comment: Thank you for clarifying my questions. I have raised my score accordingly. --- Reply to Comment 1.1.1: Comment: We thank all reviewers for their careful reading of our submission, and for providing valuable feedback. Our work was recognized for its clear organization, ease of implementation, and promising results in natural language to temporal logic translation. While the reviewers' initial evaluation considered the paper borderline (two weak accepts, two weak rejects), we resolved the reviewers' concerns during the rebuttal period and there is now a consensus among the reviewers to accept the paper (four weak accepts). Key Strengths Clarity, organization, and reproducibility: Multiple reviewers commended the paper for clear writing and logical structure. We maintain this clarity in our explanation of additional evaluations. 
Additionally, we have supplied the code required to reproduce our results in the supplemental materials, as well as a description of our training environment and hyperparameters. Encoder-only approach: One of our notable contributions is outperforming strong generative models (including GPT-4o) using BERT. This approach demonstrates the unique advantages of masked language models that are often overlooked in recent work. Practical value: The GraFT framework is intuitive and straightforward to implement, and integrates well with other seq2seq models, making it likely to be adopted by the community. Novelty: Our submission is, to our knowledge, the first to apply BERT to the task of AP grounding. Additionally, the limited prior work on grammar-constrained training maintains a grammatical context with respect to previously generated tokens (as would be done in grammar-constrained decoding). In contrast, our approach leverages the insight that such sequences are rarely correct in the early stages of training, leading us to force grammatical constraints relative to the ground-truth sequence, rather than the (often erroneous) sequence generated during training. Resolved Concerns: Limited number of APs: We supply an additional evaluation that demonstrates the continued success of our BERT-based AP grounding approach for AP quantities up to 15. Additional LLMs: We conducted an additional evaluation in which GPT-4 and GPT-4o were used for grounding and translation, in order to fully explore the current capabilities of generative models. End-to-end Evaluation: We supply additional entries in our end-to-end evaluation, including an evaluation of unaided language models, and NL2TL with different foundation models functioning as the AP masker. By providing more extensive evaluations, clarifying our methodology, and comparing against additional baselines, we have resolved the primary issues raised by the reviewers. 
We believe our paper offers a comprehensive contribution: it introduces a novel and effective method for NL-to-TL translation, clearly justifies its design choices, and offers robust empirical evidence of its advantages across various scenarios. We again thank all reviewers for their insightful comments and believe that our revisions substantively address their feedback, as evidenced by their updated evaluations.
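The grammar-constrained training described under "Novelty" above can be sketched in a few lines. This is our own hypothetical illustration, not the paper's implementation: `valid_next` stands in for an assumed grammar table mapping a token id to the ids that may legally follow it, and the admissible set at step t is keyed on the ground-truth label at step t-1 (teacher forcing), not on the model's (often erroneous) generated token.

```python
import math

def constrained_nll(logits, labels, valid_next):
    """Mean negative log-likelihood where, at each step t > 0, the softmax is
    restricted to the tokens the grammar allows after the *gold* label at t-1."""
    total = 0.0
    vocab = range(len(logits[0]))
    for t, row in enumerate(logits):
        allowed = list(vocab) if t == 0 else valid_next[labels[t - 1]]
        m = max(row[i] for i in allowed)
        log_z = m + math.log(sum(math.exp(row[i] - m) for i in allowed))  # stable logsumexp
        total += log_z - row[labels[t]]
    return total / len(logits)

logits = [[0.2, 1.0, -0.5, 0.3],
          [0.1, 0.4, 2.0, -1.0],
          [1.5, 0.0, 0.2, 0.9]]
labels = [1, 2, 3]
grammar = {1: [2, 3], 2: [0, 3]}            # toy grammar consistent with the labels
full = {1: [0, 1, 2, 3], 2: [0, 1, 2, 3]}   # unconstrained, for comparison
# Shrinking the normalizer can only raise the gold token's probability,
# so the constrained loss is never larger than the unconstrained one.
assert constrained_nll(logits, labels, grammar) <= constrained_nll(logits, labels, full)
```

At inference time, by contrast, standard grammar-constrained decoding would key `allowed` on the previously *generated* token, since no gold labels are available.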
Summary: This paper introduces a framework for translating natural language to temporal logic by restricting the output token space during both grounding and translation phases. The framework employs a masked language model for atomic proposition grounding and a fine-tuned sequence-to-sequence model for translation. Using a BERT+T5 combination, GraFT demonstrates significant improvements in both end-to-end translation accuracy and out-of-domain scenarios compared to existing methods. The paper provides mathematical justification for token restriction benefits and evaluates the framework on CW, GLTL, and Navigation benchmarks. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: Yes. See **Weaknesses** and **Strengths** below. Supplementary Material: No. Relation To Broader Scientific Literature: GraFT's token space restriction approach bridges general NLP techniques with formal language translation tasks. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Weaknesses:** - The method for obtaining grounded NL sequences for T5 training lacks clear explanation. The paper fails to justify whether these grounded NL sequences are suitable as learning objectives for CLMs. - The difference in output formats between CLM and MLM for grounded NL (Figure 2) requires clarification: the paper should explain whether this stems from fundamental architectural differences between the two models. **Strengths:** - Clear and well-structured writing with easy-to-follow organization - Innovation points are well-articulated - Results are presented clearly and comprehensively - Notable achievement in outperforming GPT-4o using only a BERT+T5 combination Other Comments Or Suggestions: N/A Questions For Authors: **Questions:** - The baseline models used for comparison (from 2023) may be outdated. More recent baselines should be considered to strengthen the comparative analysis. Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Before proceeding, please read our response addressed to all reviewers found in our rebuttal for reviewer JKCj.** Q1: The method for obtaining grounded NL sequences for T5 training lacks clear explanation. The paper fails to justify whether these grounded NL sequences are suitable as learning objectives for CLMs. A1: In GraFT, the grounded sentences are obtained by passing the original NL sentences to BERT, which provides a mapping between the AP labels and segments of the sentence. The BERT model must be fine-tuned on labeled data, which is provided in the LTL datasets (these datasets include information on the span of APs in the NL sentences). In the NL2TL framework, an LLM is prompted to generate the AP labels and segments within the sentence, rather than directly labeling the tokens of the input as is done by BERT. In response to the question of whether grounding NL sequences is suitable as a learning objective for CLMs, we present the tables below. The tables contain AP grounding results for 3 ranges of AP quantities (1-5 (A), 6-10 (B), and 11-15 (C)) for an MLM and an off-the-shelf CLM. Our position is that CLMs are *not* an optimal choice for this learning objective. 
(A) | Model | Objective | CW (%) | GLTL (%) | Navi (%) | |-------------|-----------|--------|----------|----------| | GPT-4o-mini | Causal | 97.76 | 95.84 | 83.97 | | GPT-4o | Causal | 95.02 | 93.53 | 86.08 | | GPT-4 | Causal | 96.24 | 94.68 | 87.28 | | DistilBERT | Masked | 95.80 | 93.83 | 99.99 | | RoBERTa | Masked | 98.34| 96.96 | 99.99| | BERT | Masked | 98.58 | 97.35 | 99.99 | (B) | Model | Objective | CW (%) | GLTL (%) | Navi (%) | |-------------|-----------|--------|----------|----------| | GPT-4 | Causal | 81.88 | 80.41 | 83.02 | | DistilBERT | Masked | 94.20 | 91.75 | 99.99 | | RoBERTa | Masked | 96.37 | 95.66 | 99.99 | | BERT | Masked | 97.10 | 96.52 | 99.99 | (C) | Model | Objective | CW (%) | GLTL (%) | Navi (%) | |-------------|-----------|--------|----------|----------| | GPT-4 | Causal | 70.34 | 69.80 | 72.24 | | DistilBERT | Masked | 92.86 | 90.63 | 98.54 | | RoBERTa | Masked | 95.13 | 94.67 | 99.73 | | BERT | Masked | 95.78 | 96.44 | 99.91 | Q2: The difference in output formats between CLM and MLM for grounded NL (Figure 2) requires clarification - the paper should explain whether this stems from fundamental architectural differences between the two models A2: The fundamental difference is that BERT classifies each token in the input as PROP_ID or 0, where the PROP_ID class is an integer that corresponds to an AP that appears one or more times in the sentence, and the 0 class indicates that a token is not part of any AP. In contrast, a CLM is *generative* and predicts (the next) future token, rather than classifying the tokens it received as input. Our evaluation results for MLM vs CLM AP grounding (provided above) support our position that the MLM training objective is better suited to the task of AP grounding than the CLM objective. Q3: The baseline models used for comparison (from 2023) may be outdated. 
More recent baselines should be considered to strengthen the comparative analysis A3: In our supplemental evaluation results provided in our response to reviewer **n686**, we include recently trained models including GPT-4 and GPT-4o, trained in 2024. To our knowledge, our evaluation compares against the current state-of-the-art temporal logic translation frameworks.
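To complement A2 above (BERT classifying each input token as a PROP_ID or 0), here is an illustrative reconstruction of the grounding post-processing. Function and variable names are ours, not from the paper's code, and for simplicity it operates on the whitespace-level tokens of the running "apple" example (comma included) rather than BERT subwords.

```python
def ground_aps(tokens, labels):
    """Turn per-token AP labels (0 = not part of any AP) into a simplified
    sentence plus a prop_id -> text dictionary for downstream truth checks."""
    simplified, props, prev = [], {}, 0
    for tok, lab in zip(tokens, labels):
        if lab == 0:
            simplified.append(tok)
        else:
            props.setdefault(lab, []).append(tok)
            if lab != prev:  # first token of a new AP span
                simplified.append(f"prop_{lab}")
        prev = lab
    return " ".join(simplified), {f"prop_{k}": " ".join(v) for k, v in props.items()}

tokens = ("When the apple falls from the tree , "
          "pick it up and eventually put it in the basket").split()
labels = [0, 1, 1, 1, 1, 1, 1, 0, 2, 2, 2, 0, 0, 3, 3, 3, 3, 3]
sentence, prop_dict = ground_aps(tokens, labels)
# sentence  -> "When prop_1 , prop_2 and eventually prop_3"
# prop_dict -> {"prop_1": "the apple falls from the tree",
#               "prop_2": "pick it up", "prop_3": "put it in the basket"}
```

The naive whitespace join leaves a space before the comma; a real implementation would detokenize properly and map subword labels back to word spans.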
A Tale of Two Structures: Do LLMs Capture the Fractal Complexity of Language?
Accept (poster)
Summary: This article examines whether LLMs exhibit long memory under different conditions. It reports that temperature settings and prompting methods may disrupt long memory. Claims And Evidence: The authors claim that temperature settings and prompting methods could destroy long memory, and this finding is robust to the choice of architecture, including Gemini, Mistral, and Gemma. The architectures tested here are very similar, making it unclear to what extent this finding applies more broadly. Additionally, the paper is not well-structured and contains many undefined concepts. Methods And Evaluation Criteria: The method involves examining two important long-memory properties, typically studied as Hurst/Hölder exponents. The authors conducted diverse experiments to determine whether long memory holds. However, since the experiments were conducted only with Gemini models, the scientific significance of their findings remains unclear. Theoretical Claims: This paper is empirical only and there are no theoretical claims. Experimental Designs Or Analyses: The empirical design involves testing Gemini, Mistral, and Gemma using different prompting methods and settings. Supplementary Material: The supplementary material includes all prompts, settings, and results from their experiments, spanning 59 pages. Relation To Broader Scientific Literature: There is previous work beyond what the authors cited that has examined long memory in LLMs and its differences from natural language. Essential References Not Discussed: Shuntaro Takahashi and Kumiko Tanaka-Ishii. "Evaluating Computational Language Models with Scaling Properties of Natural Language." Computational Linguistics, Volume 45, Issue 3, September 2019. Other Strengths And Weaknesses: In this work, several concepts appear undefined. For example, in the Introduction, the paper states, 'We refer the reader to (Alabdulmohsin 2024) for the exact definitions of these quantities.' 
However, the paper should be self-contained, and it is not the reader's responsibility to seek out definitions elsewhere. Additionally, different LLM settings, including 'beta' and others, are used as common terms, but these settings must be clearly defined and consistently applied within the paper. The paper contains mistakes. For example, Willinger et al. (1995) is cited in the statement: 'In language, such self-similarity is attributed to its recursive structure.' However, the title of Willinger et al.'s paper is Self-Similarity in High-Speed Packet Traffic, which has no connection to language. Other Comments Or Suggestions: This large report may contain valuable findings, but it is the authors' responsibility to demonstrate their significance. Questions For Authors: What would the findings be if you conducted your test with Mamba? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for taking the time to review our paper and sharing your concerns. While we wish to clarify certain aspects, we have taken your points seriously and conducted additional experiments to address them. We believe these new results significantly strengthen the paper, demonstrating the robustness and broader applicability of our conclusions. We are pleased to report that our core findings remain consistent across this broader range of experiments. Please find our detailed responses below: **More Models** We appreciate your concern regarding the diversity of models. We want to clarify that our initial experiments were conducted consistently across three models: Gemini 1.0 Pro, Mistral-7B, and Gemma-2B, as detailed throughout the results sections (e.g., Figures 3, 4, 6, 7, etc.). We are not relying solely on Gemini. However, we agree that demonstrating robustness across a wider range of architectures is beneficial. So, we have now conducted additional experiments using the RAID dataset (https://arxiv.org/pdf/2405.07940), which contains texts generated by 11 other models (e.g. GPT, LLAMA, Cohere, … ) in many domains. We will add these new results to the supplementary material of the revised version of the paper. We have found that our conclusions continue to hold. For example, as before, only the Hurst exponent (H) is well-correlated with text quality [[Link to Figure]](https://postimg.cc/wt9WXvg0). This observation holds across the 11 models and 7 domains in RAID, reinforcing our earlier result. In addition, natural language still has a tighter distribution of fractal parameters compared to LLM-generated text, particularly for S with low decoding temperature [[Link to Figure]](https://postimg.cc/y3mr7652). These new experiments provide strong evidence that our conclusions regarding the fractal properties of LLM text are not limited to the initial models but generalize more broadly across the current LLM landscape. 
Please refer to our response to Reviewer `YDw714` for a detailed overview of the new experiments as well as the full list of new figures. **Presentation** We regret that the paper was perceived as not well-structured. We organized the study around 9 precise research questions (Section 3) to provide a systematic analysis, and all questions were thoroughly answered with extensive experiments. Could you help us understand what is lacking so we can improve it? **Missing Reference** Thank you for highlighting this work. It is indeed relevant, as it studies statistical properties (e.g. Zipf’s law) in language models predating 2019. We will incorporate it into the related work section. **Self-Containment** Thank you for this suggestion to improve self-containment. We will add a dedicated section to the supplementary materials providing a clear, self-contained explanation of the Hölder (S) and Hurst (H) exponents as well as the decoding temperature $\beta$. **Incorrect Citation** Thank you for catching this typo. We have fixed it. **Question Regarding Mamba** This is an excellent question. Exploring whether our findings hold for fundamentally different architectures like State Space Models (SSMs) is a valuable direction for future research. However, analyzing Mamba was beyond the scope of the current study, since our focus was on auto-regressive Transformer models. We will mention SSMs as an avenue for future research in the revised paper. **Clarifying Our Contribution** Our main contribution is that fractal analysis offers a novel and insightful lens for understanding the capabilities and limitations of LLMs in replicating the complex statistical structures of natural language. As we show in the paper, various strategies, like the decoding temperature and prompting method, can impact fractal parameters even when log-perplexity scores seem to be unaffected. 
In addition, this work contributes to “DNN Science” by treating LLMs as phenomena to be studied rigorously, highlighting important questions, and conducting comprehensive experiments to answer them thoroughly. We systematically investigate how controllable variables affect LLMs’ ability to mimic human text structure. This approach not only offers a complementary evaluation methodology (comparing generated text's fractal dimensions to natural language) but also deepens our scientific understanding of how these models function and where they still fall short of human linguistic complexity—a core area of interest for our community. **Summary** Thank you again for your valuable feedback. We believe the additional experiments, clarifications, and revisions significantly strengthen the paper and directly address the concerns raised, and we hope that they resolve them. If you have any remaining concerns, please let us know so we can respond to them during the rebuttal period. Otherwise, we would appreciate it if you consider revising your score.
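For intuition about the Hurst exponent (H) central to this discussion, here is a textbook rescaled-range (R/S) estimator. It is an illustrative sketch only; the paper's analysis follows the estimation procedure of Alabdulmohsin et al. (2024) on model log-perplexity sequences, which may differ in detail from this generic version.

```python
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate H as the slope of log(mean R/S) versus log(window size)."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        ratios = []
        for start in range(0, len(x) - n + 1, n):
            chunk = x[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())      # mean-adjusted partial sums
            spread, std = dev.max() - dev.min(), chunk.std()
            if std > 0:
                ratios.append(spread / std)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(ratios)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(0)
h_noise = hurst_rs(rng.standard_normal(4096))  # uncorrelated noise: roughly 0.5
h_trend = hurst_rs(np.arange(4096.0))          # strongly trending series: close to 1
```

Small-sample R/S is known to bias H slightly above 0.5 for short windows, one reason production estimators use corrections or alternative estimators.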
Summary: The paper examines whether large language models (LLMs) replicate the fractal characteristics of natural language. Using a dataset of 240,000 LLM-generated articles, the authors analyze fractal parameters (Hölder and Hurst exponents) across three models (Gemini 1.0 Pro, Mistral-7B, Gemma-2B), decoding temperatures, and prompting strategies. Key findings: (1) LLMs exhibit wider fractal parameter variation than natural language, with larger models performing better; (2) temperature and instruction tuning impact self-similarity and long-range dependence; (3) more informative prompts do not always improve fractal alignment, revealing a double descent effect; (4) fractal parameters correlate with text quality and detection potential. The authors claim to release the GAGLE dataset to aid further research. Claims And Evidence: The paper provides strong empirical support for most claims. However, the claim that results hold across a variety of model architectures is weaker, as only three models (Gemini 1.0 Pro, Mistral-7B, Gemma-2B) are tested, with two from the same ecosystem. Expanding to more diverse architectures (e.g., LLaMA, GPT-4, Claude) would improve generalisability. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem. The large-scale dataset (240,000 articles) covering diverse domains and generation settings strengthens the empirical foundation. However, while the chosen models and prompting strategies offer useful insights, a broader range of architectures, as I believe, would strengthen the paper. Theoretical Claims: The paper primarily focuses on empirical analysis rather than formal theoretical proofs. No explicit mathematical proofs were checked, but the statistical methodology appears sound. 
Experimental Designs Or Analyses: The study is well-structured, using Hölder and Hurst exponents as key statistical metrics and analyzing a large dataset (240,000 articles) across multiple models, temperatures, and prompting strategies. The dataset spans various domains, ensuring diversity in text sources. However, a potential limitation is the limited model diversity—testing only three architectures (Gemini 1.0 Pro, Mistral-7B, and Gemma-2B) may not fully generalise findings across different LLM families. Supplementary Material: Key supplementary materials relevant to the experiments are: Appendix A (prompting templates), Appendix C (data card for the GAGLE dataset), and Appendix E (sample documents). These sections support the experimental setup and provide transparency in data generation and fractal parameter estimation. Relation To Broader Scientific Literature: The paper builds on research in statistical properties of natural language and LLM-generated text, expanding beyond log-perplexity-based evaluations. It aligns with findings from [1] on fractal patterns in text. It connects to prior works on LLM-generated text detection, such as Mireshghallah et al. and Gehrmann et al., by proposing fractal parameters as a distinguishing feature [2, 3]. [1] Alabdulmohsin, Ibrahim, Vinh Q. Tran, and Mostafa Dehghani. "Fractal Patterns May Illuminate the Success of Next-Token Prediction." (2024). [2] Mireshghallah, Niloofar, et al. "Smaller language models are better zero-shot machine-generated text detectors." Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers). 2024. [3] Gehrmann, Sebastian, Hendrik Strobelt, and Alexander M. Rush. "GLTR: Statistical Detection and Visualization of Generated Text." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, 2019. 
Essential References Not Discussed: I believe that, in some sense, the fractal properties of text were investigated in [4] for artificial text detection. While it takes a different approach using fractal dimension, it would be really interesting to understand how it corresponds to your method. [4] Tulchinskii, Eduard, et al. "Intrinsic dimension estimation for robust detection of ai-generated texts." Advances in Neural Information Processing Systems 36 (2023): 39257-39276. Other Strengths And Weaknesses: Strengths: - **(S1)** Contribution to Existing Research: the paper continues previous work on fractal properties in language (Alabdulmohsin et al., 2024) and expands it by evaluating LLM-generated text across different models, decoding temperatures, and prompting strategies. This provides many valuable insights and findings. - **(S2)** Dataset: As the authors claim, they will release the GAGLE dataset (240,000 articles), which also explores an interesting aspect – the variation in contextual information provided during prompting. This dimension has not been included in other datasets related to artificial text detection. - **(S3)** Clarity: The paper is well-structured, with a clear explanation of experimental methodology and key findings. The figures effectively illustrate trends in fractal parameters. Weaknesses: - **(W1)** Limited Model Diversity: The study tests only three models (Gemini 1.0 Pro, Mistral-7B, Gemma-2B), which may not be sufficient to generalize conclusions across different LLM architectures. Expanding the analysis to models like LLaMA, GPT-4, or Claude would strengthen the findings. - **(W2)** Unclear Differentiation Between AI and Human Text: While fractal parameters reveal structural differences between LLM-generated and human text, the study does not demonstrate that these differences enable reliable classification, as authors also mention in Limitations. 
- **(W3)** Theoretical Justification: While the empirical results are strong, the paper would benefit from more discussion on why fractal parameters should generalize across different LLMs, rather than just showing observed correlations. Other Comments Or Suggestions: I believe, it would be better if authors included the background about fractal characteristics or at least explicitly state how the computations are done. Now one has to look up to Alabdulmohsin et al. (2024) to understand it. Some small drawbacks: - Missing Figure reference in appendix B - Fig 3, Fig 4, and maybe somewhere else: typo GEMMMA - Please specify in all figures’ captions what G-P, M-7 and G-2 means. - Fig 5 is unreadable Questions For Authors: - **(Q1)**: How well do fractal parameters generalize across different LLM architectures? If authors could include other models in the dataset it would strengthen the statements. If it is not possible, author could probably use other open-source datasets like RAID [5] or COLING Workshop on MGT Detection [6]. - **(Q2)** Can fractal parameters reliably differentiate AI-generated from human text? - **(Q3)** I wanted to clarify if a causal model in Figure 1 is something defined in literature? If yes, could you please provide some references. If no, I believe it should be stated in the paper explicitly. [5] Dugan, Liam, et al. "Raid: A shared benchmark for robust evaluation of machine-generated text detectors." (2024). [6] Wang, Yuxia, et al. "GenAI content detection task 1: English and multilingual machine-generated text detection: AI vs. human." (2025). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, We thank you for your detailed review and constructive feedback. We are pleased that you have found our experiments thorough, claims well-supported, the overall study well-structured, and the findings valuable and insightful. As you stated, GAGLE includes various prompting strategies, unlike other public datasets. Your primary concerns regarding the diversity of the models were particularly helpful. We have taken these points seriously and conducted substantial additional experiments specifically to address them. We believe these new results significantly strengthen the paper, demonstrating the robustness and broader applicability of our conclusions. We are pleased to report that our core findings remain consistent across this broader range of experiments. Please find our detailed responses below: **More Models** We have followed your advice and used the RAID dataset, which contains texts generated by 11 models (e.g. GPT, LLAMA, Cohere ...) in many domains, addressing the specific gap you noted. We will add these new results to the supplementary material of the revised version of the paper. We have found that our conclusions continue to hold. For example, as before, only the Hurst exponent (H) is well-correlated with text quality [[Link to Figure]](https://postimg.cc/wt9WXvg0). This observation holds across the 11 models and 7 domains in RAID, reinforcing our earlier result. Please refer to our response to Reviewer `YDw714` for a detailed overview of the new experiments as well as the full list of new figures. **Missing Reference** Thank you for bringing up this missing reference. This is quite relevant, since they propose using the intrinsic dimension of the data manifold as a metric that distinguishes natural language from AI-generated texts, similar to how we show separation using fractal parameters. 
Tulchinskii et al.'s scope, however, is limited to detecting synthetic texts, whereas we show that fractal analysis offers a novel and insightful lens for understanding the capabilities and limitations of LLMs in replicating the complex statistical structures of language. As we show in the paper, various controllable variables, like the decoding temperature and prompting method, can impact fractal parameters even when average log-perplexity score seems to be unaffected. We believe this approach can deepen our scientific understanding of how these models function and where they still fall short of human linguistic complexity—a core area of interest for our community. We will add this reference to the related work section with a brief discussion in the revised paper. **Relevance to Detection** We acknowledge your observation regarding detecting LLM-generated texts. As we mention in the paper, we do not focus on detecting synthetic texts in this work. However, we do believe that fractal parameters might prove useful for detection and we leave this to future research. Our results demonstrate that fractal structures are often more difficult for LLMs to replicate accurately than simpler statistical properties captured by perplexity (e.g. Figures 4/6/7/10 and Table 3). We plan to explore this direction more in the future. **Background About Fractals** Thank you for this suggestion to improve self-containment. We will add a dedicated section to the supplementary materials providing a clear, self-contained explanation of the Self-Similarity (S) and Hurst (H) exponents and how they are computed. **Causal Model** Regarding the causal model in Figure 1, this is a conceptual model we introduce to hypothesize why variations in prompt information density might influence the fractal structure of generated text, even if models are calibrated at the next-token level. It serves to motivate our investigation into prompting methods. 
Appendix B provides a concrete example illustrating this potential effect. We will clarify this in the revised paper. **Typos** We greatly appreciate you catching these details! We will correct the typos, fix the missing figure reference in Appendix B, improve the readability of Figure 5, and ensure all captions clearly define abbreviations. **Summary** Thank you again for your valuable feedback. We believe the additional experiments, clarifications, and revisions significantly strengthen the paper and directly address the concerns raised, and we hope that they resolve them. If you have any remaining concerns, please let us know so we can respond to them during the rebuttal period. Otherwise, we would appreciate it if you consider revising your score. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful and detailed responses! The clarifications and revisions, as I believe, would improve the paper, and I have raised my score accordingly.
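Since the decoding temperature β recurs throughout this discussion, here is the standard definition of temperature sampling as a minimal sketch (our illustration of the common formulation, not code from the paper):

```python
import math

def temperature_softmax(logits, beta):
    """Next-token distribution p_i proportional to exp(logit_i / beta).
    beta -> 0 approaches greedy decoding, beta = 1 recovers the model's
    distribution, and larger beta flattens it toward uniform."""
    scaled = [l / beta for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

logits = [2.0, 1.0, 0.0]
cold = temperature_softmax(logits, 0.2)  # concentrates mass on the argmax token
hot = temperature_softmax(logits, 5.0)   # nearly uniform over the vocabulary
assert cold[0] > hot[0]
```

This makes concrete why low β can shrink variability in generated text (and, per the paper's findings, distort its fractal parameters) even when per-token calibration looks fine.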
Summary: This paper investigates whether LLMs can replicate the fractal complexity found in natural language. The authors use the Hölder exponent (S) to examine self-similarity and the Hurst exponent (H) for long-range dependence. The authors explored a large range of models, sampling temperatures, prompts, etc. Through thorough evaluation, they found that: 1. Larger models are better at replicating the fractal structures of natural language than small models. 2. High decoding temperature improves similarity to natural text. 3. Prompting strategies impact text fractality non-monotonically. Claims And Evidence: For the 9 points of analysis, the claims are supported by accompanying figures. I also like that the authors experiment with different scoring models to show that results are consistent across them. Methods And Evaluation Criteria: The authors use two main metrics (H and S) to evaluate how model-generated data might have different fractal structures compared to natural text. Overall, those two metrics are established measures from prior work. Furthermore, the authors examine how different factors impact those two metrics; the factors examined are relevant ones that might impact a model's generation. Therefore, the methods overall make sense. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design mainly concerns (1) how to score those two metrics and (2) how to generate synthetic data while ensuring there is a natural-text baseline to compare against. The experimental design on data synthesis is valid. Supplementary Material: I read Appendix B to get an intuitive understanding of how eq (1) differs from eq (2). Relation To Broader Scientific Literature: This paper is closest to the line of work on detecting LLM-generated texts. However, the authors emphasize that the primary goal is not to achieve detection. 
Based on the experiment results, it does not seem likely that the metrics used in this analysis could be a reliable way to detect LLM-generated texts. While the analysis of how different axes can change the characteristics of generated text is interesting, it might only have limited implications for this line of work. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper introduces fractal parameters (Hölder exponent S and Hurst exponent H) as a new way to analyze self-similarity and long-range dependence in natural and LLM-generated text. The approach goes beyond the surface-level comparisons mostly done by previous work. Weaknesses: 1. If I read this paper as an evaluation paper, I am not too sure whether the evaluation results, obtained on three models that differ in architecture, pretraining data, and training paradigm, can generalize. While it is interesting to see how these models differ, whether the conclusions generalize remains unclear. 2. If I read this paper as trying to understand whether those two metrics could be interesting for the detection community, there is no strong evidence that the two proposed metrics can be used reliably as a detection metric. Furthermore, there are no experiments in the paper examining whether those metrics can be used for detection. Other Comments Or Suggestions: There is a lot of analysis organized by 9 questions, and I enjoyed reading these analyses. However, because there are so many points, after reading the paper I do not know what the main message of the paper is and what the main contribution is in helping detect LLM-generated text. If the authors think that this work contributes to a different line of research that I am missing, please let me know! Questions For Authors: See comments. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer, We thank you for your detailed review and constructive feedback. We are pleased that you have found our work comprehensive, the evaluation valid and thorough, the use of fractals novel, and the paper well-organized and easy to read. Your primary concerns regarding the generalizability of our findings and the clarity of the main contribution were particularly helpful. We have taken these points seriously and conducted substantial additional experiments specifically to address them. We believe these new results significantly strengthen the paper, demonstrating the robustness and broader applicability of our conclusions. We are pleased to report that our core findings remain consistent across this broader range of experiments. Please find our detailed responses below: **Generalizability of findings** Thank you for highlighting this point. We have now conducted additional experiments using the RAID dataset (https://arxiv.org/pdf/2405.07940), which contains texts generated by 11 models (e.g. GPT2/3/4, LLAMA, Cohere, MPT, and Mistral) in many domains. We will add these new results to the supplementary material of the revised version of the paper. We have found that our conclusions continue to hold. For example, as before, only the Hurst exponent (H) is well-correlated with text quality [[Link to Figure]](https://postimg.cc/wt9WXvg0). This observation holds across the 11 models and 7 domains in RAID, reinforcing our earlier result. In addition, natural language still has a tighter distribution of fractal parameters compared to LLM-generated text, particularly for S with low decoding temperature [[Link to Figure]](https://postimg.cc/y3mr7652). These new experiments provide strong evidence that our conclusions regarding the fractal properties of LLM text are not limited to the initial models but generalize more broadly across the current LLM landscape. 
Please refer to our response to Reviewer `YDw714` for a detailed overview of the new experiments as well as the full list of new figures. **Main Message** We appreciate the opportunity to clarify the main message and contribution of our work. Our central message is that fractal analysis offers a novel and insightful lens for understanding the capabilities and limitations of LLMs in replicating the complex statistical structures of natural language. As we show in the paper, various strategies, like the decoding temperature and prompting method, can impact fractal parameters even when log-perplexity scores seem to be unaffected. This goal is in line with earlier works, such as (Meister & Cotterell, 2021), who argued that the evaluation of LLMs should go beyond log-perplexity and also consider how well LLMs capture other “statistical tendencies” observed in natural language. Our key contribution lies in introducing and validating this fractal analysis framework. In addition, this work contributes to “DNN Science” by treating LLMs as phenomena to be studied rigorously, highlighting important questions, and conducting comprehensive experiments to answer them thoroughly. We systematically investigate how controllable variables affect LLMs’ ability to mimic human text structure. This approach not only offers a complementary evaluation methodology (comparing generated text's fractal dimensions to natural language) but also deepens our scientific understanding of how these models function and where they still fall short of human linguistic complexity—a core area of interest for our community. We hope this answers your concern. We will clarify this message in the revised version of the paper. **Relevance to Detection** We acknowledge your observation regarding detecting LLM-generated texts. As we mention in the paper, we do not focus on detecting synthetic texts in this work. 
However, we do believe that fractal parameters might prove useful for detection and we leave this to future research. Our results demonstrate that fractal structures are often more difficult for LLMs to replicate accurately (e.g. Figures 4/6/7/10 and Table 3). We plan to explore this direction more in the future. **Summary** Thank you again for your valuable feedback. We believe the additional experiments, clarifications, and revisions significantly strengthen the paper and directly address the concerns raised, and we hope that they resolve them. If you have any remaining concerns, please let us know so we can respond to them during the rebuttal period. Otherwise, we would appreciate it if you consider revising your score.
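Since the rebuttal above repeatedly contrasts fractal parameters with log-perplexity, a brief reminder of the latter may help: log-perplexity is the mean negative log-likelihood per token under a scoring model. A minimal sketch with made-up token log-probabilities (in practice these come from a scoring LLM such as the ones named in the paper):

```python
import math

def log_perplexity(token_logprobs):
    """Mean negative log-probability per token; exp() of this is the perplexity."""
    return -sum(token_logprobs) / len(token_logprobs)

# illustrative log-probabilities a scoring model might assign to five tokens
logprobs = [-2.1, -0.3, -4.0, -1.2, -0.8]
lp = log_perplexity(logprobs)
print(f"log-perplexity = {lp:.2f}, perplexity = {math.exp(lp):.2f}")
```

The rebuttal's point is that two texts can have near-identical values of this quantity while still differing in their fractal parameters.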
Summary: This study constructs a dataset named GAGLE, comprising 240,000 AI- and human-generated language instances. It employs fractal parameters, including Self-Similarity and Long-Range Dependence (LRD), to examine the differences between various language model sizes and architectures compared to human texts. The research aims to offer a new evaluation perspective for language models by analyzing their fractal characteristics, thereby extending beyond traditional perplexity metrics. By investigating the statistical patterns in model-generated texts, this work enhances our understanding of current language models' capabilities and limitations in replicating natural language complexity and structure. This analysis is crucial for advancing more accurate and natural text generation technologies. Claims And Evidence: - Some claims in the article are supported by clear and convincing evidence and are consistent with previous research. For example, Q1 mentions that the perplexity of LLM-generated text is lower, and Q2 indicates that generated text with higher temperature is closer to human-like output. However, some claims lack convincing evidence. For instance, in Q6, the study on "how fractal parameters relate to the quality of output" does not clearly specify how the quality of the article is measured, making the claim less convincing. Q8 lacks experimental data to support its argument. Additionally, Q9 overlooks the fact that the dataset types chosen for the study are consistent, failing to address the review/reddit-type data, which represents a notable gap in the research. - The current dataset (GAGLE), sourced from Wikipedia, BigPatent, Newsroom, and BillSum, leans towards academic texts and may not fully capture the differences between LLM-generated and human text in informal contexts, such as social media. 
Methods And Evaluation Criteria: The current dataset (GAGLE), sourced from Wikipedia, BigPatent, Newsroom, and BillSum, leans towards academic texts and may not fully capture the differences between LLM-generated and human text in informal contexts, such as social media. Additionally, the evaluation criteria are somewhat limited. The impact of fractal analysis could be further explored by examining the accuracy of downstream tasks. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: **Rationale**: - The calculation of fractal parameters (such as the Hurst exponent) provides a quantitative analysis of the statistical properties of LLM-generated text, which is innovative. The findings of Q1, Q2, Q3, and Q4 are relatively consistent with previous findings. **Issues**: - In Q6, the study on "how fractal parameters relate to the quality of output" does not clearly specify how the quality of the article is measured, making the claim less convincing. Q8 lacks experimental data to support its argument. Additionally, Q9 overlooks the fact that the dataset types chosen for the study are consistent, failing to address the review/reddit-type data, which represents a notable gap in the research. - The figures and tables are disorganized: the order of tables and figures is not coherent (e.g., Figures 4/5 do not define IT and PT, and there is a discontinuous reference between Figure 3 and Figure 4). As a result, terms like IT (Instruction Tuning) and PT (Pre-Training) are not clearly defined, which affects readability. Supplementary Material: Not applicable. Relation To Broader Scientific Literature: No. Essential References Not Discussed: No. Other Strengths And Weaknesses: Overall, the contributions of this article are claimed as follows: 1. Analyzing the factors that contribute to LLMs replicating natural language fractal characteristics. 2. Exploring the impact of prompts on the fractal structure of text. 3. 
The results are applicable to various model architectures. 4. Releasing a dataset containing 240,000 articles. However, the contributions in points 3 and 4 are limited. For point 3, the article only uses the Gemini 1.0 Pro, Mistral-7B, and Gemma-2B models, without considering classic and more widely used models such as GPT, Llama, or the latest o1 and DeepSeek-R1 models. As for point 4, the dataset is not comprehensive, mainly including academic texts and neglecting the importance of data in informal contexts. Other Comments Or Suggestions: - Carefully review the relationship between tables and their respective positions in the article to ensure readability. - Increase the diversity of the data used. - Carefully check the connection between the proposed arguments and the experiments, as some points (e.g., Q8) lack experimental support. - Incorporate a wider variety of models into the analysis. Questions For Authors: - How does the fractal relationship between LLMs and humans evolve with the development of LLMs (e.g., from ChatGPT to DeepSeek-R1)? - How does the fractal relationship between LLMs and humans differ across different types of datasets? - Previous studies have shown that substantial portions of text from the training data of LLMs can be extracted using carefully designed prompting techniques. It would be valuable to explore whether incorporating novel human data can help determine if the model's retention of human data leads to a fractal relationship that more closely mirrors human characteristics. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed and constructive feedback. We are pleased that you have found our work innovative and valuable. We have carefully considered your concerns and have conducted new experiments to address them, particularly regarding the diversity of models and datasets. We believe these additions substantially strengthen the paper's contributions and generalizability. We are pleased to report that our core findings remain consistent across this broader range of experiments. Please find our responses below: **1. Limited Scope of Models and Datasets** We have now conducted additional experiments using the RAID dataset (https://arxiv.org/pdf/2405.07940), which contains texts generated by 11 models (e.g. GPT, Llama, ...) in domains that include Reddit and reviews, addressing the gap you noted. We'll add these results to the supplementary material. **Summary of Findings:** *(Please note that Q2/4/5/8 are not applicable here because we don't control the prompts in RAID and we score using Gemini Pro 1.0)* * **Q1 (Log-perplexity)**: [[link to Figure]](https://postimg.cc/qznYYzVd) Consistent with our results, greedy decoding and instruction tuning yield lower perplexity than human text, but pretrained models at $\beta=1$ show perplexity similar to human text. * **Q3 (Fractals in IT Models)**: [[link to Figure]](https://postimg.cc/r0Svw9Rh) Our findings still hold: Instruction tuning affects the Hurst exponent (H), especially at low temperatures (leading to higher H), while Self-Similarity (S) remains largely unaffected. * **Q6 (Text Quality)**: [[link to Figure]](https://postimg.cc/wt9WXvg0) As before, only the Hurst exponent (H) is well-correlated with quality. This observation now holds across the 11 models and 7 domains in RAID, reinforcing our earlier result. 
* **Q7 (Distribution of Fractals)**: [[link to Figure]](https://postimg.cc/y3mr7652) Natural language still has a tighter distribution of fractal parameters compared to LLM-generated text, particularly for S with low decoding temperature. * **Q9 (Data analysis)**: We've repeated the analysis of Table 3 in RAID. The results are below. Interestingly, it seems challenging for LLMs to replicate humans in *poetry*, and this only becomes evident when we look into the Self-Similarity exponent. |Dataset|S log-ratio|H log-ratio|PPL log-ratio| |---|---|---|---| |abstracts|$-0.13\pm0.05$|$0.16\pm0.02$|$-0.59\pm0.09$| |books|$-0.10\pm0.04$|$0.07\pm0.01$|$-0.66\pm0.04$| |news|$0.18\pm0.02$|$0.11\pm0.01$|$-0.53\pm0.04$| |poetry|$\bf0.50\pm0.02$|$0.05\pm0.01$| $-0.67\pm0.10$| |recipes|$0.10\pm0.05$|$0.05\pm0.01$|$-0.75\pm0.04$| |reddit|$-0.07\pm0.02$|$0.11\pm0.01$|$-0.67\pm0.06$| |reviews|$0.08\pm0.02$|$0.13\pm0.02$|$-1.23\pm0.11$| **2. Clarity on Quality Measurement (Q6)** As stated briefly in Lines 308-310, we use Gemini Pro 1.0 to auto-rate the quality of generated texts. The prompt template & examples of responses are in Appendix A.3 and we provide examples of quality ratings generated by Gemini in Appendix E. Please note that all of the auto-ratings are included in the released GAGLE dataset. **3. Experimental Support for Claims (Q8)** In Q8, the experimental results are discussed in Lines 375-385. For instance, when predicting the scoring model, we get an accuracy of 97.0% with and without including the generating model in the predictors. We hope this clarifies your concern. **4. Organization of Figures** We apologize for the issues with figure organization and readability. We'll improve them as much as possible within the template constraints, and define PT/IT in Figures 4/5. **5. Answers to Questions** - *Q: Evolution of fractal parameters with the development of LLMs?* This is an insightful question. 
While our study doesn't provide a longitudinal analysis across model generations, we hypothesize that fractal parameters of LLMs will converge towards those of human language as models improve. One piece of evidence for this is in Figure 3, where more capable models have fractal parameters closer to natural language. - *Q. Difference across types of datasets?* We study this question in the paper in Table 3, and have now expanded this analysis with the RAID dataset (see point 1 / Q9 above). We do observe that LLMs seem to be more capable of replicating humans in some domains (e.g. articles) over others (e.g. poetry), as discussed above. - *Q. Can memorization affect self-similarity?* We agree this is a very interesting question. Unfortunately, we don't have an answer yet, and will leave this for future research. **Summary** Thank you again for your valuable feedback. We believe the additional experiments, clarifications, and revisions significantly strengthen the paper and directly address the concerns raised, and we hope that they resolve them. If you have any remaining concerns, please let us know so we can respond to them during the rebuttal period. Otherwise, we would appreciate it if you consider revising your score.
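As a side note on the log-ratio columns reported in the rebuttal above (e.g., "S log-ratio" per domain), such statistics are typically the mean ± standard deviation of log(model/human) over paired documents. A minimal sketch with toy numbers (the pairing and values are illustrative assumptions, not data from the paper):

```python
import math
import statistics

def log_ratio_stats(model_vals, human_vals):
    """Mean and std of log(model/human) over paired documents."""
    ratios = [math.log(m / h) for m, h in zip(model_vals, human_vals)]
    return statistics.mean(ratios), statistics.stdev(ratios)

# toy self-similarity (S) exponents for four paired documents
model_S = [0.62, 0.58, 0.65, 0.60]
human_S = [0.55, 0.54, 0.57, 0.56]
mean, std = log_ratio_stats(model_S, human_S)
print(f"S log-ratio: {mean:.2f} ± {std:.2f}")  # positive mean: model exponents exceed human's
```

A log-ratio near zero indicates the model matches the human statistic in that domain, which is why the large poetry value in the RAID table above stands out.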
AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML
Accept (poster)
Summary: The authors introduce a multi-agent LLM framework for full-pipeline AutoML. The authors perform an extensive empirical study against relevant baselines, demonstrating that their AutoML-Agent outperforms both frontier models and partial-pipeline language agents. The authors' experiments are informative, showcasing that for simpler, tabular tasks, partial-pipeline agents remain competitive. The paper is well-written and the authors release an extensive codebase for their agent. As such I recommend acceptance, with the potential to increase my score if the points below are addressed.

## **Post-Rebuttal Update**

I have upgraded my score to 5 post-rebuttal following the authors' extensive additional experiments and clarifications during the author-reviewer discussion phase. I am strongly in favor of accepting the paper and wish to express to the AC my interest in championing the paper for consideration as a spotlight/oral.

Claims And Evidence: The claims made by the authors are empirical in nature. In terms of reproducibility, the authors provide an anonymous GitHub link to their codebase, which is well documented.

Methods And Evaluation Criteria: 1. It would be worth discussing the relationship between the current approach and the Orchestrator-Workers paradigm of [7]. It seems as though the Agent Manager functions as an orchestrator. 2. How does AutoML-Agent compare against Agent K [2] on Kaggle-style problems? This seems like it would be a relevant comparison given that the bespoke AutoML baselines in the current paper are chosen based on their performance in Kaggle notebooks.

Theoretical Claims: Not applicable.

Experimental Designs Or Analyses: 1. What is the sensitivity of AutoML-Agent's performance to the user’s prompt with task description, requirements, and/or constraints? It may be worth designing a sensitivity analysis by having e.g. 
3 levels of clarity in the user prompt, ranging from vague to precise instructions, assuming they pass the request verification stage. 2. What is the sensitivity to the system prompts used by the various agents?

Supplementary Material: 1. It would be great if the dataset details were standardized, i.e. providing the number of examples, number of features, variable type of the labels, etc. 2. The skeleton Python script in Section C.1 seems quite restrictive if it is intended to be a central component of AutoML-Agent. Do the authors believe such a template will generalize to AutoML problems not considered in the paper?

Relation To Broader Scientific Literature: It may be worth referencing [4] in relation to the use of LLMs for HPO. It may also be worth discussing the relationship between AutoML-Agent and the works of [1,3,5,6,7,8]. Of particular note is [2], which tackles Kaggle competitions and by its nature will need to perform aspects of AutoML.

Essential References Not Discussed: Below I list the references I have cited elsewhere in the review.

**__REFERENCES__**

[1] Narayanan, S., Braza, J.D., Griffiths, R.R., Ponnapati, M., Bou, A., Laurent, J., Kabeli, O., Wellawatte, G., Cox, S., Rodriques, S.G. and White, A.D., 2024. [Aviary: training language agents on challenging scientific tasks.](https://arxiv.org/abs/2412.21154) arXiv preprint arXiv:2412.21154. [2] Grosnit, A., Maraval, A., Doran, J., Paolo, G., Thomas, A., Beevi, R.S.H.N., Gonzalez, J., Khandelwal, K., Iacobacci, I., Benechehab, A. and Cherkaoui, H., 2024. [Large language models orchestrating structured reasoning achieve Kaggle grandmaster level.](https://arxiv.org/abs/2411.03562) arXiv preprint arXiv:2411.03562. [3] Tang, X., Liu, Y., Cai, Z., Shao, Y., Lu, J., Zhang, Y., Deng, Z., Hu, H., An, K., Huang, R. and Si, S., 2023. 
[ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code.](https://arxiv.org/abs/2311.09835) arXiv preprint arXiv:2311.09835. [4] Liu, T., Astorga, N., Seedat, N. and van der Schaar, M., [Large Language Models to Enhance Bayesian Optimization.](https://openreview.net/forum?id=OOxotBmGol) In The Twelfth International Conference on Learning Representations 2024. [5] Tang, J., Fan, T. and Huang, C., 2025. [AutoAgent: A Fully-Automated and Zero-Code Framework for LLM Agents.](https://arxiv.org/abs/2502.05957) arXiv e-prints, pp.arXiv-2502. [6] Kon, P.T.J., Liu, J., Ding, Q., Qiu, Y., Yang, Z., Huang, Y., Srinivasa, J., Lee, M., Chowdhury, M. and Chen, A., 2025. [Curie: Toward Rigorous and Automated Scientific Experimentation with AI Agents.](https://arxiv.org/abs/2502.16069) arXiv preprint arXiv:2502.16069. [7] Fourney, A., Bansal, G., Mozannar, H., Tan, C., Salinas, E., Niedtner, F., Proebsting, G., Bassman, G., Gerrits, J., Alber, J. and Chang, P., 2024. [Magentic-One: A generalist multi-agent system for solving complex tasks.](https://arxiv.org/abs/2411.04468) arXiv preprint arXiv:2411.04468. [8] Hu, X., Zhao, Z., Wei, S., Chai, Z., Ma, Q., Wang, G., Wang, X., Su, J., Xu, J., Zhu, M. and Cheng, Y., 2024, July. [InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks.](https://proceedings.mlr.press/v235/hu24s.html) In International Conference on Machine Learning (pp. 19544-19572). PMLR. [9] Lála, J., O'Donoghue, O., Shtedritski, A., Cox, S., Rodriques, S.G. and White, A.D., 2023. [PaperQA: Retrieval-augmented generative agent for scientific research.](https://arxiv.org/abs/2312.07559) arXiv preprint arXiv:2312.07559.

Other Strengths And Weaknesses: Overall, I think the paper could be strengthened by performing a sensitivity analysis on the various agent prompts as well as comparing against Agent K [2] on external Kaggle datasets. Otherwise I believe the contribution is still solid. 
Other Comments Or Suggestions: 1. There are some missing capitalizations in the references, e.g. "GPT-4", "LLM", "AutoML", "AI". 2. In the introduction, no parentheses around "e.g. natural language and computer vision) 3. The link to the codebase was quite difficult to find (I believe it is only given in Section C of the appendix?). It may be worth moving this to the abstract or introduction. 4. It would be worth running GPT or Claude over the codebase code to write documentation. 5. It would be worth spelling out the acronym RMSLE as root mean square log error when it is first introduced in the main paper. 6. Line 183, typo, there is a space that should be inserted. Same on line 370.

Questions For Authors: 1. The prompt agent requires instruction fine-tuning. How is this instruction fine-tuning expected to generalize to new problem classes not considered in the paper? Will the prompt agent LLM need to be instruction-tuned from scratch? Is EvoInstruct expected to be effective in generating instruction, response pairs for all AutoML problems? 2. What is the reason for the gulf in the level of encouragement for the various agents in the system prompt? E.g. the prompt agent receives the underwhelming, "You are an assistant project manager in the AutoML development team.", whereas the data agent receives the hyperbolic, "You are the world’s best data scientist of an automated machine learning project". 3. For the arXiv paper search component in Appendix D, what do the authors think of the potential for a tool such as PaperQA [9] for improving performance in this step? 4. The prompt for Retrieval-Augmented Planning contains in-context examples. How were these generated? Are they contained in the GitHub repo? If so, it is difficult to find them. 5. In Section 3.5, the authors state, "if no dataset is found, we rely on the inherent knowledge of the LLM". Presumably this does not result in a successful outcome if there is no data present? 
Ethical Review Concerns: Not applicable.

Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and constructive review and appreciate your recognition of our paper’s strengths. Below, we address your specific comments. > Relationships to Magentic-One [7] and Agent K [2] **R1** Thank you for pointing out these valuable concurrent papers that we had previously missed. Although our Agent Manager functions similarly to [7], it performs constraint-aware plan selection and multi-stage verification, making it more task-specific and grounded. For Agent K [2], the current version is not open-sourced, which limits direct comparison. Nevertheless, we provide a qualitative analysis. Although Agent K also leverages LLMs to orchestrate ML pipelines, it is tailored for Kaggle competition settings and relies on a training-based approach with high search overhead. In contrast, AutoML-Agent introduces a platform-agnostic framework with retrieval-augmented planning (RAP) and multi-stage verification, designed for broader AutoML applications beyond Kaggle, with a *training-free* search method. We have clarified these connections and distinctions in Related Work, including those in [1–8]. > Sensitivity to prompts and rationale for system prompts **R2** While agent-specific prompt design is not the primary focus of this paper, for **user prompts**, our results suggest that AutoML-Agent is minimally sensitive to prompt specificity. In both *constraint-free (**somewhat vague**)* and *constraint-aware (**precise**)* settings, the system achieves high success rates with comparable average NPS (0.804 vs. 0.810), indicating robustness to prompt variation. Clearer prompts generally lead to better outcomes, but the system is designed to handle vague instructions gracefully: the combination of the verification mechanism and RAP allows the system to tolerate a range of prompt qualities. Even when the Prompt Agent interprets vague inputs broadly, execution verification ensures that only valid and high-performing solutions are accepted. 
For **system prompts**, please refer to `Reviewer mnKA-R4`. > Restrictive skeleton Python scripts **R3** The provided skeleton serves as a default scaffold for typical ML tasks, offering a high-level structure without enforcing task- or data-specific code. It enables agents to complete the template using *theoretically any* specialized modules by following the TODO list. We believe this approach is significantly more flexible than the data-specific templates used in the DS-Agent framework. Typically, the only component that requires adjustment for new tasks is the evaluation metric. > Instruction fine-tuning in Prompt Agent **R4** For entirely new problem classes, we could extend the existing model through additional fine-tuning or by providing a few in-context examples. EvoInstruct can be used to generate instruction–response pairs in the new domain to support this adaptation. To provide some context, instruction fine-tuning was initially necessary to ensure precise parsing of user inputs for interoperability between agents, as this work began before OpenAI supported accurate structured outputs. With recent updates to GPT models, we can now directly use a JSON schema, eliminating the need for fine-tuning in many cases. The Prompt Agent can accurately parse user queries by referencing the schema explicitly. The schema is available at `/prompt_agent/schema.json`. > Potential use of PaperQA [9] **R5** We agree that a tool like [9] could improve the literature retrieval component. Incorporating PaperQA-style scientific querying could enhance citation relevance and support more context-aware prompt generation, ultimately leading to better planning decisions. We will acknowledge this direction in the paper and consider it for future iterations. > In-context examples for Retrieval-Augmented Planning **R6** The in-context examples (`plan_knowledge`) are **automatically generated on the fly**. 
They are *extracted*, *summarized*, and *organized* from raw retrieved documents and texts (e.g., search results, arXiv papers, Kaggle notebooks) before being passed to the Agent Manager for planning. We have updated the GitHub repository to include examples at `/example_plans/plan_knowledge.md`. > "if no dataset is found...". **R7** When we mention relying on the LLM’s inherent knowledge, we are referring to a fallback mechanism intended to produce the most plausible output given limited context. However, in the absence of actual data, the pipeline will ultimately fail during final implementation verification due to runtime errors. Our approach assumes that a dataset is either provided (e.g., image classification) or retrievable via search (e.g., node classification). If no dataset is found, the agent will prompt the user to supply one when in interactive mode. We will revise the text to clarify this assumption. > Writing suggestions **R8** We have addressed all the mentioned issues, including a table with standardized details (see anonymous GitHub). This will be included in the final version. --- Rebuttal Comment 1.1: Comment: Many thanks to the authors for their rebuttal. I consider the majority of points addressed.

1. **Comparison to Agent K**: Many thanks to the authors for pointing out that the source code for Agent K is not yet open-sourced. As such, the authors cannot be expected to run Agent K. The Kaggle competitions that Agent K is evaluated on, however, are present in Figure 8 of the paper. Given that the authors state that AutoML-Agent is capable of tackling a broader range of AutoML problems relative to Agent K, is there any reason why the authors could not evaluate AutoML-Agent on the same Kaggle competitions to enable a direct comparison?

2. **Prompt Sensitivity Analysis**: Many thanks to the authors for clarifying that there is an implicit prompt sensitivity analysis present when considering the constraint-free vs. 
the constraint-aware settings.

I remain in favor of accepting the paper. If a direct comparison against Agent K on a subset of the same Kaggle competitions used in the paper could be provided (or a convincing justification for why this is not possible), I will increase my score.

--- Reply to Comment 1.1.1: Comment: Thank you very much for acknowledging our responses and for giving us the chance to provide further clarifications. We greatly appreciate your support for acceptance. > Comparison to Agent K **R1** Thank you for the clarification and for pointing us to Figure 8. During the initial rebuttal phase, our primary focus was on locating the source code to enable a direct and fair comparison using our experimental setup and the NPS metric. We would like to clarify that while we can indeed evaluate AutoML-Agent on the same Kaggle competition datasets, a fully direct, apples-to-apples comparison remains infeasible due to the absence of Agent K’s numerical results. However, as suggested by the reviewer, we estimated the quantile and task-specific performance of Agent K from Figure 8 using the Kaggle API. We report results on *eight* competitions, selected from both the lowest and highest performance quantiles in each category, in the table below, given time constraints. 
| **Competition ID** | **Leaderboard Quantile** | | **Task-Specific Performance** | |
| ------------------------------------------------------- | ------------------------- | ------------ | ----------------------------- | ------------ |
| (↓ indicates lower task-specific performance is better) | **Agent K** (From Figure 8) | **AutoML-Agent** | **Agent K** (Derived from Rank) | **AutoML-Agent** |
| _No Medal_ | | | | |
| restaurant-revenue-prediction (↓) | 8~9 | 57 | 2279272.777~2280826.272 | 1859766.392 |
| playground-series-s3e14 (↓) | 88~89 | 91 | 331.167~331.173 | 330.141 |
| _Bronze_ | | | | |
| nlp1000-ml-challenge | 75~76 | 29 | 0.993~0.994 | 0.720 |
| dogs-vs-cats-redux-kernels-edition (↓) | 93~94 | 82 | 0.054~0.055 | 0.079 |
| _Silver_ | | | | |
| nlpsci | 89~90 | 89 | 0.809~0.810 | 0.808 |
| home-data-for-ml-course (↓) | 98~99 | 91 | 13187.602~13193.958 | 14869.145 |
| _Gold_ | | | | |
| world-championship-2023-embryo-classification | 90~91 | 100 | 0.571~0.571 | 0.609 |
| sign-language-image-classification | 99~100 | 88 | 0.977~0.978 | 0.971 |

In terms of task-specific downstream performance, **except for the `nlp1000-ml-challenge` and `home-data-for-ml-course` datasets, AutoML-Agent performs comparably or better than Agent K**. It is worth noting that, unlike Agent K, our AutoML-Agent does not leverage iterative feedback directly from leaderboard scores or expert-built, task-specific tools during the search process. We believe that incorporating these kinds of tools could further enhance AutoML-Agent's performance. We hope these experimental results, though not a perfect direct comparison, sufficiently address your concern. > Prompt Sensitivity Analysis **R2** Thank you for acknowledging our response. In addition to the analysis of user prompts, the reviewer may also refer to https://anonymous.4open.science/r/AutoML-Agent/example_plans/prompt_sensitivity.md, which presents results related to the "system prompt" sensitivity.
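Returning to R4 earlier in this rebuttal, the schema-referencing parse of user queries can be made concrete with a minimal sketch. The field names below are illustrative assumptions, not the contents of the actual `/prompt_agent/schema.json`:

```python
import json

# illustrative, simplified stand-in for the Prompt Agent's output schema
SCHEMA_REQUIRED = {"task": str, "dataset": str, "constraints": list}

def validate_request(parsed_json: str) -> dict:
    """Check that an LLM-parsed user query contains the required typed fields."""
    req = json.loads(parsed_json)
    for field, ftype in SCHEMA_REQUIRED.items():
        if not isinstance(req.get(field), ftype):
            raise ValueError(f"missing or mistyped field: {field}")
    return req

llm_output = ('{"task": "image classification", '
              '"dataset": "butterfly images", '
              '"constraints": ["accuracy >= 0.95"]}')
req = validate_request(llm_output)
print(req["task"])  # image classification
```

Validating against an explicit schema like this is what removes the need for instruction fine-tuning the Prompt Agent when the backing model supports structured outputs.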
Summary: The paper introduces AutoML-Agent, a novel multi-agent framework leveraging large language models (LLMs) to automate the entire AutoML pipeline, including data retrieval, preprocessing, model selection, hyperparameter optimization, and deployment. The proposed framework employs a retrieval-augmented planning strategy, specialized agents for decomposing and executing tasks, and multi-stage verification processes to enhance reliability and efficiency. Extensive experimental evaluation demonstrates that AutoML-Agent consistently outperforms various baselines across multiple datasets and tasks, achieving higher success rates, superior downstream task performance, and better computational efficiency.

Claims And Evidence: The authors' claims regarding the superiority of AutoML-Agent in handling the full AutoML pipeline and achieving higher efficiency and performance are convincingly supported by robust empirical evidence. Results from experiments on seven diverse tasks and fourteen datasets show clear performance improvements over existing state-of-the-art AutoML frameworks and LLM-based agents, with success rates and performance metrics (accuracy, RMSLE, F1-score) explicitly presented. The ablation studies effectively support the claims regarding the importance of the retrieval-augmented planning and multi-stage verification mechanisms.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense and align with current practices in the field of AutoML. The comprehensive experimental setup, including diverse data modalities (image, text, tabular, graph, and time series data), and clearly defined metrics (e.g., accuracy, RMSLE, F1-score, Rand index) offer a fair and thorough assessment of the framework's capabilities. The multi-agent architecture, retrieval-augmented planning, and multi-stage verification approaches are logically well-structured and suitable for tackling complex, full-pipeline AutoML challenges.
Theoretical Claims: The paper does not make specific theoretical claims requiring formal proofs.

Experimental Designs Or Analyses: The experimental designs and analyses are sound and rigorous. The comparison against robust baselines (e.g., AutoGluon, DS-Agent, SELA, GPT-3.5, GPT-4) under both constraint-free and constraint-aware scenarios effectively validates the proposed framework. The detailed ablation studies further substantiate the significance of each component, such as retrieval-augmented planning and multi-stage verification.

Supplementary Material: The supplementary material thoroughly supports the main paper, providing detailed pseudocode (Algorithm 1), additional discussions on limitations, prompts, and extended results. The supplementary documentation enhances reproducibility and clarifies the methodological steps, particularly in agent specifications, retrieval-augmented planning, and prompt parsing.

Relation To Broader Scientific Literature: The paper appropriately situates its contributions within the broader literature, effectively referencing previous AutoML frameworks (AutoGluon, AutoML-GPT, SELA), LLM-driven frameworks (DS-Agent, HuggingGPT), and planning methodologies (retrieval-augmented planning, multi-stage verification). Its innovative integration of retrieval-augmented strategies and structured task decomposition clearly addresses existing limitations in the efficiency and generalizability of prior approaches.

Essential References Not Discussed: n/a

Other Strengths And Weaknesses: Potential high computational cost associated with iterative planning and multi-agent verification processes. Dependence on external APIs and models (GPT-4), which could impact cost.

Other Comments Or Suggestions:
- Clarifying the computational and monetary cost implications more explicitly in practical deployments could strengthen the paper's discussion.
- A more explicit discussion of limitations and potential failure modes beyond the provided tasks could further improve the presentation.

Questions For Authors:
- Have you evaluated the robustness of your system when external knowledge sources or APIs provide noisy or outdated information?
- Can you elaborate on your framework's adaptability to ML tasks significantly differing from those tested (e.g., reinforcement learning or recommendation systems)? How might AutoML-Agent be extended to accommodate these areas?

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and encouraging review. We greatly appreciate your positive assessment of our paper's novelty, empirical rigor, and technical soundness. Below, we address your thoughtful concerns.

> High computational cost

**R1** We would like to clarify that, apart from the final implementation verification step, AutoML-Agent relies solely on inference throughout the entire search process, including planning. Unlike many prior works, this design ensures efficiency, as runtime does not scale with dataset size or model complexity. Our modular design also enables parallel execution of sub-tasks, further reducing wall-clock time. That said, while computational demands remain low (as discussed in `Reviewer Son5-R1`), using high-performing LLMs for each agent may still incur monetary costs (as noted in `Reviewer mnKA-R3`).

> Dependence on external APIs and models

**R2** We agree that relying on a proprietary model like GPT-4o entails cost and reliability implications. However, our framework is model-agnostic—agents can operate with any sufficiently capable LLM. We used GPT-4o to demonstrate state-of-the-art performance, but open-source or local models can be substituted to eliminate external API dependencies, albeit with some performance trade-offs. Given the rapid advancements in LLMs, we expect costs to decrease and become increasingly justified by the resulting performance gains.

> Computational and monetary cost implications in practical deployments

**R3** In practical deployments, the computational and monetary costs of running AutoML-Agent primarily depend on the number of planning iterations (typically 3–5), the complexity of the downstream task (e.g., data modality, task type), and the size and access method of the LLM (e.g., API vs. self-hosted). After pipeline generation, the cost shifts to standard training and inference of the downstream models, which depends on the specific components selected.
> Explicit discussion of limitations and potential failure modes

**R4** We have revised the limitations section to include a more explicit discussion of potential failure modes beyond the evaluated tasks. For specific failure cases, please also refer to our response to `Reviewer mnKA-R3`.

> Robustness to noisy or outdated information

**R5** Our design does incorporate measures to be robust to bad information. During RAP, the agent cross-verifies information by retrieving from multiple sources against the user's requirements before deriving insight knowledge, and the multi-stage verification will catch issues if a retrieved piece of instruction leads to an error (since the code won't execute or will produce a poor result, prompting a fix). As the inclusion of outdated information is highly unlikely due to retrieval constraints, we focused our evaluation on robustness to noisy information. We simulated two scenarios:

- Pre-Summary Injection: Injecting unrelated or low-quality examples *before* insight extraction and planning.
- Post-Summary Injection: Injecting noisy examples *after* insight extraction but before planning, i.e., mixing noisy inputs with the useful insights.

Given the user requirements, this noise is generated by an extra agent (i.e., an adversarial agent) prompted to create 'unhelpful' and 'fake' insights that hinder the planning process.

| **Dataset** | **AutoML-Agent** | **w/ Pre-Injection** | **w/ Post-Injection** |
| ---------------------- | ---------------- | -------------------- | --------------------- |
| smoker-status | 0.762 | 0.768 | 0.853 |
| click-prediction-small | 0.352 | 0.347 | 0.133 |
| mfeat-factors | 0.940 | 0.930 | 0.915 |
| wine-quality-white | 0.652 | 0.615 | 0.670 |
| colleges | 0.878 | 0.658 | 0.896 |
| house-prices | 0.090 | 0.089 | 0.087 |
| **_Average_** | 0.612 | 0.568 | 0.593 |

Our results show that AutoML-Agent is robust to such noise.
Its built-in error correction and multi-stage verification mechanisms significantly mitigate the impact of noisy inputs, ensuring that the final model performance **remains largely unaffected**. Thanks to this suggestion, we observed that noise injection can even lead to better performance in particular cases. We conjecture that this may be because the Agent Manager is implicitly forced to further distinguish between useful and non-useful information. Due to the space limit, the generated plans can be found at `/example_plans/plan_with_noises.md` in the anonymous GitHub.

> Adaptability to other ML tasks (e.g., RL or RecSys)

**R6** Please kindly refer to `Reviewer Son5-R5`.
Summary: The paper introduces AutoML-Agent, a multi-agent LLM framework that automates the full AutoML pipeline from data retrieval to model deployment. Unlike prior approaches that focus on specific pipeline components (e.g., hyperparameter optimization or feature engineering), AutoML-Agent leverages retrieval-augmented planning (RAP) and multi-stage verification to ensure correctness, efficiency, and adaptability. The paper presents a modular, multi-agent architecture where specialized agents handle data processing, model selection, hyperparameter tuning, and deployment. The retrieval-augmented planning strategy enhances exploration, while multi-stage verification ensures the correctness of the generated code. Experiments across seven ML tasks and fourteen datasets show that AutoML-Agent outperforms AutoGluon, DS-Agent, GPT-3.5, GPT-4, and human baselines in terms of success rate, normalized performance score, and comprehensive score. Additionally, AutoML-Agent is eight times faster than search-based methods like SELA while maintaining competitive performance.

## update after rebuttal
I thank the authors for the additional clarifications and stay with my evaluation that I consider this paper to be an accept for the conference.

Claims And Evidence: AutoML-Agent automates the entire ML pipeline from data retrieval to deployment: the paper provides an end-to-end framework design with clear descriptions of the agents handling each stage (data, model, deployment), and experiments show successful deployment-ready models.

The retrieval-augmented planning strategy improves search efficiency: the ablation study shows that RAP + plan decomposition outperforms naive planning, confirming its effectiveness.

The multi-stage verification improves code correctness: experimental results indicate that verification prevents failures, reducing deployment errors.
AutoML-Agent is more efficient than search-based AutoML methods like SELA: AutoML-Agent achieves similar performance in the experiments while being eight times faster.

Methods And Evaluation Criteria: The proposed evaluation metrics make sense, as they measure both pipeline correctness and downstream model performance. The benchmark datasets are diverse, ensuring sufficient generalizability. However, scalability testing on large datasets is missing, which could impact real-world applicability.

Theoretical Claims: N/A.

Experimental Designs Or Analyses: Experiments span seven different ML tasks, making the results generalizable in general. The ablation study properly isolates the effects of RAP and multi-stage verification. The hyperparameter study analyzes how varying the number of plans affects performance. However, scalability on large datasets is not evaluated, and the study lacks a statistical significance test in the comparisons with the baselines.

Supplementary Material: No.

Relation To Broader Scientific Literature: The paper covers traditional AutoML techniques such as AutoGluon, AutoSklearn, and TPOT. It also discusses LLM-based AutoML methods and highlights how AutoML-Agent improves upon past work by incorporating multi-agent collaboration and RAP. However, TabPFN, as another very recent AutoML tool, is missing from the comparisons.

Essential References Not Discussed:
Hollmann, Noah, et al. "TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second." The Eleventh International Conference on Learning Representations.
Hollmann, Noah, et al. "Accurate predictions on small data with a tabular foundation model." Nature 637.8045 (2025): 319-326.

Other Strengths And Weaknesses:
# Strengths
- The multi-agent design is a novel approach to AutoML.
- The full-pipeline AutoML capability is impactful for democratizing ML.
- The paper is well-structured and well-written.
# Weaknesses
- Lack of real-world deployment testing.
Other Comments Or Suggestions: The authors should test AutoML-Agent on large-scale datasets and compare it to TabPFN on small-scale tabular datasets.

Questions For Authors:
- How does AutoML-Agent scale on large datasets? Would it remain efficient?
- Can AutoML-Agent be extended to reinforcement learning tasks?
- What are the main failure cases of AutoML-Agent?
- How does the framework handle edge cases in code generation? Do hallucinations still occur?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for recognizing our contributions and for your thoughtful, detailed feedback. Below, we respond to your specific concerns.

> Scalability concerns

**R1** This is an important point. A key motivation behind AutoML-Agent is to reduce the computational overhead of the search process. Our framework achieves this by relying solely on LLM inferences for agent communication and leveraging retrieval-augmented knowledge for planning, avoiding expensive training-feedback loops. `Figure 4c` and `Table 11` demonstrate the scalability of our approach. AutoML-Agent has a time usage standard deviation of only **1 minute**, compared to SELA's **14 minutes**, across datasets ranging **from ~1K to 143K** instances. As a training-free method, AutoML-Agent maintains stable search time, whereas training-based methods like SELA exhibit fluctuations depending on dataset and model size. This suggests our framework scales efficiently without performance degradation. More precisely, our computational complexity is approximately $O(1 + m)$, compared to $O(s + m)$ for training-based methods, where $s$ denotes the search-time training feedback and $m$ the model training time before deployment.

> Lack of a statistical significance test

**R2** Currently, `Tables 5`, `6`, and `7` report the results with standard deviations over *five* independent runs. While we did not include formal statistical tests, reporting standard deviations is a common practice to convey the reliability of performance differences. Although we believe this is sufficient, please let us know if there is a specific test you would like to see.

> Comparisons with TabPFN

**R3** TabPFN is indeed a highly relevant and efficient method for tabular classification. We would like to clarify that TabPFN is *already included* in our baselines under the **Human Models** category ($\S$C.3). For your convenience, we summarize the results below.
| **Models** | **Banana** | **Software** |
| ----------------------------- | ---------- | ------------ |
| Human Models (**TabPFN**) | 0.976 | 0.669 |
| AutoGluon | 0.980 | 0.524 |
| GPT-3.5 | 0.587 | 0.094 |
| GPT-4 | 0.390 | 0.285 |
| DS-Agent | 0.766 | 0.523 |
| ***AutoML-Agent*** | 0.987 | 0.664 |

> Lack of real-world deployment testing

**R4** We acknowledge the importance of real-world deployment testing. To this end, we designed our framework with a deployment stage via a Gradio API, showing a proof-of-concept using benchmark datasets. While full deployment in production environments involves additional considerations (e.g., infrastructure, latency, data privacy, and so on), we view this as a promising direction for future work.

> Extension to RL tasks

**R5** AutoML-Agent's modularity allows for extension to RL tasks. While our current implementation focuses on (un-)supervised pipelines, the framework is not fundamentally limited to these settings. Extending AutoML-Agent to RL would require incorporating domain-specific modules—for example, agents for environment interaction, action space design, and reward handling (see https://www.automl.org/autorl-survey). As noted in the paper, tasks such as RL or RecSys would benefit from additional agents to manage actor-environment interactions and reward modeling. Thanks to its modular design, one could integrate an "RL agent" that interfaces with an environment (e.g., OpenAI Gym), while the planner generates RL-specific steps such as policy training and evaluation. Although this extension is certainly feasible, it could be substantial enough that an AutoML-Agent for RL—perhaps termed AutoRL-Agent—would merit contributions worthy of a separate paper.

> What are the main failure cases of AutoML-Agent?

**R6** Please kindly refer to `Reviewer mnKA-R3`.

> Edge cases in code generation

**R7** We appreciate the reviewer's concern regarding LLM hallucinations in code generation.
Our framework is specifically designed to mitigate this issue through two core mechanisms: RAP and multi-stage verification.

- RAP grounds the planning agent's decisions in real-world information by retrieving external resources, rather than relying solely on the LLM's internal memory, which can produce hallucinated content. This ensures that generated plan steps are more likely to be valid and based on established approaches.
- Our multi-stage verification process acts as a safety net. If the LLM produces incorrect or nonsensical code, the implementation stage will flag it, triggering a corrective attempt.

This dual mechanism—prevention through retrieval and correction through verification—has proven effective (please refer to `Reviewer EaoR-R5` for robustness experiments on noisy information). To further reduce risk, we provide agents with structured pipeline skeletons, which help constrain generation.
Summary: In this paper, the authors propose a novel multi-agent framework tailored for full-pipeline AutoML, including initialization, planning, and execution, incorporating five types of agents: Agent Manager, Prompt Agent, Data Agent, Model Agent, and Operation Agent. Results on 14 tasks against 5 baselines demonstrate the stronger performance of the proposed framework.

Claims And Evidence: The introduction of the paper is well written, and the challenges of this topic are well motivated. The authors aim to propose an AutoML multi-agent system that can take care of both the data and model aspects and try to solve the challenges of planning and accurate implementation.

Methods And Evaluation Criteria: Although the method is complex and its full details are not easy to follow, the authors provide clear figures and an appendix that help the reader understand it.

Theoretical Claims: N/A

Experimental Designs Or Analyses: The experiments are sound because of sufficient datasets, baselines, and experiments. The biggest concern is that the ablation study is not sufficient. For instance, is each agent important? How do these workers contribute to the final score? What are the errors or detailed analyses of these workers? Also, how did you design the prompt for each agent? Why not use others? Since the system is rather complex, a very careful analysis is needed to make it clear how it works and how it makes errors.

Supplementary Material: Did not see supplementary material.

Relation To Broader Scientific Literature: The AutoML domain. Full-stack AutoML using LLMs is the main related work of this paper.

Essential References Not Discussed: Generally sufficient.

Other Strengths And Weaknesses: The paper is well written and well motivated.

Other Comments Or Suggestions: See above

Questions For Authors: See above

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments on the novelty, motivation, and empirical rigor of our work. We are glad to address your concerns below.

> Method complexity

**R1** Thank you for pointing this out. In the final version, which permits an extra page, we will **move key details currently located in the appendix, such as pseudocode summaries, into the main text**. We hope these changes will make the framework easier to understand for a broader audience.

> Is each agent important? How do these workers contribute to the final score?

**R2** Yes, each agent is essential. Without even one of them, the entire framework would not function properly, considering the current implementation. This is due to a tight coupling between each agent, integrated tool use, and its corresponding step in the pipeline at the code level for stable interoperability. As a result, removing any single agent would cause the system to fail at runtime. Specifically, the *Prompt Agent* ensures that user instructions are correctly interpreted and converted into structured input, enabling reliable downstream planning and execution. The *Data Agent* performs crucial data-related tasks that inform model search with domain-aware characteristics, directly enhancing model quality. The *Model Agent* drives performance by executing training-free model search, hyperparameter tuning, and profiling to select optimal candidates, thereby raising the normalized performance score. The *Operation Agent* ensures that high-performing models are correctly implemented and executable, completing the pipeline with valid, deployable code—vital for success rate metrics. Finally, the *Agent Manager* orchestrates the entire process, ensuring that all steps execute in the correct sequence and that each agent's output is validated before proceeding, thereby enforcing the correctness of the entire pipeline.

> What are the errors or detailed analyses of these workers?
**R3** In this submission, we provide intermediate outputs at each stage ($\S$E) to illustrate what each agent produces—for example, the parsed JSON from the Prompt Agent and outputs from the Data and Model Agents. Building on this, we will expand our analysis to include typical failure modes. For instance, the Data Agent may mishandle uncommon data formats (e.g., failing to impute missing values or referencing incorrect column names), while the Model Agent may select suboptimal models under constrained search spaces. **Our multi-stage verification process detects and corrects most of these issues.** As shown in $\S4.3$, certain variants fail to produce runnable code without specific modules, whereas the full system identifies and fixes such errors via feedback. The Agent Manager flags plans that violate user constraints (e.g., low accuracy), while implementation verification catches runtime errors (e.g., incomplete code or incorrect references) from the Operation Agent and requests fixes. Notably, we also encountered consistent challenges with smaller or less capable models. As discussed in $\S$B.4, models like LLaMA-2-7B, Qwen1.5-14B, and even GPT3.5 often failed at complex planning or executable code generation. Common issues included incomplete code, altered comments without proper continuation, or unchanged code outputs. These patterns align with prior findings in DS-Agent and Data Interpreter, suggesting such limitations are systemic to smaller models rather than specific to our framework. We aim to reassure the reviewer that we have carefully studied “how it makes errors” and that the final results are reliable. > How did you design the prompt for each agent? **R4** We deliberately design prompts to optimize agent behavior with specific instructional steps based on role: - Creative or interpretive agents (Agent Manager, Prompt Agent) benefit from low-pressure, neutral framing to avoid overconfidence. 
The Prompt Agent adopts a neutral “assistant project manager” persona to support coordination and interpretation. - Execution-oriented agents (Data, Model, Operation Agents) respond better to strong, authoritative personas that promote precision and task adherence. The Data Agent is framed as “the world’s best data scientist” to encourage confident, detailed analysis and emphasize responsibilities. These specialized prompts guide the LLM toward more accurate outputs compared to using a single, general-purpose prompt. Persona-driven system prompts reliably shape agent behavior. As suggested by `Reviewer L7kx`, we tested *five* prompt variations for each agent, differing in tone and task specificity. Due to space limitations, the experimental results with example outputs can be found at `/example_plans/prompt_sensitivity.md` in our anonymous GitHub repository. Overall, agents are not highly sensitive to exact phrasing **as long as their roles were clearly defined**, which is also reinforced through the user prompts—making the framework robust to variations in system prompts.
Patch-wise Structural Loss for Time Series Forecasting
Accept (poster)
Summary: Traditional loss functions, such as Mean Squared Error, often miss structural dependencies in time series forecasting. This paper proposes a Patch-wise Structural Loss to improve accuracy by focusing on patch-level structural alignment. It uses Fourier-based Adaptive Patching to divide the series and incorporates local statistical features—correlation, variance, and mean—with dynamic gradient weighting. Testing shows enhanced forecasting performance across multiple datasets and models.

Claims And Evidence: There are no obvious problematic claims in the paper.

Methods And Evaluation Criteria: The design and presentation of the method both make sense. The evaluation of the method also aligns with the established standards in the field.

Theoretical Claims: No question about theoretical claims.

Experimental Designs Or Analyses: The design of the main experiments is comprehensive, as the PS loss is applied to various model architectures and achieves relatively good results. The ablation experiments are thorough, and their design and analysis help me better understand the method.

Supplementary Material: I primarily examine the experimental section in the appendix, focusing on the results presented in Appendix D to H.

Relation To Broader Scientific Literature:
1. Most current time series works, such as PatchTST [1] and iTransformer [2], use MSE loss as the optimization objective. This paper, however, highlights the limitations of using MSE loss for optimization.
2. The paper employs the Pearson Correlation Coefficient [3] to characterize correlation loss and the Kullback–Leibler (KL) divergence to characterize variance loss.
3. Inspired by previous works on balancing multi-task losses, such as [4-5], the paper proposes Gradient-based Dynamic Weighting to achieve balanced optimization.
References:
- [1] A Time Series is Worth 64 Words: Long-term Forecasting with Transformers
- [2] iTransformer: Inverted Transformers Are Effective for Time Series Forecasting
- [3] Pearson correlation coefficient
- [4] Multi-task learning as multi-objective optimization
- [5] SparseTSF: Modeling long-term time series forecasting with *1k* parameters

Essential References Not Discussed: No essential references left undiscussed.

Other Strengths And Weaknesses:
Strengths:
1. The paper is well written and easy to understand.
2. The paper includes comprehensive evaluations. The diverse ablation study helps to understand the proposed approach.
Weaknesses:
1. The shortcomings of MSE presented in the paper concern not only its role as an optimization objective but also its role as a metric. However, to align with prior work in the field, the paper still uses MSE and MAE as the main metrics. Figure 3 demonstrates how PS loss contributes to the final prediction results, but I believe the authors could provide more quantitative metrics to further illustrate this point.

Other Comments Or Suggestions: No more comments or suggestions.

Questions For Authors: Question 1: Why use adaptive patching? What are the drawbacks of fixed-length patching?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
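The review notes that the paper builds its correlation loss on the Pearson Correlation Coefficient and its variance loss on KL divergence, computed per patch. As a rough illustration of how such patch-level terms can be computed, here is a minimal NumPy sketch; the function name, the non-overlapping patching, and the zero-mean Gaussian form assumed for the KL variance term are hypothetical choices for illustration, not the paper's exact formulation.

```python
import numpy as np

def patch_losses(pred, true, patch_len):
    """Illustrative patch-wise correlation / variance / mean loss terms."""
    eps = 1e-8
    n = (len(pred) // patch_len) * patch_len  # drop the remainder
    P = np.asarray(pred[:n], dtype=float).reshape(-1, patch_len)
    T = np.asarray(true[:n], dtype=float).reshape(-1, patch_len)

    # Correlation term: 1 - Pearson correlation, averaged over patches.
    Pc = P - P.mean(axis=1, keepdims=True)
    Tc = T - T.mean(axis=1, keepdims=True)
    corr = (Pc * Tc).sum(axis=1) / (
        np.sqrt((Pc ** 2).sum(axis=1) * (Tc ** 2).sum(axis=1)) + eps)
    l_corr = float(np.mean(1.0 - corr))

    # Variance term: KL divergence between zero-mean Gaussians carrying the
    # patches' variances (closed form; assumed here purely for illustration).
    vp, vt = P.var(axis=1) + eps, T.var(axis=1) + eps
    l_var = float(np.mean(0.5 * (np.log(vt / vp) + vp / vt - 1.0)))

    # Mean term: squared difference of patch means.
    l_mean = float(np.mean((P.mean(axis=1) - T.mean(axis=1)) ** 2))
    return l_corr, l_var, l_mean

rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 96)
true = np.sin(t)
pred = true + 0.1 * rng.standard_normal(96)
print(patch_losses(pred, true, patch_len=12))
```

For a perfect prediction all three terms vanish, which is the basic sanity check for a structural loss of this kind; each term is then weighted (the paper's GDW adjusts these weights from gradient magnitudes) and added to the point-wise objective.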
Rebuttal 1: Rebuttal: Thank you very much for your valuable feedback. Below are our responses to your concerns and suggestions.

# [W1] Additional quantitative metrics for evaluating PS loss performance

To provide a more comprehensive evaluation, we incorporated additional metrics: **Dynamic Time Warping (DTW)**, **Time Distortion Index (TDI)**, and **Pearson Correlation Coefficient (PCC)**, to assess the performance of PS loss. A detailed explanation of these metrics is provided below:

| Metric | Definition | Interpretation |
|--|--|--|
| Dynamic Time Warping (DTW) | Measures the minimum cumulative distance between two sequences after applying an optimal non-linear alignment. | Lower DTW indicates that the prediction closely matches the ground truth after optimal alignment. |
| Time Distortion Index (TDI) | Quantifies the amount of temporal warping or distortion required to achieve the optimal alignment obtained by DTW. | Lower TDI indicates fewer temporal adjustments for optimal alignment, while a higher TDI signifies greater distortion. |
| Pearson Correlation Coefficient (PCC) | Measures the linear relationship between two sequences. | Higher PCC indicates better preservation of the sequence's overall trend and structure. |

On the iTransformer model, **PS loss consistently improves all three shape-aware metrics**, indicating better structural alignment. On the ETTh2 dataset, the iTransformer trained with MSE achieves lower DTW scores, reflecting smaller numerical differences after optimal alignment. However, the higher TDI in this case suggests that the alignment requires more extensive temporal warping, which indicates greater structural distortion compared to the forecasts generated using PS loss.
| Metrics | DTW | | TDI | | PCC | |
|-|-|-|-|-|-|-|
| Loss | MSE | **+PS** | MSE | **+PS** | MSE | **+PS** |
| ETTh1 | 7.355 | **7.324** | 7.888 | **6.959** | 0.514 | **0.530** |
| ETTh2 | **6.891** | 7.016 | 24.723 | **22.705** | 0.299 | **0.342** |
| ETTm1 | 6.568 | **6.435** | 12.451 | **11.381** | 0.538 | **0.557** |
| ETTm2 | 5.913 | **5.611** | 26.969 | **22.495** | 0.325 | **0.387** |
| Weather | 5.410 | **5.409** | 41.440 | **40.343** | 0.324 | **0.352** |

Due to space limitations, we report the average metric values across all forecasting lengths. **Please refer to ([Table 6](https://anonymous.4open.science/r/PS_Re/T6.pdf)) for full results.**

# [Q1] Drawbacks of fixed-length patching

Fixed-length patching requires a grid search over a predefined set of patch lengths, which introduces **computational overhead and lacks adaptability across datasets**.

| Horizon \ Fixed patch length | 3 | 6 | 12 | 24 | 48 | 96 |
|-|-|-|-|-|-|-|
| 96 | 0.379 | **0.378** | 0.379 | 0.383 | 0.383 | 0.386 |
| 192 | 0.430 | 0.429 | **0.428** | 0.431 | 0.431 | 0.433 |
| 336 | 0.473 | **0.473** | 0.474 | 0.480 | 0.480 | 0.483 |
| 720 | 0.496 | 0.499 | **0.480** | 0.493 | 0.493 | 0.508 |
| Avg. | 0.444 | 0.445 | **0.440** | 0.446 | 0.450 | 0.453 |

In our ETTh1 experiments, the best-performing fixed patch size was found to be $P = 12$, which matches the patch length estimated by our adaptive patching strategy using the dominant period $p$. This demonstrates that our method can **automatically identify an appropriate patch length** without manual tuning.
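The adaptive choice of $P$ described in [Q1] can be sketched with a simple FFT-based dominant-period estimate; the helper name and the direct mapping from dominant period to patch length are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def adaptive_patch_len(x, min_len=2):
    """Estimate a patch length from the dominant Fourier period of x (sketch)."""
    x = np.asarray(x, dtype=float)
    spec = np.abs(np.fft.rfft(x - x.mean()))
    spec[0] = 0.0                      # ignore any residual DC component
    k = int(np.argmax(spec))           # dominant frequency bin
    period = len(x) / max(k, 1)        # corresponding period in time steps
    return max(min_len, int(round(period)))

# A series whose dominant period is 12 yields a patch length of 12,
# matching the best fixed patch length found by grid search on ETTh1.
t = np.arange(96)
print(adaptive_patch_len(np.sin(2 * np.pi * t / 12)))  # → 12
```

This replaces the grid search over fixed patch lengths with a single spectral estimate per series, which is the efficiency argument made in the rebuttal.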
Summary: This paper proposes the Patch-wise Structural (PS) loss function for time series forecasting. The PS loss improves the alignment of local statistical properties (correlation, variance, and mean), addressing the limitations of traditional point-wise loss functions like MSE. By incorporating patch-level analysis, PS loss enhances the ability to model complex temporal structures. Extensive experiments on 7 real-world time series datasets demonstrate that PS loss significantly outperforms traditional methods, improving forecasting accuracy across various models, including LLM-based forecasting models.

## update after rebuttal
I support acceptance.

Claims And Evidence: The claims made in the paper are well supported by experimental evidence. The experiments in the submission and appendix demonstrate that the proposed method is effective and robust across different models and architectures. The source code and detailed experimental procedures further enhance the study's reproducibility.

Methods And Evaluation Criteria: The methods and evaluation criteria used in this paper are well suited to the problem of time series forecasting. The authors choose relevant benchmark datasets and employ suitable evaluation metrics to assess model performance. The integration of PS loss with MSE is clearly explained, and the experimental setup is sound. The use of both quantitative and qualitative results strengthens the validity of the conclusions.

Theoretical Claims: I have checked the proofs and the theoretical claims behind PS loss, which are robust, and did not find any issues.

Experimental Designs Or Analyses: The experimental design is solid and thorough, and the results support the claims made in the paper. However, there are several issues that need to be discussed:
1. The paper does not provide a detailed comparison between the GDW strategy and grid search for loss coefficient selection.
It would be helpful if the authors could offer a deeper analysis of the GDW strategy's effectiveness.

2. The authors should provide visualizations or analyses showing how the different loss weights evolve during training, to better understand how the GDW strategy influences model optimization.

Supplementary Material: I have reviewed the supplementary material, including the experimental setup and the source code, and found no errors. The supplementary material is consistent with the content of the main paper and provides additional clarity on the implementation and experimental procedures.

Relation To Broader Scientific Literature: The proposed PS loss builds on existing time series forecasting loss functions but introduces a more flexible and localized approach. The introduction of patch-wise structural alignment is a novel contribution that distinguishes this work from existing methods. This innovation positions PS loss as a valuable contribution to the field of time series forecasting and makes it relevant to both academic research and practical applications.

Essential References Not Discussed: The authors have effectively cited and discussed relevant work in time series forecasting, loss functions, and the patching mechanism in time series forecasting.

Other Strengths And Weaknesses:

Strengths:
1. The paper presents a clear and novel contribution to time series forecasting by addressing the limitations of traditional loss functions through the seamless integration of PS loss, which provides a more precise method for structural alignment and achieves more practical predictions.
2. The gradient-based dynamic weighting strategy is a novel contribution that enhances the effectiveness of PS loss by adjusting the weight of each component based on gradient magnitudes, improving robustness without the computational cost of grid search.

Weaknesses:
1.
The paper includes a sensitivity analysis of the hyperparameters $\lambda$ and $\delta$, but it lacks a detailed explanation of how they vary in different scenarios. It is recommended that the authors clarify the parameter settings.

Other Comments Or Suggestions: Minor inconsistency in notation: In Figure 2, $\alpha$ is used for $L_{Corr}$, $\beta$ for $L_{Mean}$, and $\gamma$ for $L_{Var}$, but these notations are inconsistent with the rest of the manuscript and formulas. It would be beneficial to unify the notation.

Questions For Authors:
1. Regarding the ablation study, could the authors explore whether PS loss can replace MSE loss entirely while achieving similar results in terms of forecasting accuracy?
2. Since PS loss involves local analysis, how does it affect training time, especially for large-scale datasets such as ECL? Would training time significantly increase for large datasets?
3. In the zero-shot forecasting experiment, do longer forecasting horizons benefit more from PS loss, or is its impact more significant in short-term forecasting?
4. How does PS loss compare with other loss functions such as FreDF (Learning to Forecast in Frequency Domain)? Does the combination of PS loss with these other functions enhance forecasting performance?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you very much for your valuable feedback. Below are our responses to your concerns and suggestions.

# [E1] Comparison between GDW and grid-search

To evaluate GDW against a traditional grid search for selecting loss weights, we conducted experiments on the ETTh1 dataset using iTransformer. Both methods use the same overall PS loss weight $\lambda$. For grid search, coefficients $\alpha$, $\beta$, and $\gamma$ were chosen from {0.3, 0.5, 0.7, 1.0}, totaling **64 runs per prediction length**. We report the best and average performance from grid search for comparison:

|Method|GDW||Grid Search (Best)||Grid Search (Average)||
|-|-|-|-|-|-|-|
|Metric|MSE|MAE|MSE|MAE|MSE|MAE|
|96|**0.379**|**0.396**|0.380|**0.396**|0.385|0.398|
|192|**0.428**|**0.424**|**0.428**|**0.424**|0.432|0.426|
|336|0.474|0.453|**0.473**|**0.450**|0.483|0.456|
|720|**0.480**|**0.478**|0.483|0.479|0.513|0.494|
|Avg|**0.440**|0.438|0.441|**0.437**|0.453|0.444|

**GDW achieves performance comparable to the best grid-searched results, while avoiding exhaustive tuning**. Moreover, it reflects the intuition that the weights of correlation, variance, and mean should evolve dynamically during training to maintain balanced attention across all three loss terms, which static coefficients cannot assure.

# [E2] Visualization of loss weights generated by GDW

We visualize the evolution of weights generated by GDW in [Figure 1](https://anonymous.4open.science/r/PS_Re/F1.pdf) and made the following observations:

- **Weight Range:** The weights for correlation, variance, and mean have different ranges, reflecting the inherent variation in their gradient magnitudes, which highlights the need for adaptive balancing.
- **Weight Evolution:** Correlation weight tends to decrease, while variance and mean weights increase. This does not imply shifting focus, but rather ensures equilibrium among components, allowing structural alignment to be preserved during optimization.
This confirms that GDW adaptively balances multiple objectives throughout training, improving stability and convergence.

# [W1] Hyperparameter settings

The PS loss weight $\lambda$ is selected from {0.1, 0.3, 0.5, 0.7, 1.0, 2.0, 3.0, 5.0, 10.0}. The patch length threshold $\delta$ is chosen from {24, 48}.

# [Q1] PS loss as a standalone objective

Results show that PS loss alone yields comparable accuracy to MSE+PS, demonstrating its effectiveness as a **standalone optimization objective** ([Table 8](https://anonymous.4open.science/r/PS_Re/T8.pdf)).

|Model|iTransformer||TimeMixer||
|-|-|-|-|-|
|Loss|MSE+PS|PS Only|MSE+PS|PS Only|
|ETTh1|0.440|**0.439**|0.437|**0.429**|
|ETTh2|**0.375**|0.380|0.369|**0.364**|
|ETTm1|**0.396**|**0.396**|**0.375**|0.377|
|ETTm2|**0.281**|0.282|**0.270**|0.274|
|Weather|**0.253**|**0.253**|0.243|**0.242**|

# [Q2] PS loss complexity on large datasets

We report the empirical runtime cost by measuring the average **seconds per epoch** during training using **iTransformer** across three datasets: **ETTh1 (small), Weather (medium), and ECL (large)**:

|Dataset|MSE|PS|Time Increase|
|-|-|-|-|
|ETTh1|1.96|2.66|0.71|
|Weather|10.63|14.04|3.40|
|ECL|25.02|30.20|5.18|

Despite the added cost, **the runtime increase is modest and justified by performance gains**.

# [Q3] Zero-shot performance across forecasting lengths

We extended zero-shot experiments based on iTransformer to forecast lengths of **96, 336, and 720**, in addition to 192 (reported in the paper). PS loss improved accuracy in **33 out of 36** settings, confirming its robustness across both short- and long-term horizons. See [Table 9](https://anonymous.4open.science/r/PS_Re/T9.pdf) for details.
|Model|96||336||720||
|-|-|-|-|-|-|-|
|Loss Function|MSE|+PS|MSE|+PS|MSE|+PS|
|ETTh1→ETTh2/m1/m2|0.499|**0.446**|0.537|**0.575**|0.579|**0.566**|
|ETTh2→ETTh1/m1/m2|0.626|**0.540**|0.641|**0.614**|0.686|**0.645**|
|ETTm1→ETTh1/h2/m2|0.420|**0.385**|0.514|**0.472**|0.550|**0.506**|
|ETTm2→ETTh1/h2/m1|0.622|**0.485**|0.792|**0.541**|0.870|**0.554**|
|Imp|-|**15.58%**|-|**11.39%**|-|**15.42%**|

# [Q4] Combination of PS loss and FreDF loss

FreDF focuses on frequency-domain alignment to **mitigate label autocorrelation**, while PS loss emphasizes **patch-wise structural alignment** in the time domain. Their goals are complementary. We evaluated MSE+FreDF, MSE+PS, and MSE+PS+FreDF using iTransformer as backbone.

|Loss|MSE+FreDF||MSE+PS||MSE+PS+FreDF||
|-|-|-|-|-|-|-|
|Dataset|MSE|MAE|MSE|MAE|MSE|MAE|
|ETTh1|0.443|0.437|0.440|0.438|**0.436**|**0.433**|
|ETTm1|0.404|0.406|**0.396**|**0.397**|**0.396**|**0.397**|

Results show that combining the two losses yields **either improved or comparable performance**, supporting their compatibility. [Table 10](https://anonymous.4open.science/r/PS_Re/T10.pdf) provides full results.

---

Rebuttal Comment 1.1:

Comment: The authors have provided clear and satisfactory responses to my concerns. Their clarification of the generalization issue and the motivation behind modeling patch-level alignment is convincing. The novelty of the proposed framework is better justified, and the new experiments and metrics further support the method's effectiveness. I am now more confident in the contribution of this work and support its acceptance.

---

Reply to Comment 1.1.1:

Comment: Thank you very much for your thoughtful review of our work. We sincerely appreciate your valuable feedback and your confidence in our contribution. Thank you once again for your insightful suggestions and continued support!
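The gradient-based dynamic weighting discussed in this thread (inverse-magnitude balancing of the three terms, with the mean-loss weight refined by the current shape alignment) can be sketched as below. The normalization choice and the $(c+v)/2$ scaling factor are illustrative assumptions here, not the paper's exact Equations (10)-(12).

```python
import numpy as np

def gdw_weights(g_corr, g_var, g_mean, c, v, eps=1e-8):
    """Illustrative gradient-based dynamic weighting (assumed form).

    Each component weight is inversely proportional to its gradient norm,
    so no single term dominates; the mean-loss weight is further scaled by
    the correlation/variance alignment scores c, v in [0, 1], shifting
    attention to value offsets only after the shape is aligned.
    """
    norms = np.array([np.linalg.norm(g) for g in (g_corr, g_var, g_mean)])
    w = norms.mean() / (norms + eps)   # balance by gradient magnitude
    w[2] *= (c + v) / 2.0              # refine the mean-loss weight
    return w

# Early training: large correlation gradients, poor alignment (c = v = 0.2),
# so the correlation term is down-weighted and the mean term stays small.
w = gdw_weights(np.full(8, 2.0), np.full(8, 1.0), np.full(8, 1.0), c=0.2, v=0.2)
```

As $c$ and $v$ rise during training, the mean-loss weight grows, matching the "shape first, offset later" behavior the rebuttal describes in [E2].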
Summary: Most previous time series forecasting models use MSE as the loss function, which treats each time step independently and neglects the structural dependency among steps. To fill the gap, this work proposes the Patch-wise Structural (PS) Loss. PS Loss first splits the target series into patches, with patch size determined by FFT. Then correlation, variance, and mean losses are computed within each patch and averaged. A gradient-based dynamic weighting mechanism is used to balance the weights of the three losses during training. Experiments on real-world datasets show that PS loss can boost the forecasting accuracy of both traditional and LLM-based models.

Claims And Evidence: Claims are well supported.

Methods And Evaluation Criteria: Overall, the methods and evaluation are convincing. I have a few minor questions or concerns as follows:

1. PS loss utilizes FFT to detect the period in the target series for patching. What if there is no periodic pattern in the target? How does PS loss perform on such datasets (considering most datasets used in the experiments have obvious daily periods)?
2. In gradient-based dynamic weighting, why does the mean loss require further adjustment by Equation 12 among the three losses? What is the purpose behind this design, and has an ablation study been conducted on it?
3. PS loss focuses on structural dependency, but the evaluation metrics are still point-wise MSE and MAE, which may not effectively measure structural consistency. The authors should not be criticized for this, as they are standard metrics. However, I would still like to inquire whether there are other candidate metrics (like DTW) that could better reflect structural information.

Theoretical Claims: Not applicable; no new theoretical claims are proposed.

Experimental Designs Or Analyses:

1. Considering that FFT brings additional computation cost, how is the efficiency of PS loss w.r.t. different forecasting lengths and channel numbers?
2.
Can PS loss alone, without MSE, be used as the loss function? How does it perform?

Supplementary Material: I have reviewed all the appendix.

Relation To Broader Scientific Literature: The primary objective of this paper is to enhance time series forecasting, with outcomes that can be effectively applied across diverse downstream domains.

Essential References Not Discussed: Not applicable; references are generally comprehensive.

Other Strengths And Weaknesses: Overall, this is a commendable work. The topic is significant, as most recent studies have primarily focused on backbone designs, leaving loss functions relatively underexplored. The presentation is clear and easy to follow. Should the authors adequately address my concerns, I would be happy to raise the score.

Other Comments Or Suggestions: See my comments and suggestions above.

Questions For Authors: See my questions above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you very much for your valuable feedback. Below are our responses to your concerns and suggestions.

# [M1] PS loss performance on non-periodic targets

When there is no clear periodic pattern in the target series, the dominant frequency—i.e., the one with the highest amplitude in the FFT spectrum—does not necessarily correspond to a true periodic component in the data. Instead, it typically falls into one of two categories:

- **Short period ($p<\delta$)**: This often results from high-frequency components such as local fluctuations or noise. In this case, Equation (3) yields a short patch length, which still allows the model to focus on finer-grained local structure.
- **Long period ($p>\delta$)**: This typically corresponds to low-frequency background components or weak global trends. In this case, the patch length will be capped at $\delta$ to prevent excessively large patches that could hinder fine-grained comparisons.

This design allows PS loss to adapt to both periodic and non-periodic series, using frequency content to guide patch granularity, while $\delta$ prevents overly large patches. On the Exchange dataset, which lacks a clear periodic pattern, PS loss still improved MSE by **6.43%** on DLinear, demonstrating its effectiveness.

# [M2] Purpose of mean loss refinement

The purpose behind this design is to **first focus on aligning the shape of the series, and then gradually increase attention to the value offset**. As correlation and variance alignment improve during training (indicated by increasing $c$, $v$), the model allocates more weight to $L_{mean}$ to refine value-level offsets. Ablation on iTransformer and TimeMixer (ETTh1) confirms its effectiveness.
|Method|iTrans*+PS||W/o c&v||TimeM*+PS||W/o c&v||
|-|-|-|-|-|-|-|-|-|
|Metric|MSE|MAE|MSE|MAE|MSE|MAE|MSE|MAE|
|96|**0.379**|**0.396**|0.380|0.396|**0.366**|**0.392**|0.368|0.391|
|192|**0.428**|**0.424**|0.429|0.425|0.421|0.421|**0.418**|**0.420**|
|336|**0.474**|**0.453**|0.480|0.458|**0.489**|**0.453**|0.498|0.457|
|720|**0.480**|**0.478**|0.505|0.492|**0.474**|**0.463**|0.480|0.464|
|Avg|**0.440**|**0.438**|0.448|0.443|**0.438**|**0.432**|0.441|0.433|

# [M3] Additional metrics for structural evaluation

Beyond MSE/MAE, we include additional shape-aware metrics: **DTW, TDI, and PCC**, using iTransformer for evaluation (please see Reviewer Pd9N [W1] for metric details). **PS loss consistently improves all three metrics**, indicating better structural alignment ([Table 6](https://anonymous.4open.science/r/PS_Re/T6.pdf)).

|Metrics|DTW||TDI||PCC||
|-|-|-|-|-|-|-|
|Loss|MSE|+PS|MSE|+PS|MSE|+PS|
|ETTh1|7.355|**7.324**|7.888|**6.959**|0.514|**0.530**|
|ETTh2|**6.891**|7.016|24.723|**22.705**|0.299|**0.342**|
|ETTm1|6.568|**6.435**|12.451|**11.381**|0.538|**0.557**|
|ETTm2|5.913|**5.611**|26.969|**22.495**|0.325|**0.387**|
|Weather|5.410|**5.409**|41.440|**40.343**|0.324|**0.352**|

# [E4] Time complexity analysis of PS loss

### **1. Theoretical time complexity analysis**

We analyze the time complexity of PS loss with respect to the forecast length $T$, number of channels $C$, and hidden dimension $d$. The overall complexity arises from three main components:

- **Fourier-based Adaptive Patching (FAP):** The complexity of this component is dominated by the Fast Fourier Transform (FFT), which is $O(T\log T)$ per channel. Since FFT is applied to each of the $C$ channels, the total time complexity is $O(C\cdot T\log T)$.
- **Patch-wise Structural Loss (PS):** The series is split into $N \approx \frac{2T}{P}$ patches, where $P$ is the patch length. Calculating correlation, variance, and mean over each patch requires $O(P)$ operations.
Given $C\cdot N$ patches, the total complexity becomes $O(C\cdot N\cdot P) = O(C\cdot T)$.
- **Gradient-based Dynamic Weighting (GDW):** The gradient computation for each loss component w.r.t. the model output has shape $d\cdot T$, leading to a complexity of $O(d\cdot T)$.

Therefore, the overall time complexity of PS loss is **$O(C\cdot T\log T+C\cdot T+d\cdot T)$**.

### **2. Actual run time overhead**

We report the empirical runtime cost by measuring **seconds per epoch** using **iTransformer** across three datasets: **ETTh1 (small), Weather (medium), and ECL (large)**:

|Dataset|MSE|PS|Time Increase|
|-|-|-|-|
|ETTh1|1.96|2.66|0.71|
|Weather|10.63|14.04|3.40|
|ECL|25.02|30.20|5.18|

Despite the added cost, **the runtime increase is modest and justified by performance gains**.

# [E5] PS loss performance without MSE

Results show that PS loss alone yields comparable accuracy to MSE+PS, demonstrating its effectiveness as a **standalone optimization objective** ([Table 8](https://anonymous.4open.science/r/PS_Re/T8.pdf)).

|Model|iTransformer||TimeMixer||
|-|-|-|-|-|
|Loss|MSE+PS|PS Only|MSE+PS|PS Only|
|ETTh1|0.440|**0.439**|0.437|**0.429**|
|ETTh2|**0.375**|0.380|0.369|**0.364**|
|ETTm1|**0.396**|**0.396**|**0.375**|0.377|
|ETTm2|**0.281**|0.282|**0.270**|0.274|
|Weather|**0.253**|**0.253**|0.243|**0.242**|

---

Rebuttal Comment 1.1:

Comment: Thank you for your response, especially for conducting additional experiments during the rebuttal process. I will maintain my score of 3 and vote for acceptance. Moreover, I suggest that the experiments on complexity and new metrics (including their definitions, calculation methods, etc.), as well as the corresponding analysis, should be added to the final camera-ready version.

---

Reply to Comment 1.1.1:

Comment: Thank you very much for your thoughtful review of our work. We sincerely appreciate your valuable feedback and will ensure the additional experiments and analysis are incorporated into the final version.
Thank you once again for your insightful suggestions and support!
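The patch-wise structural terms discussed throughout this thread (per-patch correlation, variance, and mean discrepancies) can be sketched as follows. This is a minimal sketch combining the three terms with equal weights; the paper's exact term definitions and its GDW-based weighting are not reproduced here.

```python
import numpy as np

def ps_loss(pred, target, P):
    """Sketch of a patch-wise structural loss: per-patch correlation,
    variance, and mean discrepancies, combined with equal weights
    (an assumption; the paper balances the terms dynamically via GDW)."""
    n = (len(pred) // P) * P
    yp = pred[:n].reshape(-1, P)
    yt = target[:n].reshape(-1, P)
    yp_c = yp - yp.mean(axis=1, keepdims=True)
    yt_c = yt - yt.mean(axis=1, keepdims=True)
    denom = np.linalg.norm(yp_c, axis=1) * np.linalg.norm(yt_c, axis=1) + 1e-8
    r = (yp_c * yt_c).sum(axis=1) / denom                        # per-patch Pearson correlation
    l_corr = np.mean(1.0 - r)                                    # shape mismatch
    l_var = np.mean((yp.var(axis=1) - yt.var(axis=1)) ** 2)      # amplitude mismatch
    l_mean = np.mean((yp.mean(axis=1) - yt.mean(axis=1)) ** 2)   # level offset
    return l_corr + l_var + l_mean
```

A perfect forecast drives all three terms to (near) zero, while a forecast with the right shape but a constant offset is penalized only through the mean term, which is exactly the local structural sensitivity that point-wise MSE lacks.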
Summary: The authors propose a novel Patch-wise Structural (PS) loss, which is designed to enhance structural alignment by comparing time series at the patch level. By leveraging local statistical properties, e.g., correlation, variance, and mean, PS loss captures nuanced structural discrepancies overlooked by traditional point-wise losses. Experiments demonstrate that the PS loss can improve the performance of state-of-the-art models across diverse real-world datasets.

## update after rebuttal

I support acceptance.

Claims And Evidence: The effectiveness of the different module designs has been validated by the experimental results.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for time series forecasting.

Theoretical Claims: I have checked the correctness of the proofs for the theoretical claims.

Experimental Designs Or Analyses: The manuscript has some weaknesses in the experiments. Please see Questions for details.

Supplementary Material: I have reviewed all contents in the supplementary material.

Relation To Broader Scientific Literature: The introduction of the patch-wise structural loss is beneficial for time series forecasting.

Essential References Not Discussed: Some important baselines, e.g., Ada-MSHyper [1] and FAN [3], need to be compared. Please see Questions 1 and 2 for details.

[1] Shang Z, Chen L, Wu B, et al. Ada-MSHyper: adaptive multi-scale hypergraph transformer for time series forecasting. NeurIPS, 2024.
[3] Ye W, Deng S, Zou Q, et al. Frequency Adaptive Normalization For Non-stationary Time Series Forecasting. NeurIPS, 2024.

Other Strengths And Weaknesses:
1. The paper presents a notable innovation by exploring the integration of a patch-wise structural loss into time series forecasting, a direction scarcely addressed by existing methodologies.
2. The organization of this paper is clear and the paper is well written.

Other Comments Or Suggestions: No

Questions For Authors:

1.
As the proposed loss function is specifically designed for time series forecasting, the comparative experiments should not only focus on long-term time series forecasting but also encompass other experimental settings, e.g., short-term and ultra-long-term time series forecasting, as mentioned by existing methods [1, 2].

2. Since some latest methods [1, 3] also improve model performance by introducing constraints or loss functions, as a loss specifically designed for time series, the authors should elaborate on the differences between their proposed loss function and these existing designs in the Related Work section. In addition, more comparative experiments should be conducted to validate the effectiveness of the proposed loss function against these advanced loss functions.

3. In Section 4.6, the authors demonstrate that the PS loss can improve generalization to unseen datasets. Given that LLMs have also been proven to exhibit strong generalization capabilities under zero-shot settings [4, 5, 6], it is suggested to study whether the PS loss can further enhance the performance of LLMs under zero-shot settings. The authors should validate this through additional experiments.

4. The design of the PS loss appears to be somewhat complex. The authors are suggested to include a time complexity analysis.

5. To provide an intuitive understanding of the performance improvements brought by the proposed loss, the authors should explicitly list the performance gains in terms of percentage improvements.

[1] Shang Z, Chen L, Wu B, et al. Ada-MSHyper: adaptive multi-scale hypergraph transformer for time series forecasting. NeurIPS, 2024.
[2] Jia Y, Lin Y, Hao X, et al. WITRAN: Water-wave information transmission and recurrent acceleration network for long-range time series forecasting. NeurIPS, 2023.
[3] Ye W, Deng S, Zou Q, et al. Frequency Adaptive Normalization For Non-stationary Time Series Forecasting. NeurIPS, 2024.
[4] Zhou T, Niu P, Sun L, et al. One fits all: Power general time series analysis by pretrained LM. NeurIPS, 2024.
[5] Liu Y, Qin G, Huang X, et al. AutoTimes: Autoregressive time series forecasters via large language models. NeurIPS, 2024.
[6] Jin M, Wang S, Ma L, et al. Time-LLM: Time Series Forecasting by Reprogramming Large Language Models. ICLR, 2024.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you very much for your valuable feedback. Below are our responses to your concerns and suggestions.

# [Q1] PS loss on ultra-long-term and short-term forecasting

We evaluated PS loss on **ultra-long-term (T = {1080, 1440, 1800, 2160})** and **short-term (T = {12, 24, 48})** forecasting tasks using iTransformer and DLinear. We report averaged MSE results. Please refer to [Table 1](https://anonymous.4open.science/r/PS_Re/T1.pdf) and [Table 2](https://anonymous.4open.science/r/PS_Re/T2.pdf) for full results.

- **Ultra-long-term**: MSE reduced by **7.38% (iTransformer)** and **11.01% (DLinear)**.

|Model|iTransformer||DLinear||
|-|-|-|-|-|
|Loss Function|MSE|+PS|MSE|+PS|
|ETTh1|0.753|**0.693**|0.696|**0.628**|
|ETTh2|0.545|**0.494**|1.241|**1.127**|
|ETTm1|0.577|**0.536**|0.487|**0.474**|
|ETTm2|0.480|**0.466**|0.557|**0.463**|
|Imp.|-|**7.38%**|-|**11.01%**|

- **Short-term**: MSE reduced by **3.43% (iTransformer)** and **1.60% (DLinear)**.

|Model|iTransformer||DLinear||
|-|-|-|-|-|
|Loss Function|MSE|+PS|MSE|+PS|
|PEMS03|0.110|**0.107**|0.239|**0.235**|
|PEMS04|0.105|**0.101**|0.283|**0.279**|
|Imp.|-|**3.43%**|-|**1.60%**|

These results demonstrate the effectiveness of PS loss across both **short-term** and **ultra-long-term** forecasting tasks.

# [Q2] Comparison with Ada-MSHyper and FAN

The contributions of Ada-MSHyper and FAN differ from our PS loss, as they address distinct challenges:

- **Ada-MSHyper** introduces a **hypergraph transformer** with a **graph constraint loss** to enhance multi-scale interaction modeling through hypergraph learning.
- **FAN** proposes a frequency-based adaptive **normalization method** to address both trend and seasonal non-stationary patterns.
- **PS (Ours)** presents a novel **loss function** that enhances structural alignment between predictions and ground truth via patch-wise statistical metrics.

We also combined PS loss with both methods.
Please refer to [Table 3](https://anonymous.4open.science/r/PS_Re/T3.pdf) and [Table 4](https://anonymous.4open.science/r/PS_Re/T4.pdf) for the full results.

- **Ada-MSHyper + PS.** PS loss improves the average performance by **8.28% (MSE) and 4.68% (MAE)**.

|Method|Ada-MSHyper||Ada-MSHyper+PS||
|-|-|-|-|-|
|Dataset|MSE|MAE|MSE|MAE|
|ETTh1|0.137|0.262|**0.132**|**0.254**|
|ETTh2|0.107|0.231|**0.105**|**0.227**|
|Imp.|-|-|**8.28%**|**4.68%**|

- **FAN + PS.** When using DLinear as the backbone, PS loss further improves the average performance of FAN by **2.07% (MSE) and 2.31% (MAE)**.

|Method|DLinear+FAN||DLinear+FAN+PS||
|-|-|-|-|-|
|Dataset|MSE|MAE|MSE|MAE|
|ETTh1|0.444|0.485|**0.439**|**0.479**|
|ETTh2|0.137|0.262|**0.132**|**0.254**|
|Imp.|-|-|**2.07%**|**2.31%**|

These results demonstrate that while **FAN and Ada-MSHyper focus on different aspects, PS loss can still further improve their performance** by enhancing the structural alignment of the forecasted series.

# [Q3] PS loss on LLM-based models for zero-shot forecasting

We conducted zero-shot forecasting experiments with LLM-based models: OFA, AutoTimes, and Time-LLM. PS loss improved forecasting accuracy with average MSE reductions of **2.07% (OFA)**, **6.33% (AutoTimes)**, and **7.29% (Time-LLM)**. Please refer to [Table 5](https://anonymous.4open.science/r/PS_Re/T5.pdf) for the full results.

|Model|OFA||AutoTimes||Time-LLM||
|-|-|-|-|-|-|-|
|Loss Function|+MSE|+PS|+MSE|+PS|+MSE|+PS|
|ETTh1→ETTh2/m1/m2|0.410|**0.417**|0.421|**0.421**|0.420|**0.405**|
|ETTh2→ETTh1/m1/m2|0.461|**0.454**|0.568|**0.499**|0.506|**0.450**|
|ETTm1→ETTh1/h2/m2|0.335|**0.336**|0.359|**0.346**|0.349|**0.338**|
|ETTm2→ETTh1/h2/m1|0.411|**0.359**|0.445|**0.373**|0.424|**0.374**|
|Imp|-|**2.07%**|-|**6.33%**|-|**7.29%**|

# [Q4] Time complexity analysis of PS loss

We analyze the time complexity of PS loss with respect to the forecast length $T$, number of channels $C$, and hidden dimension $d$.
The overall complexity arises from three main components:

- **Fourier-based Adaptive Patching (FAP):** The complexity of this component is dominated by the Fast Fourier Transform (FFT), which is $O(T\log T)$ per channel. Since FFT is applied to each of the $C$ channels, the total time complexity is $O(C\cdot T \log T)$.
- **Patch-wise Structural Loss (PS):** The series is split into $N \approx \frac{2T}{P}$ patches, where $P$ is the patch length. Calculating correlation, variance, and mean over each patch requires $O(P)$ operations. Given $C \cdot N$ patches, the total complexity becomes $O(C \cdot N \cdot P) = O(C\cdot T)$.
- **Gradient-based Dynamic Weighting (GDW):** The gradient computation for each loss component w.r.t. the model output has shape $d \cdot T$, leading to a complexity of $O(d \cdot T)$.

Therefore, the overall time complexity of PS loss is **$O(C \cdot T \log T + C \cdot T + d \cdot T)$**.

# [Q5] Performance gains in terms of percentage improvements

We now report percentage improvements throughout the paper and summarize them in the [updated Table](https://anonymous.4open.science/r/PS_Re/T7.pdf).

---

Rebuttal Comment 1.1:

Comment: Thank you for your response, especially for conducting additional experiments during the rebuttal process. I will maintain my score of 3 and vote for acceptance.

---

Reply to Comment 1.1.1:

Comment: Thank you very much for your thoughtful review of our work. We are sincerely grateful for your valuable feedback and your recognition of the additional experiments. Thank you once again for your insightful suggestions and support!
FSTLLM: Spatio-Temporal LLM for Few Shot Time Series Forecasting
Accept (poster)
Summary: The paper presents FSTLLM, a novel Spatio-Temporal Large Language Model (LLM) framework designed for few-shot time series forecasting. The model effectively integrates domain knowledge through a fine-tuned LLM and a graph-based learning approach to capture spatial-temporal correlations. Experimental results on real-world datasets demonstrate superior performance compared to existing baselines.

Claims And Evidence: The claim that "The model effectively integrates domain knowledge through a fine-tuned LLM and a graph-based learning approach to capture spatio-temporal correlations" is well-supported by experimental results on real-world datasets. Additionally, the case study effectively demonstrates FSTLLM's reasoning ability derived from domain knowledge.

Methods And Evaluation Criteria: The proposed method primarily applies a fine-tuned LLM to enhance baseline methods such as STGNNs. The evaluation criteria and settings are consistent with existing works. Both the methodology and evaluation framework are appropriate for the problem at hand.

Theoretical Claims: I have checked equations 1 – 8, and they are correct and align with the submitted code.

Experimental Designs Or Analyses: The experimental design is sound. Specifically, FSTLLM significantly enhances the few-shot forecasting performance of STGNNs while also incorporating reasoning ability, which is crucial for end users. However, some points need to be addressed:

1. The paper does not clearly explain how FSTLLM crafts its node descriptions and node pattern analyses, as introduced in Section 3.3 (Domain Knowledge Injection Module).
2. The study presents fine-tuning and inference case study results only on the Nottingham dataset. Including a demonstration on the ECL dataset would provide a more comprehensive understanding of the model's applicability.

Supplementary Material: I have reviewed the submitted code. The code appears to be well-structured and functions as expected.
Relation To Broader Scientific Literature: This work builds upon recent advancements in using LLMs for forecasting tasks, such as GPT4TS and Time-LLM. While these studies demonstrate the potential of LLMs in processing time-series data, they primarily focus on fine-tuning LLMs with numerical input. FSTLLM advances this line of research by incorporating domain knowledge through structured prompt engineering combined with an STGNN backbone, making LLMs more context-aware for time-series forecasting.

Essential References Not Discussed: The paper comprehensively covers relevant literature, but I suggest including the following recent work: 'From News to Forecast: Integrating Event Analysis in LLM-Based Time Series Forecasting with Reflection' (NeurIPS 2024). This study also fine-tunes an LLM in a textual format to address time-series forecasting challenges.

Other Strengths And Weaknesses:

Strengths:
1. The paper introduces FSTLLM, a novel framework that effectively integrates LLMs with Spatio-Temporal Graph Neural Networks (STGNNs) to enhance time-series forecasting.
2. Experimental results demonstrate that FSTLLM consistently outperforms state-of-the-art baselines across multiple forecasting horizons on real-world datasets.
3. The adaptable architecture of FSTLLM allows integration with existing time-series forecasting models.

Weaknesses:
1. As mentioned in the 'Experimental Designs Or Analyses' section, it is important to clarify how node descriptions and patterns are derived and to include demonstrations on both the Nottingham and ECL datasets.
2. A comparison between FSTLLM and the reference paper mentioned in 'Essential References Not Discussed' would strengthen the discussion.
3. The rationale for fine-tuning an LLM remains unclear. Given that all domain knowledge is embedded in the prompt, why not leverage powerful LLMs such as GPT-4o or DeepSeek R1 directly for inference-based forecasting enhancement?

Other Comments Or Suggestions:

1.
Page 1, Lines 11–14: "The fundamental of time series forecasting methodologies" → should be "The fundamentals of time series forecasting methodologies".
2. Page 1, Lines 46–47: "Accurate forecasting requires precise modeling on two dimensions" → should be "Accurate forecasting requires precise modeling of two dimensions".
3. Page 4, Lines 206–210: "these textual data" → should be "this textual data".

Questions For Authors: Please address the three weaknesses listed in the 'Other Strengths And Weaknesses' section and all typos listed in 'Other Comments Or Suggestions'.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Response to Reviewer

We thank the reviewer for the thoughtful comments. We address each point below and will incorporate the corresponding revisions into the manuscript.

**Q1: Node Description, Pattern Analysis, and Case Study**

We used ChatGPT-4o to generate both node descriptions and pattern analyses. To ensure reproducibility, we will include the prompt templates in the Appendix. Specifically:

* Node Description Prompt:
> "You are given a feature description from [carpark description link] and user reviews for this carpark at [Google review link]. Write an inductive summary about the carpark based on contents from both sources."

* Node Pattern Analysis Prompt:
> "I will provide one week of parking lot records for a carpark in England. This record is collected every 15 minutes starting from 12:01 PM on 2016-10-26. Please describe the natural pattern of parking availability, including diurnal and weekly variations, and highlight peak and dip periods observed in the data. The records are: [extracted training data]."

Additionally, we have conducted a full fine-tuning and inference case study on the ECL dataset. Due to space limitations in this response, the full results and analysis will be included in the Appendix of the revised manuscript.

**Q2: Discussion of "News2Forecast" (Wang et al., 2024, NeurIPS)**

We will include a discussion of this work in Section 2.3 Related Work, and revise the manuscript as follows:

> "News2Forecast (Wang et al., 2024) enhances time series forecasting by integrating social events through LLM-based agents using reflection and reasoning. It fine-tunes a pre-trained LLM to align textual and numerical data, thereby improving forecasting accuracy."

While this work focuses on aligning news events with time series data, it does not account for spatial dependencies across different time series.
Furthermore, it may suffer from reduced robustness when training samples are limited or when relevant external textual content from news is unavailable. In contrast, FSTLLM explicitly models spatial-temporal dependencies and is designed to generalize under limited supervision. **Q3: Rationale for Fine-Tuning vs. Direct Prompting with Powerful LLMs** We appreciate this important question. The motivation for fine-tuning instead of relying solely on inference from powerful LLMs is twofold: * **Modeling Temporal Dynamics:** Off-the-shelf LLMs (e.g., GPT-4o, DeepSeek R1), when used via prompting, do not effectively model the complex temporal dependencies inherent in time series data. Their predictions tend to be shallow, often yielding weighted approximations based on the two provided inputs (the historical series and the numerical prediction), rather than learning temporal patterns explicitly. * **Learning Temporal Representations via Weight Updates:** Fine-tuning enables the model to update its internal weights, allowing it to capture nuanced dynamics and structure within the time series. This results in significantly improved forecasting performance compared to inference-only approaches. As evidenced in prior work, fine-tuned models such as GPT4TS and Time-LLM consistently outperform inference-based models like PromptCAST, further justifying our design choice. Please let us know if further clarification is needed. We are grateful for the reviewer’s constructive feedback. --- Rebuttal Comment 1.1: Comment: I appreciate the author's patient response, which largely addressed my doubts. I will slightly increase my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for taking the time to revisit your review. We truly appreciate your thoughtful reconsideration and the updated score.
Summary: This work introduces a framework called FSTLLM. This framework provides enhanced few-shot time series forecasting performance by integrating LLMs with the STGNN backbone. Specifically, it leverages LLMs for spatial correlation modeling, an STGNN network for spatio-temporal pattern modeling, and a domain knowledge injection module for improved predictions. FSTLLM outperforms state-of-the-art baselines, highlighting superior accuracy and robustness in real-world datasets. Claims And Evidence: The claims in the paper are generally supported by experimental results and case studies. Methods And Evaluation Criteria: Yes. The proposed methods and evaluation criteria are well-suited for few-shot time series forecasting. FSTLLM integrates LLMs for spatial correlation modeling, an STGNN for spatio-temporal pattern modeling, and a domain knowledge injection module for improved predictions. The model is evaluated on real-world datasets (Nottingham Parking and ECL) using standard metrics (MAE, RMSE, MAPE), with a few-shot setup to test its adaptability. Theoretical Claims: Yes, the mathematical equations and the claims in the paper appear correct. There are no particular theoretical proofs in the paper. Experimental Designs Or Analyses: The experimental design and analyses in FSTLLM are methodologically sound, with well-defined datasets, 12 baselines, and standard evaluation metrics (MAE, RMSE, MAPE). The few-shot setting is realistic, using two real-world datasets (Nottingham Parking & ECL) to test generalizability. The analysis demonstrates consistent performance improvements, supported by ablation studies (assessing model components) and a few-shot integration study, showing FSTLLM’s ability to enhance other forecasting models like GPT4TS and iTransformer. Supplementary Material: I have reviewed the submitted Appendix A – G and the submitted code looks fine. 
Relation To Broader Scientific Literature: FSTLLM advances spatio-temporal forecasting by leveraging LLMs to enhance spatial correlation modeling, surpassing traditional STGNNs. Unlike existing LLM-based models such as GPT4TS (NeurIPS 2023) and TimeLLM (ICLR 2024), it integrates node-specific domain knowledge for more context-aware predictions, leading to improved forecasting performance. By utilizing LLMs’ few-shot capabilities, FSTLLM enhances forecasting in data-limited settings while also providing reasoning ability, distinguishing it from classical STGNNs like GTS (ICLR 2021) and STEP (KDD 2022). This work introduces a novel approach that combines LLMs with an STGNN backbone rather than relying on a single architecture. Researchers working on other time series tasks like imputation and classification can use this method as a foundation for integrating LLMs with other models. Essential References Not Discussed: Two important works are missing. [1] Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts, ICLR 2025. [2] DUET: Dual Clustering Enhanced Multivariate Time Series Forecasting, KDD 2025. Other Strengths And Weaknesses: S1. The problem studied in this work (few-shot time series forecasting) is an important field given real-world applications, where data collection is often limited. S2. The paper is well-written and easy to follow. The experimental analysis illustrates that the proposed FSTLLM is superior to the existing methods. W1. Terminology Clarity: The term "candidate node embedding" in Section 3.1 (lines 205-206) is unclear. It would help to specify whether it refers to another node in the graph or an alternative embedding technique. W2. Few-Shot Integration Details: Section 4.4 claims that "we replace the numerical prediction tokens generated by the STGNN backbone with those produced by alternative transformer-based methods."
However, implementation details are not clearly specified—particularly whether the LLM-Enhanced Graph Construction Module was considered or omitted, given that GPT4TS does not model multivariate correlations while other transformer-based methods do. W3. Data Efficiency Evidence: The introduction claims that "these models typically require large volumes of training data...collecting such data is time-consuming and resource-intensive." However, the experimental results primarily focus on forecasting accuracy rather than explicitly demonstrating FSTLLM’s data efficiency. Additional experiments comparing performance with varying data availability could reinforce this claim. Other Comments Or Suggestions: Typo in the Introduction section, line 36: Missing comma before "offer mechanisms to jointly." Typo in the Methods section, line 163: "The numerical prediction tokens is still suboptimal" should be "The numerical prediction tokens are still suboptimal." Questions For Authors: I will consider changing my evaluation of the paper based on the author’s response to the following concerns. 1. Discussing Time-MoE in this paper on a few-shot time series forecasting task. 2. Clearly explain what "candidate node embedding" refers to, as listed in W1. 3. Explain the implementation in detail in section 4.4, the Few-Shot Learning Integration Study listed in W2. 4. Discuss and provide experimental results to demonstrate FSTLLM’s data-efficient advantage as listed in W3. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer, We sincerely thank the reviewer for the insightful and constructive feedback. Below, we address each point in detail and outline the corresponding changes we will make in the revised manuscript. **Q1: Missing References — DUET and Time-MoE** We appreciate the reviewer’s suggestion and will incorporate both DUET and Time-MoE into our related work discussion: DUET will be added to Section 2.1 (Classical Neural Network-based Methods), lines 90–93: >“DUET (Qiu et al., 2025) introduces a framework to tackle multivariate time series forecasting with heterogeneous temporal patterns and complex inter-channel dependencies. It features a Temporal Clustering Module (TCM) for handling temporal heterogeneity, and a Channel Clustering Module (CCM) that applies a novel channel-soft-clustering mechanism in the frequency domain to model inter-channel relationships while mitigating noise.” Time-MoE will be discussed in Section 2.3 (Large Language Models), lines 126–130: >“Time-MoE (Shi et al., 2025) presents a scalable architecture for time series forecasting using a sparse mixture-of-experts design to enhance efficiency while maintaining high model capacity. Trained on the extensive Time-300B dataset (over 300 billion time points across nine domains), Time-MoE scales up to 2.4 billion parameters and achieves strong forecasting accuracy. However, its expert routing mechanism is less effective in few-shot scenarios, limiting performance under data-constrained settings.” **Q2: Clarification of “Candidate Node Embedding”** Thank you for pointing this out. The term refers to the embedding of a specific candidate node in the graph structure. We will revise the corresponding sentence to read: >“... the embedding of a candidate node selected from the spatio-temporal graph.” This revision will improve clarity and remove ambiguity. **Q3: Integration Experiment Setup** We agree that further clarification is needed.
We will revise Section 4.4 (Few-Shot Learning Integration Study), lines 407–410, as follows: >“To assess the plug-and-play potential of our LLM-enhanced graph construction, we substitute the numerical prediction tokens generated by the STGNN backbone with those from various transformer-based forecasting models. Specifically, we remove both the LLM-enhanced graph construction module and the STGNN backbone, and replace them with external transformer-based methods without modifying or retraining those models. This allows us to evaluate the generality of our integration strategy.” This ensures the experiment strictly evaluates modular compatibility in a plug-and-play fashion. **Q4: Evidence of Data Efficiency** Thank you for highlighting this important point. To evaluate data efficiency, we compare FSTLLM trained on just 3 days of data with baselines trained on 30 days of data, using the Nottingham dataset. The results are summarized below: | Method | MAE | RMSE | MAPE (%) | |-------------------|-------|--------|-----------| | FSTLLM (3 days) | 22.84 | 83.68 | 22.33 | | GTS (30 days) | 23.54 | 90.88 | 23.02 | | GPT4TS (30 days) | 27.50 | 86.88 | 25.70 | | PatchTST (30 days)| 33.07 | 95.20 | 30.13 | | DLinear (30 days) | 32.82 | 92.64 | 26.63 | These results show that FSTLLM with only 3 days of training data outperforms all baselines trained with 10× more data, highlighting the strong data efficiency of our approach. This table will be added to the Appendix in the revised version. We appreciate the reviewer’s detailed feedback and believe that the above revisions will substantially improve the clarity and completeness of our submission.
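As a reference for the tables in this exchange, MAE, RMSE, and MAPE follow their standard definitions. Below is a minimal NumPy sketch, with illustrative names only; it makes no claim about the paper's actual evaluation code:

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """Standard forecasting error metrics: MAE, RMSE, and MAPE (%).

    MAPE assumes y_true contains no zeros; real evaluation pipelines
    often mask zero targets before averaging.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mape = float(100.0 * np.mean(np.abs(err / y_true)))
    return mae, rmse, mape
```

For example, `forecast_metrics([100, 200], [110, 190])` returns MAE 10.0, RMSE 10.0, and MAPE 7.5.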
Summary: Considering the heavy time cost to collect data for training deep learning time series forecasting models, this work focuses on enhancing forecasting performance with limited training data. This paper proposes a graph construction module to ensure stable graph construction used in Spatio-Temporal Graph Neural Networks (STGNNs). Furthermore, it proposes an LLM fine-tuning methodology to enhance forecasting performance with the LLM’s embedded knowledge and common-sense reasoning ability. Finally, experiments on two real-world datasets demonstrate this method’s solid and robust performance. Claims And Evidence: This paper is application-driven, and claims are supported by experimental results. - Claim 1: The proposed FSTLLM achieves enhanced few-shot time series forecasting performance. Evidence 1: Supported by experimental results in Table 1 and Table 2. - Claim 2: The LLM Enhanced Graph Construction module is able to enhance graph construction by embedding contextual information of spatial nodes. Evidence 2: Supported by experimental results in Table 3 from the ablation study. - Claim 3: The Domain Knowledge Injection module enables humanlike consideration and few-shot forecasting performance enhancement. Evidence 3: Supported by experimental results in Table 3 from the ablation study and the case study visualization in the reasoning subsection, section 4.2 Performance of LLM. - Claim 4: FSTLLM can augment various time series forecasting models, enhancing their performance in few-shot settings without updating their parameters. Evidence 4: Supported by experimental results in Table 4 from section 4.4 Few-Shot Learning Integration Study. Methods And Evaluation Criteria: There are detailed running examples and a complexity comparison covering 12 baselines, 2 datasets, and 3 metrics. The selection of baselines aligns with the domain of time series forecasting. MAE, RMSE, and MAPE are standard evaluation criteria for time series forecasting tasks.
Theoretical Claims: The equations are correct and align with the code submitted. The proposed prompt structure aligns with the real use case shown in the Appendix. Experimental Designs Or Analyses: The design of the experiments is clear and comprehensive and supports FSTLLM’s claims. Details below. One experimental setting could be discussed to enhance the quality of this work: though using 7 days of data to simulate limited-data situations is common, some works also include 3 days of data for comparison, as in TransGTR (KDD 2023) and CrossTReS (KDD 2022). Supplementary Material: The source code and the dataset are available on the submission page. I have checked FSTLLM’s fine-tuning and evaluation codes as well as the STGNN backbone codes. Relation To Broader Scientific Literature: The paper contributes to the growing body of research on improving time series forecasting in data-scarce environments compared to methods requiring massive training data such as STGNNs (GraphWaveNet, GTS, STSGCN), Transformer-based methods (PatchTST, iTransformer), and LLM methods (Time-LLM, GPT4TS). Essential References Not Discussed: As far as I know, this work includes the majority of research works related to few-shot time series forecasting from STGNNs, Transformer-based methods, and LLM-based methods. One paper omitted in this work could be ‘AutoTimes: Autoregressive Time Series Forecasters via Large Language Models’, published in NeurIPS 2024, which adapts LLMs for time series forecasting tasks. Other Strengths And Weaknesses: Strength. + This paper is well written and clear. + Extensive experiments, including abundant baselines, real-world datasets, and comprehensive exploration of model performance, have been conducted to show the effectiveness of FSTLLM. Weakness. - The experimental setting of using 3 days of data for comparison, as in TransGTR (KDD 2023) and CrossTReS (KDD 2022), should also be discussed.
- More explanation of the integration with existing methods is needed, for instance, which exact components are removed and replaced by the existing methods, and whether existing methods need to be re-trained in order to suit FSTLLM’s framework. Other Comments Or Suggestions: line 80: "that can commonly used in time series forecasting" → "that are commonly used in time series forecasting" line 163: "After this, due to the limited training data, the numerical prediction tokens is still suboptimal in capturing temporal dynamics." "is" → "are" Questions For Authors: In general, this is an innovative and solid work focused on the few-shot time series forecasting task. Several improvements could raise the quality of this paper: discussing AutoTimes, discussing the experimental setting of using 3 days of data to simulate a lack of data, and further explaining the integration experiment details. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer We sincerely thank the reviewer for the constructive feedback. We address each of the comments below and will update the manuscript accordingly. **Q1: Discussion of AutoTimes** Thank you for the suggestion. We will add the following discussion to Section 2.3 (Related Work): >"AutoTimes (Liu et al., 2024) repurposes decoder-only large language models (LLMs) for autoregressive time series forecasting by mapping time series inputs into the embedding space of language tokens. This method leverages the sequential modeling strength of LLMs to generate variable-length future predictions without updating the LLM weights." We acknowledge its relevance and will highlight the distinction between our fine-tuning-based approach and AutoTimes’ frozen-weight generation scheme. **Q2: 3-Day Few-Shot Forecasting Experiment on Nottingham Dataset** Due to time constraints, we performed a 3-day training experiment using the Nottingham dataset and a representative subset of baselines. The results are presented below: | Method | MAE | RMSE | MAPE (%) | |------------|-------|--------|-----------| | FSTLLM | 22.84 | 83.68 | 22.33 | | GTS | 29.57 | 87.44 | 26.04 | | GPT4TS | 33.24 | 93.92 | 26.95 | | PatchTST | 34.65 | 97.45 | 32.38 | | DLinear | 37.52 | 95.87 | 30.62 | These results demonstrate that FSTLLM consistently outperforms strong baselines under limited data scenarios. This further validates the robustness and adaptability of our framework in few-shot settings. **Q3: Clarification on Integration Experiment Setup** Thank you for pointing this out. We will revise lines 407–410 in Section 4.4 (Few-Shot Learning Integration Study) to provide a clearer explanation: >“To assess the plug-and-play potential of our LLM-enhanced graph construction, we substitute the numerical prediction tokens generated by the STGNN backbone with those from various transformer-based forecasting models. 
Specifically, we remove both the LLM-enhanced graph construction module and the STGNN backbone, and replace them with external transformer-based methods without modifying or retraining those models. This allows us to evaluate the generality of our integration strategy.” Please let us know if any further clarification is needed. We appreciate your detailed review and helpful suggestions. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts and detailed responses, which have clearly addressed my earlier concerns. I do have one additional query regarding the domain knowledge injection block in FSTLLM. This component seems effective in enhancing forecasting performance through structured fine-tuning. Given this promising result, I'm curious whether integrating domain knowledge from external datasets [1] or from cities with richer data availability [2] could potentially further boost FSTLLM's forecasting accuracy. Have the authors considered exploring this direction? If so, what are their thoughts on its potential benefits? [1] Transferable Graph Structure Learning for Graph-based Traffic Forecasting Across Cities [2] Selective Cross-City Transfer Learning for Traffic Prediction via Source City Region Re-Weighting --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your insightful question. We agree that incorporating additional data from closely related domains (as described in [1]) could indeed further improve forecasting performance through our domain knowledge injection block. However, our primary objective has been few-shot time series forecasting, and we have thus far limited our scope to in-domain data. With respect to works such as [2] and [3], which transfer knowledge from different time series domains (subway, bike sharing, and ride hailing), we believe their temporal dynamics differ from each other.
In our fine-tuning experience, even when input features appear similar, underlying seasonality and demand patterns may vary substantially. Unlike dedicated transfer learning frameworks, which usually design multiple encoder blocks to process each domain separately, mixing data from such divergent domains in the LLM fine-tuning stage can introduce domain shift and inconsistencies, thereby risking performance degradation rather than improvement. Thank you again for your thorough review and valuable feedback. [1] Transferable Graph Structure Learning for Graph-based Traffic Forecasting Across Cities [2] Selective Cross-City Transfer Learning for Traffic Prediction via Source City Region Re-Weighting [3] Cross-Mode Knowledge Adaptation for Bike Sharing Demand Prediction using Domain-Adversarial Graph Neural Networks
Summary: This paper proposes a time-series prediction framework that leverages the prior knowledge of LLMs. Based on the authors’ introduction of the framework, it can be flexibly applied to any advanced time-series prediction model (such as the STGNNs mentioned in the related works). The experiments were conducted on two datasets, one public and the other self-collected (and to be made public later). The results show that the framework is effective in boosting the state-of-the-art. Overall, this is a novel and effective framework. Although it comes with a certain level of computational cost, it is tolerable, as it only requires LoRA fine-tuning of an LLM on a single A6000 GPU. The weaknesses are discussed in the following sections. Claims And Evidence: The main claim of the paper is that LLMs contain reasoning ability and prior knowledge about locations, which can help with time-series modelling. This claim is based on the widely accepted understanding that LLMs aim to model the world and encode a vast number of everyday concepts and relationships in their hidden states. The authors extract features based on this knowledge. In the experiments, they report the effectiveness of the overall framework, along with ablation studies for each module. The evidence is convincing. Methods And Evaluation Criteria: My simplified understanding of the method is: LLM knowledge is introduced before the input of the STGNN, in-context learning is applied on the output of the STGNN, and the STGNN serves as the backbone of the framework to achieve robust forecasting performance and provide reasoning to end users. The evaluation is primarily based on time-series prediction accuracy compared to previous representative methods. The use of MAE, RMSE, and MAPE is standard, and there is nothing additional to comment on. Theoretical Claims: This paper does not focus on theoretical proofs.
Experimental Designs Or Analyses: The authors use one public dataset and one self-collected dataset, with the latter also submitted for review (Nottingham.h5). There are no obvious issues; the settings (such as evaluating 15/45/60-minute windows) are commonly used in the literature. Supplementary Material: I reviewed the dataset they uploaded. I also checked the provided code, and it appears to be complete and readily accessible. Relation To Broader Scientific Literature: The paper primarily cites relevant works in time-series forecasting and large language models, such as TimeLLM and GPT4TS. The baselines are up-to-date. Essential References Not Discussed: The paper lacks sufficient citations related to LLM in-context learning. In-context learning in LLMs is likely beneficial to the model, as seen from their instruction prompt design. The method requires some preliminary predictions incorporated into the prompts, allowing the LLM to refine and improve them further. I suggest they cite the following paper to help readers better understand and utilize in-context learning: Min, S., Lyu, X., Holtzman, A., Artetxe, M., Lewis, M., Hajishirzi, H., & Zettlemoyer, L. (2022). Rethinking the role of demonstrations: What makes in-context learning work?. arXiv preprint arXiv:2202.12837. Other Strengths And Weaknesses: Strengths: Already discussed in the previous sections. The notation is consistent, the experimental results are useful, and the proposed method is sufficiently novel. Weaknesses: - The description of alpha-entmax is unclear. A value of 1 corresponds to standard softmax, while 2 represents a more concentrated distribution. Is the chosen value <1, 1–2, or >2? - The authors need to explicitly describe the role and numerical value of this hyperparameter, as it appears to be important. - The ways to prepare the node description and node pattern analysis are unclear.
The authors need to explain this preparation in more detail: whether the texts are collected directly from websites or whether specific data engineering operations are needed. Other Comments Or Suggestions: Typo corrections: In the abstract: “methodologies is” -> “methodologies are” The authors should proofread the entire paper again. Questions For Authors: Please clarify the issue regarding the alpha-entmax parameter and the node description preparation mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer, We sincerely thank you for your valuable feedback. We address your comments in detail below and will revise the manuscript accordingly: **Q1. Citation of In-Context Learning** We appreciate your suggestion regarding the citation of in-context learning. We will revise the manuscript by incorporating appropriate references in lines 57–60, as follows: > “In contrast, Large Language Models (LLMs) demonstrate strong capabilities in common sense reasoning (Zhao et al., 2023), making them particularly effective for integrating domain-specific and contextual knowledge via in-context learning (Min et al., 2022), in addition to fine-tuning. Furthermore, LLMs exhibit robust performance in few-shot and zero-shot learning scenarios, which are highly relevant for data-scarce forecasting tasks.” **Q2. Clarification of Alpha-Entmax Value** Thank you for pointing out the need for clarity regarding the α value in the alpha-entmax transformation. As noted in **Appendix D (Implementation Details)**, we employ an α value of 2.0 in our design. To enhance visibility, we will explicitly state this choice again in **Appendix F (The Alpha-Entmax Function)**, along with a brief justification for its selection. **Q3. Node Description and Pattern Analysis Methodology** We appreciate your interest in the generation of node descriptions and pattern analysis. We utilized ChatGPT-4o to assist in these tasks, and we will provide the prompt templates used in the Appendix to ensure reproducibility. Specifically: * Node Description Prompt: >“You are given a feature description from [carpark description link] and user reviews from [Google review link]. Please synthesize an inductive description of the carpark using content from both sources.” * Node Pattern Analysis Prompt: >“I will provide one week of parking lot records for a carpark in England, recorded every 15 minutes starting from 12:01 PM on 2016-10-26. 
Please describe the temporal usage patterns of the carpark, including variations throughout the day and across weekdays versus weekends. Indicate observed peak and low-demand periods, along with any consistent behavioral trends in parking availability. The records are: [extracted training data].” We will incorporate both prompts into the appendix to ensure methodological transparency. Please let us know if further clarification is needed. Thank you once again for your thoughtful and constructive comments.
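For context on the α = 2.0 choice discussed in this exchange: alpha-entmax with α = 2 coincides with sparsemax, which, unlike softmax (the α = 1 case), can assign exact zeros and therefore produces the more concentrated attention distribution the reviewer describes. A minimal 1-D NumPy sketch, for illustration only and not the authors' implementation:

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: the alpha-entmax transformation at alpha = 2.

    Projects z onto the probability simplex, producing sparse weights
    (exact zeros) instead of the dense output of softmax.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]              # sort scores descending
    k = np.arange(1, z.size + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum      # entries that stay nonzero
    k_z = k[support][-1]                     # support size
    tau = (cumsum[support][-1] - 1) / k_z    # threshold
    return np.maximum(z - tau, 0.0)
```

For instance, `sparsemax([3.0, 1.0, 0.1])` puts all mass on the first entry and zeros elsewhere, whereas softmax would keep every entry strictly positive.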
Fast and Low-Cost Genomic Foundation Models via Outlier Removal
Accept (poster)
Summary: The paper "Making Genomic Foundation Models more Foundational Requires Outlier Removal: A Case Study on DNABERT-2" introduces GERM, an outlier-free genomic foundation model (GFM) designed to improve quantization robustness and low-rank adaptation efficiency. The authors argue that eliminating outliers in attention mechanisms significantly enhances computational efficiency and model performance under resource constraints. Empirical results demonstrate substantial improvements in quantization and fine-tuning metrics compared to DNABERT-2. While the work addresses critical challenges in deploying GFMs, several methodological and experimental aspects require clarification and validation. Claims And Evidence: 1. The claim that GERM reduces computational costs and improves quantization robustness is supported by experiments showing a 92.14% reduction in average kurtosis and 82.77% in maximum infinity norm. Also, the authors validate the connection between outlier metrics and practical deployment, such as inference time in resource-constrained environments. 2. The authors demonstrate that GERM outperforms DNABERT-2 by 37.98% in fine-tuning and 64.34% in quantization, validating the effectiveness of the proposed "outlier-free" method. 3. The small-step continual learning strategy (GERM-T) avoids retraining from scratch, which is pragmatic. However, its performance degradation in 4-bit quantization (Table 1) suggests limitations in outlier mitigation for extreme compression. Methods And Evaluation Criteria: **Outlier-Free Attention**: Replacing Softmax with a modified Hopfield layer is innovative, but it lacks a detailed comparison to other outlier suppression techniques mentioned in supplementary material, such as clipped attention and gated mechanisms. Theoretical Claims: 1. Definition of the outlier in the context of the paper. 
Especially considering that the classical definition of an outlier in statistics and ML is that an observation is not from the distribution being modeled. 2. The theoretical analysis in Appendix B assumes non-singular weight matrices and ideal low-rank adaptation conditions. While mathematically sound, these assumptions may not hold for real-world GFMs with complex parameter interactions. Experimental Designs Or Analyses: The experimental design does not entirely make sense to me. I have several concerns: **LoRA Experiments:** I would suggest separating the experiments into two parts, if LoRA+Quantization is one of the focuses of this work. In the first experiment, we can investigate the impact of LoRA alone (with FP32/16). **Model Size**: The largest GFM used in this paper is NT-2.5B. If there are larger models available, I recommend that the authors include them in the experiments to provide a more comprehensive evaluation. Supplementary Material: Yes, I reviewed all parts of the author's supplementary material. Relation To Broader Scientific Literature: This work aligns with efforts to optimize transformer-based models for resource-constrained settings, such as QLoRA and SmoothQuant. The authors also discuss state-of-the-art models like HyenaDNA and NT-2.5B. Essential References Not Discussed: There are other GFMs that should be included in the literature review, such as Evo, which integrates Hyena operators for efficient sequence modeling. Other Strengths And Weaknesses: **Strengths**: - Novel integration of outlier-free attention with ALiBi for variable-length sequences. - Comprehensive evaluation across multiple quantization methods and adaptation strategies. **Weaknesses**: - Unclear practical impact of outlier metrics; questions such as how kurtosis reduction translates to real-world deployment gains need further discussion. - Incomplete discussion of limitations, particularly GERM-T’s performance trade-offs.
Other Comments Or Suggestions: I do not have additional comments or suggestions Questions For Authors: Questions are already listed above. I have no further questions Code Of Conduct: Affirmed. Overall Recommendation: 4
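As background for the outlier metrics this review refers to (average kurtosis, maximum infinity norm), both are typically computed over flattened activation or weight tensors. The following is an illustrative sketch only, not the paper's exact per-layer aggregation:

```python
import numpy as np

def outlier_metrics(x):
    """Kurtosis and infinity norm of a flattened activation/weight tensor.

    Heavy tails (high kurtosis) and a large infinity norm both signal
    outlier values that make low-bit quantization lossy, since the
    quantization range must stretch to cover a few extreme entries.
    """
    x = np.asarray(x, dtype=float).ravel()
    mu, sigma = x.mean(), x.std()
    kurtosis = float(np.mean(((x - mu) / sigma) ** 4))  # non-excess form
    inf_norm = float(np.max(np.abs(x)))
    return kurtosis, inf_norm
```

Under this (non-excess) convention a Gaussian tensor has kurtosis near 3, so values far above 3 indicate heavy-tailed, outlier-prone distributions.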
Rebuttal 1: Rebuttal: The updated manuscript can be accessed anonymously at [link](https://www.dropbox.com/scl/fi/itpm5n21pfu3at01bofab/germ_icml2025.pdf?rlkey=zl8noukikpz1s4493b752s9uj&e=1&st=qma9ihc9&dl=0). > **Reviewer's Comment**: a detailed comparison to ... **Response**: We thank the reviewer for the feedback and the opportunity to clarify our evaluation of outlier suppression techniques. We would like to highlight that we have already included a detailed comparison of GERM with alternative outlier suppression methods in **Table 23** and **Appendix E.8** of the supplementary material. We are happy to provide further clarification if needed. > **Reviewer's Comment**: Definition of the outlier… **Response**: We appreciate the opportunity to clarify the definition of **outliers** in our work and how it differs from the classical statistical definition. A more detailed explanation is provided in the response to the reviewer `AD8a`. > **Reviewer's Comment**: The theoretical analysis in Appendix B assumes… **Response**: Thank you for the helpful feedback. We acknowledge that our assumptions may not hold universally. However, we believe they remain practical in real-world GFMs for three reasons: 1. Weight matrices in large foundation models rarely become singular. Overparameterization and regularization typically prevent this. 2. The low-rank assumption may not hold in every scenario. Yet it helps us show the strong expressiveness of LoRA-tuned transformers with outlier-free layers. Many theoretical analyses use similar assumptions, so they form a reasonable setup. 3. Under these conditions, our results show that LoRA-tuned outlier-free transformers match or exceed the expressiveness of softmax-based architectures. These assumptions may not apply everywhere, but they do not undermine our theoretical insights. > **Reviewer's Comment**: LoRA Experiments...
**Response**: We appreciate the reviewer’s suggestion to disentangle the effects of LoRA and quantization for a more granular analysis. In our revised manuscript, we add additional experiments and analysis for LoRA+Quantization in **Table 24 and Appendix E.9**.

> **Reviewer's Comment**: larger models available?...

**Response**: Thank you for your advice to include larger models for a more comprehensive evaluation. In our study, we select **NT2.5B** as the largest model for the classification task because it represents the most suitable model architecture designed for genomic sequence classification. While larger models such as **Evo** and **GeneOcean** exist, they are fundamentally designed as generation models rather than classifiers. These models prioritize sequence generation capabilities rather than directly optimizing for classification accuracy. As a result, they differ significantly in architecture, objective function, and training strategy, so a direct comparison with GERM in a classification context is less appropriate.

> **Reviewer's Comment**: Unclear practical impact of outlier metrics...

**Response**: Thank you for the comment. We clarify that the outlier metrics, such as kurtosis and the maximum infinity norm, are empirically shown to correlate with model quantizability—that is, robustness to performance degradation under quantization [1,3]. Prior studies [2,4] demonstrate that outliers significantly reduce quantized model performance, and thus reducing these metrics has direct implications for improved quantization performance.

[1] Bondarenko, et al. "Quantizable transformers"
[2] Wei, Xiuying, et al. "Outlier suppression"
[3] Chmiel, Brian, et al. "Robust quantization"
[4] Dettmers, Tim, et al. "GPT3.int8()"

> **Reviewer's Comment**: other GFMs should include in literature review...

**Response**: We appreciate the reviewer's suggestion to expand our literature review by including additional GFMs.
In response, we incorporate Evo in the related work section of our revised version.

> **Reviewer's Comment**: Incomplete discussion of limitations?...

**Response**: We thank the reviewer for the suggestions regarding the need for a more complete discussion of GERM-T’s limitations. We update the discussion of GERM-T's limitations in our revised manuscript, specifically in **Appendix F**.

---

Rebuttal Comment 1.1: Comment: Thank you for your response and the insightful experiments. The authors have addressed most of my concerns, and I have accordingly increased my original score.

---

Reply to Comment 1.1.1: Comment: We are very pleased to have addressed your concerns and thank you very much for raising the score!
Summary: This article introduces the outlier-free Hopfield layer into the genomic foundation model to achieve a better trade-off between performance and efficiency. They also propose a continued-training approach to avoid the additional cost of training from scratch. Comprehensive experimental results demonstrate that the outlier-free design significantly reduces performance degradation during quantization or fine-tuning.

## update after rebuttal
I have read the authors' rebuttal and my concerns have been addressed.

Claims And Evidence: The article does not clearly explain what an outlier is or how to quantify it. I suggest that the authors provide quantitative or qualitative evidence to demonstrate that introducing the outlier-free structure can indeed mitigate the outlier phenomenon.

Methods And Evaluation Criteria: Yes, they have conducted comprehensive evaluations on various baselines, including downstream fine-tuning and quantization, to support their conclusions.

Theoretical Claims: NA

Experimental Designs Or Analyses: Yes, the experimental design here includes a solid and thorough ablation study.

Supplementary Material: Yes, I have reviewed their experimental results conducted in resource-constrained environments.

Relation To Broader Scientific Literature: Their research contributes to improving the accessibility of genomic foundation models by providing the community with a more lightweight yet efficient model, which helps accelerate scientific research within the community.

Essential References Not Discussed: NA

Other Strengths And Weaknesses: NA

Other Comments Or Suggestions: NA

Questions For Authors: NA

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: The updated manuscript can be accessed anonymously at [link](https://www.dropbox.com/scl/fi/itpm5n21pfu3at01bofab/germ_icml2025.pdf?rlkey=zl8noukikpz1s4493b752s9uj&e=1&st=qma9ihc9&dl=0).

> **Reviewer's Comment**: does not clearly explain what an outlier is or how to quantify it...

**Response**: We thank the reviewer for the helpful comments and the opportunity to clarify our definition of outliers and provide theoretical analysis for their mitigation.

## **Definition of Outliers in Our Work**

The definition of **outliers** in our work differs from the classical statistical definition. In traditional statistics and machine learning, outliers are typically defined as data points that fall outside the modeled distribution. In contrast, in our context, we define **outliers** as **tokens or activations that disproportionately influence the attention mechanism**, despite containing little or no meaningful information. A more detailed explanation is provided in the response to the reviewer `AD8a`.

## **Q2: How Does softmax_1 Mitigate Outliers?**

In our paper, we introduce the **Softmax1** equation, a modified softmax function designed to mitigate the effects of outliers:

$$
\text{Softmax1}(S)_i = \frac{\exp(S_i)}{1 + \sum_j \exp(S_j)}
$$

where $S = QK^\top / \sqrt{d}$ represents the scaled dot product in the attention mechanism.

**Key Improvements in Softmax1:**

1. **Suppression of Low-Value Tokens:** Unlike standard softmax, which assigns **non-zero probabilities** to all tokens — even those with highly negative scores — Softmax1 allows low-information tokens to receive **near-zero probabilities**. This behavior is crucial in genomic models where repetitive sequences or spacer elements resemble no-op tokens.
2. **Controlled Attention Distribution:** Softmax1 suppresses the broadening of the attention distribution, ensuring that the model remains focused on biologically relevant regions rather than noisy patterns.
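A minimal numerical sketch (illustrative only, not the authors' implementation) of the suppression property of Softmax1 as defined above: for uniformly very negative scores, standard softmax is still forced to distribute all probability mass, whereas Softmax1 drives every probability toward zero.

```python
import numpy as np

def softmax(s):
    # standard softmax: probabilities always sum to 1
    e = np.exp(s - s.max())
    return e / e.sum()

def softmax1(s):
    # Softmax1(S)_i = exp(S_i) / (1 + sum_j exp(S_j)),
    # computed in a numerically stable form
    m = max(s.max(), 0.0)
    e = np.exp(s - m)
    return e / (np.exp(-m) + e.sum())

# Four tokens that should all receive (near-)zero attention.
scores = np.array([-40.0, -40.0, -40.0, -40.0])
p_std = softmax(scores)   # forced to [0.25, 0.25, 0.25, 0.25]
p_one = softmax1(scores)  # each entry is about exp(-40), effectively zero
```

This is the no-op behavior discussed in the rebuttal: Softmax1's extra `1` in the denominator lets an attention head output (near-)zero total attention instead of redistributing mass onto low-information tokens.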
---

## **Theoretical Support for Softmax1’s Outlier Resistance**

- **Standard Softmax Behavior:** Standard softmax assigns non-zero probabilities to all tokens, even those that ideally should receive no attention. In extreme cases:

$$
\lim_{x_1 \to -\infty} \ldots \lim_{x_k \to -\infty} \text{Softmax}(x)_i = \frac{1}{k} > 0
$$

- **Softmax1 Behavior:** In contrast, Softmax1 ensures that tokens with highly negative scores are assigned probabilities that collapse to zero:

$$
\lim_{x_1 \to -\infty} \ldots \lim_{x_k \to -\infty} \text{Softmax1}(x)_i = 0
$$

This limiting behavior ensures that low-information tokens, such as repetitive motifs or spacer tokens, are effectively ignored, stabilizing the attention mechanism.

---

## **Empirical Evidence for Outlier Mitigation**

To validate that Softmax1 effectively mitigates outliers, we provide both **quantitative** and **qualitative** evidence:

- **Quantitative Evidence:** As shown in Table 1, GERM achieves a **92.14% reduction in kurtosis** and an **82.77% reduction in the maximum infinity norm** compared to DNABERT-2. These reductions demonstrate that GERM effectively suppresses extreme values associated with outliers.
- **Qualitative Evidence (Attention Distribution Plots):** Visualizations in **Appendix E.10** of the revision illustrate that DNABERT-2 exhibits sharp, irregular attention spikes corresponding to outlier tokens, while GERM maintains a smoother, more stable attention distribution.

---

## **Conclusion**

We appreciate the opportunity to clarify our definition of outliers and their impact on genomic foundation models. Our theoretical analysis highlights how the attention mechanism gives rise to outliers, while the Softmax1 equation mitigates their influence by reducing the amplification of low-information tokens. The combination of improved theoretical design and strong empirical evidence reinforces GERM’s ability to suppress outliers, enhancing its stability and performance in genomic modeling tasks.
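For concreteness, the two outlier statistics referenced in the quantitative evidence above (kurtosis and the maximum infinity norm of activations) can be computed as in this small sketch; the array values are made up for illustration and are not from the paper's measurements.

```python
import numpy as np

def kurtosis(x):
    # fourth standardized moment; heavy-tailed (outlier-prone)
    # activation distributions yield large values
    x = np.asarray(x, dtype=float).ravel()
    mu, sigma = x.mean(), x.std()
    return ((x - mu) ** 4).mean() / sigma ** 4

def max_inf_norm(x):
    # largest absolute activation magnitude
    return np.abs(np.asarray(x, dtype=float)).max()

well_behaved = np.tile([-1.0, 1.0], 50)       # no outliers
with_outlier = np.append(well_behaved, 50.0)  # one extreme activation
```

A single extreme activation sharply inflates both statistics, which is why reductions in them are reported as evidence of outlier suppression.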
Summary: This paper describes an adaptation of the DNABERT genomic foundation model to reduce the impact of outliers in the attention mechanism. The outlier phenomenon was first observed in large language models, where it was shown attention mechanisms can learn to pay large attention to irrelevant tokens, like the [SEP] token. This is believed to stem from cases where an attention head needs something equivalent to a no-op, where it does not attend anywhere. These outliers manifest themselves as certain embeddings in each layer having large magnitude and cause problems for quantization, due to the resulting large range of values to be quantized. Quantization is a key component of making genomic foundation models practical, in the sense that they could be deployed in APIs for non-computational scientists to use in exploratory research. The authors incorporate the outlier-free Hopfield layer of (Hu et al., 2024a) into the DNABERT model and demonstrate that this reduces the impact of outliers and retains better performance after quantization than the original BERT model. They also demonstrate that this model has better performance after low-rank adaptation and provide an efficient continual learning approach that enables the incorporation of this layer in a pretrained model without retraining from scratch, and that the performance of this version falls between the GERM model (outlier-free Hopfield layer retrained from scratch) and the original DNABERT model on almost all evaluations.

Claims And Evidence: The authors' claims, as described above, are demonstrated with a comprehensive set of experiments.

Methods And Evaluation Criteria: The authors use the same set of benchmarks as previous genomic foundation models.

Theoretical Claims: I did not check the theoretical claims (in the appendix) in detail.

Experimental Designs Or Analyses: The experiments look sound to me.
Supplementary Material: I briefly read the supplementary material, but did not check the proofs in detail.

Relation To Broader Scientific Literature: This work is highly relevant to the broader scientific literature, given the increased interest in genomic foundation models and the likely requirement to use techniques like quantization and low-rank adaptation to make their use practical.

Essential References Not Discussed: To the best of my knowledge, the authors discuss all the relevant research.

Other Strengths And Weaknesses: The paper is well-written, with clear motivation and clearly described experiments. As mentioned above, the contributions are useful and carefully evaluated. One possible criticism is that the paper combines ideas that have previously been demonstrated for large language models, rather than providing a technical innovation in itself. However, I think the paper makes a solid contribution as a piece of empirical work others can build on.

Other Comments Or Suggestions: n/a

Questions For Authors: While we have a good explanation of the source of outliers in language models, it wasn't obvious to me what kind of input feature or token would lead to an outlier in a genomic foundation model. Do the authors have an intuition for this?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: The updated manuscript can be accessed anonymously at [link](https://www.dropbox.com/scl/fi/itpm5n21pfu3at01bofab/germ_icml2025.pdf?rlkey=zl8noukikpz1s4493b752s9uj&e=1&st=qma9ihc9&dl=0).

> **Reviewer's Comment**: what would lead to an outlier in a GFM...

**Response**: Thank you for your insightful question about the nature of outliers in genomic foundation models. In our work, we define **outliers** as **tokens or activations that disproportionately influence the attention mechanism**, despite containing little or no meaningful information. These outliers emerge when softmax amplifies the attention probabilities of tokens that ideally should receive minimal or zero focus. We use the following attention mechanism to analyze this behavior:

$$
\text{Output} = \text{Residual} \left( \text{Softmax} \left( \frac{QK^\top}{\sqrt{d}} \right) V + X \right)
$$

As shown in (Hu et al., 2024), if the attention input $X$ already contains sufficient information, the attention mechanism within the residual connection should ideally behave like an **identity transform**, producing near-zero attention outputs:

$$
\text{Softmax} \left( \frac{QK^\top}{\sqrt{d}} \right) V \approx 0
$$

In such cases, tokens with **high values in $V$** — which may represent biologically significant features — should still receive **near-zero attention probabilities**.

## Why Classic Softmax Fails

The problem arises from how the softmax function normalizes probabilities. Softmax enforces that probabilities sum to 1, which inherently magnifies the attention probabilities assigned to **low-value tokens**. This unwanted amplification broadens the attention score distribution and introduces **outliers** — tokens that exert a disproportionate influence despite their low information value. Genomic models such as DNABERT-2 face a critical challenge from outliers - including repetitive elements, low-complexity regions, and non-coding segments - similar to no-op tokens.
Despite their limited biological significance, these regions receive disproportionately high attention weights through standard softmax operations, consequently suppressing the model's focus on biologically relevant genomic features.

## Intuition Behind Outliers in Genomic Models

Outliers in genomic models typically arise from sequence patterns that produce anomalous query-key interactions in the attention mechanism. Though genomic data lacks traditional "words" like language models, certain biological patterns produce similar effects. Key examples include:

### 1. **Low-Complexity Regions (e.g., Poly-A or Poly-T Sequences)**

Genomic sequences frequently contain repetitive base patterns (e.g., `AAAAA...`, `TTTTT...`). These sequences carry minimal unique information yet can produce large, uniform dot-product values in the attention mechanism. This causes softmax to assign exaggerated probabilities to these low-information tokens, effectively making them outliers.

### 2. **Repetitive Motifs and Tandem Repeats**

Genomic repetitive elements such as microsatellites and tandem repeats contain patterns (e.g., `(CA)n`, `(GAA)n` repeats) that generate artificially inflated attention scores due to their inherent self-similarity. However, such regions lack corresponding biological information, often resulting in softmax overemphasizing them as if they were biologically significant.

### 3. **Boundary and Spacer Elements (e.g., Alignment Padding or Non-coding Spacer Sequences)**

In genomic datasets, artificial padding sequences, non-coding segments, or spacer sequences are sometimes introduced to ensure proper sequence alignment. These tokens are intended to have no biological relevance, yet softmax’s behavior inadvertently amplifies their attention scores, creating noise that distorts meaningful patterns.
## Impact on Genomic Analysis

In genomic foundation models like DNABERT-2, these outliers negatively impact performance by:

- **Increasing Error Rates:** Outliers divert attention away from biologically meaningful regions, reducing prediction accuracy for tasks like mutation site identification.
- **Destabilizing Fine-Tuning:** During fine-tuning, excessive focus on low-information tokens increases noise in gradient updates, limiting convergence stability.
- **Masking Important Features:** Outliers may overshadow rare but critical genomic patterns, reducing the model’s capacity to detect subtle but meaningful biological signals.

> **Reviewer's Comment**: One possible criticism is…

**Response**: We appreciate the reviewer’s feedback on the importance of innovation. We also appreciate the reviewer's comments acknowledging that our paper makes a solid contribution as an empirical study that others can build upon. This paper presents experimental evidence demonstrating that outlier-free methods offer an efficient solution for improving low-rank adaptation and quantization. We restate our contribution in the response to the reviewer `sdac`.
Summary: This paper addresses the limitations of current GFMs, particularly DNABERT-2, when applying low-bit quantization and parameter-efficient fine-tuning methods like LoRA. The authors attribute performance degradation to outliers in attention mechanisms and propose GERM, a variant using an outlier-free attention mechanism (softmax₁). They also present GERM-T, a continual learning-based model for adapting GFMs under resource constraints. Experiments show that GERM and GERM-T perform better than baselines under low-precision and constrained environments.

Claims And Evidence: Some central claims, such as the harmful effects of outliers in attention distributions on quantization and LoRA performance, are not fully supported with direct quantitative evidence. While statistics like kurtosis and max norm are provided, detailed distributional plots or causal analysis connecting these outliers to downstream performance degradation are lacking. The claim of improved efficiency through outlier removal is empirically supported but not rigorously motivated.

Methods And Evaluation Criteria: The evaluation framework is reasonable, employing multiple benchmarks (e.g., variant effect prediction, promoter identification) and standard metrics like MCC. The methods, such as GERM and GERM-T, are clearly described and evaluated under realistic constraints (e.g., low-precision hardware). However, the lack of detailed analysis on where and how outliers hurt model behavior weakens the methodological clarity.

Theoretical Claims: Supplementary material (Section A) includes theoretical analysis on the softmax₁ function, particularly proving that it produces attention distributions with a bounded second moment, which helps prevent outlier values. While the result is cited from prior work (Hu et al., 2024), the paper correctly restates the theoretical guarantees that motivate the use of softmax₁. No new proofs are introduced, but the prior claims are accurately presented.
Experimental Designs Or Analyses: Experiments are well-structured and provide comparisons across multiple quantization settings. However, the motivation—attention outliers—is not directly evaluated through visualization or per-layer analysis of attention score distributions. This gap undermines the connection between hypothesis and results.

Supplementary Material: The appendix was reviewed, specifically Section D.7 on Nucleotide Transformer results. These confirm some generality of GERM across different GFMs.

Relation To Broader Scientific Literature: The paper builds on DNABERT-2 and outlier mitigation in transformers (e.g., softmax₁ from Hu et al. 2024). While its application to genomics is relevant and timely, the primary technique has already appeared in prior literature, reducing the novelty of the proposed approach.

Essential References Not Discussed: The authors appropriately cite related work on outlier-free transformers and GFMs including DNABERT, DNABERT-2, HyenaDNA and Nucleotide Transformer. A further reference that could enhance the current references would be Evo [1].

[1] Sequence modeling and design from molecular to genome scale with Evo

Other Strengths And Weaknesses:
- **Strengths**: Practical focus on resource-limited settings; strong empirical results under quantized and efficient setups.
- **Weaknesses**: Limited novelty (reuses existing attention method), weak quantitative motivation, and missing connection between proposed changes and biological interpretability.

Other Comments Or Suggestions: Some useful comments:
1. Clarify how “outliers” are defined and provide attention distribution visualizations.
2. Consider evaluating biological interpretability or downstream relevance more thoroughly.

Questions For Authors:
1. Can you provide attention score distributions (e.g., histogram or density plots) to support the outlier hypothesis?
2. What is the operational definition of an “outlier” in your analysis (e.g., percentile-based, norm threshold), and how does it relate to and impact the biological findings?
3. Given that softmax₁ is not novel, what do you consider the core technical contribution of this work?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: The updated manuscript can be accessed anonymously at [link](https://www.dropbox.com/scl/fi/itpm5n21pfu3at01bofab/germ_icml2025.pdf?rlkey=zl8noukikpz1s4493b752s9uj&e=1&st=qma9ihc9&dl=0).

> **Reviewer's Comment**: However, the motivation …

**Response**: We thank the reviewer for suggesting a direct attention score analysis to validate the outlier hypothesis. To address this point, we add detailed visualizations of the attention score distributions in **Figure 3 and Appendix E.10**, where we compare DNABERT-2 and GERM. The results demonstrate that GERM suppresses attention outliers, leading to more focused and efficient attention patterns. This analysis provides direct evidence supporting our hypothesis. We clarify this connection in the revised manuscript and invite the reviewer to examine Figure 3 and Appendix E.10 for a detailed analysis of these findings.

> **Reviewer's Comment**: While statistics like ...

**Response**: We sincerely appreciate the reviewer’s insightful suggestion about the need to more rigorously establish the connection between attention outliers and downstream performance degradation. In response, we have added detailed attention score distribution visualizations in Appendix E.10 and Figure 3, including per-layer heatmaps comparing DNABERT-2 and GERM. For a formal definition of “outliers”, we respectfully refer the reviewer to our response to Reviewer `AD8a`. We also discuss in our response to Reviewer `8UsM` how existing literature has demonstrated the negative impact of attention outliers on model performance. We believe these additions strengthen the theoretical and empirical motivation of our work and clarify the causal relationship between attention outliers and performance.

> **Reviewer's Comment**: Limited novelty…

**Response**: We appreciate the reviewer’s feedback on the importance of novelty.
While our approach builds upon established methodologies, we would like to highlight our key innovation: we are the first to integrate outlier removal to simultaneously enable **(1) robust quantization and (2) accelerated low-rank adaptation** for genomic foundation models. This contribution is important because genomic data presents unique challenges—such as extreme sparsity, high variability, and frequent outliers in attention mechanisms—which differ significantly from those in traditional NLP.

1. **First to achieve accelerated LoRA for Genomic Models via Systematic Outlier Removal**: Our work pioneers the use of LoRA in genomic foundation models like DNABERT-2. Applying LoRA directly to DNABERT-2 results in performance degradation due to genomic data-specific outliers. This novel integration of the outlier-free Hopfield mechanism enables effective low-rank adaptation alongside robust quantization, achieving a 37.98% improvement in fine-tuning performance compared to DNABERT-2.
2. **Adapting Techniques for Genomic Challenges**: The genomic domain demands unique adaptations due to sparse and highly variable tokenization methods like k-mer and BPE. The outlier-free Hopfield layer required significant adjustments to mitigate domain-specific outliers, reducing kurtosis and infinity norm values by 92.14% and 82.77%, respectively, across 27 genomic datasets. This ensures both robust quantization and efficient fine-tuning in resource-constrained settings.
3. **GERM-T for Continual Learning**: Beyond LoRA, GERM-T introduces a novel continual learning strategy that avoids training from scratch while effectively leveraging outlier-free layers. This approach focuses on resource-constrained genomic research, making adaptable fine-tuning possible without compromising performance.
4. **Empirical Validation**: Our work provides the first empirical evaluation of integrating outlier mitigation into LoRA fine-tuning and quantization for genomic models, achieving a 64.34% improvement in quantization robustness compared to DNABERT-2. These results validate the effectiveness of our proposed modifications and their impact on genomic tasks.

By adapting and extending these methods, we address domain-specific challenges while advancing genomic modeling. We hope this response clarifies the novelty and significance of our contributions, and we are happy to provide further details or analyses if needed.

> **Reviewer's Comment**: What is the operational definition …

**Response**: We appreciate the opportunity to provide more details about the significance of outliers in transformer-based models. A more detailed explanation is provided in the response to the reviewer `AD8a`.

> **Reviewer's Comment**: Further reference …

**Response**: We appreciate the reviewer's suggestion to expand our literature review by including additional GFMs. In response, we incorporate Evo in the related work section of our revised version.
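To make the link between activation outliers and quantization robustness concrete, here is a small illustrative sketch of the general absmax-quantization principle (our illustration, not the authors' pipeline): a single outlier inflates the quantization scale and increases the reconstruction error for all other values.

```python
import numpy as np

def absmax_int8_roundtrip(x):
    # symmetric absmax int8 quantization: the scale is set by the
    # largest absolute value, so outliers coarsen the grid for everyone
    scale = np.abs(x).max() / 127.0
    q = np.round(x / scale).astype(np.int8)
    return q.astype(np.float64) * scale  # dequantized values

rng = np.random.default_rng(0)
acts = rng.normal(size=1000)

err_clean = np.abs(absmax_int8_roundtrip(acts) - acts).mean()

acts_outlier = acts.copy()
acts_outlier[0] = 100.0  # one extreme outlier activation
# error measured on the non-outlier entries only
err_outlier = np.abs(absmax_int8_roundtrip(acts_outlier)[1:] - acts_outlier[1:]).mean()
```

The per-element error on the well-behaved activations grows by more than an order of magnitude once the outlier sets the scale, which is the mechanism behind the "outliers hurt quantization" claims cited throughout this rebuttal.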
Separating Knowledge and Perception with Procedural Data
Accept (poster)
Summary: The paper introduces a novel method to fully compartmentalize visual memory by training representation models exclusively with procedural data, thus eliminating the risks associated with privacy and bias inherent in real-world data. The main findings include achieving near state-of-the-art performance on standard benchmarks: procedural models perform comparably to or better than models trained on real data on fine-grained classification tasks and show strong zero-shot segmentation abilities. Importantly, the approach enables perfect data unlearning by simply removing images from the visual memory database, without requiring retraining.

Claims And Evidence: The paper's claims are well-supported by experimental evidence.

Methods And Evaluation Criteria: The methods and evaluation criteria are well-chosen and appropriate.

Theoretical Claims: The paper makes no explicit formal theoretical claims that require proof verification.

Experimental Designs Or Analyses: The experimental analyses are generally sound. Specific analyses reviewed include:
1. Classification accuracy across multiple fine-grained and general datasets: Methodology sound; experiments clearly conducted. But the details need to be clarified.
2. PCA and visual analysis of representations: Provides a helpful and rigorous visual assessment of model embedding qualities and limitations.
3. Privacy evaluation: The comparison of memory-based model accuracy vs. non-private training images is insightful and convincing.

Supplementary Material: The submission does not include supplementary material.

Relation To Broader Scientific Literature: The paper situates itself well within existing literature on procedural data learning, memory-based models, and privacy in AI. It builds explicitly upon previous procedural image generation work and visual memory approaches. The comparison with prior methods is clearly articulated, and contributions such as new procedural data generation methods are well-positioned within the existing literature.

Essential References Not Discussed: The paper cites key related work effectively.

Other Strengths And Weaknesses:
Strengths:
1. The approach is novel, clearly motivated, and practical in contexts with high privacy concerns.
2. Experiments and analyses are thorough, demonstrating effectiveness in multiple domains, including fine-grained classification and segmentation.
3. Well-organized and clearly written, with useful visualizations (e.g., Figures 9 and 10).

Weaknesses:
1. Limitations around segmentation due to excessively local procedural embeddings are noted but not deeply addressed. Further insights or solutions could strengthen the paper.

Other Comments Or Suggestions: See questions.

Questions For Authors: Can you clarify all baselines' settings? For example, how were the models in Table 1 trained and fine-tuned? What dataset is used?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and suggestions, and for agreeing that our approach is novel and shows strong abilities with perfect data unlearning. Below, we clarify how to run our baselines. We will also add a new figure to the camera ready that makes the connection between the limitations at KNN segmentation and classification more explicit.

## 1 Clarification of baselines

We train a vision transformer (Small ViT) for each dataset (ImageNet, Places, Shaders KML Mixup, Shaders KML, Shaders Mixup, Shaders, and Stylegan), using the recipe and architecture of the original DINO paper [1]. In particular, we used the hyperparameters that yielded the best results for the original DINO on ImageNet for all models, rather than hyper-optimizing for performance on each specific dataset. This results in a much more rigorous evaluation, as the optimal ImageNet hyperparameters are more likely to be bad than good for procedural non-realistic data. These hyperparameters are: learning rate 1e-3, batch size 512, optimizer AdamW, num epochs 100, and DINO head out dim 65536. We will include them in the Supplementary Material of the camera ready. These models are then used without any fine-tuning to obtain all the results, including Figure 5, Table 1, Figures 9 and 10, and Tables 2 and 3.

### 1.1 Table 1

As mentioned above, we train a single ViT model on each real and procedural dataset. The second column of Table 1 shows the dataset, while the first shows the type of data: target, realistic, and procedural. The white-box models are non-neural approaches included for reference. We obtain the numbers using the nearest-neighbours evaluation method of the original DINO paper [1]. Normalized embeddings are calculated for the train and validation splits, and class predictions for the validation examples are obtained by taking a majority vote of the nearest neighbours in the training set.
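The normalized-embedding, majority-vote procedure just described can be sketched as follows (a minimal illustration with made-up toy data, not the actual evaluation code):

```python
import numpy as np

def knn_classify(train_emb, train_labels, query_emb, k=20):
    # Normalize embeddings so a dot product equals cosine similarity.
    train_emb = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    query_emb = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    sims = query_emb @ train_emb.T             # (n_query, n_train)
    nn_idx = np.argsort(-sims, axis=1)[:, :k]  # k nearest training examples
    # Majority vote over the neighbours' class labels.
    return np.array([np.bincount(train_labels[row]).argmax() for row in nn_idx])

# Toy data: two classes in a 2-D embedding space.
train_emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
train_labels = np.array([0, 0, 1, 1])
query_emb = np.array([[1.0, 0.05], [0.05, 1.0]])
preds = knn_classify(train_emb, train_labels, query_emb, k=2)  # -> [0, 1]
```

In this memory-based setup, the "training set" embeddings play the role of the visual memory, so removing rows from `train_emb`/`train_labels` is all that unlearning requires.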
We evaluate all models using 10, 20, 100, and 200 neighbours and keep the best result.

### 1.2 Table 2 and Figure 9

For Table 2, we again take the exact same models (without further training) and compute dense embeddings on the COCO validation dataset. We then apply PCA with 3 channels, which are visualized in Figure 9, and compute their R^2 correlation with the human segmentation labels.

### 1.3 Table 3

For Table 3, we do the same as for Table 1 but with the medical MNIST datasets.

### 1.4 Figure 5

For Figure 5, we use the original DreamSim [2] code for evaluating baseline models. In particular, they obtain embeddings for the reference, option A, and option B images, and choose option A if it has greater cosine similarity with the reference than B, and vice-versa.

## 2 Expanding on excessively local embeddings

We will include the hard-label equivalents of Figure 9, which the reviewer thought was insightful, and Figure 12 in the camera ready. In contrast to the soft-label PCA, this figure shows a hard segmentation of the image. The soft-label PCA provides an overview of the model’s internal representation at all granularities, while the hard-label PCA delves deeper into a specific granularity. Combined, they offer a more complete analysis of the model’s capabilities. From the hard-label PCA, we gain further evidence that while realistic-data models recognize and segment images into objects, procedural-data models cannot, due to having never seen them during training. This figure makes the link between limitations at KNN segmentation and classification explicit. Excessively local representations that do not encompass whole objects impair searching for similar images both locally for segmentation and globally for classification (when pooled into a class token).
A potential solution could consist of more sophisticated algorithms than Nearest Neighbours that take into account a larger context and can thus bypass the excessive-locality limitation, but we leave this to future work. If the reviewer found these additional experiments and explanations convincing, we kindly ask them to raise their score to Accept.

- [1] Emerging Properties in Self-Supervised Vision Transformers; Caron et al.
- [2] DreamSim: Learning New Dimensions of Human Visual Similarity using Synthetic Data; Fu et al.
Summary: This paper introduces a memory-based approach to visual perception by training an embedding model solely on procedurally generated data, then using real data embeddings in a separate memory for classification and segmentation tasks. The authors emphasize advantages in unlearning and privacy, aiming to decouple training on real data from downstream usage. Claims And Evidence: The authors claim improved control over unlearning and privacy with minimal reliance on real data, and they provide empirical demonstrations supporting these benefits. Evidence for the overall performance is present but could be strengthened with additional baselines. Methods And Evaluation Criteria: Using a memory-based k-nearest neighbors (KNN) approach and various procedural data sources makes sense for testing the proposed unlearning/privacy framework. The paper employs established benchmark tasks for classification and segmentation to illustrate feasibility. Theoretical Claims: No formal proofs are offered. The core theoretical motivation revolves around the idea that separating real data from the training process (by relying on procedural data and a memory-based scheme) should mitigate privacy risks. However, the paper does not provide a fully rigorous explanation of how procedural data itself, beyond simpler domain adaptation arguments, translates into enhanced interpretability or privacy guarantees. Experimental Designs Or Analyses: The unlearning experiments, where samples can be removed from the memory without retraining the embedding model, fit logically with the proposed method. Nevertheless, a direct comparison to standard classifier-based unlearning methods would help clarify whether the current approach strikes the best balance between accuracy and privacy. Supplementary Material: No supplementary material submitted. 
Relation To Broader Scientific Literature: This work aligns with existing research on using synthetic data for privacy-preserving machine learning. Essential References Not Discussed: Related works are well discussed. Other Strengths And Weaknesses: 1. Motivation for memory-based approach: Beyond simplifying unlearning, it is unclear what additional advantages this framework offers; the paper’s rationale would benefit from a more convincing demonstration of its utility beyond privacy concerns. 2. Accuracy trade-offs: The KNN-based classification and segmentation yield relatively low accuracy compared to contemporary classifiers (often exceeding 80% on ImageNet-1K). The paper should compare against a “classifier + unlearning” pipeline to demonstrate whether the privacy gains truly justify the performance costs. 3. Procedural data domain gap: The reliance on procedurally generated imagery raises questions about domain shifts, as these synthetic images differ stylistically from real-world data. The authors do not fully analyze how this gap may contribute to reduced accuracy, nor provide ablation studies to quantify its impact. Other Comments Or Suggestions: I suggest that the authors additionally evaluate on OOD variants of ImageNet such as ImageNet-v2, ImageNet-Sketch, etc. Questions For Authors: See comments above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and for agreeing that we provide empirical demonstrations of the benefits of our approach. In particular, we appreciate their identification of areas which could be strengthened, such as clarifying the benefits of our approach and additional baselines, which we address below.

## 1 Motivation for memory approach

Our work builds on top of prior research on memory, in particular Geirhos et al. 2024. They argue that precisely because the real world changes, a static model (such as a standard classifier) would require constant retraining or fine-tuning, which is not feasible, especially given the scale of modern models. A flexible visual memory allows for efficiently adding even billion-scale data, removing data through unlearning, and an interpretable decision mechanism. However, prior approaches had a problem: the feature embeddings are themselves trained on data. We identify four consequences. First, while adding and removing data from the memory is easy, doing so from the embeddings is not. Second, a large corpus of data is required to train the embeddings, which may be unavailable (low-resource setting). Third, training on the data may not be allowed (privacy setting). Lastly, counterfactual interpretability with data used to train the embeddings is difficult. Our proposal to use procedural data effectively addresses all four.

### 1.1 Efficient unlearning

Procedural embeddings allow for efficient, provable unlearning of all real data used. Classifier unlearning methods fine-tune or retrain the weights, which is expensive and not infallible. Moreover, they also use the "contaminated" weights as an initial point, which would not satisfy a legal request. In these situations, where classifier unlearning is not feasible or allowed, procedural data is effective and efficient.

### 1.2 Low resource setting

Training an embedding model requires lots of data. In situations where it is not available, procedural data yields strong performance.
As seen in Table 1, procedural models actually beat the Places model on some fine-grained classification tasks. This shows that procedural data may sometimes perform better than out-of-distribution real data.

### 1.3 Privacy setting

If directly training on the data is not acceptable, our approach offers an elegant solution. With procedural embeddings, any medical information is non-existent in the model's weights. Moreover, as we see in Table 3, procedural models match or exceed the best result from the original MedMNIST paper [1] in 7/10 datasets.

### 1.4 Counterfactual interpretability

Prior work in AI ethics identified the concept of a "right to explanation" for people affected by automated decisions. The paper [2] showed that a "counterfactual", defined as the data that, if it were absent, would change the model decision, satisfies key requirements for both people and firms. Computing counterfactuals requires efficient evaluation of "what if" scenarios in the absence of certain data, which we can obtain with unlearning. Our method enables counterfactual analysis for all real data used. Given a decision, we search for the counterfactual by sequentially unlearning the top neighbour and checking if the answer changes.

## 2 Accuracy trade-off

Our setting is self-supervised learning without labeled data. We use classification only as a proxy task for evaluation. In this setting, training on ImageNet obtains 69% accuracy, while training on realistic data (Places) obtains 47%. Training on procedural data is within 10% of the latter, which is quite significant given it is non-realistic. We are not saying that procedural data should be used when standard classifiers are possible and appropriate, only that in situations such as those described in point 1, procedural data is highly performant and desirable.

## 3 Procedural data domain gap + additional baselines

We actually have that discussion in the paper from line 271 to the end of page 5, in Figure 11, and in Section 5.
We mention how the lack of real-world objects in procedural data leads to the performance gap. The additional baselines requested can be found below. The results are very similar to those in the original paper. On the three ImageNet-v2 variants (INv2 Top Quality, Threshold 0.7, and MatchedFrequency), the gap of the best procedural model to Places is still 10%. On ImageNet-Sketch (IN-S) it's even better, as the gap to Places is 3%: procedural models are able to search for similar neighbour sketches. In the fine-grained dataset Stanford Dogs, procedural models again beat Places.

| Dataset | INv2-TQ | INv2-Th | INv2-MF | IN-S | Dogs |
| - | - | - | - | - | - |
| ImageNet | 69.9 | 64.4 | 56.6 | 66.5 | 65.6 |
| Places | 46.1 | 40.8 | 35.4 | 56.6 | 22.2 |
| S. KML Mixup | 35.3 | 30.7 | 26.0 | 52.9 | 29.5 |

- [1] MedMNIST Classification Decathlon; Yang et al.
- [2] Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR; Wachter et al.

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed response. While some of my questions are addressed, my major concern remains unresolved. Compared to merely explaining the advantages of memory-based methods, I would prefer to see direct metrics that prove the superiority of your approach. While the current claims intuitively make sense, they are not sufficiently persuasive. When new data arrives, changing already-trained embeddings is indeed more challenging, but it requires less storage space and introduces less additional computational load than KNN. Therefore, the authors need to carefully compare the accuracy trade-offs between your model and traditional classifiers, but I have not seen any explicit analysis and results so far. Moreover, the main work of this paper is applying existing memory-based approaches to procedural data, with limited innovation or contribution to support an ICML-level publication. Thus, I maintain my original score of a rejection.
Summary: This paper introduces a novel process for training neural networks using procedural data: Shaders KML and Shaders KML Mixup. Despite relying on simple programs and lacking any direct resemblance to the target distribution, the proposed method achieves impressive results in K-NN-based classification. The paper’s core contribution is K-Means Leaves (KML), a data-driven masking strategy that replaces simplistic masking with k-means clustering to diversify training patterns. This approach significantly enhances the variety of features leveraged during training, addressing a key limitation of traditional procedural data generation. The paper further demonstrates the versatility of KML by showcasing its benefits across multiple applications, including memory-based classification (via nearest neighbors), semantic segmentation, unlearning tasks, and privacy-preserving scenarios with formal guarantees. Claims And Evidence: The paper compellingly demonstrates the broad utility of the procedural method for classification and segmentation tasks. That said, the performance improvements between Shaders and Shaders Mixup appear modest in most benchmarks (excluding CUB). This suggests the added value of Mixup in this framework may merit further discussion, particularly regarding the claim that increased dataset diversity is a “key driver of performance.” To bolster this argument, it might help to clarify how the KML masking procedure contributes—for example, by highlighting whether specific components (e.g., feature types, training phases) benefit disproportionately, or if certain induced biases (e.g., geometric consistency) play an underappreciated role. Methods And Evaluation Criteria: Yes. Theoretical Claims: No theoretical proofs were considered.
Experimental Designs Or Analyses: I checked the designs for:

* Alignment with humans on the NIGHTS dataset; it would be good to provide significance values on the difference between Shaders and the improvement proposed in the paper.
* K-NN classification, on fine-grained and general ImageNet datasets.
* Segmentation performance.

Supplementary Material: I didn't find the supplementary material. Relation To Broader Scientific Literature: The paper contributes to the area of training with procedural data. Essential References Not Discussed: I think the paper cites the most recent work in this domain. Other Strengths And Weaknesses: Strengths

* The paper shows the application of procedural training on multiple tasks, extending previous papers on the same topic.
* The method is actually quite simple, but it shows promise in advancing the frontier on the topic.

Weaknesses

* It would strengthen the paper to provide more evidence on the benefit of, and the rationale behind, the masking procedure with respect to previous works in the area.

Other Comments Or Suggestions:

* Perhaps it is better to label the method as "ours" throughout to ease readability and clarity.

Questions For Authors:

* Maybe not strictly necessary given the visual memory perception approach, but what is the expected upper bound of these models? i.e., linear decoding for classification instead of KNNs.
* If the above is done, an explainability analysis would also give very interesting insights into what features are learned by these models.

Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for agreeing that the approach is novel and has broad utility, as well as for their insightful comments, questions, and suggestions. We answer the questions and follow up on the suggestions below.

## 1 Benefits of KML over Mixup

We agree the benefits of KML may initially appear modest (sans CUB). As suggested, we clarify that KML especially improves the variety of visual stimuli the model is exposed to, which leads to a greater variety of features for downstream tasks. To show this, we compute the Vendi Diversity (VD) with the ImageNet model, and the linear classifier performance. These two metrics provide additional evidence of the benefits of KML, complementary to KNN classification accuracy. Moreover, the improvements in KNN classification are better observed by looking at the relative reduction in the gap to real data rather than the raw increase.

### 1.1 VD

VD [1] measures the entropy of the eigenvalues of the similarity matrix of pairs in a dataset. It has been used for generative modelling and dataset curation in many domains, such as molecules and images. We use the ImageNet model to create the similarity matrix, acting as a proxy for human vision to measure the diversity of visual stimuli in our procedural datasets.

- S. KML Mixup: 38.6
- S. KML: 36.5
- S. Mixup: 30.6
- Shaders: 22.1

Indeed, while Mixup improves the diversity of Shaders by 8.5, our KML improves it by 14.3. Combining KML and Mixup yields a larger improvement of 16.5. Our intent in line 176 was not to make the claim ourselves but rather to cite [2], which showed that improved diversity is a key driver of performance for non-realistic data, as our motivation for developing KML. We apologize for the confusion.

### 1.2 Linear decoding

We thank the reviewer for their suggestion of analyzing linear decoding. We had actually trained linear decoders on ImageNet, but did not include them initially as we focused on visual memory perception.
We will include these numbers, as well as Grad-CAM [3] visualisations for explainability, in the camera-ready supplementary material.

- S. KML Mixup: 47.3
- S. KML: 47.1
- S. Mixup: 44.8
- Shaders: 43.1

With linear decoding, S. KML beats S. Mixup by 2.2%, despite the KNN performances being equivalent. Moreover, the gains from adding Mixup to both Shaders and S. KML are much smaller. This suggests that Mixup mainly reduces bad features, which can also be pruned by the decoder, while KML yields either better or a greater number of useful features. The two approaches are complementary, which is why Shaders KML Mixup obtains the strongest performance overall.

### 1.3 Relative gap reduction

Additionally, gains in classification accuracy are better observed by looking at the relative reduction in the gap to real data rather than the raw increase. This is because of diminishing returns as we get closer to the performance ceiling of procedural data (given by realistic data). Measured relatively, our KML reduces the gap by 18.5% and 20.5% on Flowers and ImageNet respectively. The relative improvement on CUB is actually a very similar 20.4%. The Food dataset is a special case, as S. Mixup matches the real-data Places model but our S. KML Mixup beats it. We calculate the relative increase as `(s_kml_mixup_accuracy - s_mixup_accuracy) / (real_data_ceiling - s_mixup_accuracy)`, using the ImageNet model as the ceiling for Flowers and CUB classification, and the Places model for ImageNet classification.

## 2 Significance values of NIGHTS

We agree that differences are minimal. NIGHTS was included as a task where procedural models have reached the level of real models. A z-test determined that Places, S. KML, and Shaders are all equivalent at the 5% level. We will add this result and the standard deviations for Fig. 5 to the camera ready.

If the reviewer found these additional experiments and explanations convincing, we kindly ask them to raise their score to Accept.
- [1] The Vendi Score: A Diversity Evaluation Metric for Machine Learning; Friedman et al.
- [2] Learning to See by Looking at Noise; Baradad et al.
- [3] Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization; Selvaraju et al.
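For reference, the Vendi score cited as [1] above reduces to a few lines; this is a minimal NumPy sketch of the metric's definition (the diversity numbers in the rebuttal build the similarity matrix from the ImageNet model's embeddings):

```python
import numpy as np

def vendi_score(embeddings):
    """Vendi score: exponential of the entropy of the eigenvalues of K/n,
    where K is the cosine-similarity matrix of the n samples."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    lam = np.linalg.eigvalsh(X @ X.T / len(X))
    lam = lam[lam > 1e-12]                      # drop numerical zeros
    return float(np.exp(-np.sum(lam * np.log(lam))))

# sanity checks: n identical samples score ~1, n orthogonal samples score ~n
print(vendi_score(np.tile([1.0, 0.0, 0.0, 0.0], (4, 1))))  # ~ 1.0
print(vendi_score(np.eye(4)))                              # ~ 4.0
```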
Adapting While Learning: Grounding LLMs for Scientific Problems with Tool Usage Adaptation
Accept (poster)
Summary: The paper **"Adapting While Learning: Grounding LLMs for Scientific Problems with Tool Usage Adaptation"** introduces **Adapting While Learning (AWL)**, a two-component fine-tuning approach that enables LLMs to intelligently decide when to rely on internal reasoning or external tools for solving scientific problems. The first component, **World Knowledge Learning (WKL)**, trains LLMs to internalize knowledge by learning from tool-generated solutions, while the second, **Tool Usage Adaptation (TUA)**, classifies problems as easy or hard and trains the model to use tools only when necessary. Tested on six scientific benchmarks—including custom datasets in climate science, epidemiology, and mathematics—the method improves answer accuracy by **28.27%** and tool usage accuracy by **13.76%**, even surpassing **GPT-4o** and **Claude-3.5** on specialized tasks. AWL introduces a **more efficient and adaptive problem-solving paradigm**, reducing over-reliance on costly computational tools while enhancing scientific reasoning in LLMs. Claims And Evidence: The main contributions of this paper are: (1) The authors construct 4 datasets that cover various scientific domains. The descriptions and construction method should be clearly stated in the main text; however, I cannot find this, as these details are stated only in the supplementary materials. (2) The second innovation is the designed Adapting While Learning scheme, whose two components are World Knowledge Learning and Tool Usage Adaptation. However, I cannot understand why World Knowledge Learning and Tool Usage Adaptation can be summarized as "adapting while learning"; it seems more like adapting during inference? (3) The World Knowledge Learning and Tool Usage Adaptation components are not novel. The first is very similar to SFT.
The tool usage learning method is also largely borrowed from "Learning to Use Tools via Cooperative and Interactive Agents" (Zhengliang Shi, Shen Gao, Xiuyi Chen, Yue Feng, Lingyong Yan, Haibo Shi, Dawei Yin, Pengjie Ren, Suzan Verberne, Zhaochun Ren). Methods And Evaluation Criteria: Tool Usage Accuracy: The metric for tool usage accuracy is not clearly defined. It is unclear whether this metric assesses the model's ability to choose correctly between internal reasoning and tool usage, or if it measures the correctness of tool-assisted answers. A precise definition of this metric is crucial for interpreting the results accurately. The overall method is reasonable. However, these methods seem not to be novel, as discussed in Claims And Evidence. Also: The TUA component is designed to enable the model to decide when to use external tools based on problem difficulty. However, the criteria for classifying problems as 'easy' or 'hard' are not thoroughly explained. A clear understanding of this classification is essential to assess the validity of the tool usage decisions and the overall effectiveness of the TUA mechanism. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: While code is provided in the supplementary materials, could you provide comprehensive information on the four custom-created datasets, including data collection methods, annotation processes, and validation protocols? Additionally, will these datasets be made publicly available to facilitate reproducibility and further research? How do you define and measure 'tool usage accuracy'? Does this metric assess the model's ability to decide appropriately between internal reasoning and tool usage, or does it evaluate the correctness of tool-assisted answers? What specific criteria or thresholds do you use to classify problems as 'easy' or 'hard' within the TUA component?
How consistent are these classifications across different domains and datasets? Supplementary Material: Yes. The dataset and the code. Relation To Broader Scientific Literature: The paper's key contributions align with existing research on enhancing large language models (LLMs) through adaptive tool usage and internal knowledge integration. The proposed **Adapting While Learning (AWL)** framework, comprising **World Knowledge Learning (WKL)** and **Tool Usage Adaptation (TUA)**, mirrors efforts in **Retrieval-Augmented Generation (RAG)**, where LLMs access external information to improve accuracy and reduce hallucinations. Additionally, the concept of LLMs autonomously determining when to utilize external tools parallels advancements in creating AI agents capable of reasoning and acting, such as the **ReAct** pattern, which integrates LLMs as planners that "think out loud" before executing actions. By constructing new scientific datasets across various domains, the paper contributes to the broader endeavor of grounding LLMs in specialized knowledge, enhancing their problem-solving capabilities in complex scientific contexts. Essential References Not Discussed: The paper's key contributions would benefit from discussing related works that have explored similar themes of adaptive tool usage and reasoning in large language models (LLMs). 1. **Program-Aided Language Models (PAL):** Gao et al. introduced PAL, where LLMs generate Python programs to solve complex problems, effectively integrating external computational tools to enhance reasoning capabilities. 2. **Automatic Multi-Step Reasoning and Tool-Use (ART):** Paranjape et al. proposed ART, enabling LLMs to perform multi-step reasoning and decide when to invoke external tools, aligning closely with the adaptive tool usage discussed in the current paper. 3. **TaskMatrix.AI:** Liang et al. 
developed TaskMatrix.AI, connecting foundation models with millions of APIs, thereby allowing LLMs to interact with external systems and tools to complete various tasks. 4. **Gorilla:** Patil et al. presented Gorilla, an approach where LLMs are connected with massive APIs, facilitating dynamic tool usage based on the task requirements. Other Strengths And Weaknesses: The main weaknesses of this paper are its novelty and its writing. Other Comments Or Suggestions: No. Questions For Authors: At this time, I do not have additional issues to raise beyond those already discussed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We address your concerns below:

---

## Information on the custom-created datasets

> "...The descriptions, and construction method should be clearly stated in the main text. However, I can not find this...."

> "Could you provide comprehensive information on the four custom-created datasets, including data collection methods, annotation processes, and validation protocols?"

In our current paper, key information about the four custom-created datasets is already presented in Section 4.1 and Figure 2 in the main text. The construction methods are detailed in Lines 220-230 and illustrated in Figure 2. More detailed information is presented in Appendix A. We will enhance clarity in our revised paper and guide readers to the corresponding sections.

> "Additionally, will these datasets be made publicly available to facilitate reproducibility and further research?"

Yes, the datasets and code will be made public upon publication.

## Clarification of "Adapting While Learning"

> "... However, I can not understanding why World Knowledge Learning and Tool Usage Adaption can be summarized as adapting while learning. I seems to adapting while inference?"

"Adapting While Learning" refers to adaptation occurring during the knowledge-learning process. The "learning" refers to the World Knowledge Learning phase, where the model learns scientific knowledge through tool interactions. Both adapting (tool usage adaptation) and learning (world knowledge learning) are built into the loss term of the fine-tuning process (Eq. 6), not into inference. If one were to create "adapting while inference", one would need additional procedures (e.g., Monte Carlo Tree Search) during inference, which is not the case in our framework.

## Novelty of our work and relationship with previous literature

> "The world knowledge learning and tool usage adaptation are not novel. The first one is very similar to SFT."

There appears to be a misunderstanding.
Our innovation is NOT in SFT technology itself, but in adapting it to enable a new LLM-based learning paradigm for scientific problem-solving: learning knowledge from scientific simulators, and learning to decide when to seek help from tools depending on problem complexity. Similar to how human scientists approach a problem, our training paradigm enables LLMs to use scientific simulation tools adaptively. We demonstrated strong empirical performance in several scientific domains, including climate science, epidemiology, etc.

> "The tool usage learning method is also largely borrowed from 'Learning to Use Tools via Cooperative and Interactive Agents'"

Our work is fundamentally different from theirs. The work of Shi et al. (2024) primarily focuses on constructing datasets to enhance tool usage ability, i.e., they only focus on tool calling, which can lead to over-reliance on tool calls (see our results in Table 2). This potential over-reliance is actually a motivation for developing this work.

## Definition of Tool Usage Accuracy

> "The metric for tool usage accuracy is not clearly defined."

> "How do you define and measure 'tool usage accuracy'? ..."

Tool usage accuracy is clearly defined in Section 4.3 (Lines 321 left col. - 303 right col.) in the main text, and we provide 5 additional potential tool usage metrics in Appendix E. This metric assesses the model's ability to choose correctly between internal reasoning and tool usage. We will simplify our writing to better guide readers in our revised version.

## Easy/Hard Problem Classification

> "The criteria for classifying problems as 'easy' or 'hard' are not thoroughly explained."

> "What specific criteria or thresholds do you use to classify problems as 'easy' or 'hard' within the TUA component? How consistent are these classifications across different domains and datasets?"

We describe this classification in Lines 186-193 and provide additional details for open-ended questions in Lines 248-254.
Questions are classified as "easy" or "hard" based on the model's direct-answer accuracy: if a model answers correctly without tools, the question is "easy"; otherwise, it is "hard". This classification is dynamic across datasets and models, but the dividing rule is consistent. We will simplify our writing for more concise and clear information delivery in the revised paper.

## References Not Discussed

Thank you so much for these suggested references. We will discuss them in the "LLM Tool Usage" paragraph in Section 2 of the revised paper. We kindly note that Gorilla is already included (Line 20, Line 100, right col).

---

Thank you again for your time and effort. We hope our response addresses your concerns. If the issues have been resolved, we’d appreciate your consideration in the evaluation. Please feel free to share any additional feedback.

---

Rebuttal Comment 1.1: Comment: I have read the rebuttal carefully. The rebuttal addresses my concerns.
Summary: This paper presents Adapting While Learning (AWL), a novel fine-tuning method to improve the performance of LLMs on scientific problems by adaptively using external tools based on question complexity. It tackles the issue of either hallucinations introduced through fine-tuning or excessive reliance on tools by training models in two stages: World Knowledge Learning for internalizing solutions provided by tools, and Tool Usage Adaptation for adaptive tool-usage decisions based on problem difficulty. The authors evaluate the approach using six benchmarks spanning mathematics, physics, epidemiology, and climate science, achieving significant accuracy improvements and outperforming state-of-the-art models such as GPT-4o and Claude-3.5. Four new scientific datasets are developed to further test the effectiveness of the proposed method. The results demonstrate that the AWL model substantially improves both answer accuracy (28.27%) and tool usage accuracy (13.76%), surpassing state-of-the-art models on complex, custom datasets. Claims And Evidence: **Claim:** The authors claim that AWL improves accuracy and optimizes tool usage decisions adaptively. **Evidence:** Empirical evaluations show significant accuracy improvements (up to 28.27% higher answer accuracy and 13.76% improved tool accuracy). **Claim**: AWL significantly outperforms both baseline and state-of-the-art LLMs on custom and challenging scientific benchmarks. **Evidence**: Comparison against state-of-the-art models (GPT-4o and Claude-3.5) on both public (MATH, SciBench) and four custom-created scientific datasets supports the claim that the proposed method outperforms existing models, especially on specialized, novel datasets. Methods And Evaluation Criteria: * Supervised Fine-tuning and Preference Optimization methods are used to teach LLMs the internal scientific knowledge generated from tools.
* Tool Usage Adaptation (TUA) classifies the problems into "easy" and "hard" based on the LLM’s performance without tools, adapting the training approach accordingly. * The answer accuracy is measured based on correct responses to both multiple-choice and numerical-answer problems. * The tool usage accuracy evaluates whether the model correctly decides when to use or skip tools based on the complexity classification (easy/hard) of the problem, ensuring efficiency and reliability. Theoretical Claims: I did check the theoretical claims; there is one claim about the loss function on line 194 of the paper, and it makes sense to use an ensemble method to avoid performance degradation. Experimental Designs Or Analyses: * The paper conducts extensive experiments, comparing multiple baselines (including different configurations of Llama3.1, GPT-4o, Claude-3.5) on diverse scientific datasets. * Ablation studies clearly demonstrate that both components (WKL and TUA) are necessary and effective. * The study on robustness to noise is critical and thoughtfully executed, further validating the method's resilience against real-world uncertainties and data variability. * The design for open-ended questions, involving preference optimization (DPO), effectively addresses the complexity inherent to problems lacking definitive single solutions by explicitly evaluating solution proposals based on predefined domain metrics. Supplementary Material: No Relation To Broader Scientific Literature: The contributions are indeed very useful. This research encourages further exploration of enabling LLMs to adapt their tool usage, especially in the AI-for-Science field, which would help improve the performance of LLMs and let them better assist humans in research.
Essential References Not Discussed: No

Other Strengths And Weaknesses:
## Strengths
The paper is well written and addresses a very important problem regarding the use of LLMs for scientific research. The novel framework is useful and makes sense, taking inspiration from the way humans assess problems before selecting tools. The model selection, the methodology, and the experiments conducted are detailed in depth.
## Weaknesses
The authors could have considered more variations of the loss function and shown experimentally why the proposed loss function is better than existing ones and what the other possibilities are. The easy/hard partition could also be extended to more than two sections. In addition, it would be interesting to see prompt tuning/prompt injection as a comparison, where the prompt is updated/injected based on tool usage given the initial query; this would remove the fine-tuning part and make the approach somewhat simpler.

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your invaluable feedback. We address your suggestions below:

---

## Variations in loss functions

> "May be they could have considered more variations in loss functions and also showed some result with some experiments why is the proposed loss function better than the existing one and what are the other possible ones."

We thank you for this suggestion. Our work focuses on defining a new paradigm for LLMs to learn to solve scientific questions adaptively; as long as that target is fulfilled, the definition of the loss term (and, similarly, the specific fine-tuning framework) is orthogonal to our work. We therefore respectfully consider this ablation not strongly related to our contribution.

## Increasing Problem Partitions

> "Also increase the partition to not just easy/hard but to more sections."

This is a very insightful observation and a great suggestion for future work. Ours is the first work of its kind to encourage LLMs to choose smartly when to use tools, hence the setting is deliberately simple and binary. But certainly, in realistic scientific or engineering cases, there can be different granularities of tools for different requirements of a given problem. As noted in Lines 420–424 (right col.), we have already acknowledged this limitation and discussed the potential of extending the framework to support more fine-grained partitions in future work. We appreciate the reviewer highlighting this direction and will discuss it further.

## Functionality of Prompt Tuning

> "Also it would be interesting to see prompt tuning/prompt injection as a comparison where the initial query is considered and based on the tool usage prompt is updated/injected according to the tool usage."

We appreciate this suggestion. In our current paper, Appendix B already shows prompts instructing models (without fine-tuning) to decide tool usage based on question difficulty.
To further investigate the functionality of prompt tuning (PT) on these tasks, we added experiments using few-shot prompting with direct-answer and tool-usage examples. We show its performance compared with the base model and our method on answer accuracy and tool usage accuracy.

**Answer Accuracy**

||Mujoco|PDE|Climate|Epidemiology|MATH|SciBench|Avg.|
|-|-|-|-|-|-|-|-|
|Base|57.14|59.17|76.67|58.89|55.89|29.17|56.16|
|PT|61.43|59.17|76.67|58.89|49.41|26.07|55.27|
|Ours|**64.17**|**78.33**|**83.33**|**74.44**|**62.35**|**34.17**|**66.13**|

**Tool Usage Accuracy**

||Mujoco|PDE|Climate|Epidemiology|MATH|SciBench|Avg.|
|-|-|-|-|-|-|-|-|
|Base|51.50|50.00|50.75|50.86|50.09|60.22|52.24|
|PT|54.08|50.00|50.96|48.63|53.19|55.09|51.99|
|Ours|**61.80**|**66.67**|**75.50**|**66.61**|**62.09**|**62.75**|**65.90**|

The experimental results indicate that few-shot prompting does not consistently improve the model's performance and in most cases even degrades it. We will include the above results as a baseline in both Table 1 and Table 2 in our revised paper.

---

Thank you again for your time and effort. Our paper benefits from your suggestions. We hope our response addresses your concerns. We welcome any further comments and are happy to address them.

---

Rebuttal Comment 1.1: Comment: Thank you for the feedback, I would like to keep my score.
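For concreteness, a few-shot prompting (PT) baseline of the kind evaluated in the rebuttal above could be assembled along these lines; the instruction text and exemplars are hypothetical placeholders, not the paper's Appendix B prompts:

```python
# Hypothetical few-shot prompt for the tool-use decision; the instruction
# text and exemplars are placeholders, not the paper's actual prompts.
EXEMPLARS = [
    {"question": "What is 2 + 2?",
     "decision": "ANSWER_DIRECTLY", "answer": "4"},
    {"question": "Simulate 30 days of an SIR epidemic with beta = 0.3.",
     "decision": "CALL_TOOL", "answer": "<call the epidemic simulator>"},
]

def build_prompt(question):
    """Prepend worked examples showing when to answer directly versus
    when to call a tool, then append the new question."""
    parts = ["Decide whether to answer directly or to call a tool.\n"]
    for ex in EXEMPLARS:
        parts.append(f"Q: {ex['question']}\n"
                     f"Decision: {ex['decision']}\n"
                     f"A: {ex['answer']}\n")
    parts.append(f"Q: {question}\nDecision:")
    return "\n".join(parts)

prompt = build_prompt("Integrate x^2 from 0 to 1.")
print(prompt)
```

Because such a prompt only shows decision patterns rather than training them, it is plausible that the model imitates surface cues of the exemplars, which may explain why PT fails to improve tool usage accuracy in the tables above.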
Summary: This paper proposes a new fine-tuning method called "Adapting While Learning (AWL)" that addresses the challenges of using large language models (LLMs) to solve scientific problems. LLMs are effective for simple scientific problems but can hallucinate on complex ones, and while integration with external tools is a solution, models fine-tuned specifically for tool use tend to be unnecessarily dependent on the tools even for simple problems. AWL takes a two-stage approach inspired by the way human experts evaluate the complexity of a problem before choosing a solution. In the first stage, "World Knowledge Learning (WKL)", the LLM internalizes scientific knowledge using answers generated by the tool. In the second stage, "Tool Usage Adaptation (TUA)", problems are classified as "easy" or "difficult" based on the accuracy of the model's direct answers, and the model is trained to respond directly to easy problems and use the tool for difficult ones. In experiments using six datasets from fields such as climate science, epidemiology, and mathematics, the model with AWL achieved an average of 28.27% improvement in response accuracy and 13.76% improvement in tool usage accuracy compared to the baseline model, and on four custom-made datasets it outperformed state-of-the-art models such as GPT-4o and Claude-3.5.

## Post-Rebuttal Update

After reviewing the authors' rebuttal, I find that they have adequately addressed the concerns raised in my review. The authors provided additional information on:
1. Validation on larger models: They conducted experiments with Qwen2.5-14B-Instruct, showing consistent performance improvements with their method on larger models.
1. Hyperparameter sensitivity analysis: They provided a sensitivity analysis on the number of samples (k=1,3,5) used to assess problem difficulty, demonstrating the stability of their approach across different parameter settings.
1.
Computational cost reduction analysis: They added analysis on tool usage frequency in open-ended problem-solving, showing significant reductions in unnecessary tool use.
1. Error analysis: They introduced a new error analysis section categorizing error types, which shows that after training, the proportion of reasoning errors and knowledge gaps decreased, while errors stemming from "agent limitations" (problems unsolvable even with tools) increased.

These additional analyses further support the robustness and practicality of the proposed method. The authors have also effectively addressed concerns raised by other reviewers. I maintain my original assessment and recommendation for this paper.

Claims And Evidence: The main claim of the paper is that the proposed AWL approach improves LLMs' scientific problem-solving ability and enables adaptive switching of tool use according to the difficulty of the problem. This claim is well supported by the following evidence: In terms of response accuracy, the AWL-trained model outperforms the base model on all datasets, showing an average accuracy improvement of 28.27%. In particular, on the authors' custom datasets it outperforms even state-of-the-art models such as GPT-4o and Claude-3.5. In terms of tool use accuracy, the AWL model achieves an average accuracy of 65.90%, compared to other models (which achieve around 50% tool use accuracy). The difficulty analysis of the MATH dataset demonstrates the AWL model's ability to increase tool use according to the difficulty of the problem. Ablation experiments show that both WKL and TUA components are necessary for optimal performance, confirming that a single component alone is not sufficient. Experiments on robustness to noise show that the AWL model maintains response accuracy even at high noise levels. Experiments on extending the model to open-ended questions also show that the model combining AWL and DPO performs well in generating constrained responses.
This evidence is clear and convincing, supporting the claims made in the paper.

Methods And Evaluation Criteria: The proposed method and evaluation criteria are appropriate for the application of scientific problem solving. The proposed method, AWL, attempts to capture the essence of actual scientific reasoning, which is the adaptive use of tools according to the complexity of the problem. It models the natural behavioral pattern of human scientists, who directly solve simple problems and rely on computational tools for complex problems, and this is reasonable in the context of scientific problem solving. Two main metrics are used as evaluation criteria: accuracy of answers and accuracy of tool use, which are suitable for evaluating the effectiveness of the method from multiple perspectives. In particular, the accuracy of tool use is defined as "the ability to use the tool for difficult problems and not use the tool for easy problems", which directly corresponds to the purpose of the proposed method. The evaluation also uses custom datasets that cover a wide range of scientific domains, including climate science, epidemiology, and PDEs, in addition to general benchmarks such as MATH and SciBench, making it suitable for evaluating the generality of the method. In addition, by using datasets that include problems of different difficulty levels, the adaptive tool-use ability can be appropriately evaluated. Furthermore, to evaluate the scalability of the method to open-ended problems, the method combined with DPO was also evaluated, which corresponds to realistic scenarios of scientific problem solving.

Theoretical Claims: This paper focuses mainly on experimental methods and results, and does not include any major theoretical arguments, including rigorous theoretical proofs. However, it does provide a conceptual explanation of the design principles of AWL and a logical basis for why both WKL and TUA are necessary.
These explanations are intuitive and consistent with the experimental results.

Experimental Designs Or Analyses: The experimental design and analysis are sound overall. The reviewer verified the following points:
1. **Diversity of data sets**: The six data sets used cover a variety of scientific domains, including climate science, epidemiology, and PDEs, and are suitable for evaluating the generality of the method.
2. **Choice of baseline**: Using Llama-3.1-8B as the base model and comparing it with state-of-the-art models such as GPT-4o, GPT-4o-mini, Claude-3.5-Sonnet, and Llama-3.1-70B is appropriate.
3. **Evaluation metrics**: In addition to the two main metrics of response accuracy and tool usage accuracy, metrics have also been introduced to evaluate the appropriateness of tool usage for both simple and difficult questions, allowing for a multifaceted evaluation.
4. **Ablation experiments**: Ablation experiments have been conducted to evaluate the individual effects of WKL and TUA, demonstrating the necessity of both components.
5. **Robustness to noise**: The robustness of the proposed method is evaluated through experiments in which the noise level of the training data is varied.

The only concern is that validation on models other than Llama and on larger-scale models (10B+) is limited. It would also have been better if the sensitivity analysis of hyperparameters (especially the threshold that determines the difficulty of the problem) had been carried out in more detail.

Supplementary Material: I checked the data sets, prompts, and response details in the supplementary material in the PDF. I also read the details of fine-tuning and additional experimental results. I confirmed that source code exists for the following steps, although I did not actually run the program.
1. The LLM solves scientific problems by communicating with external tools.
2. Solutions are generated based on communication with tools.
3.
The LLM is evaluated against the questions, and the data set is classified into easy and difficult questions.
4. The model is further trained by combining the data from World Knowledge Learning (WKL) and Tool Usage Adaptation (TUA).

Relation To Broader Scientific Literature: The contributions of this paper are related to three research areas: LLM alignment, LLM training for scientific problem solving, and LLM tool use. As LLM alignment techniques, the paper adopts SFT (supervised fine-tuning) and DPO (direct preference optimization), which are based on existing studies such as Rafailov et al. (2024) and Ouyang et al. (2022). While existing studies on LLM training for scientific problem solving rely on expert annotation or distillation from strong models (Thulke et al., 2024; Zhang et al., 2024b), this study proposes an automatic knowledge acquisition approach using tools. In terms of LLM tool use, while existing research such as Toolformer by Schick et al. (2023) proposes methods for teaching LLMs specific tool-use patterns, this study is new in that it focuses on the ability to adaptively determine tool use according to the complexity of the problem. In particular, Yu et al. (2024) pointed out the lack of adaptability in LLMs' tool-use decisions, and this study directly addresses that issue. It is also interesting that it takes inspiration from human cognitive science (Payne et al., 1993; Kruger & Dunning, 1999) and incorporates the insight that humans evaluate the complexity of a problem before choosing a solution.

Essential References Not Discussed: The paper focuses on the specific issue of using adaptive tools to solve scientific problems, and the relevant key literature is appropriately cited and discussed. However, the context could be strengthened by adding further references on the following points:
1.
The cognitive science literature on metacognition and self-assessment: The paper cites the work of Kruger & Dunning, but a reference to recent research on LLM self-assessment (e.g. Kadavath et al., "Language models (mostly) know what they know") could help provide a deeper understanding of the challenges involved in models assessing their own abilities.
2. Unsupervised and self-supervised problem difficulty evaluation: By referring to recent research on methods for LLMs to autonomously evaluate the difficulty of problems (e.g. Sun et al., "Self-Evaluation Guided Beam Search for Reasoning"), the authors can consider alternative approaches to the TUA component.
3. Causal effects of tool use: By referring to research on causal understanding when LLMs use tools (e.g., Yao et al., "ReAct: Synergizing reasoning and acting in language models"), it may be possible to consider more effective tool-use strategies.

Although the absence of these references does not cause a critical problem in understanding the main contribution of the paper, adding them would help place it in a wider research context.

Other Strengths And Weaknesses:
**Strengths:**
1. **Practical solutions**: The authors propose practical approaches that correspond to realistic scenarios of scientific problem solving, from simple to complex problems.
2. **Original ideas**: The approach is inspired by human cognitive processes and provides a new perspective on adaptive tool use according to the complexity of the problem.
3. **Comprehensive experiments**: The effectiveness of the proposed methods is verified from multiple perspectives, including evaluation across diverse scientific domains, ablation experiments, and noise robustness verification.
4. **Dataset contribution**: New scientific problem-solving datasets are constructed and released for the research community.
5. **Extension to open-ended problems**: Extensions are proposed to handle open-ended scientific problems beyond fixed-choice answers.

**Weaknesses:**
1.
**Lack of verification of generality and scalability**: The effectiveness of the proposed method has so far been verified only to a limited extent on LLMs other than Llama and on larger-scale models (10B+).
2. **Knowledge transfer between domains**: A more convincing result would have been obtained if there had been a detailed analysis of knowledge and ability transfer between different scientific domains.
3. **Real-world scientific research applications**: What are the specific examples (use cases) of the target problem in this research? The paper provides conceptual explanations, but does not provide detailed descriptions of specific use case scenarios. There is limited detailed examination of the applicability of the proposed method in actual scientific research environments.
4. **Evaluation by human experts**: There is a lack of qualitative evaluation by human experts, particularly regarding the quality of answers to open-ended questions.

Other Comments Or Suggestions: Suggestions for further improving the quality of the paper:
1. Adding a sensitivity analysis of hyperparameters (especially the problem difficulty threshold) would further demonstrate the stability and generality of the proposed method.
2. Adding a quantitative analysis of computational cost reduction would further demonstrate the practical benefits of the proposed method.
3. Adding an analysis of knowledge transfer between different datasets and domains would provide further insight into the model's generalization ability.
4. By adding validation on larger models (10B+) and a variety of models, you can demonstrate the scalability of the proposed method.
5. By adding an error analysis section and analyzing the patterns of cases where the proposed method fails, you can identify areas for future improvement.
6. By adding an evaluation of the quality of answers to open-ended questions by human experts, you can more strongly demonstrate the practical value of the proposed method.
7.
By providing detailed descriptions of application cases and usage scenarios in actual scientific research environments, the practical significance of the proposed method can be more clearly demonstrated.

Questions For Authors: I look forward to responses to the concerns I have raised above, but I have no further questions.

Ethical Review Concerns: None

Code Of Conduct: Affirmed.

Overall Recommendation: 3
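As background for the DPO component discussed throughout this review, here is a minimal numerical sketch of the standard per-pair DPO objective (Rafailov et al., 2024); the beta value and log-probabilities are made up for illustration and are not from the paper:

```python
import math

def dpo_pair_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * margin), where the margin is
    the policy's log-prob advantage on the chosen completion (w) over the
    rejected one (l), measured relative to a frozen reference model."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Made-up log-probabilities for one (chosen, rejected) proposal pair.
loss = dpo_pair_loss(logp_w=-12.0, logp_l=-15.0,
                     ref_logp_w=-13.0, ref_logp_l=-14.0)
print(round(loss, 4))  # ~0.5981; the loss shrinks as the margin grows
```

Minimizing this loss pushes the policy to prefer the chosen proposal over the rejected one relative to the reference model, which is how preference optimization can rank open-ended solution proposals by predefined domain metrics.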
Rebuttal 1: Rebuttal: Thank you for your detailed review and constructive suggestions. We address your concerns below.

---

## Validation on Larger Models

> ...limited to models other than Llama and large-scale models (10B+)

We additionally included Qwen2.5-14B-Instruct and trained it with our method. We conducted experiments on 2 custom and 2 public datasets. We present the models' answer accuracy under $P\_n$ and $P\_i$, Tool Usage Accuracy (TUA), and Tool Use Ratio (TUR), which correspond to Tables 1, 2, and 13 in the paper, respectively.

|Method|PDE|Mujoco|MATH|SciBench|
|-|-|-|-|-|
|Base-$P\_n$|61.67|54.28|74.12|17.50|
|Ours-$P\_n$|78.33|60.00|81.18|56.67|
|Base-$P\_i$|69.17|44.28|79.41|46.67|
|Ours-$P\_i$|80.00|62.85|82.35|65.83|
|Base-TUA|48.91|50.00|48.45|48.84|
|Ours-TUA|63.58|54.16|54.69|58.54|
|Base-TUR (↓)|99.17|100.00|95.88|93.33|
|Ours-TUR (↓)|13.33|27.14|1.76|44.17|

Results show consistent gains in answer and tool usage accuracy, with reduced tool use. This shows that our method also benefits larger models.

## Problem Difficulty Hyperparameters

> ...a sensitivity analysis of hyperparameters (especially the problem difficulty threshold)...

We conducted a sensitivity analysis on the number of samples (k = 1, 3, 5) used to assess the LLM's answer accuracy and partition questions by difficulty. We trained different models with these thresholds on the MATH and SciBench datasets, with the results presented below.

Answer Accuracy (pass@k)

|k|MATH(Base)|SciBench(Base)|MATH(Ours)|SciBench(Ours)|
|-|-|-|-|-|
|1|54.71|17.50|62.09|30.83|
|3|65.88|30.00|72.35|54.16|
|5|74.11|37.50|75.88|55.83|

Tool Usage Accuracy

|k|MATH(Base)|SciBench(Base)|MATH(Ours)|SciBench(Ours)|
|-|-|-|-|-|
|1|50.09|60.22|62.09|62.75|
|3|57.73|52.38|64.37|58.74|
|5|62.16|52.22|65.36|58.27|

Results show that our method remains stable across different choices of the problem-difficulty hyperparameter.

## Computational Cost Reduction

> ...a quantitative analysis of computational cost reduction...
Tables 4 and 13 already present the tool use ratio of different models and show that our method reduces unnecessary tool usage, thus lowering computational cost. Additionally, we conducted an analysis of open-ended problem-solving, which allows multiple tool usages. The numbers in the table represent the average number of tool calls across all questions. This further demonstrates that our method reduces computational cost.

||Base|Ours|
|-|-|-|
|Climate|7.21|2.70|
|Epidemiology|2.80|0.42|

## Error Analysis

> ...error analysis and case patterns...

We add a new section on error analysis. We categorize errors into: 1. Problems unsolvable even with tool usage (agent limitation); 2. Problems solvable with tools but answered incorrectly. The latter category is further divided into: a. Calculation mistakes, b. Reasoning errors, and c. Knowledge gaps. Following standard practice in benchmark papers, we provide GPT-4o with the errors and their corresponding correct solutions and ask it to annotate the error types. The error type distributions are shown below.

Base model

||Mujoco|PDE|Climate|Epidemiology|MATH|SciBench|
|-|-|-|-|-|-|-|
|Agent Limitation|35.29|35.83|27.40|36.99|45.45|29.47|
|Calculation Mistakes|0.00|0.00|0.00|0.00|2.60|7.37|
|Reasoning Errors|45.10|55.83|24.66|23.29|46.75|56.84|
|Knowledge Gaps|19.61|8.33|47.95|39.73|5.19|6.31|

Our trained model

||Mujoco|PDE|Climate|Epidemiology|MATH|SciBench|
|-|-|-|-|-|-|-|
|Agent Limitation|47.97|80.00|40.91|45.45|49.23|32.53|
|Calculation Mistakes|3.25|3.33|4.55|0.00|4.62|7.23|
|Reasoning Errors|39.02|16.67|13.64|22.73|41.54|54.22|
|Knowledge Gaps|9.76|0.00|40.91|31.82|4.62|6.02|

After training, the proportion of Reasoning Errors and Knowledge Gaps decreased and the proportion of Agent Limitation increased. This suggests that: 1. Our method enables the model to learn scientific reasoning and domain-specific knowledge; 2.
The remaining errors after training are mainly concentrated on questions that cannot be correctly solved even with tool usage. This analysis points out the key challenges that future work should address.

## Potential Knowledge Transfer

Cross-domain generalization isn't the primary focus of this work. We have discussed it in our future work section (Lines 416-420, right col.). We will further highlight it.

## Application Descriptions

We have provided examples of application cases and usage scenarios in Figure 2 and Appendix A (Pages 12-18). We'll clarify the application relevance in revisions.

## Human Evaluation for Open-ended Questions

For these questions, our evaluation is based on the simulation result given the LLMs' proposals, which is intrinsically stable. Manual reviews are outside the scope of this work, but we will include some discussion.

---

Thank you again for your time and effort. We hope our response has addressed your concerns. If the issues have been resolved, we'd appreciate it if you could reflect this in your evaluation. We welcome any further comments.
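The pass@k measurement used in the sensitivity analysis above can be sketched as follows; the per-problem record format is illustrative, not the authors' implementation:

```python
def pass_at_k_accuracy(sample_results):
    """`sample_results` holds one list of booleans per problem, one bool
    per sampled answer (True = that sample was correct). A problem
    counts as solved if any of its k samples is correct."""
    solved = sum(any(samples) for samples in sample_results)
    return solved / len(sample_results)

# Three problems, k = 3 samples each.
results = [
    [False, True, False],   # solved on the second sample
    [False, False, False],  # unsolved -> would be labeled 'hard'
    [True, True, True],     # solved -> would be labeled 'easy'
]
print(round(pass_at_k_accuracy(results), 3))  # 0.667
```

Problems that fail all k samples are the ones the rebuttal's partition would label "hard", which is why larger k values raise the measured pass@k accuracy while shrinking the hard set.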
Summary: The paper introduces a two-component fine-tuning approach that first trains a model on direct reasoning (WKL) and then selectively incorporates tool usage based on the assessed complexity of the scientific problem. The method is empirically validated across six scientific benchmark datasets, demonstrating improved efficiency and accuracy by balancing direct reasoning and tool utilization according to task difficulty.

Claims And Evidence: The claim of novelty and methodological depth is not strongly supported, as the method primarily leverages straightforward dataset-splitting strategies and existing fine-tuning technologies. Additionally, claims about performance advantages are inconsistently supported, with high performance only clearly demonstrated on self-created datasets where details such as the dataset split methodology and comparison settings (different prompts) are unclear, limiting a fair and thorough evaluation against existing methods.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria generally make sense. However, the method's reliance on a fixed, accuracy-based split of easy and hard problems may limit generalizability and robustness, as complexity could vary significantly depending on the model and context.

Theoretical Claims: There is no proof for theoretical claims.

Experimental Designs Or Analyses: The approach of splitting datasets into "easy" and "hard" problems based solely on the WKL-trained model's accuracy may lead to inconsistent splits for different models, potentially affecting comparability and fairness. The method achieves higher performance primarily on self-constructed datasets but underperforms on public benchmarks. The authors did not clearly specify the configuration (P_n, P_i, P_f) used for baseline models in their comparisons.

Supplementary Material: I have briefly reviewed the code in the supplementary material.
Relation To Broader Scientific Literature: The proposed method aligns closely with existing concepts of conditional computation, adaptive inference, and selective prediction. Although practically useful, the approach mainly applies established techniques without significantly extending them.

Essential References Not Discussed: No

Other Strengths And Weaknesses: Please address the concerns raised in the questions.

Other Comments Or Suggestions: I increased my rating by one notch after the discussion phase.

Questions For Authors:
1. Regarding tool usage adaptation, the methodology involves splitting the dataset into easy and hard instances and training the model with different traces (with/without tool use). While intuitive, this approach lacks theoretical guarantees. The model primarily learns patterns from the dataset, but its reliability diminishes when the dataset changes or when the question format shifts (e.g., from multiple-choice to direct QA). A more robust strategy is needed to address this issue.
2. The performance across datasets and models in Table 1 appears inconsistent. The proposed method achieves the highest scores on self-made datasets (Mujoco, PDE, Climate, Epidemiology), whereas GPT-4o performs best on public datasets (MATH, SciBench). One possible explanation is that GPT-4o is unfamiliar with these tasks and custom tools, while the proposed model has been specifically trained on them. Additionally, it is unclear which setting (P_n, P_i, P_f) is used for other models.
3. Concerns also arise regarding tool usage accuracy. First, the settings (P_n, P_i, P_f) used for different models are not explicitly stated. Second, the easy/hard split should vary across models, but it is unclear whether the authors use a fixed or dynamic split. Lastly, not all scientific tools are resource-intensive (e.g., Python scripts); in such cases, the relevance of the proposed metric remains questionable.

Ethical Review Concerns: No

Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We address your concerns below:

---

## Novelty and Methodological Depth

> "The claim of novelty and methodological depth ... existing fine-tuning technologies."

We are not claiming novelty in fine-tuning technologies. Rather, our innovation primarily lies in defining a novel learning paradigm for scientific problem-solving with LLMs, i.e., learning knowledge from scientific simulators and switching to tools intelligently based on problem complexity, which matches human expert behavior in scientific domains. We are the first to propose this paradigm, and our empirical experiments demonstrate that it strikes a good balance between answer accuracy and tool-calling cost.

## Performance Consistency on Public Datasets

> "...claims about performance advantages are inconsistently supported...", "...in Table 1...GPT-4o performs best on public datasets."

The performance difference on public datasets is largely due to model size, as the strongest baselines such as GPT-4o are significantly larger than our base model (8B). To support this, we additionally include 2 larger open-source models, Qwen2.5-{14B/32B}-Instruct, as base models for our method and conduct experiments on MATH and SciBench, with results presented below, where TUA means Tool Usage Accuracy:

|Model|MATH(14B)|SciBench(14B)|MATH(32B)|SciBench(32B)|
|-|-|-|-|-|
|Base-$P\_n$|74.12|17.50|81.77|60.83|
|Base-$P\_i$|79.41|46.67|84.71|65.83|
|Base-TUA|48.45|48.84|47.49|48.55|
|Ours-$P\_n$|81.18|56.67|84.71|62.50|
|Ours-$P\_i$|82.35|65.83|**85.89**|**69.17**|
|Ours-TUA|54.69|**58.54**|**55.42**|56.69|

It can be observed that our framework applied to a 32B model outperforms GPT-4o on MATH and achieves comparable performance on SciBench. We will append these results in our revised paper.

## Dataset Splitting

> "...the easy/hard split should vary across models, but it is unclear whether the authors use a fixed or dynamic split..."
Yes, our easy/hard splitting is relative to each model's problem-solving ability, i.e., dynamic. We described it in Lines 186-193 and provided additional details for open-ended questions in Lines 248-254.

> "...splitting datasets...may lead to inconsistent splits for different models... lacks theoretical guarantees."

The intuition behind this design decision is that different models, by construction, have different levels of problem-solving ability; hence, a fixed split is actually counter-intuitive. If one were to define a fixed split, it would inherently introduce biases from whatever reference is used to make the split, regardless of whether that reference is a human expert or an LLM.

As for theoretical guarantees, could you clarify what kinds of theoretical guarantees you are referring to? In Section 3.3, we have already stated that our method ensures models learn the correct decision patterns through supervised training objectives. Our experiments empirically verify this, demonstrating that the model successfully internalizes these patterns and applies them consistently. Additional theoretical analysis is rare in applied NLP works and would offer limited further insight here.

## Robustness to Format and Dataset Shifts

> "...its reliability diminishes when the dataset changes or when the question format shifts (e.g., from multiple-choice to direct QA)."

Our datasets already include numerical problems (Lines 231-241) and open-ended questions (Lines 241-247, Figure 3b), which are both direct QA. Experimental results on these questions are consistent with those on multiple-choice questions. This demonstrates our method's robustness to problem-format shifts.

## Experiment Settings

> "...which setting ($P\_n$, $P\_i$, $P\_f$) is used for other models."

In Table 1, Llama3.1-70B-Instruct, GPT4o, GPT4o-mini, and Claude3.5-Sonnet are vanilla models, i.e., they were evaluated without tool assistance ($P\_n$). We will clarify this in the final version.
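The dynamic, model-relative easy/hard split described above can be sketched as below; the sampling interface and the toy stand-in "model" are illustrative assumptions, not the authors' implementation:

```python
def partition_by_difficulty(questions, answer_fn, check_fn, k=3):
    """Label a question 'easy' for a given model if any of k sampled
    direct answers passes the checker, else 'hard'. answer_fn(q, seed)
    draws one answer; check_fn(q, a) verifies it."""
    easy, hard = [], []
    for q in questions:
        solved = any(check_fn(q, answer_fn(q, seed)) for seed in range(k))
        (easy if solved else hard).append(q)
    return easy, hard

# Toy stand-in "model": handles simple arithmetic, fails on anything else.
def toy_answer(q, seed):
    expr = q.rstrip("?").removeprefix("What is ")
    try:
        return str(eval(expr))  # toy only; never eval untrusted input
    except Exception:
        return "unknown"

def toy_check(q, a):
    return a != "unknown"

easy, hard = partition_by_difficulty(
    ["What is 2+2?", "Simulate a PDE on a fine grid?"],
    toy_answer, toy_check)
print(easy, hard)  # ['What is 2+2?'] ['Simulate a PDE on a fine grid?']
```

Because the split depends on `answer_fn`, two models run through the same routine can produce different partitions of the same question set, which is the dynamic behavior the rebuttal describes.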
## Relevance of Tool Usage Metric for Lightweight Tools

> "Not all scientific tools are resource-intensive (e.g., Python scripts)..."

We agree that not all scientific tools are resource-intensive, but many of them are, especially in realistic scientific and engineering applications. These applications, such as climate modeling and epidemic prediction, require significantly more computing time than LLM inference (e.g., days for simulation vs. seconds for LLM inference) due to their inherent complexity (such as high spatial resolution). Our work targets those use cases and serves as a starting point for applying LLM automation. Beyond those cases, we also incorporated commonly used benchmarks such as MATH and SciBench, for which Python is the most commonly used tool. The strong performance of our method shows that our pipeline is applicable to any tool, regardless of its cost.

---

Thank you again for your time and effort. We hope our response resolves your concerns and would be grateful for your consideration in the evaluation. We welcome any further comments.

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for their clarifications. I maintain that:

- The proposed learning paradigm primarily focuses on defining the model's input and output, which leans more towards an engineering contribution.
- Comparing the proposed method directly with the base model may be inappropriate. A more reasonable comparison would be against a standard SFT model under the same tool usage conditions.
- Format and Dataset Shifts refer to scenarios where the model is trained on one format or dataset but evaluated on another. For example, the model might be trained on multiple-choice questions but tested on direct question answering using the same dataset. This setting evaluates the model's robustness in making appropriate tool usage decisions.
- The comparison in Table 1 may be unfair if the baseline models are not allowed to use tools, while the proposed method is.
A significant portion of the improvement may stem from tool usage rather than the learning paradigm itself. --- Reply to Comment 1.1.1: Comment: We’re glad to hear that your previous questions on performance consistency, dataset splits, theoretical guarantees, and tool usage metrics have been addressed. We now respond to your new points in detail. --- ## Contribution Type and Significance > "... focuses on defining the model’s input and output, which leans more towards an engineering contribution." While we don't claim novelty in model training techniques, our key contribution is introducing a novel learning paradigm for scientific problem-solving with LLMs, which goes beyond engineering. This paradigm enables LLMs to learn scientific knowledge through simulator interaction and make intelligent tool selection decisions based on problem complexity. We believe research contributions can take various forms, including model architecture design, theoretical analysis, and new paradigms for existing problems. Our work falls into the last category. We believe it would be inappropriate to discount a work just because it lacks contributions to a certain category, as few papers contribute to all of them. We also kindly note that our work’s practicality for scientific problem-solving has been supported by Reviewers tT3B, n1ey, and qrRN, who acknowledge our work's novelty. ## Baseline Comparison > "Comparing the proposed method directly with the base model may be inappropriate. A more reasonable comparison would be against a standard SFT model under the same tool usage conditions." We appreciate this suggestion and would like to highlight that **we have already performed such an ablation study in Section 5.2** of the paper. Specifically, we evaluated: 1. Base model without fine-tuning 2. SFT model trained with only scientific knowledge distillation 3. SFT model trained with only intelligent tool usage 4. 
Our full method combining both components. Setting 3 corresponds to the suggested SFT baseline with tool usage. That method underperforms ours in both answer accuracy and tool usage accuracy. The results of this ablation study demonstrate the necessity and contribution of both World Knowledge Learning (WKL) and Tool Usage Adaptation (TUA) components in our proposed method for optimal performance. ## Inference-time Dataset Shifts > "Format and Dataset Shifts refer to scenarios where the model is trained on one format or dataset but evaluated on another..." Thank you for this insight. We acknowledge the importance of studying cross-format generalization. However, it would require significantly more diverse training data than is feasible in a single study. Our current work focuses on establishing a new paradigm for scientific problem-solving with LLMs. Cross-format generalization is beyond the scope of this single work but is a natural direction for future research. ## Fairness of Comparisons > "The comparison in Table 1 may be unfair ... the improvement may stem from tool usage rather than the learning paradigm itself." We want to clarify that Table 1 presents fair and controlled comparisons under different tool usage settings: 1. Under the $P_n$ (no tools) setting, our model outperforms both the base model and closed-source models. In this comparison, **the improvement clearly stems from the learning paradigm rather than tool usage**, since no tools are allowed for any of the models being compared. 2. Under the $P_i$/$P_f$ (with tools) settings, compared with the base model, our method (1) uses tools more intelligently and selectively, and (2) achieves higher answer accuracy with fewer tool calls. This shows that our approach improves both tool usage intelligence and the model's problem-solving ability under the tool-using setting. 
We did not include tool-augmented results for closed-source models, since the goal of our work is not to fine-tune models to use tools more accurately, so this comparison is unnecessary. Furthermore, even dedicated open-source tool-using LLMs [1, 2] underperform SOTA closed-source models due to model size limitations. However, to further address the question, we conducted additional experiments comparing our model with GPT-4o under the tool-augmented setting ($P\_i$) on four datasets. The results show that our trained model performs competitively with GPT-4o, reaffirming the effectiveness of our approach. | | PDE | Mujoco | MATH | SciBench | | - | - | - | - | - | | GPT-4o | 82.50 | 75.00 | 79.32 | 50.83 | | Llama-3.1-8B-Instruct (Base) | 59.17 | 57.14 | 55.89 | 29.17 | | Llama-3.1-8B-Instruct (Ours) | 78.33 | 64.17 | 62.35 | 34.17 | | Qwen2.5-14B-Instruct (Base) | 69.17 | 44.28 | 79.41 | 46.67 | | Qwen2.5-14B-Instruct (Ours) | 80.00 | 62.85 | 82.35 | 65.83 | We will clarify these comparisons in our revision. --- Thank you again for your continued engagement. We hope the above responses address your questions. [1] Qin et al., ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs, ICLR 2024 (Spotlight) [2] Zeng et al., AgentTuning: Enabling Generalized Agent Abilities for LLMs, ACL 2024
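The tool-usage settings compared in Table 1 can be illustrated with a minimal routing sketch. All names here (`assess_hard`, `solve_directly`, `call_tool`) are hypothetical stand-ins, not our actual pipeline: under $P\_n$ the model must answer directly, while under the tool-enabled settings it may route problems it assesses as hard to a tool.

```python
def answer(problem, assess_hard, solve_directly, call_tool, tools_allowed):
    """Route a problem: direct answer, or a tool call when allowed and needed."""
    if tools_allowed and assess_hard(problem):
        return call_tool(problem), "tool"
    return solve_directly(problem), "direct"

# Toy components standing in for the trained model and a simulator/tool.
assess_hard = lambda p: "simulate" in p
solve = lambda p: f"direct:{p}"
tool = lambda p: f"tool:{p}"

a1 = answer("2+2", assess_hard, solve, tool, tools_allowed=True)
a2 = answer("simulate epidemic", assess_hard, solve, tool, tools_allowed=True)
a3 = answer("simulate epidemic", assess_hard, solve, tool, tools_allowed=False)  # P_n
```

Selective routing of this kind is what the tool-usage accuracy metric measures: calling the (expensive) tool only when the problem actually requires it.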
Reward-free World Models for Online Imitation Learning
Accept (poster)
Summary: This paper presents an approach called IQ-MPC for online imitation learning using reward-free world models. It can alleviate some issues in IL, such as handling complex, high-dimensional inputs and intricate environmental dynamics without explicit reward modeling. Key ideas include leveraging decoder-free world models to learn environmental dynamics purely in latent spaces, using an inverse soft-Q learning objective to stabilize learning, and integrating MPC with a gradient-free planning strategy. The main findings demonstrate superior performance across benchmarks like DMControl, MyoSuite, and ManiSkill2, significantly outperforming baselines in terms of stability, sample efficiency, and robustness to high-dimensional and visual input scenarios. Claims And Evidence: The authors claim that IQ-MPC achieves stable, expert-level performance in tasks with high-dimensional observation or action spaces and complex dynamics. This claim is supported by empirical evaluations on diverse benchmarks, where IQ-MPC outperforms existing methods. The paper presents quantitative results demonstrating improved performance metrics, thereby providing clear evidence to support its claims. Methods And Evaluation Criteria: The proposed methods and evaluation criteria align well with the problem at hand. Leveraging reward-free world models allows the framework to focus on modeling environmental dynamics without relying on explicit reward signals, which is suitable for imitation learning scenarios. I think the authors could also take inference time into consideration and try some universal world models for diverse tasks to demonstrate scalability. Theoretical Claims: The theorems are derived from other works. Experimental Designs Or Analyses: The authors mention lots of model-based IL algorithms in the related works but do not compare them as baselines. Only XXX + SAC methods are considered, which is odd and requires more explanation. Supplementary Material: Yes. 
The visualization parts, some experimental details, and theorems. Relation To Broader Scientific Literature: The paper clearly situates itself within existing literature on imitation learning (GAIL, SQIL, IQ-Learn, CFIL) and model-based reinforcement learning (Dreamer, TD-MPC). It effectively extends prior works by removing explicit reward dependencies in world models and providing a novel integration of inverse Q-learning with MPC for imitation learning. The paper notably complements the findings from IQ-Learn (Garg et al., 2021) and TD-MPC series (Hansen et al., 2022, 2023), extending these frameworks into a cohesive IL solution with clear practical advantages, especially in complex control tasks. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weakness: I think the work is a little incremental since it heavily relies on IQ-Learn and TD-MPC and cannot easily be applied to other model-based algorithms. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your review and your recognition of its strengths, including its clear positioning within the literature, effective extensions of prior works, and practical advantages in complex tasks. Below, we provide detailed responses to your comments. ### 1: Lack of Comparison with Model-Based IL Algorithms The authors mention many model-based imitation learning (IL) algorithms in the related works but do not compare them as baselines. Only XXX + SAC are considered, which seems unusual and requires further explanation. **Answer:** We have conducted experiments with HyPER [1], the model-based version of HyPE. However, we were unable to achieve reasonable scores using this algorithm on most of our tasks. Therefore, we opted to use the model-free version, HyPE, as our baseline. Additionally, we compared our approach with EfficientImitate [2], a model-based IL algorithm that combines EfficientZero with an adversarial imitation learning approach. The results of the comparison on MyoSuite dexterous hand manipulation tasks are shown below: | Method | EfficientImitate | IQ-MPC (Ours) | |-------------------|--------------------|-----------------| | Key Turn | 0.81 ± 0.05 | 0.87 ± 0.03 | | Object Hold | 0.56 ± 0.08 | 0.96 ± 0.03 | | Pen Twirl | 0.31 ± 0.07 | 0.73 ± 0.05 | These results demonstrate the effectiveness of our approach compared to EfficientImitate in complex dexterous hand manipulation tasks. --- ### 2: Incremental Contribution and Limited Generalizability The work appears somewhat incremental as it heavily relies on IQ-Learn and TD-MPC, and it may not generalize easily to other model-based algorithms. **Answer:** While our method builds upon IQ-Learn and TD-MPC, our contribution lies in developing a novel approach for online imitation learning using reward-free world models. 
Furthermore, the methodology we propose for learning the Q function and eliminating the need for a reward model can potentially be applied to other latent world models. However, further investigation is necessary to determine whether our approach generalizes effectively to other model-based algorithms. We consider this a promising direction for future research. [1] Ren, J., Swamy, G., Wu, Z. S., Bagnell, J. A., & Choudhury, S. (2024). Hybrid Inverse Reinforcement Learning. *arXiv preprint arXiv:2402.08848.* [2] Yin, Z. H., Ye, W., Chen, Q., & Gao, Y. (2022). Planning for Sample Efficient Imitation Learning. *Advances in Neural Information Processing Systems, 35*, 2577-2589.
Summary: This paper gives an approach for online imitation learning with reward-free world models that learn dynamics efficiently in latent space. The method is evaluated in DMControl, MyoSuite, and ManiSkill2 against several baselines. Claims And Evidence: Several key claims are made in this paper. 1. The authors claim a novel approach that leverages reward-free models for online imitation learning. This claim is well supported and the method learns dynamics in latent space efficiently. 2. The method consistently achieves stable, expert-level performance. While the method tends to perform better than baselines on the tasks shown, it has considerable variance in some of the tasks and does not reach expert performance, for example in Figure 3 pen twirl and Figure 10 pick cube and lift cube. The evaluation tasks chosen are on the easy side, and on the manipulation tasks the method does not perform consistently better than baselines. 3. The method recovers the rewards for inverse reinforcement learning. This is a main claim of the contribution but is not supported in the main text, with only some evidence in the appendix. The plot for reward recovery correlation is only shown for one environment. Methods And Evaluation Criteria: The benchmark environments make sense but are on the easy side. The experiments could be made better with harder evaluation tasks, for example in ManiSkill2. Theoretical Claims: The theoretical claims are sound. Experimental Designs Or Analyses: The paper compares with relevant baselines across multiple environments. There are a lot of ablation studies to isolate the contribution of different components, and test-time analysis. Supplementary Material: The supplementary material gives more details on ablation studies and experiments and proofs. Relation To Broader Scientific Literature: The paper is relevant to online imitation learning, which is an active area of research. Essential References Not Discussed: The essential references are discussed. 
Other Strengths And Weaknesses: The paper addresses a key limitation in prior work, namely the need for explicit reward modeling. It uses novel approaches for optimization and planning and contains theoretical support to back up the claims. The method demonstrates high performance on the baseline tasks. The paper is clear and the methodology is well motivated. Some ablations in the paper are performed only on a single task, which could be improved. The method could be evaluated on harder tasks, which is a main weakness. Other Comments Or Suggestions: No other comments or suggestions Questions For Authors: 1. How does the method compare to baselines in more complex manipulation tasks? 2. Is there empirical evidence for generalizing to out-of-distribution states? 3. What are the sample efficiency trade-offs for the reward-free method? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your review and your recognition of its strengths, including our novel approach, theoretical support, strong performance, and clear presentation. Below, we provide detailed responses to your comments. ### 1: Harder Evaluation Tasks in ManiSkill2 The experiments could be improved by evaluating on harder tasks in ManiSkill2. How does the method compare to baselines in more complex manipulation tasks? **Answer:** We have conducted experiments on ManiSkill2 using the Lift Cube and Pick Cube tasks in Appendix E.2. Additionally, we extended our evaluation to more challenging tasks in ManiSkill2, and the results are shown below: | Method | IQL+SAC | CFIL+SAC | IQ-MPC (Ours) | |--------------|----------------|---------------|----------------| | Pick YCB | 0.10 ± 0.05 | 0.00 ± 0.00 | 0.31 ± 0.06 | | Stack Cube | 0.23 ± 0.08 | 0.01 ± 0.01 | 0.57 ± 0.05 | These results demonstrate that our approach achieves superior performance in more complex tasks compared to the baselines. --- ### 2: Generalization to Out-of-Distribution (OOD) States Is there empirical evidence supporting generalization to OOD states? **Answer:** To provide empirical evidence for OOD state generalization, we compare our performance with a Behavior Cloning (BC) baseline trained on the same expert dataset. Since BC is fully offline and tends to struggle with OOD states, it serves as a suitable comparison. The evaluation was conducted on MyoSuite dexterous hand manipulation tasks, with results presented below in terms of success rate: | Method | BC | IQ-MPC (Ours) | |---------------|----------------|----------------| | Key Turn | 0.31 ± 0.15 | 0.87 ± 0.03 | | Object Hold | 0.26 ± 0.11 | 0.96 ± 0.03 | | Pen Twirl | 0.04 ± 0.02 | 0.73 ± 0.05 | These results indicate that our approach significantly outperforms BC, serving as indirect empirical evidence of our method's superior OOD generalization capabilities. 
--- ### 3: Sample Efficiency Trade-Offs for Reward-Free Methods What are the sample efficiency trade-offs for reward-free methods? **Answer:** The primary advantage of our approach in terms of sample efficiency arises from its model-based methodology. However, we are not entirely clear on the specific concerns regarding the sample efficiency trade-offs mentioned in your question. Would you kindly clarify further? --- ### 4: Evidence Supporting Reward Recovery The method's claim of recovering rewards for inverse reinforcement learning is not well-supported in the main text, with only one environment shown in the reward correlation plot. **Answer:** In Appendix G of our manuscript, we further support our reward recovery claim by computing the Pearson correlations for reward recovery across four different tasks, as presented in Table 7. This complements the reward correlation plot in Figure 16, providing additional evidence for the validity of our approach.
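For concreteness, the reward-recovery check reported in Figure 16 and Table 7 can be sketched as follows. This is a minimal illustration with synthetic arrays standing in for the learned networks, assuming the IQ-Learn-style decoding $\hat r(s,a) = Q(s,a) - \gamma V(s')$; it is not the paper's actual evaluation code.

```python
import numpy as np

def decode_rewards(q_sa, v_next, gamma=0.99):
    # Inverse soft-Q decoding: r_hat(s, a) = Q(s, a) - gamma * V(s')
    return q_sa - gamma * v_next

def pearson(x, y):
    # Pearson correlation between two 1-D arrays.
    x, y = x - x.mean(), y - y.mean()
    return float((x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum()))

# Synthetic stand-ins for a learned Q and V on a batch of transitions.
rng = np.random.default_rng(0)
r_true = rng.normal(size=256)        # ground-truth rewards
v_next = rng.normal(size=256)        # value estimates of next states
q_sa = r_true + 0.99 * v_next        # a Q consistent with r_true
r_hat = decode_rewards(q_sa, v_next)
rho = pearson(r_hat, r_true)
```

With a Q function exactly consistent with the true rewards, the decoded rewards match the ground truth and the correlation is 1 up to floating-point error; in practice the learned Q is only approximately consistent, which is why the reported correlations are positive but below 1.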
Summary: The paper proposes a world-model-based approach for online imitation learning. The method aims to address limitations in current imitation learning techniques by leveraging world models to improve training stability and performance. The authors demonstrate that their approach achieves better training efficiency and stability across a wide range of environments with diverse observation spaces. The proposed method is empirically compared against three state-of-the-art baselines, showing superior results. ### Update after rebuttal: The authors have provided clear and helpful responses to my concerns. The rationale for using planning over RL is now better justified, and the discussion of limitations and potential generalization was appreciated. Code availability further improves confidence in reproducibility. Claims And Evidence: The paper claims that using a world model enhances the performance of online imitation learning. This claim is supported by extensive empirical evaluations demonstrating improved stability and efficiency compared to three baselines. However, further clarification on why planning was chosen over reinforcement learning (RL) for environment interaction sampling would strengthen the justification for the approach. Methods And Evaluation Criteria: The methodology is sound, employing a well-defined world model framework. The evaluation criteria include performance metrics across multiple environments, making the study comprehensive. However, additional discussion on the choice of baselines and potential limitations in generalization to unseen tasks would be beneficial. Theoretical Claims: The theoretical aspects appear well-grounded, and the method aligns with established principles in imitation learning and model-based reinforcement learning. Experimental Designs Or Analyses: The experimental setup is robust, involving diverse environments to test generalizability. 
The comparisons with state-of-the-art baselines further support the effectiveness of the proposed method. Supplementary Material: Unfortunately, the authors did not provide any code appendix, making it hard to assess the reproducibility of the presented results. The text appendices were not further checked. Relation To Broader Scientific Literature: The paper builds on prior work in imitation learning and model-based RL, contributing a novel integration of world models for stability improvements. Essential References Not Discussed: To the best of my knowledge, related work has been discussed to a sufficient extent. However, the authors may want to consider Altmann et al. 2024, "Discriminative Reward Co-Training" as additional related work. Other Strengths And Weaknesses: Strengths: - The paper is well-written and easy to follow. - The approach is simple yet effective in addressing limitations in imitation learning. - Theoretical grounding is solid, and empirical validation is comprehensive. Weaknesses: - The rationale behind using planning instead of RL for sampling interactions needs further elaboration. - A deeper discussion of failure cases and limitations would strengthen the contribution. - The lack of a code appendix makes it difficult to assess the reproducibility of the results. Other Comments Or Suggestions: Providing a code appendix and open-sourcing the implementations would significantly improve the reproducibility of the results. A discussion on the computational cost of the approach compared to baselines would be useful. Questions For Authors: Why was planning chosen over reinforcement learning for sampling environment interactions? How does this choice impact performance in different settings? Could the proposed method generalize to unseen tasks or domains beyond the evaluated environments? Do the authors plan to release the code to improve result reproducibility? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your review and your recognition of its strengths, including its clarity, the simplicity and effectiveness of our approach, and the solid theoretical grounding with comprehensive empirical validation. Below, we provide detailed responses to your comments. ### 1: Rationale for Using Planning Instead of Reinforcement Learning The rationale behind using planning instead of reinforcement learning (RL) for sampling interactions needs further elaboration. Why was planning chosen over RL for sampling environment interactions? How does this choice impact performance in different settings? **Answer:** Planning is particularly effective for complex tasks with high-dimensional observation and action spaces, such as the Dog Run task. On simpler tasks with low-dimensional observation and action spaces, like the Walker Run task, the performance difference between planning and directly using the policy prior for action sampling is not as significant. Moreover, the planning process is initialized with the RL policy prior, which means it generally enhances performance. --- ### 2: Discussion of Failure Cases and Limitations A deeper discussion of failure cases and limitations would strengthen the contribution. **Answer:** Thank you for the suggestion. We have discussed the potential instability caused by an imbalance between the policy and critic in the paragraph titled Balancing Critic and Policy Training on Page 5 of our manuscript. While we mitigate this issue using gradient penalty techniques, which help achieve strong performance across various tasks, the potential for instability may persist in other settings. We will provide a more detailed explanation and additional examples in the revised manuscript. --- ### 3: Generalization to Unseen Tasks or Domains Could the proposed method generalize to unseen tasks or domains beyond the evaluated environments? 
**Answer:** TD-MPC2 [1] has demonstrated the capability to generalize to unseen tasks after multi-task training. However, we did not conduct multi-task training for our IQ-MPC method. As a result, its generalization to unseen tasks would be limited. Expanding our approach to enable better generalization through multi-task training is an interesting direction for future research. --- ### 4: Code Availability and Reproducibility The lack of a code appendix makes it difficult to assess the reproducibility of the results. Do the authors plan to release the code to improve result reproducibility? **Answer:** To ensure reproducibility, we have provided our code in the following anonymous repository for review: https://anonymous.4open.science/r/reward-free-C1D5/. We also plan to make our code publicly available in the future. --- ### 5: Additional Related Work To the best of my knowledge, the related work has been discussed to a sufficient extent. However, the authors may want to consider Altmann et al. (2024), *"Discriminative Reward Co-Training"* as additional related work. **Answer:** Thank you for the suggestion. We will incorporate this additional related work in the revised manuscript. [1] Hansen, N., Su, H., & Wang, X. (2023). TD-MPC2: Scalable, Robust World Models for Continuous Control. *arXiv preprint arXiv:2310.16828.*
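The planning procedure whose rationale is discussed in this rebuttal — gradient-free action search initialized from the RL policy prior — can be sketched as follows. This is a minimal best-of-N illustration on a toy 1-D latent model, not the paper's implementation; `score` stands in for the learned Q/value estimate and `dynamics` for the latent world model.

```python
import numpy as np

def plan(z0, dynamics, score, policy_prior,
         horizon=5, n_samples=64, sigma=0.3, rng=None):
    """Gradient-free MPC sketch: best-of-N rollouts around a policy prior."""
    rng = rng or np.random.default_rng(0)
    best_val, best_seq = -np.inf, None
    for _ in range(n_samples):
        z, val, seq = z0, 0.0, []
        for _ in range(horizon):
            a = policy_prior(z) + sigma * rng.normal()  # perturb the prior
            val += score(z, a)      # stand-in for the learned Q estimate
            z = dynamics(z, a)      # latent-space rollout
            seq.append(a)
        if val > best_val:
            best_val, best_seq = val, seq
    return best_seq[0]              # execute only the first action (MPC)

# Toy 1-D latent world: good actions drive the state toward 0.
step = lambda z, a: z + a
score = lambda z, a: -abs(z + a)
a0 = plan(2.0, step, score, lambda z: 0.0)
# With sigma = 0, planning degenerates to the policy prior itself,
# matching the observation that the gap shrinks on simple tasks.
a_prior = plan(2.0, step, score, lambda z: -0.5, sigma=0.0)
```

Because the search is seeded by the policy prior, planning can only refine the prior's actions, which is consistent with the rebuttal's point that it generally enhances performance and matters most on complex, high-dimensional tasks.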
Summary: The paper proposes a reward-free world model approach, IQ-MPC, for online imitation learning, addressing challenges in high-dimensional and complex tasks. The method integrates decoder-free latent dynamics models with inverse soft-Q learning, eliminating explicit reward modeling. By reformulating the optimization in Q-policy space and leveraging model predictive control (MPC), the approach achieves stable, expert-level performance on benchmarks including DMControl, MyoSuite, and ManiSkill2. Claims And Evidence: 1. IQ-MPC outperforms baselines in tasks with high-dimensional observations and complex dynamics and mitigates instability during training. The compelling experimental results across locomotion and manipulation tasks over other baselines support this claim. 2. The reward-free world model learns effectively from expert demonstrations without explicit rewards, and can decode reliable rewards from the learned Q-values. The theoretical bijection between Q value and reward spaces is established. Moreover, Figure 16 and Table 7 show positive correlations between decoded and ground-truth rewards. Methods And Evaluation Criteria: Methods: The integration of latent dynamics with inverse soft-Q learning is novel, and the gradient-free MPC planning aligns with recent trends (e.g., TD-MPC). Evaluation: The benchmarks are diverse, covering low- and high-dimensional, state-based and visual-based, locomotion and manipulation tasks, which aligns with the problem of complex IL. The use of multiple baselines provides a fair comparison. However, The evaluation of noisy dynamics is limited to one task (Walker Run) with minor perturbations and lack comparison with other baselines, insufficient for a robust assessment. Theoretical Claims: The paper includes theoretical analysis in Section 4.2 and Appendix H, focusing on the learning objective’s suboptimality bound (Lemma 4.1) and consistency loss (Appendix H.3). 
Lemma 4.1 bounds the suboptimality of the policy based on the distribution discrepancy between agent and expert and the dynamics modeling error. The proof in Appendix H is mathematically sound, aligning with standard RL theory. Appendix H.3 links consistency loss to minimizing KL divergence between true and learned dynamics under Gaussian assumptions, but this analysis seems relatively trivial. Experimental Designs Or Analyses: The experimental design is generally sound. The diverse task set and ablation on expert trajectory numbers enhance the evaluation’s robustness. However, the evaluated scope on visual tasks is limited (only 3 tasks). The noisy dynamics test (Figure 14) is limited to one task and minor noise with no comparison to baselines, which may be insufficient to validate robustness claims. Supplementary Material: Yes, all parts. The provided additional experimental results and analysis are comprehensive, supporting the main text. Relation To Broader Scientific Literature: The key contributions of the paper are closely related to imitation learning, model-based RL, and inverse RL, particularly in the areas of decoder-free world models and inverse soft Q-learning. Essential References Not Discussed: I don't think there are any essential references missing, but I'm not completely sure. I would still recommend considering the opinions of other reviewers. Other Strengths And Weaknesses: Strengths: 1. Novel integration of inverse soft-Q learning with reward-free, decoder-free world models. 2. Comprehensive evaluation and analysis. 3. The paper is well-structured and explains its methodology clearly. Weakness: 1. MPC’s computation overhead (Appendix F) is acknowledged but not fully quantified against baselines. 2. The robustness claim in the noisy setting needs more experimental backing. Other Comments Or Suggestions: A statement of the limitations of the proposed method is not found in the paper. 
It is recommended that the authors explicitly discuss the limitations of the proposed approach. Questions For Authors: 1. Why do the authors use such a large number of expert trajectories in the experiment (e.g., 100 expert trajectories for visual tasks in DMC, totaling 100 × 500 expert transition steps)? Given that the total number of online training steps is 1M and the common choice in this setting is just 10 expert trajectories [1][2], this seems unusually high. Although Figure 5 demonstrates the method’s robustness to the number of expert trajectories, showing that performance remains solid with just 10 expert trajectories, this ablation study is only conducted on two tasks. I am curious about the reasoning behind the initial choice. 2. Do the authors plan to extend the proposed method to real-robot experiments? I am eager to see the results in real-world settings, as the paper only presents experiments in simulated environments. [1] Rafailov R, Yu T, Rajeswaran A, et al. Visual adversarial imitation learning using variational models[J]. Advances in Neural Information Processing Systems, 2021, 34: 3016-3028. [2] Wan S, Wang Y, Shao M, et al. Semail: eliminating distractors in visual imitation via separated models[C]//International Conference on Machine Learning. PMLR, 2023: 35426-35443. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your review and your recognition of its strengths, including our novel approach, comprehensive evaluation, and clear presentation. Below, we provide detailed responses to your comments. ### 1: Limited Scope on Visual Tasks and Noisy Dynamics Test The evaluated scope on visual tasks is limited (only 3 tasks). The noisy dynamics test (Figure 14) is restricted to one task and minor noise with no comparison to baselines, which may be insufficient to validate robustness claims. **Answer:** We have conducted additional experiments with visual observations on the Quadruped Walk task, as well as noisy environment dynamics experiments on the Cheetah Run and the dexterous hand manipulation task Key Turn. The table below compares the performance degradation in terms of success rate under noisy environment dynamics among our approach, IQL+SAC, and HyPE: | $p_{tremble}$ | IQL+SAC | HyPE | IQ-MPC | |----------------|----------------|--------------|----------------| | 0 | 0.72 ± 0.04 | 0.55 ± 0.09 | 0.87 ± 0.03 | | 0.005 | 0.54 ± 0.13 | 0.43 ± 0.14 | 0.79 ± 0.07 | | 0.01 | 0.28 ± 0.16 | 0.33 ± 0.13 | 0.61 ± 0.12 | The results demonstrate that our approach is more robust to noisy environment dynamics compared to IQL+SAC and HyPE. Additional experimental results for Quadruped Walk with visual observations and Cheetah Run with noisy dynamics are available in the anonymous repository: https://anonymous.4open.science/r/reward-free-C1D5/. Regarding concerns about the noise level, our noisy setting aligns with HyPE [1], which applied $p_{tremble}$ values of 0.01 and 0.025 for locomotion tasks, comparable to our noise levels in Figure 14. We hope the experimental results provided above sufficiently address your concerns. [1] Ren, J., Swamy, G., Wu, Z. S., Bagnell, J. A., & Choudhury, S. (2024). Hybrid Inverse Reinforcement Learning. 
*arXiv preprint arXiv:2402.08848.* --- ### 2: MPC Computation Overhead MPC’s computation overhead (Appendix F) is acknowledged but not fully quantified against baselines. **Answer:** We have provided a comparative analysis of computational overhead between our approach (both with and without MPC implementation) and other baseline methods (IQL+SAC, CFIL+SAC, HyPE, HyPER) in Figure 15 of our manuscript. Should any further clarification or additional comparative data be required to support your review, we would be happy to provide more detailed explanations or conduct supplementary analyses. Please feel free to share any specific aspects you would like us to elaborate on, and we will gladly address them accordingly. --- ### 3: Number of Expert Trajectories Why do the authors use such a large number of expert trajectories (e.g., 100 expert trajectories for visual tasks in DMC, totaling 100 × 500 expert transition steps)? Given the total number of online training steps is 1M and the common choice in this setting is just 10 expert trajectories [1][2], this seems unusually high. While Figure 5 shows the method’s robustness to fewer expert trajectories, this ablation is only conducted on two tasks. What is the reasoning behind the initial choice? **Answer:** Our approach can also effectively learn using only 10 expert trajectories across all three visual tasks, consistent with the settings in [1][2]. The results using 10 expert trajectories are available in our anonymous repository: https://anonymous.4open.science/r/reward-free-C1D5/. We initially chose to use 100 demonstrations to maintain consistency in the number of expert trajectories across tasks of varying difficulty, as using only 10 expert demonstrations for all tasks would be challenging. --- ### 4: Real-World Experiments Do the authors plan to extend the proposed method to real-robot experiments? 
I am eager to see the results in real-world settings, as the paper only presents experiments in simulated environments. **Answer:** Yes, we plan to extend our method to real-world experiments in future work. Given our method's promising performance in simulation, we believe it has strong potential to tackle real-world robotic tasks. We appreciate your interest and look forward to sharing future results. --- Rebuttal Comment 1.1: Comment: The authors' response addressed most of my concerns. I will maintain my positive score.
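The $p_{tremble}$ noise model used in the table above is not spelled out in the rebuttal; one common reading in the hybrid inverse RL literature is that, at each step, the executed action is replaced with a uniformly random action with probability $p_{tremble}$. Below is a minimal, hypothetical sketch of such a wrapper under that assumption; `ToyEnv`, `sample_action`, and the class names are illustrative stand-ins, not the authors' code:

```python
import random

class ToyEnv:
    """Tiny stand-in environment that records which actions were executed."""
    def __init__(self):
        self.executed = []

    def sample_action(self):
        # Stand-in for sampling uniformly from the action space.
        return "random"

    def step(self, action):
        self.executed.append(action)
        return action  # a real env would return (obs, reward, done, info)

class TrembleWrapper:
    """With probability p_tremble, replace the agent's action with a random one."""
    def __init__(self, env, p_tremble, seed=0):
        self.env = env
        self.p_tremble = p_tremble
        self.rng = random.Random(seed)

    def step(self, action):
        if self.rng.random() < self.p_tremble:
            action = self.env.sample_action()
        return self.env.step(action)
```

With `p_tremble = 0` the wrapper is a no-op; increasing it (0.005, 0.01 in the table above) injects progressively more noise into the executed dynamics.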
Generation from Noisy Examples
Accept (poster)
Summary: This paper studies the model of language generation in the limit, first introduced by Kleinberg and Mullainathan at NeurIPS 2024 and later extended by Li et al. It builds on the seminal work of Gold and Angluin on language identification, dating back to the 1960s, which has had a profound influence on learning theory. The study of this language generation model is nascent and has several open problems. This work tackles a fundamental open problem: learning from adversarially injected noise. Specifically, the authors consider a scenario in which an adversary inserts a finite number of incorrect examples into the learner’s data stream. The paper defines several models of language generation that mirror the three frameworks proposed by Li et al.: uniform generation, non-uniform generation, and generation. Additionally, the authors introduce two further models that require the learner’s number of mistakes to be independent of the noise level. For the primary three models, they characterize conditions under which uniform and non-uniform generation can be achieved in a noisy setting and provide several sufficient conditions for generation with noisy examples. Notably, these conditions demonstrate that noisy generation is feasible for all countable collections – a result that strengthens the stark contrast between language identification and language generation, showing that the latter remains possible even in the presence of adversarial corruptions. The paper’s results are primarily existential or non-computational, relying on the existence of certain oracles. The authors leave open the challenge of extending these findings to computable or algorithmic results and also leave the complete characterization of generation with noisy examples to future work. ## Update after the rebuttal I thank the authors for explaining their fix to Theorem 3.1. I maintain my original rating. 
Claims And Evidence: The paper’s claims are generally well-supported by formal proofs, and overall, I find the arguments convincing and plausible. However, I have some reservations regarding the proof of Theorem 3.1. The proof begins by considering a set F such that the common intersection of all hypotheses in F is finite, and that there exists at least one hypothesis in F whose removal renders the common support infinite. I am not entirely convinced that such a collection F necessarily exists in all cases. For instance, consider the hypothesis class $H = \{\mathbb{Z}_{\geq i} \cup \{\infty\} \mid i \in \mathbb{N}\}$. A potential fix for the proof could be as follows: First, exhaust the common intersection of H, which is finite by definition. Then, regardless of the uniform bound d, consider the time d+C, where C is the size of the common intersection. Two cases arise: if the learner outputs an element from the common intersection, it must be a mistake since that element appears in the training set; if not, there exists some hypothesis h that does not include the learner’s output. In this scenario, one can select this h as the target hypothesis and treat all outputs prior to it (except those in the common intersection) as noise. Methods And Evaluation Criteria: Not applicable Theoretical Claims: I did not carefully check the proofs of the theorems but I did sanity check that the results sound plausible based on the existing results on language generation. Experimental Designs Or Analyses: Not applicable Supplementary Material: I checked Sections A and B in the appendix but not the remaining sections. Relation To Broader Scientific Literature: **Novelty compared to existing works** As the authors agree, the results and techniques of this paper are similar to those of Li et al. For example, the definition of the noisy closure dimension is clean but is a simple (but not immediately obvious) extension of the closure dimension introduced by Li et al. 
While the definitions and proof structures are similar, I think there are enough differences to justify accepting this paper to ICML. Further, the paper studies a foundational problem that is fundamentally different from the problem studied in Li et al. The fact that the characterizations end up being similar in spirit to those of Li et al. should not be considered a weakness of this paper but rather a strength. **Comparison to Related Work** I think the paper does a good job of discussing recent related works and mentioning the related works on language identification in the noisy setting. To further improve the writing, it would be useful to compare the results in this paper to existing results on language identification with noise from earlier studies. Given space constraints, this discussion could be in an appendix; but it would be very useful for the reader to see a summary of the results from prior work. Essential References Not Discussed: Not applicable Other Strengths And Weaknesses: This paper is very relevant to the ICML community and one of its strengths is that it tackles a fundamental problem in the foundations of generative AI, using a recent model of language generation. Other Comments Or Suggestions: Suggestions to improve writing 1. Li et al. leave the complete characterization of generation in the limit open, and while this is not immediately relevant to the work, it would be useful to highlight this in the related works for the benefit of the reader. The authors do mention this in section 4 of discussion and open questions, but I believe it also belongs in the related work. 2. In addition, the naming of Assumption 2.1 seems unnecessarily complex. Instead of writing it this way, one could, for instance, say “Let $H$ be any hypothesis class consisting of infinite hypotheses.” 3. The names of Definitions 2.4 and 2.5 are also somewhat confusing and difficult to distinguish. 
I strongly suggest renaming them to more descriptive terms like “noise level independent uniform generatability” and “noise level dependent uniform generatability.” 4. Moreover, the paper alternates between interpreting hypotheses as functions and as sets. It would be helpful to the reader if the paper were consistent about whether hypotheses are interpreted as functions or as sets. 5. Finally, given the number of interesting results, it would be very useful to include a diagram that summarizes the different notions of generation (both with and without noise) along with their corresponding dimensions, with arrows illustrating the implications between these concepts. Typos 1. Line 13 Column 2: Li ⊂ U should have ⊆? And in other places also, e.g., in related work 2. In Line 66 Column 1, the citations should be either chronological or alphabetical? Currently, they seem to follow neither ordering. 3. Line 81 column 2 typo repeated “in terms” 4. Line 155 column 2, has a typo, one needs to consider the size of the set (to be less than ∞). The same issue is also present in other places, e.g., Definitions 2.4 and 2.5. 5. The notation “abbreviate a finite sequence x1, . . . , xn as x1:n.” from line 183 should appear earlier in the preliminaries – before its first use. 6. Line 324 column 1, Line 347 column 1 etc. have some typos – which seem like a missing latex definition. 7. Some citations have “et al.”; please present full citations. Questions For Authors: Q1. Could you please confirm if the issue I mentioned in the proof of Theorem 3.1 is correct, and could you check whether it could be fixed? Ethical Review Concerns: Not applicable Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We will make sure to address all typos (i.e., typos 1-7) and suggestions (i.e., suggestions 1-5) in the camera-ready version. > Could you please confirm if the issue I mentioned in the proof of Theorem 3.1 is correct, and could you check whether it could be fixed? We thank the reviewer for catching this bug! We agree with the reviewer's finding that our proof of Theorem 3.1 is incomplete since there exist classes $F$ such that $|\bigcap_{h \in F} \operatorname{supp}(h)| < \infty$, but for every $f \in F$, we still have that $|\bigcap_{h \in F\setminus \{f\}} \operatorname{supp}(h)| < \infty$ (e.g. the example provided by the reviewer). However, the reviewer's fix to the proof of the necessity direction does *not* work, and the class they provided shows that the condition in Theorem 3.1 is *not* necessary. The reason is subtle and due to the fact that in Definition 2.4 (line 186-187), we decided to measure the "sample complexity" of the generator in terms of the number of *positive* examples it has seen and not the *overall* number of unique examples, since the latter is too stringent (as evidenced by the reviewer's own proof). That is, as written in Definition 2.4, the generator only needs to be correct after observing $d$ *positive* examples. This is unlike our definitions for noisy uniform, noisy non-uniform, and in-the-limit, where the "sample complexity" is defined with respect to the *total number* of examples (i.e., the generator has to be correct after observing any $d$ examples). We agree this is confusing and we will change Definition 2.4 so that it matches the definitions for noisy uniform, noisy non-uniform, and in-the-limit. After doing so, the reviewer's necessity proof is now correct and the characterization in Theorem 3.1 holds for the modified version of uniform noisy generatability where we measure the sample complexity in terms of the *total number* of examples. 
We will make a note of this in the camera-ready version and acknowledge the reviewer for their contribution. Nevertheless, the current proof of the necessity direction of Theorem 3.1 gives the following necessary condition for Definition 2.4 as it is written: if there exists a subclass $F\subseteq H$ and a hypothesis $f \in F$ such that $|\bigcap_{h \in F} \operatorname{supp}(h)| < \infty$ and $|\bigcap_{h \in F\setminus \{f\}} \operatorname{supp}(h)| = \infty$, then $F$ is not uniformly noisily generatable when the sample complexity is measured in terms of positive examples. Note that all finite classes whose closure is finite satisfy this property. **Thus, the main takeaway of the section remains unchanged -- uniform noisy generation is hard and only possible for finite classes that are generatable immediately.** We will correct Theorem 3.1 appropriately. The reviewer's counterexample highlights an interesting separation in generatability based on whether one measures the sample complexity in terms of only the positive examples or all examples. For completeness' sake, we can "fix" Theorem 3.1 by providing a new characterization of uniform noisy generatability when the sample complexity measures only the positive examples, like what is currently written in Definition 2.4. Claim: A class $H$ is uniformly noisily generatable if and only if $\sup_n (NC_n(H) - n) < \infty$. Proof sketch: For the necessity direction, suppose that $\sup_n (NC_n(H) - n) = \infty$. Then for every $d \in \mathbb{N}$, we can find a $t \geq d$ and a sequence $x_1, \dots, x_t$, such that $|\langle x_1, \dots, x_t \rangle_{H, t-d}| < \infty.$ Hence, by padding $x_1, \dots, x_t$ with any remaining elements in $\langle x_1, \dots, x_t \rangle_{H, t-d}$, we can force the Generator to make a mistake while ensuring that the hypothesis chosen is consistent with at least $d$ examples in the stream. 
For the sufficiency direction, if $\sup_n (NC_n(H) - n) < \infty$, then there exists a $d \in \mathbb{N}$ such that for every $t \geq d$ and distinct $x_1, \dots, x_t$, we have that either $\langle x_1, \dots, x_t \rangle_{H, t-d} = \bot$ or $|\langle x_1, \dots, x_t \rangle_{H, t-d}| = \infty.$ Thus, the algorithm which for $t \geq d$ plays from $\langle x_1, \dots, x_t \rangle_{H, t-d}\setminus \{x_1, \dots, x_t\}$ if $\langle x_1, \dots, x_t \rangle_{H, t-d} \neq \bot$ is guaranteed to succeed. To put this into context, if one measures sample complexity with respect to the *overall* number of examples, then another characterization of uniform noisy generatability is the finiteness of $\sup_n (NC_n(H))$. Thus, the difference between measuring the sample complexity with respect to just positive examples or with respect to all samples is whether or not you subtract $n$ from $NC_n(H)$.
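For readability, the two characterizations of uniform noisy generatability contrasted in this rebuttal can be restated in one display, using the $NC_n(\mathcal{H})$ notation from the rebuttal (no new claims, just a side-by-side summary):

```latex
% Uniform noisy generatability of a class H, under the two ways of
% counting the sample complexity discussed in the rebuttal:
\begin{align*}
\text{all examples counted:}
  &\qquad \sup_{n} NC_n(\mathcal{H}) < \infty,\\
\text{only positive examples counted:}
  &\qquad \sup_{n} \bigl( NC_n(\mathcal{H}) - n \bigr) < \infty.
\end{align*}
```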
Summary: The paper studies the problem of language generation in the limit in a noisy setting where an adversary inserts a finite number of negative examples. The paper provides necessary and sufficient conditions for when a binary hypothesis class can be noisily generatable. The paper examines various definitions of noisy generatability. Claims And Evidence: Yes Methods And Evaluation Criteria: N/A Theoretical Claims: The proofs in the main text appear to be correct. Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: The paper extends the previous work by Kleinberg & Mullainathan (2024) to the noisy examples setting. Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. The paper is well-written with nice intuition, and the math is rigorous. 2. The paper provides a complete characterization of which classes are noisily uniformly generatable. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Summary: The paper extends prior work on language generation by studying the generation of new, unseen positive examples even when the example stream is adversarially contaminated with a finite number of noisy (negative) examples. It introduces new notions, namely noisy uniform generatability, noisy non-uniform generatability, and noisy generatability in the limit; these notions adapt the already introduced framework of uniform and non-uniform generation (as well as generation in the limit) to more realistic, noisy settings. The authors also introduce the setting of uniform noisy generation, but prove that it is a significantly harder problem in that even hypothesis classes with just two hypotheses might not be uniformly noisily generatable. A key contribution is the introduction of the Noisy Closure dimension, a scale-sensitive dimension that extends the Closure dimension (Li et al., 2024) and characterizes when a hypothesis class can be noisily generated, providing some sufficient and necessary conditions. The authors further demonstrate that all countable classes are noisily non-uniformly generatable, ensuring they can eventually generate new positive examples even under finite noise in the stream. Claims And Evidence: The claims are clearly stated and supported by sufficient evidence. Methods And Evaluation Criteria: N/A Theoretical Claims: Overall, the proofs of the claims are mainly correct. Some parts, however, contain mistakes that need adjustment. More precisely, there is a wrong choice of indices in the proof of Theorem 3.9, as it seems that the authors mistakenly swapped $s$ with $t$ starting from line 425. One would then need to fix this and verify whether the following claims still hold true. 
Additionally, in the proof of Theorem 3.1, the authors claim that $x_1, \dots, x_d, z_1, \dots, z_d$ contain $2d$ unique examples (lines 248 and 269) while this is not necessarily true, i.e., there can exist $i,j \in [d]$ such that $x_i = z_j$. This would require a different selection of those $2d$ examples in order to ensure their uniqueness. Anyway, these are not major issues and they appear to be fixable at first glance. Experimental Designs Or Analyses: N/A Supplementary Material: Yes, I reviewed the supplementary material and I checked the math in the proofs, but not in full detail. Relation To Broader Scientific Literature: I believe the main contributions of this work nicely extend the already available results on models for (language) generation to a more realistic setting with adversarial noise in the example stream. As already remarked, the authors do a good job in comparing to existing results and clearly framing their contribution relative to them. Essential References Not Discussed: The authors discussed the most relevant literature. Other Strengths And Weaknesses: A formal framework for the study of (language) generation was already introduced, as outlined by the authors themselves. Nevertheless, handling the presence of noise in the stream is quite interesting and requires an adaptation of ideas from previous work, without being a direct consequence of them. The authors also cover multiple settings where the difference lies in the dependence of the number of mistakes on the noise level and the hypothesis used to generate the positive examples, and extensively study their differences. Other Comments Or Suggestions: - The function $\mathrm{supp}(\cdot)$ is used but never formally defined. It would be better to have a definition, e.g., $\mathrm{supp}(h) = h^{-1}(1)$. 
- In the definition of $\mathcal{H}(x_{1:d},n)$ it would be better to use a semicolon (or a pipe) to separate examples from noise level, as it makes it easier to parse $\mathcal{H}(x_1, \dots, x_d; n)$ (see preliminaries and, e.g., line 345). - Please specify that you assume the natural numbers $\mathbb{N}$ consist of the positive integers (excluding $0$), since it is not a universal assumption. - At line 326, it could also be helpful to add the "mistake bound" from online learning as another helpful analogy other than the "sample complexity". - Maybe specify that the assumption at line 356 in the proof of Lemma 3.8 is without loss of generality, if that is the case. - It seems that the inner loop of Algorithm 1 might always stop within one of its iterations, and so the last line might be superfluous. For instance, in the proof of Theorem 3.9 at lines 385-392, the case $j = r_s+1$ might contradict the properties of $\mathcal{Q}$ since it is always given $x_{r_s}$ as input when generating $\hat z_1^s, \dots, \hat z_{r_s}^s$; hence, the if condition within the inner loops should be true for at least one $i \in [r_s]$. Typos: - Line 14: "by" instead of "by the". - Line 62: "negative" instead of "positive". - Line 91: "a countably" instead of "an countably". - Line 122: "is possible" instead of "possible". - Line 179: "countably infinite" instead of "countable infinite". - Lines 182-183 should go at the beginning of Section 2, since the notation is already used before. - Line 168: $\in \mathrm{supp}(h)$ instead of $\subseteq \mathrm{supp}(h)$, or put curly brackets around sequence. - Line 202: "as a stream" instead of "of stream". - Line 204: missing period at the end. - Lines 190-197: $d^*$ instead of $d$ to be consistent with Definition 2.4. - Line 228: "form" instead of "from". 
- Lines 234-235: "any subset" instead of "the any subset", "for which there exists" instead of "there exists", $\mathcal{F}\setminus\{f\}$ instead of $\mathcal{F}\setminus f$. - Line 245: $\mathcal{F}\setminus\{f\}$ instead of $\mathcal{F}\setminus f$. - Line 287: "noise level" instead of "level" for clarity. - Line 288: "exist" instead of "exists". - Line 294: "as it is defined" instead of "it is defined". - Line 312: $h \in \mathcal{H}$ instead of $h \in \mathrm{supp}(h)$. - Line 319: "being" instead of "bring". - Line 291: specify "positive examples". - Line 295: "a subset of $\mathcal{H}$" should refer more precisely to $\mathcal{F}$. - Line 338: specify Corollary 3.4 other than Theorem 3.3 for the result to follow. - Line 381: "countable" instead of "countably infinite". - Line 420: "at most" instead of "most". - Lines 425-439, left column: all references to $t$ should actually be $s$, given line 423. - Lines 385-392, right column: all references to $t$ should actually be $s$, given line 423. - Line 397: "another" instead of "a another". Questions For Authors: - Do you believe it is possible to extend this generation framework beyond the "binary classification" one? For instance, consider generating a sequence with examples belonging to multiple classes and guaranteeing the generation of examples from some of these classes. - Do you think the framework could be extended further, other than the multiclass one mentioned above? Do you foresee any technical challenge that would require significant changes in the framework or in the assumptions made? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and suggestions. We agree with all the suggestions made by the reviewer and will make sure to incorporate these along with fixing the typos in the camera-ready version. Below, we address some questions and concerns. > More precisely, there is a wrong choice of indices in the proof of Theorem 3.9, as it seems that the authors mistakenly swapped $s$ with $t$ starting from line 425. One would then need to fix this and verify whether the following claims still hold true. We thank the reviewer for catching this typo! The reviewer is exactly correct, and the claim goes through with this modification. We will make sure to make this change in the camera-ready version. > Additionally, in the proof of Theorem 3.1, the authors claim that... We thank the reviewer for this comment. However, we do believe that $x_1, \dots, x_d, z_1, \dots, z_d$ contains $2d$ points. To see why, suppose that $x_1, \dots, x_d$ is any set of $d$ distinct points in $\bigcap_{h \in F\setminus \{f\}} \operatorname{supp}(h).$ It suffices to show that $\{x_1, \dots, x_d\}$ and $\operatorname{supp}(f) \setminus \bigcap_{h \in F} \operatorname{supp}(h)$ are disjoint. Pick some $x_i \in \{x_1, \dots, x_d\}$. If $x_i \in \operatorname{supp}(f)$, then $x_i \in \bigcap_{h \in F} \operatorname{supp}(h)$. To see why, recall that $x_i \in \bigcap_{h \in F\setminus \{f\}} \operatorname{supp}(h)$, which means that $x_i \in \operatorname{supp}(h)$ for all $h \in F\setminus \{f\}$. Thus, if $x_i$ is also in $\operatorname{supp}(f)$, then it must be the case that $x_i \in \bigcap_{h \in F} \operatorname{supp}(h)$. Overall, this means that $x_i \notin \operatorname{supp}(f) \setminus \bigcap_{h \in F} \operatorname{supp}(h).$ On the other hand, if $x_i \notin \operatorname{supp}(f)$, then it must be the case that $x_i \notin \operatorname{supp}(f) \setminus \bigcap_{h \in F} \operatorname{supp}(h),$ completing the proof. 
We will make sure to clarify this in the camera-ready version. > Do you believe it is possible to extend this generation framework beyond the "binary classification" one? Yes, we do believe that these results should generalize to the multiclass case through the similar setup of prompted generation from Li et al. (2024). For finite label spaces, the characterization of noisy generatability should remain unchanged and simply requires modifying the noisy closure dimension into the prompted noisy closure dimension. However, the more interesting case of infinite labels is unclear. We think this is an interesting direction of future work! > Do you think the framework could be extended further, other than the multiclass one mentioned above? Do you foresee any technical challenge that would require significant changes in the framework or in the assumptions made? Yes, we believe there are several frameworks for modeling noise that are of interest, beyond the multiclass case and the setting we considered. For example, one could also consider noisy generation in a stochastic setting like that studied by Kalavasis et al. [2024]. Moreover, one can also study variants of the noisy model where, for example, the injection of noise must follow a particular rate or where the generator has query access to whether an example is noisy or not. In general, in the real world, data is most likely not worst-case, and additional feedback is often available to the generator. Thus, it is an interesting direction to study relaxations of our model by either weakening the adversary or strengthening the generator. New techniques will need to be developed to understand how much such relaxations can help. Kalavasis, Alkis, Anay Mehrotra, and Grigoris Velegkas. "On the limits of language generation: Trade-offs between hallucination and mode collapse." arXiv preprint arXiv:2411.09642 (2024).
Summary: This paper proposes an extension of the theoretical model of language generation introduced by Kleinberg & Mullainathan (2024), exploring the impact of noisy example streams on generatability. The authors introduce the concepts of "noisy uniform generatability," "noisy non-uniform generatability," and "noisy generatability in the limit" to account for the influence of noise in the example stream, addressing a significant gap in previous research which assumed a noiseless setting. Key contributions include a complete characterization of noisy uniform generatability through the Noisy Closure dimension, new conditions for noisy non-uniform generatability, and sufficient conditions for noisy generatability in the limit. The paper also demonstrates that while noisy uniform generation is more difficult than its noiseless counterpart, all finite classes are still noisily uniformly generatable. Claims And Evidence: 1. The paper extends Kleinberg & Mullainathan’s (2024) theoretical framework to include the impact of noisy example streams on generatability. Compared to the conclusions in noiseless settings, the new findings under noisy conditions are indeed a significant contribution. However, what are the technical difficulties and contributions, especially given that noise has already been extensively studied in non-generative contexts? 2. While this paper is theoretical and does not require experimental validation, would it be possible to conduct experiments? If so, how could one design and organize such experiments? 3. There appears to be a typo on line 63—should "positive examples" be "negative examples"? Methods And Evaluation Criteria: The exploration of conclusions under noisy conditions is relevant and meaningful. Theoretical Claims: The theory appears to be sound. 
Experimental Designs Or Analyses: See the second point in Claims And Evidence Supplementary Material: No Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We address their questions and concerns below. > However, what are the technical difficulties and contributions, especially given that noise has already been extensively studied in non-generative contexts? As the reviewer noted, noise has been extensively studied in non-generative contexts, for example in PAC and online learnability. However, to the best of our knowledge, this is the first work to rigorously formalize and study noise in the framework posed by Kleinberg & Mullainathan (2024) and Li et al. (2024). Our main contributions are summarized near the end of Section 1. At a high level: 1) We introduce a learning-theoretic framework for studying noise in language generation, 2) We provide necessary and sufficient conditions in terms of combinatorial dimensions for various notions of noisy generation, spanning different levels of difficulty for the generator. We feel that this work is an important stepping stone towards bridging learning theory and robustness in generative machine learning. As for technical difficulties, accounting for noise in the data stream requires new proof techniques. Indeed, since, in our model of noise, the generator does not know the number or location of noisy examples, it is a priori unclear whether and how generation of clean positive examples is possible. Nevertheless, it is possible, and our results show that indeed we need to go beyond existing results by considering scale-sensitive dimensions. This is unlike in prediction, where one does not need a scale-sensitive dimension to characterize agnostic PAC and online learnability. > While this paper is theoretical and does not require experimental validation, would it be possible to conduct experiments? If so, how could one design and organize such experiments? This is a great question, and one that we have been thinking about too! 
One high-level takeaway from our results is that in order to be robust to noisy examples in the training data, one needs to explicitly incorporate the fact that noisy data may be present in the training dataset during the process of training a generator. Moreover, our results hint at the fact that it might be useful to iteratively guess the level of noise in the training data and use this guess to adapt the training procedure. These two insights may lead to the following generation algorithm which could be interesting to study experimentally: start with an initial guess of the noise in the training set, use this initial guess to train a robust variant of a GAN that can take in a noise-level, use the robust GAN to estimate the amount of noise in the training data, and repeat this process. > There appears to be a typo on line 63—should "positive examples" be "negative examples"? The reviewer is exactly right! This should be negative examples. We will make sure to fix this in the final version.
Determining Layer-wise Sparsity for Large Language Models Through a Theoretical Perspective
Accept (spotlight poster)
Summary: This paper determines the sparsity rate of each layer for LLMs from a theoretical perspective, proposes that there is a "reconstruction error explosion" problem in sparse LLMs, and proposes to use a monotonically increasing arithmetic progression to determine the sparsity rate of each layer, thereby alleviating the "reconstruction error explosion" problem. Experimental results on different models and datasets show the effectiveness of this method and that it outperforms existing methods. ## update after rebuttal I am satisfied with the authors' response, including clarifications and additional experiments. I will keep my positive rating, and I hope the responses could be incorporated into the next revision. Claims And Evidence: The author claims that there exists a "reconstruction error explosion" problem in sparse LLMs which leads to severe accuracy degradation. This claim is supported by specific observations, and in Figure 1, we can see the trend of reconstruction error explosion. The author also analyzes the reasons for the "reconstruction error explosion" from a theoretical perspective, and I think there is no issue with the author's claim. Additionally, the author claims that using a monotonically increasing arithmetic progression can alleviate the above problem, thereby improving the accuracy of sparse LLMs. Extensive experimental validations support this claim made by the author. Methods And Evaluation Criteria: The layer-wise sparsity method based on monotonically increasing arithmetic progression proposed in this paper makes sense, as it effectively improves the accuracy of sparse LLMs, outperforming existing layer-wise sparsity methods. Theoretical Claims: I have examined the author's proof of the theorem and did not find any issues. 
The author's proof mainly involves expanding and transforming the reconstruction error based on the Frobenius norm, thereby proving that increasing the sparsity rate in earlier layers leads to an increase in reconstruction error in later layers, which consequently results in the "reconstruction error explosion". Experimental Designs Or Analyses: The authors' experimental design follows established protocols for LLM sparsification methods, including Wanda, SparseGPT, and other layer-wise sparsity approaches for comparison. The paper's experiments encompass various LLM architectures and sizes across numerous datasets, demonstrating results at different sparsity rates. I find the experimental design to be well-reasoned and effective in evaluating the proposed method's efficacy. Supplementary Material: I read the supplementary material submitted by the author, which contains the code of this paper and somehow ensures the reproducibility of the method. Relation To Broader Scientific Literature: The author divides the previous layer-wise sparsity methods for LLMs into two categories, including metric based methods (OWL, AlphaPruning and ALS) and search based methods (DSA). The former requires complex calculations and lacks theoretical guarantees of optimality, while the latter demands time-consuming search processes spanning days and the search effect is heavily dependent on human expertise. Therefore, the author proposes to derive the layer-wise sparsity rate of LLMs from a theoretical perspective, so that only a simple monotonically increasing arithmetic progression is needed, and the author guarantees from a theoretical perspective that this scheme is near optimal. I think the analysis of the issues existing in the existing layer-wise sparsity methods is reasonable, and the motivation behind determining the layer-wise sparsity rate from a theoretical perspective is great. 
Essential References Not Discussed: To the best of my knowledge, no essential references are left undiscussed. The works the author discusses are the latest sparsity methods for LLMs, and the layer-wise sparsity methods most relevant to this paper have all been covered.

Other Strengths And Weaknesses:

Strengths:
1. The paper demonstrates good organization, with a methodically presented approach supported by numerous figures, tables, and reproducible code.
2. The idea of a monotonically increasing arithmetic progression to determine the layer-wise sparsity rate is interesting and novel, and the authors reveal the rationality of this sparsity scheme from the perspective of "reconstruction error explosion".
3. The authors provide comprehensive experimental results on various tasks and models, and ATP outperforms the existing layer-wise sparsity baselines. The efficiency of the sparse models makes them practical to deploy in the real world.

Weaknesses: For larger-parameter LLMs, the improvements from the ATP method appear to be less substantial than those observed in smaller LLMs.

Other Comments Or Suggestions: All the curves in the left plot of Figure 1 are close together and need to be zoomed in to be seen clearly. Can the author improve the clarity of this figure?

Questions For Authors:
1. What are the practical advantages of LLM sparsity as adopted in this paper compared to other acceleration techniques (such as quantization)?
2. The authors show the zero-shot accuracy of LLMs at 50% and 70% sparsity. I would like to know what the zero-shot accuracy of the ATP method would be under the 60% sparsity setting.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
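For concreteness, the monotonically increasing arithmetic-progression schedule discussed in this review can be sketched in a few lines of Python. This is a minimal illustration, not the paper's actual implementation: the function name and the idea of passing the common difference as a fixed input are assumptions made here for clarity.

```python
def arithmetic_sparsity_schedule(num_layers, mean_sparsity, step):
    """Per-layer sparsity rates forming a monotonically increasing
    arithmetic progression whose average equals `mean_sparsity`.

    `step` is the common difference between consecutive layers; earlier
    layers get lower sparsity, later layers higher."""
    first = mean_sparsity - step * (num_layers - 1) / 2.0
    rates = [first + i * step for i in range(num_layers)]
    if not all(0.0 <= r <= 1.0 for r in rates):
        raise ValueError("step too large for this mean sparsity")
    return rates

# e.g. 8 layers at an average sparsity of 70% with a common difference of 0.02
rates = arithmetic_sparsity_schedule(num_layers=8, mean_sparsity=0.7, step=0.02)
```

By construction the mean of the progression equals the target overall sparsity, so the global compression budget is preserved while later layers are pruned more aggressively.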
Rebuttal 1:

Rebuttal: **Thanks for your careful review and comments!**

> Weaknesses (Smaller improvements for larger models.)

We have observed that larger models show less performance improvement with ATP. This observation aligns with common patterns in model pruning and compression, where the returns tend to diminish as model size increases. However, our ATP still demonstrates consistent improvements across different model sizes, especially notable in larger models:

1. **Significance of Relative Gains** While absolute improvements may appear smaller for larger models, the relative gains remain significant. For example, for LLaMA-65B at 70% sparsity with Wanda, ATP reduces the zero-shot accuracy loss from 10.81% to 6.65%, representing a 38.48% relative improvement. In contrast, the improvement of AlphaPruning is only 30.71%. Such gains can translate to meaningful performance enhancements in real-world applications.
2. **Inherent Resilience of Larger Models** Larger models naturally exhibit greater resistance to pruning. As shown in Table 1, 70% sparse LLaMA2-70B with SparseGPT experiences a zero-shot accuracy loss of only 8.30%, compared to 18.81% for LLaMA2-7B, leaving less room for further improvement. Nevertheless, our method further reduces the loss of LLaMA2-70B to 6.89%. This observation aligns with the findings of Li et al. [1].
3. **Alignment with Theoretical Insights** This behavior aligns with recent theoretical insights. Liu et al. [2] demonstrated that larger models can maintain performance even under random pruning, supporting the idea of their inherent pruning resistance.

In summary, our ATP method remains effective across all model sizes and still achieves considerable performance improvements in larger models, providing critical stability and performance benefits.

[1] Li et al. Train big, then compress: Rethinking model size for efficient training and inference of transformers.
[2] Liu et al. The unreasonable effectiveness of random pruning: Return of the most naive baseline for sparse training.

> Other Comments Or Suggestions (Figure 1 needs to be enlarged.)

Thank you for your suggestion. We will enlarge Figure 1 in our final version to make it clearer for viewing.

> Question 1 (What are the advantages of sparsity compared to quantization?)

Network sparsity and quantization are both effective methods for improving inference speed and reducing memory footprints, with both techniques offering significant acceleration benefits across different hardware platforms (such as CPU and GPU). However, comparisons between these two approaches in terms of efficiency may vary. Recent research suggests that network sparsity can slightly outperform quantization in inference speedup. For example, SqueezeLLM [3] demonstrates that quantizing LLaMA2-7B to 3 bit achieves a 2.1$\times$ speedup on GPU. Meanwhile, the latest sparse CPU and GPU kernels (DeepSparse and nm-vllm) provide better support for deploying sparse LLMs, achieving 2.63$\times$ CPU inference acceleration and 2.23$\times$ GPU inference acceleration for 70% sparse LLaMA2-7B, as shown in Table 6 of our paper. Additionally, compared to quantization, sparsity can better maintain and recover performance through fine-tuning, as described in [4]. Pruning and quantization are compatible and complementary approaches; combining them can further enhance efficiency. Table 3 in [4] shows that combining network sparsity with quantization (INT8) can achieve up to 9.08$\times$ acceleration, significantly higher than using either method alone.

[3] SqueezeLLM: Dense-and-Sparse Quantization.
[4] Sparse Fine-tuning for Inference Acceleration of Large Language Models.

> Question 2 (Experimental results with 60% sparsity.)

We present the zero-shot accuracy results of Llama-3-8B at 60% sparsity below. The sparse model is obtained using the Wanda method, and we compare ATP with other layer-wise sparsity methods.
|Method|HellaSwag|Winogrande|BoolQ|OBQA|PIQA|ARC-e|ARC-c|Mean|
|-|-|-|-|-|-|-|-|-|
|Dense|60.19|72.77|81.35|34.80|79.71|80.09|50.43|65.62|
|Uniform|37.92|59.90|68.16|19.80|67.55|59.63|27.43|48.63|
|OWL|40.71|62.90|70.34|22.90|69.80|62.28|31.39|51.47|
|DSA|40.16|63.01|70.09|22.80|69.05|62.20|30.51|51.12|
|AlphaPruning|41.63|64.51|71.25|22.60|68.61|62.88|30.46|51.71|
|**ATP**|**41.91**|**65.49**|**71.68**|**23.70**|**70.94**|**63.62**|**31.96**|**52.76**|

We observe that:

1. ATP significantly improves the accuracy of Wanda, increasing the average zero-shot accuracy by 4.13% and narrowing the performance gap between the sparse and dense models.
2. ATP outperforms OWL, DSA, and AlphaPruning, with 1.05% higher accuracy than the best-performing AlphaPruning.

In conclusion, our ATP method demonstrates consistent improvements across various sparsity levels and significantly outperforms existing layer-wise sparsity methods.

**We will incorporate the above responses into the final version. We hope that our response has addressed your concerns. Thank you!**
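The "relative improvement" figures quoted in this rebuttal (e.g., 38.48% for LLaMA-65B) follow from a simple ratio of accuracy losses. A one-line sketch, with a hypothetical helper name:

```python
def relative_improvement(baseline_loss, method_loss):
    """Relative reduction of the accuracy loss with respect to the baseline,
    e.g. a loss shrinking from 10.81% to 6.65% is a ~38.48% relative gain."""
    return (baseline_loss - method_loss) / baseline_loss

improvement = relative_improvement(10.81, 6.65)  # ~0.3848
```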
Summary: This paper identifies the issue of "reconstruction error explosion" in existing LLM sparsification methods. Through theoretical analysis, it derives a layer-wise sparsity allocation method based on a monotonically increasing arithmetic progression to alleviate this issue. Both theoretical analysis and experimental results indicate that the above sparsity allocation scheme is near optimal and significantly improves the performance of sparse LLMs with various architectures, outperforming existing layer-wise sparsity methods.

Claims And Evidence:
1. The author claims that existing post-training sparsification methods for LLMs suffer from the issue of "reconstruction error explosion". The author supports this claim through the proofs of Theorems 3.1-3.4. Moreover, the author depicts the "explosive" trend of the reconstruction error of sparse LLMs increasing with the layer number in Figure 1, which also supports the claim.
2. The author proposes that the issue of "reconstruction error explosion" can be alleviated through a layer-wise sparsity rate scheme based on a monotonically increasing arithmetic progression, and proves the effectiveness of this method through extensive experiments. Moreover, through the proof of Theorem 3.5 and the experiment comparing with Bayesian search, the author supports the claim that the scheme is near optimal both theoretically and experimentally.
3. All in all, the author has effectively supported the proposed claims through theoretical analysis and experimental verification.

Methods And Evaluation Criteria: The author discovers the issue of "reconstruction error explosion" in the sparsity methods of LLMs. This refers to the cumulative effect of reconstruction errors.
Throughout the entire sparsification process, the errors from the early layers propagate and amplify in the subsequent layers, leading to a significant increase in the overall reconstruction error and a substantial decline in the model's performance. This explains well why existing sparse LLMs have relatively low accuracy. Moreover, the author proposes to alleviate this issue by using a layer-wise sparsity scheme based on a monotonically increasing arithmetic progression, which is intuitive and reasonable. The author's theoretical analysis and experimental verification both show that this layer-wise sparsity scheme makes sense.

Theoretical Claims: Yes. I have checked the proofs of Theorems 3.1-3.5. The author has proven the effect of the sparsity rate on the reconstruction error, the effect of the reconstruction error of the previous layer on that of the next layer, and that the total reconstruction error of the proposed monotonically increasing sparsity rate scheme is smaller than that of any non-monotonically increasing sparsity rate scheme. I think the author's proofs are correct, and I haven't found any issues.

Experimental Designs Or Analyses: Yes. The author has conducted sufficient experiments, comparing with the SOTA baselines. Experiments have been carried out on different LLMs, many tasks have been evaluated, and experiments have also been conducted on multimodal models and vision models. The author has also demonstrated the effects of combining with different compression techniques. In addition, the author has provided ablation experiments on different hyperparameters and analyzed the computational efficiency of the proposed method as well as the distribution of the layer-wise sparsity rates. Both the experimental design and analysis are reasonable and effective.

Supplementary Material: Yes. I have checked the supplementary materials submitted by the author.
The author's supplementary materials include the code for this paper, which contains the installation of the environment, the script for performing model pruning, and the code for evaluating the performance of sparse LLMs. This ensures the reproducibility of the experimental results in this paper.

Relation To Broader Scientific Literature: The author notes that existing layer-wise sparsity methods for LLMs either determine the sparsity rate of each layer by calculating a per-layer importance metric, or obtain the layer-wise sparsity rates through a search method. Importance metrics for LLMs are often heuristic and require a great deal of effort to verify their effectiveness, while search methods are very time-consuming. In contrast, the author proposes that the layer-wise sparsity rates of LLMs can be determined through a monotonically increasing arithmetic progression, eliminating the need for cumbersome layer-wise importance calculations or time-consuming search processes. This scheme is simple and effective. It can quickly determine the sparsity rate of each layer of LLMs and improve the accuracy of sparse LLMs, which is of great significance to the compression community.

Essential References Not Discussed: No. The author compares the existing methods for determining the layer-wise sparsity rates of LLMs, including OWL, ALS, DSA, and AlphaPruning. These are the layer-wise sparsity methods customized for LLMs that were published at ICML or NeurIPS prior to the submission date of this paper. As far as I know, there were no other layer-wise sparsity rate methods specifically designed for LLMs before the submission date. Additionally, the author demonstrates the improvement effects of these layer-wise sparsity methods on the Wanda and SparseGPT methods, which are the latest and widely used sparsity methods for LLMs.
In the appendix, the author also compares methods for determining the layer-wise sparsity rates of CNN and ViT models. Overall, the author has compared and discussed many relevant and essential works.

Other Strengths And Weaknesses:

Strengths:
1. Originality: The layer-wise sparsity rate scheme based on a monotonically increasing arithmetic progression is novel, simple and efficient, and it does not require complex layer-wise importance calculations or time-consuming searches. The author has proven its effectiveness through theoretical analysis and experimental verification.
2. Significance: The author has conducted extensive experiments to verify the effectiveness of the proposed method, including experiments on LLMs with various architectures and parameter counts, as well as on multiple tasks. The proposed method significantly improves the accuracy of sparse LLMs, and the accuracy improvement is obvious compared with the SOTA layer-wise sparsity methods. The proposed method also has a significant improvement effect on multimodal models and vision models, and can also improve the accuracy of compression techniques including quantization. In addition, sparse LLMs enjoy acceleration on CPU and GPU, which significantly improves their usability.
3. Clarity: This paper is well-written, with clear logic, a well-organized structure, and rich content.

Weaknesses:
1. Under the 50% sparsity setting, the proposed ATP method offers a relatively limited improvement over the Uniform baseline, and also a limited improvement over other layer-wise sparsity rate methods.
2. Some implementation details are missing, for example, how the proposed ATP method is applied to CNN, ViT and LLaVA-1.5.

Other Comments Or Suggestions: Typos: Line 833: Table 7. Evaluation Metrics for Different Datasets. -> Table 7. Evaluation metrics for different datasets.

Questions For Authors:
1. The author has evaluated LLMs with a wide range of architectures, but experimental results on Mixture of Experts (MoE) models are lacking. However, MoE models have received increasing attention due to their powerful performance. Can the author evaluate the proposed method on LLMs based on the MoE architecture? I think this would be beneficial for enhancing the contributions of the paper.
2. The author has presented experimental results for models with more than 7 billion parameters. I'm wondering whether the ATP method can also bring performance improvements for the smaller LLaMA-3.2-1B/3B models?
3. Sparse LLMs need to rely on specific sparse inference engines (such as DeepSparse and nm-vllm) to achieve acceleration. I am curious about the practicality of DeepSparse and nm-vllm. In other words, on which devices can I use these two inference engines to deploy sparse LLMs?

Overall, I think this is a good paper and it is worthy of being accepted. However, it lacks some details and experiments. If the author can address the above weaknesses and questions, I will raise my score.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: **Thanks for your careful review and comments!**

> Weakness 1 (Smaller improvements for lower sparsity.)

As sparsity decreases, the returns on performance improvements tend to diminish, which is consistent with common patterns in model pruning and compression. However, we believe that our ATP method still demonstrates quite impressive performance:

1. **Significance of Relative Gains** While absolute improvements may appear smaller for models with lower sparsity rates, the relative gains remain significant. For 50% sparse LLaMA-7B obtained using Wanda, our ATP method reduced the average zero-shot accuracy loss from 6.03% to 3.37%, showing a 44.11% relative improvement, compared to only a 30.85% relative improvement with OWL.
2. **Inherent Capabilities of Lower Sparsity Models** At lower sparsity settings, the inherent capabilities of the model are better preserved. For a 50% sparse LLaMA2-7B obtained using Wanda, the average zero-shot accuracy loss is only 3.66%, compared to 26.55% at 70% sparsity, leaving less room for further improvement.
3. **Alignment with Theoretical Insights** Wang et al. [1] established scaling laws for sparse LLMs, revealing the relationship between sparsity rate and model performance and indicating that model performance loss is smaller at lower sparsity rates.

Overall, our ATP method maintains effectiveness across all sparsity settings, offering critical stability and performance advantages.

[1] Wang et al. Q-Sparse: All Large Language Models can be Fully Sparsely-Activated.

> Weakness 2 (Implementation details of CNN, ViT and LLaVA-1.5.)

1. Following Wanda, we only sparsify the linear layers within each block of ConvNeXt, and we use our ATP method to determine the sparsity rate for each block.
2. For ViT, we use ATP to determine the sparsity rate for each layer. Each layer contains modules such as attention and MLP. We use Wanda to sparsify the linear layers.
3. For LLaVA-1.5, we only sparsify the Vicuna model within it. Similar to LLaMA, we determine the sparsity rate for each layer and sparsify the linear layers.

> Other Comments Or Suggestions (Typos.)

Thanks for this great and detailed comment. We will fix this typo in the final revision.

> Question 1 (Lack of MoE experiments.)

We use Wanda to obtain the 70% sparse MoE model Mixtral-8x7B.

|Method|Wikitext2 PPL|HellaSwag|Winogrande|BoolQ|OBQA|PIQA|ARC-e|ARC-c|Mean|
|-|-|-|-|-|-|-|-|-|-|
|Dense|3.86|64.95|76.09|85.08|35.40|82.49|84.18|57.25|69.35|
|Uniform|18.22|39.18|60.06|61.75|20.10|70.01|59.38|28.04|48.36|
|OWL|16.15|40.31|60.54|62.66|21.00|70.40|60.27|29.22|49.20|
|DSA|16.22|40.11|60.78|61.90|21.10|70.32|60.06|29.30|49.08|
|AlphaPruning|15.77|40.52|62.12|61.78|21.40|70.89|60.34|29.35|49.49|
|**ATP**|**14.30**|**41.98**|**63.89**|**62.82**|**22.50**|**71.76**|**61.12**|**30.60**|**50.67**|

Our ATP method performs excellently on sparse MoE models, further demonstrating its generalizability across different model architectures.

> Question 2 (Lack of Llama-3.2-1B/3B experiments.)

We use SparseGPT to obtain 70% sparse Llama-3.2-1B/3B.
|Method|Wikitext2 PPL|HellaSwag|Winogrande|BoolQ|OBQA|PIQA|ARC-e|ARC-c|Mean|
|-|-|-|-|-|-|-|-|-|-|
|Llama3.2-1B|9.65|47.72|60.46|63.91|26.40|74.48|65.45|31.31|52.82|
|Uniform|129.24|27.61|51.00|58.56|13.10|57.42|34.10|19.04|37.26|
|OWL|111.02|28.05|51.20|61.39|13.40|57.78|35.36|19.51|38.10|
|DSA|121.70|27.93|51.11|59.98|13.10|57.51|35.02|19.31|37.71|
|AlphaPruning|115.67|28.22|51.38|60.76|13.70|57.64|34.90|19.24|37.98|
|**ATP**|**94.78**|**28.73**|**52.37**|**62.08**|**14.40**|**58.71**|**35.40**|**20.35**|**38.86**|
|Llama3.2-3B|7.73|55.28|69.93|73.27|31.00|76.71|74.54|42.24|60.42|
|Uniform|65.97|30.01|51.78|62.17|14.40|60.66|37.79|19.71|39.50|
|OWL|53.40|31.73|52.72|62.20|14.60|61.43|40.49|20.39|40.51|
|DSA|57.81|30.79|53.89|62.65|14.60|60.99|38.90|20.87|40.38|
|AlphaPruning|60.59|30.15|52.09|62.27|14.80|61.47|38.55|20.39|39.96|
|**ATP**|**46.64**|**33.66**|**59.32**|**62.49**|**17.10**|**62.29**|**41.53**|**21.19**|**42.51**|

Our ATP method demonstrates consistent improvements across all parameter levels, performing well for models ranging from 1B to 70B parameters.

> Question 3 (On which devices can sparse LLMs be deployed?)

1. Due to the sparsity of weights in unstructured pruning, we must use a specific sparse inference engine to accelerate inference. We use DeepSparse and nm-vllm to accelerate inference on general deployment environments, including CPUs and GPUs.
2. For CPUs, DeepSparse supports architectures including x86 AVX2, AVX-512, AVX-512 VNNI, and ARM v8.2+, which covers most Intel, AMD, and Apple M-series CPUs.
3. For GPUs, as long as the device supports CUDA, inference acceleration can be achieved using nm-vllm. Similarly, if CUDA is supported in other deployment environments, nm-vllm can also be used to achieve acceleration.

**We will incorporate the above responses into the final version. We hope this addresses your concerns. Thank you!**
Summary: This work proposes a relatively simple monotonically increasing layer-wise sparsity schedule for LLMs, where the layers near the head are more sparse than earlier layers. The method is motivated by a detailed analysis of the effect of increasing sparsity on layer-wise reconstruction errors and the propagation of these errors to deeper layers. The authors' analysis is supported by extensive empirical experiments on one-shot pruned LLMs with a wide variety of leading methods. Across the board, the proposed method yields significant quality benefits compared to the pruning baselines considered and performs on par with Bayesian search.

Claims And Evidence: In general, the claims made are supported by the empirical data and proofs provided. However, while I am convinced that ATP benefits the one-shot pruning setting, I remain unconvinced that ATP finds the best layer-wise sparsity distribution for LLMs in general after fine-tuning the pruned network. The fine-tuning experiments conducted with LoRA (Section F.8) are not sparsity-preserving, and ATP only shows a modest benefit over AlphaPruning in this setting.

Methods And Evaluation Criteria: The method and evaluation criteria are typical for evaluating the performance of sparse LLMs. A wide range of downstream tasks was included, as well as a perplexity analysis and performance using Neural Magic inference frameworks.

Theoretical Claims: I reviewed the proofs and did not find any issues.

Experimental Designs Or Analyses:
* Generally, the experimental designs appeared to be sound.
* However, one instance where I believe the authors could improve their results is to include fine-tuning with a *sparsity-preserving* method. LoRA adapters after fine-tuning yield a dense matrix which will destroy the sparsity introduced by pruning if merged with the original weight matrices – which is the typical approach.
As such, I believe an important but missing experiment is to verify whether ATP provides any benefits after sparsity-preserving fine-tuning. Examples of such methods include masked fine-tuning (simply masking weights and gradients with the sparsity introduced by pruning) or sparsity-preserving PEFT methods such as those introduced in [1-3].

[1] W. Huang et al., "Dynamic Low-Rank Sparse Adaptation for Large Language Models," Feb. 20, 2025, arXiv: arXiv:2502.14816. doi: 10.48550/arXiv.2502.14816.
[2] Y. Hu, J. Zhang, X. Chen, Z. Zhao, C. Li, and H. Chen, "LoRS: Efficient Low-Rank Adaptation for Sparse Large Language Model," Jan. 15, 2025, arXiv: arXiv:2501.08582. doi: 10.48550/arXiv.2501.08582.
[3] X. Lu, A. Zhou, Y. Xu, R. Zhang, P. Gao, and H. Li, "SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models," May 25, 2024, arXiv: arXiv:2405.16057. doi: 10.48550/arXiv.2405.16057.

Supplementary Material: I reviewed most of the additional empirical results and some of the proofs.

Relation To Broader Scientific Literature: LLM efficiency is a crucial consideration given their high computational cost. Weight sparsity is one promising avenue to improve LLM performance, as most models are memory-bound on current hardware. As such, reducing the number of non-zero elements in the model parameters yields potential performance gains by reducing the amount of HBM I/O and the overall VRAM required (depending on the fine-grained sparse structure and sparsity level).

Essential References Not Discussed: None noted.

Other Strengths And Weaknesses:

## Strengths:
* Important and timely topic
* Strong empirical results
* Intuitive method with theoretical justification
* The method proposed is efficient and does not require significantly more time or compute than the considered baselines

## Weaknesses:
* The main weakness is the missing sparsity-preserving fine-tuning comparison, especially since the LoRA results show a convergence in accuracy between ATP and the considered baselines. A crucial finding will be whether ATP is primarily important for improving one-shot accuracy only, or if these benefits also extend to the post fine-tuning setting when the sparse mask is fixed prior to fine-tuning.
* While ATP improves the quality of the sparse LLMs compared to baselines, the quality still falls far short of the dense models at moderate sparsities such as 70%. Improving the quality of moderate to highly sparse LLMs remains a challenge, even with ATP.

Other Comments Or Suggestions: None noted.

Questions For Authors:
1. How does ATP compare with baselines when fine-tuned with a static mask or a sparsity-preserving PEFT method? I would be willing to increase my score if ATP's benefits remain after static mask fine-tuning.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
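The masked fine-tuning this reviewer mentions, i.e. zeroing the updates of pruned weights so the sparsity pattern fixed at pruning time survives training, can be sketched with NumPy. This is a toy illustration under assumed names (`masked_sgd_step`, the magnitude-based mask), not the implementation from any of the cited papers.

```python
import numpy as np

def masked_sgd_step(weights, grads, mask, lr=1e-2):
    """One sparsity-preserving SGD step: entries pruned away (mask == 0)
    receive no update, so the mask fixed at pruning time is kept exactly."""
    weights = weights - lr * grads * mask
    return weights * mask  # re-apply the mask to keep pruned entries at zero

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
mask = (np.abs(w) > 0.5).astype(w.dtype)  # toy magnitude-based pruning mask
w = w * mask                              # "pruned" weights
w = masked_sgd_step(w, rng.normal(size=(4, 4)), mask)
```

The same idea extends to any optimizer: multiply both the gradient and the updated weights by the frozen binary mask, so fine-tuning never reintroduces pruned connections.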
Rebuttal 1:

Rebuttal: **Thanks for your careful review and comments!**

> Weakness 1 (Missing sparsity-preserving fine-tuning results.)

Thank you for your suggestions. The results of fine-tuning 70% sparse LLaMA2-7B obtained by Wanda using LoSA [1], LoRS [2] and SPP [3] are below.

|Method|Fine-tuning|Wikitext2 PPL|HellaSwag|Winogrande|BoolQ|OBQA|PIQA|ARC-e|ARC-c|Mean|
|-|-|-|-|-|-|-|-|-|-|-|
|Dense|N.A.|5.12|57.17|68.90|77.74|31.40|78.07|76.39|43.52|61.88|
|Uniform|LoSA|12.68|43.23|58.55|67.04|23.00|69.07|59.06|28.58|49.79|
|Uniform|LoRS|13.77|40.11|56.03|65.19|20.70|67.05|55.08|25.04|47.03|
|Uniform|SPP|12.74|42.51|58.24|66.14|24.50|69.01|59.31|28.05|49.68|
|OWL|LoSA|11.71|44.02|63.81|69.01|25.10|68.99|58.97|28.67|51.22|
|OWL|LoRS|13.06|43.01|63.04|69.25|23.00|67.12|54.25|29.54|49.89|
|OWL|SPP|11.55|44.43|64.25|68.38|24.60|69.02|59.04|29.03|51.25|
|DSA|LoSA|12.09|43.71|63.01|69.35|24.30|68.07|58.06|28.71|50.74|
|DSA|LoRS|13.14|43.15|62.89|69.02|22.80|66.80|53.01|29.51|49.59|
|DSA|SPP|11.99|44.01|63.88|69.01|24.00|68.45|58.37|28.56|50.89|
|AlphaPruning|LoSA|11.49|44.97|64.09|69.97|25.20|69.01|56.48|28.87|51.22|
|AlphaPruning|LoRS|12.98|43.47|63.77|69.14|22.40|68.23|54.67|29.09|50.11|
|AlphaPruning|SPP|11.54|45.05|63.01|68.02|25.40|68.93|57.91|30.55|51.27|
|ATP|LoSA|**10.68**|45.88|62.95|70.84|25.50|70.98|59.58|29.94|**52.23**|
|ATP|LoRS|**12.01**|44.31|61.10|68.62|24.90|69.39|59.93|29.42|**51.10**|
|ATP|SPP|**10.87**|45.19|63.14|68.23|26.40|70.89|61.46|29.44|**52.11**|

We observe that:

1. **After Fine-tuning, ATP still Outperforms other Methods** After fine-tuning using sparsity-preserving methods, the advantages of ATP are further highlighted. After fine-tuning with LoSA, the perplexity of **ATP is 2.00 lower than Wanda and 0.81 lower than AlphaPruning**. In terms of average zero-shot accuracy, after fine-tuning with LoSA, **ATP achieves 2.44% higher than Wanda and 1.01% higher than AlphaPruning**.
We believe that since ATP better preserves the performance of the one-shot pruned model, the performance of the sparse model is easier to recover through fine-tuning. Therefore, the layer-wise sparsity rates determined by ATP retain their advantages after fine-tuning with sparsity-preserving methods and remain the optimal layer-wise sparsity.

2. **Significance of Relative Gains** While absolute improvements may appear smaller for fine-tuned models, the relative gains remain significant. For example, compared to the dense model, Wanda w. LoSA shows an average zero-shot accuracy loss of 12.09%, while ATP w. LoSA shows a loss of 9.65%, **representing a relative improvement of 20.18%. In contrast, AlphaPruning w. LoSA only achieves a relative improvement of 11.83%.** Considering that the performance gap between sparse and dense LLMs further decreases after fine-tuning, the aforementioned accuracy improvement is likely to translate into more meaningful performance enhancements in practical applications.

**We will include these results in our final version.**

> Weakness 2 (Performance of the sparse model with 70% sparsity has a gap compared to the dense model.)

We have observed that the performance of 70% sparse LLMs obtained by ATP still falls short of dense models. This observation is consistent with the common pattern in model pruning and compression, where high sparsity rates lead to significant accuracy degradation. However, we believe our ATP method still offers the following advantages:

1. **Pushing the Performance Limits of Sparse LLMs** Our ATP method addresses the challenge of performance degradation under high sparsity rates, advancing the frontier of sparse LLM performance and narrowing the gap with dense models, which is beneficial to the community.
2. **Good Performance at Lower Sparsity Rates** Under lower sparsity settings, our ATP method further reduces the performance gap between sparse and dense LLMs.
For example, the average zero-shot accuracy loss for the 50% sparse LLaMA2-13B model obtained using the Wanda method is 3.52%, while our ATP method reduces this loss to 1.07%. This is highly advantageous for the practical deployment of lower-sparsity LLMs.

3. **Further Narrowing the Gap with Dense Models When Combined with Fine-Tuning** As demonstrated in Table 16 and our response to Weakness 1, combining ATP with PEFT techniques can further improve the accuracy of sparse LLMs. For example, the average zero-shot accuracy loss for the 70% sparse LLaMA2-7B model obtained using Wanda is 26.55%. With ATP and LoSA optimizations, this loss is further reduced to 9.65%. Moreover, sparse LLMs obtained using the ATP method achieve higher accuracy, retaining this advantage even after fine-tuning.

In summary, although there remains a gap between sparse and dense LLMs under high sparsity settings, our ATP method significantly narrows this gap, greatly enhancing the practicality of sparse LLMs.

**Finally, we hope our response has addressed your concerns. Thank you!**

---

Rebuttal Comment 1.1:

Comment: I thank the authors for the detailed rebuttal; my original concerns have been adequately addressed. At this time I will maintain my original rating.

---

Reply to Comment 1.1.1:

Comment: Thank you for acknowledging our work, and thanks again for your time and effort in reviewing our paper.
Summary: The paper establishes a relationship between sparsity and reconstruction error in pruning LLMs, demonstrating that increased sparsity leads to higher reconstruction error, which propagates through linear layers. The authors support this claim through empirical evaluation on transformers and theoretical analyses of linear layers, showing that pruning strategies that do not account for this effect may result in higher total error, ultimately degrading model performance.

Claims And Evidence: The authors argue that sparsity increases the reconstruction error between the output of a compressed layer and its dense counterpart. This claim is supported by both theoretical analysis of a linear layer and empirical evaluations (Figure 1). Additionally, they suggest that this reconstruction error propagates through the network—again backed by theoretical analysis of a linear layer and empirical evaluation—potentially compounding with each subsequent layer. As a result, the authors propose that per-layer sparsity should account for this propagation, with sparsity increasing monotonically as the layer number increases. To achieve this, they introduce a simple algorithm called ATP, which determines layer-wise sparsity using an arithmetic progression. The soundness of the algorithm is verified through extensive evaluation confirming its effectiveness. Furthermore, the authors provide theoretical analysis (again focusing on linear layers) to justify the necessity of monotonically increasing sparsity per layer.

I find the empirical evaluations—and the claims based on them—quite strong and comprehensive, even though the idea behind ATP is quite simple (the simplicity actually makes it interesting and compelling). However, my main concern lies with the theoretical analysis. While, to the best of my knowledge, it is correct, it is conducted exclusively on linear layers.
Yet, the paper claims from the outset (Section 3.1) to be working with transformers, which incorporate multiple architectural components such as attention mechanisms, nonlinearities, and layer normalization. These elements could significantly influence the theoretical results, making it difficult to assess the practical relevance of these findings for the architectures in question. Moreover, the paper does not address this limitation, nor does it provide any discussion on its implications (see also “Theorems”). Since theory is posed as one of the main contributions of the paper and it is clear that the authors gave it a lot of thought, I think this issue of discrepancy (linear-layer analysis versus using it to make claims about transformers) should be addressed, since otherwise half of the work seems a little disconnected from the empirical evaluations.

Methods And Evaluation Criteria: The majority of the experiments are conducted on LLaMA models of various sizes. The studied pruning algorithms include Wanda and SparseGPT, which are widely used and well-recognized pruning approaches for zero-shot adaptation. Apart from that, additional experiments on other LLM architectures are provided in the appendix (Table 8). The paper proposes a new scheme for determining sparsity-per-layer densities, and as such compares with other commonly used schemes such as Uniform, OWL, DSA, and AlphaPruning, together with the selected pruning algorithms. In addition, Table 1 includes an exploratory algorithm, ALS, and in the Appendix the authors also study some sparsity-per-layer schemes commonly used in non-LLM models (Table 14). In general, I consider the empirical evaluation solid, sound, and quite extensive.

Theoretical Claims: I have checked the correctness of Theorems 3.1 and 3.2, Lemma 3.3, and Theorem 3.4, but only skimmed over Theorem 3.5.
My general issue with the theorems is that they discuss linear layers (even with no nonlinearity), while the beginning of the Methodology section promises transformer layers (Section 3.1, first paragraph). This is confusing - see "Claims And Evidence". Furthermore, in Theorem 3.1 and later it is assumed that the input to the pruned and unpruned network at layer $i$ did not change. In general, I believe that any assumptions should be included in the text of the theorem, not just appear in the proof.

Experimental Designs Or Analyses: I checked Sections 4.2, 4.3, 4.4, 4.5, and 4.6; the experimental design seems sound, apart from the fact that I am not sure whether the reported values are averages over repeated experiments and what the deviations were.

Supplementary Material: I did review the appendix, but not thoroughly (except for the proofs of Theorems 3.1-3.4); I did not review the supplementary materials (code) beyond reading the README.

Relation To Broader Scientific Literature: The paper discusses the problem of discovering the optimal sparsity-per-layer budget in pruning methods for LLMs. As such, I believe the work should also discuss related work on the importance of sparsity-per-layer ratios on the outcome of pruning even for non-LLM models, e.g., [1].

Essential References Not Discussed: [1] Frankle, Jonathan, et al. "Pruning neural networks at initialization: Why are we missing the mark?" arXiv preprint arXiv:2009.08576 (2020).

Other Strengths And Weaknesses: Strengths: - The work establishes a relationship between sparsity and reconstruction error in the pruning of LLMs, studying this aspect from both the empirical and theoretical perspectives, showing that increased sparsity leads to increased reconstruction error, which propagates through layers. Hence, schemes that do not address this issue may result in increased total error and hinder the performance of the pruned model.
- Based on this observation, a simple method based on an arithmetic progression is proposed. The method is easy to use (i.e., a quite straightforward use of the established relation), but at the same time produces compelling results in comparison to other sparsity schemes. - The work contains numerous experiments on various architectures, as well as modalities and per-layer sparsity schemes that were used even in non-LLM contexts (Table 14). - Overall, the paper is well written and easy to follow.

Weaknesses: Discussed in the paragraphs above (especially see "Claims and Evidence", "Theorems", and "Questions").

Other Comments Or Suggestions: How do we determine that this range is actually "small"? Such a claim shouldn't be based solely on the size of the interval—after all, any interval on $\mathbb{R}$ has the same cardinality as $\mathbb{R}$. :) I see that the authors conducted a grid search over this parameter in the Appendix, but what I’m really curious about is how sensitive ATP is to variations in this hyperparameter. In other words, beyond showing that the optimal β tends to be of the same magnitude across different models, it would be more convincing to demonstrate that small changes in β do not drastically impact performance. This would better support the claim that β is easily tunable.

Questions For Authors: 1. What is the relevance of theoretical analysis on linear networks when the focus is on transformers? Could you revise Section 3.1 to clarify any potential confusion? Specifically, why introduce transformer layers only to shift to linear layers for theoretical analysis? Additionally, which part of the model is being sparsified? Clarifying these points is crucial to properly validate the theoretical contributions of ATP. (See also "Claims And Evidence" and "Theorems" for further discussion.) **Moreover, this is the point that guided my decision on the score of the paper.** 2.
Please address the concerns raised in the specific paragraphs above, aside from those already covered in Point 1. For example, the discussion on Relation to Broader Scientific Literature could be expanded to better position the paper within the context of related work. While I find this issue less critical than the concerns in Point 1, addressing it would still enhance the paper’s standing in relation to existing research. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Thanks for your careful review and comments!**

> Theoretical Claims (Any assumptions should be included in the text of the theorem, not just appear in the proof.)

We will restate Theorem 3.1 as: *When the input is the same, increasing the sparsity of the weights in the $i$-th layer will lead to an increase in the reconstruction error of this layer.*

> Experimental Designs Or Analyses (Report the average and variance of results.)

Following OWL, DSA, and AlphaPruning, all experiments are conducted under a single fixed random seed. We report below the WikiText2 perplexity of the 70% sparse LLaMA2-7B model obtained by Wanda across five random seeds and different calibration sets. The variance across random seeds is very low, suggesting the robustness of ATP.

|Method|PPL|
|-|-|
|Dense|5.12(±0.00)|
|Uniform|74.26(±0.10)|
|OWL|30.38(±0.09)|
|DSA|63.71(±0.08)|
|AlphaPruning|28.87(±0.06)|
|**ATP**|**22.16(±0.05)**|

> Other Comments Or Suggestions (How do we determine that the $β$ range is actually "small"?)

1. In Sec. 3.3, we state that the reasonable range of $β$ is $0<β≤0.019$ for the 70% sparse LLaMA3-8B model. To find the best $β$, we use a grid search with a step size of 0.002, requiring only 9 searches. We also test smaller step sizes in Table 4 but find no improvement in results. The step size of 0.002 balances search efficiency and performance well. Therefore, we claim that the reasonable range for $β$ is small, where "small" refers to the limited number of searches required.
2. We analyze the impact of $β$ on the perplexity of the sparse model in Figure 4, finding that $β$ significantly affects performance. Figure 2 shows that for lower average sparsity, smaller $β$ values are optimal.

> Question 1 (What is the relevance of theoretical analysis on linear networks when the focus is on transformers?)

In Sec. 3.1, we represent a layer's computation as $\boldsymbol{WX}$, where $\boldsymbol{W}$ is the layer's weight and $\boldsymbol{X}$ is the input.
A layer includes components such as attention, nonlinearities, and layer normalization. We acknowledge that there are differences between the theoretical analysis based on $\boldsymbol{WX}$ and the actual architecture, given the presence of more complex nonlinear computations in the network. However, we believe our analysis remains reasonable for the following reasons: 1. **The theoretical modeling of $\boldsymbol{WX}$ is sufficient to analyze the layer's reconstruction error.** Our method sparsifies the linear layers in Attention and MLP modules, while other components remain unaffected. These linear layers account for the majority of the parameter count and significantly influence the computation results of the layer. The sparsified linear layers dominate the computation of the reconstruction error for each layer. Although various nonlinear operations exist in the actual architecture, they typically do not fundamentally alter the reconstruction error of each layer and have minimal impact on theoretical analysis. Therefore, we believe modeling the primary computation of a layer as $\boldsymbol{WX}$ is sufficient for analyzing the reconstruction error of that layer. This is also sufficient for us to analyze how reconstruction errors accumulate and propagate across the network. 2. **Modeling the computation of modules as $\boldsymbol{WX}$ is a common practice in many works.** [A1] and [A2] simplify the computation of the CONV+BatchNorm+RELU modules in quantized convolutional neural networks as $\boldsymbol{WX}$ when analyzing reconstruction error. This approach of ignoring unnecessary computations and focusing on the core computations is a common practice, which facilitates the derivation of theoretical results. Thank you for providing valuable insights, which are very important for improving our work. **We will include the above clarification into the final version.** [A1] Up or down? adaptive rounding for post-training quantization. ICML 2020. 
[A2] Solving the oscillation problem in post-training quantization through a theoretical perspective. CVPR 2023.

> Question 2 (Discuss Frankle et al.'s work [1])

Thank you for providing such awesome work! 1. Frankle et al. [1] suggest that the effectiveness of initialization pruning mainly depends on each layer's pruning rate, rather than the specific selection of weights within layers. This highlights the importance of layer-wise sparsity rates and supports the value of our work. 2. We have discussed and compared various layer-wise sparsity methods, including those for LLMs (methods in Table 1) and CNN/ViT (methods in Table 14). Unlike all previous methods, our ATP method discovers that using a simple monotonically increasing arithmetic progression for layer-wise sparsity can achieve excellent results. We will include Frankle et al.'s work [1] in our final version.

**We will include the above discussions in our final version. We hope this response has addressed your concerns and kindly ask for a more positive evaluation of our work. Thank you!**

--- Rebuttal Comment 1.1: Comment: Thank you for the response. Your answers mostly cover my concerns. Let me, though, be a little clearer about my expectations regarding the issue of the relevance of theoretical analysis on linear networks when the focus is on transformers: Your assumption that reconstruction error grows similarly in transformers seems reasonable, especially given the empirical evidence in Figure 1. My main issue is that the paper jumps from the linear case to the transformer case without mentioning those changes or explaining the approximations made. For instance, Section 3.1 introduces LLM layers (e.g., attention, layer norms) but then abruptly applies a linear approximation in Eq. (1) without stating it as an approximation. The notation also mixes transformer layers with the “channel” convolutional terminology, which is confusing.
My point is that I would like the paper to clearly reflect that the theoretical investigations are made on a simplified linear network, and then provide a separate section on how/why these results can be transferred to a transformer network (including the discussion provided by you in the response above). For instance, I would suggest restructuring Section 3 as follows: - First, discuss error propagation in linear networks, presenting all relevant theorems and proofs before introducing transformer layers (moving content from Section 3.1's second half, Section 3.2, and Section 3.4). - Next, introduce transformer layers and explicitly state where Equation (1) applies within them (my understanding is that the error is computed on each linear layer in the model, but I did not see such information in Section 3). Provide arguments for transferring insights from linear networks to transformers, backed by empirical results, and follow with 3.5.

From the theoretical point of view, my (intuitive) concern was that the lower bound in Lemma 3.3 may behave differently due to the attention mechanism, which introduces additional terms like $X_i^T(W_K^TW_Q-\tilde{W}_K^T\tilde{W}_Q)X_i$. Would it make sense to analyze these terms (in addition to the restructuring mentioned above) rather than approximating the entire block as a linear projection?

On a side note, regarding [A1-A2], approximating a convolutional layer as a linear projection seems more natural than doing so for an attention-based one, given that convolutions can be represented linearly (e.g., via DBT matrices or im2col [B1]). Either way, I slightly increased my score since, apart from this "linear analysis vs. transformers" point (on which I would be satisfied with a clear clarification made in the text of the paper), the authors have addressed my issues.

**References:** [B1] Wang, Jiayun, et al. "Orthogonal convolutional neural networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
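To make the side note concrete, here is a minimal numpy sketch (purely illustrative, not code from the paper or the reviews) showing that a 2D cross-correlation equals a single vector-matrix product once the input is unfolded with im2col:

```python
import numpy as np

def im2col(x, k):
    # Unfold a single-channel image into a matrix whose columns are the
    # flattened k-by-k patches (stride 1, no padding).
    H, W = x.shape
    cols = [x[i:i + k, j:j + k].ravel()
            for i in range(H - k + 1) for j in range(W - k + 1)]
    return np.stack(cols, axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 6))
w = rng.normal(size=(3, 3))

# Direct 2D cross-correlation (the "conv" used in deep learning)...
direct = np.array([[np.sum(x[i:i + 3, j:j + 3] * w) for j in range(4)]
                   for i in range(4)])
# ...equals one vector-matrix product after im2col.
linear = (w.ravel() @ im2col(x, 3)).reshape(4, 4)
assert np.allclose(direct, linear)
```

The same trick underlies the DBT-matrix view: the convolution weights act as a fixed linear map on the unfolded input.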
--- Reply to Comment 1.1.1: Comment: **Thanks for your further response.**

> The notation also mixes transformer layers with the “channel” convolutional terminology, which is confusing.

We apologize for the confusion we have caused. We will revise line 161 from "$c_{in}$ and $c_{out}$ represent the number of input and output **channels**" to "$c_{in}$ and $c_{out}$ represent the number of input and output **feature dimensions**" in our final version.

> My point is that I would like the paper to clearly reflect that the theoretical investigations are made on a simplified linear network, and then provide a separate section on how/why these results can be transferred to a transformer network (including the discussion provided by you in the response above). For instance, I would suggest restructuring Section 3.

Thank you for your valuable suggestions. **We will restructure Section 3 in our final version according to your suggestions.** This includes: 1. Discuss error propagation in linear networks and introduce all relevant theorems and proofs. This part includes the content from the second half of Section 3.1 and Section 3.2. 2. Add a new Section 3.3 to introduce transformer layers, discuss the relationship between the theoretical analysis of linear networks and the actual transformer network, and elaborate on the rationale for transferring the theoretical analysis based on linear networks to the actual transformer network. This part includes the content from the first half of Section 3.1 and the content we provided during the rebuttal period. 3. The existing Sections 3.3 and 3.4 will become the new Sections 3.4 and 3.5, respectively.

> From the theoretical point of view, my (intuitive) concern was that the lower bound in Lemma 3.3 may behave differently due to the attention mechanism, which introduces additional terms like $X_i^T(W^T_KW_Q-\tilde{W}_K^T\tilde{W}_Q)X_i$.
Would it make sense to analyze these terms (in addition to the restructuring mentioned above) rather than approximating the entire block as a linear projection?

Thank you for your feedback. We understand that your concern was that the lower bound in Lemma 3.3 might behave differently due to the attention mechanism. However, we believe that our formulation and proof of Theorem 3.2 and Lemma 3.3 are reasonable, **as they are supported by empirical evidence**. Theorem 3.2 and Lemma 3.3 show that an increase in the reconstruction error of the previous layer in a sparse LLM usually leads to a further increase in the lower bound of the reconstruction error of the subsequent layer. In practice, this often means that an increase in the reconstruction error of the previous layer will lead to an increase in the reconstruction error of the subsequent layer. We have also observed this phenomenon in the left panel of Figure 1, where we have plotted the layer-wise reconstruction errors of different layer-wise sparsity methods on the LLaMA2-7B model. We can see that when the reconstruction error of the earlier layers is smaller, the reconstruction error of the subsequent layers is also smaller. Conversely, when the reconstruction error of the earlier layers is larger, the reconstruction error of the subsequent layers is also larger. **The above empirical evidence shows that, despite the presence of the attention mechanism, the lower bound of the reconstruction error still increases, and it is reasonable to approximate the entire Transformer layer as a linear projection in Theorem 3.2 and Lemma 3.3.**

> On a side note, regarding [A1-A2], approximating a convolutional layer as a linear projection seems more natural than doing so for an attention-based one, given that convolutions can be represented linearly (e.g., via DBT matrices or im2col [B1]).

Thank you for your feedback. We understand that the linear representations of convolutions (via DBT matrices or im2col) are intuitive.
However, we believe that approximating the transformer layer as a linear projection has reasonable justifications. For example, [C1] employs Procrustes similarity analysis to discover that the embedding transformations between sequential layers in LLMs such as GPT, LLaMA, OPT, and BLOOM exhibit a near-perfect linear relationship, with a linearity score of 0.99. This indicates that despite the non-linear operations within Transformer layers, the mapping between adjacent layers can still be approximated as a linear transformation. Therefore, we consider it reasonable and natural to model Transformer layer computations using $\boldsymbol{WX}$. We will incorporate the above discussion into the final version. Thank you again for your valuable feedback.

[C1] Your Transformer is Secretly Linear. ACL 2024 main.

**Thank you again for your detailed suggestions. They are very helpful for further improving our work, and we will incorporate the above discussion into our final version.**
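The two ideas at the center of this thread can be sketched in a few lines of numpy (an illustrative sketch, not the authors' implementation; function names are ours): a monotonically increasing arithmetic-progression layer-wise sparsity schedule whose mean matches the target average sparsity, and the relation between a $\boldsymbol{WX}$ layer's sparsity and its reconstruction error $\|WX-\tilde{W}X\|_F$:

```python
import numpy as np

def atp_layerwise_sparsity(avg_sparsity, beta, num_layers):
    # Layer-wise sparsities forming a monotonically increasing arithmetic
    # progression with common difference beta, centered so that the mean
    # equals the target average sparsity.
    return avg_sparsity + beta * (np.arange(num_layers) - (num_layers - 1) / 2)

def layer_reconstruction_error(W, X, sparsity):
    # Magnitude-prune W to the given sparsity and measure the Frobenius-norm
    # reconstruction error of the layer output against the dense layer.
    k = int(W.size * sparsity)
    thresh = np.sort(np.abs(W), axis=None)[k]
    W_sparse = np.where(np.abs(W) >= thresh, W, 0.0)
    return np.linalg.norm(W @ X - W_sparse @ X)

rng = np.random.default_rng(0)
W, X = rng.normal(size=(64, 64)), rng.normal(size=(64, 16))

s = atp_layerwise_sparsity(0.7, 0.002, 32)
assert np.isclose(s.mean(), 0.7)   # average sparsity is preserved
assert np.all(np.diff(s) > 0)      # sparsity grows with layer depth

# Higher sparsity -> larger per-layer reconstruction error.
errs = [layer_reconstruction_error(W, X, r) for r in (0.3, 0.5, 0.7)]
assert errs[0] < errs[1] < errs[2]
```

The centering term makes the schedule's mean independent of β, so β only controls how steeply sparsity grows across layers.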
Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models
Accept (poster)
Summary: The paper introduces **ProLoRA**, a parameter-efficient method for model adaptation and cross-domain knowledge transfer by decomposing pre-trained weight matrices via Singular Value Decomposition (SVD). ProLoRA splits a weight matrix W into a dominant subspace and a null space. Key contributions include a low-rank subspace/null-space decomposition framework, efficient adaptation through partial parameter updates, and empirical validation showing competitive performance in transfer learning tasks compared to LoRA and LoRA-X.

Claims And Evidence: Most claims are supported by strong experimental justification. However, I still have some concerns about the empirical evidence for the semantics of the subspace and the null space. In detail, the main contribution of this work is to decompose the weight into two orthogonal parts, thus facilitating the desired knowledge transfer. However, there are no visualization results showing what these two parts specifically mean; including such an explanatory study would further justify the motivation.

Methods And Evaluation Criteria: - The details of the decomposition are not clear. For instance, how the U matrix is decomposed into U_{s,∥} and U_{s,⊥} needs more detail. Moreover, why the first equality in Eq. 2 holds needs more explanation. - The evaluation criteria mostly make sense for the problem or application at hand. It is suggested to include other common metrics in image generation, such as FID and CLIPScore. - Since the method includes an SVD within each layer, it is suggested to report real runtime results instead of a complexity analysis.

Theoretical Claims: This paper does not include proofs.

Experimental Designs Or Analyses: - The authors experimented with their method on FouRA; it is recommended to apply more variants, including VeRA, SVDiff, and DoRA, to verify the effectiveness of the proposed method.
- The paper tests LoRA transfer between source and target domains with limited pairs, which may weaken the evidence for the generalization ability of the proposed method.

Supplementary Material: The supplementary material includes more qualitative results.

Relation To Broader Scientific Literature: It is novel to project the LoRA weights into a subspace and a null space for transferring, which can inspire future works.

Essential References Not Discussed: The paper already discusses the latest baseline, LoRA-X, in detail.

Other Strengths And Weaknesses: No further strengths and weaknesses.

Other Comments Or Suggestions: Typo in Line 119: "The right singular matrix U_s" -> "The left singular matrix U_s".

Questions For Authors: - How well does ProLoRA perform across other types of tasks or domains, like NLP or time-series analysis? - Would additional fine-tuning be needed to adapt it effectively?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for valuable feedback and comments. Below, we provide detailed responses.

[C1] However, I still .... further justify the motivation.

[R1] We agree that visualizing the roles of the subspace and the null space would strengthen our justification. We will include: (a) Semantic ablation: ablating each component and visualizing the impact on style, content, etc. (b) Qualitative examples: showcasing distinct component contributions in transfer scenarios. These will concretely demonstrate each projection's semantic role, justifying ProLoRA's motivation.

[C2] The detail of decomposition …. needs more explanation.

[R2] We appreciate the reviewer's request for more detail regarding the decomposition process. The SVD yields the left and right singular vectors of a matrix M. The right-singular vectors corresponding to vanishing singular values of M span the null space of M, and the left-singular vectors corresponding to the non-zero singular values of M span the range of M [1].

[1] https://en.wikipedia.org/wiki/Singular_value_decomposition

[C3] The evaluation criteria …. such as FID, CLIPScore.

[R3] We appreciate the suggestion to include FID and CLIPScore. We chose HPSv2 over CLIPScore because it is more sensitive to subtle improvements, especially in style/concept tasks. FID requires thousands of samples for a robust estimate.

[C4] Since the methods …. instead of complexity analysis.

[R4] We understand the reviewer's request for real runtime results given the inclusion of SVD in each layer. We provide a computational complexity analysis in our response to reviewer eBAK [C7]. To summarize, while SVD is computationally intensive, ProLoRA performs SVD only once per model. The increased runtime is therefore limited to the initial decomposition, while subsequent adapter transfers benefit from the precomputed SVD.

[C5] The author experimented … FouRA … to verify the effectiveness of the proposed method.
[R5] We appreciate the reviewer's suggestion to evaluate ProLoRA with more adapter variants. We have already conducted experiments with both FouRA (Table 8) and DoRA (Table 7) in the paper. SVDiff is similar to LoRA-X, but we acknowledge its value for completeness and will add it in the final version. VeRA is not feasible on the diffusion model due to the U-Net architecture (different KQV matrix sizes), but we are adding it to our LLM experiments, and the results are shown below. This will provide a more complete evaluation. Results are shown below for the E2E-NLG dataset.

| Method | Adapter | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-LSum |
|--------|-------------|---------|---------|---------|------------|
| VeRA | Trained | 0.7764 | 0.6224 | 0.7524 | 0.7532 |
| | Transferred | 0.7782 | 0.6343 | 0.762 | 0.7621 |
| | Copied | 0.7544 | 0.6136 | 0.7481 | 0.7425 |

The second set of results is for the SAMSUM dataset.

| Method | Adapter | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-LSum |
|--------|-------------|---------|---------|---------|------------|
| VeRA | Trained | 0.3324 | 0.146 | 0.2722 | 0.2759 |
| | Transferred | 0.3312 | 0.1422 | 0.2712 | 0.2752 |
| | Copied | 0.3222 | 0.1402 | 0.26 | 0.2612 |

For both datasets, transferring VeRA vs. training VeRA from scratch produces very similar results on both E2E-NLG and SAMSUM, suggesting that transferring VeRA using our proposed range-space and null-space projection is effective. Furthermore, copying is not as effective as our proposed projection method.

[C6] The paper tests LoRA transfer … of the proposed method.

[R6] We acknowledge the reviewer's concern regarding the limited number of source-target pairs used in our evaluation, which could impact the perceived generalizability of ProLoRA.
While our approach relies on a reasonable degree of subspace similarity between the source and target models, and this similarity can be influenced even by models from different families, many source-target pairs still meet this requirement, as shown in the paper.

[C7] Typo in Line 119 …….

[R7] Thanks, we'll correct that.

[C8] How well does ProLoRA … or time-series analysis?

[R8] We've expanded our evaluation to include tasks beyond image generation. Please refer to our response to [C5] and reviewer eBAK's comment [C3].

[C9] Would additional fine-tuning … effectively?

[R9] To ensure we address the reviewer's intent, we would like to clarify whether the question is: "Can ProLoRA transfer be an effective initialization for subsequent fine-tuning?" We propose DreamBooth experiments comparing convergence and final performance (measuring [metrics]) with ProLoRA vs. random initialization. Please confirm if this aligns with your intent.

--- Rebuttal Comment 1.1: Comment: Thanks for providing this detailed rebuttal. After reading this response, most of my concerns have been addressed. It is suggested that the authors further revise the paper according to the discussion. I will upgrade my score.

--- Reply to Comment 1.1.1: Comment: Dear Reviewer, thanks for considering our rebuttal and improving the score. The paper will be revised according to the comments and responses.
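The range/null-space split described in [R2] above can be sketched with a plain SVD (a conceptual numpy illustration under our own naming, not ProLoRA's actual implementation): the left-singular vectors for the dominant singular values give the range projector, and the remaining ones give its orthogonal complement.

```python
import numpy as np

def range_null_projectors(W, r):
    # Split the column space of a pre-trained weight W into the dominant
    # rank-r (range) subspace and its orthogonal complement, returning
    # the two orthogonal projectors.
    U, _, _ = np.linalg.svd(W, full_matrices=True)
    U_par, U_perp = U[:, :r], U[:, r:]
    return U_par @ U_par.T, U_perp @ U_perp.T

rng = np.random.default_rng(0)
n, r = 32, 4
W = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))  # rank-r "pre-trained" weight
P_par, P_perp = range_null_projectors(W, r)

# The projectors are complementary and mutually orthogonal.
assert np.allclose(P_par + P_perp, np.eye(n))
assert np.allclose(P_par @ P_perp, np.zeros((n, n)), atol=1e-10)

# A LoRA-style update dW splits exactly into its range- and null-space parts,
# which can then be handled separately during transfer.
dW = rng.normal(size=(n, n))
assert np.allclose(P_par @ dW + P_perp @ dW, dW)
```

Since the SVD of each base weight is computed once, both projectors can be cached and reused for every subsequent adapter transfer, matching the runtime argument in [R4].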
Summary: This paper proposes ProLoRA, a zero-shot method for transferring pre-trained LoRA adapters between different text-to-image diffusion models without requiring retraining or access to the original training data. The key motivation is that traditional LoRA adapters are tied to specific base models, making them difficult to reuse when models are updated, pruned, distilled, etc. ProLoRA consists of three stages: identifying module pairs, decomposing the existing LoRA, and transferring it to the new model's LoRA.

## update after rebuttal

I appreciate the authors’ rebuttal. However, after carefully reviewing their response, I still have concerns that remain unaddressed. Additionally, several important details, evaluations, and comparisons I highlighted in my initial review are critical and still absent from the manuscript. While this paper holds strong potential for publication, I believe it is not yet ready in its current form. Therefore, I will maintain my original score.

Claims And Evidence: Partially. While the paper clearly defines its task, the experimental validation appears incomplete due to the absence of critical baselines and a lack of clarity in the evaluation metrics.

Baseline Comparisons: The majority of comparisons in the paper, including both results and ablation studies, are limited to the LoRA baseline (e.g., Tables 1, 3, 4, 6, and 7). However, additional baselines are essential to properly assess the contribution of the proposed method. For instance, the evaluation should include the original model without any adapter to determine whether ProLoRA provides a meaningful improvement over the base model. Additionally, a naive transfer of the source LoRA (just a copy) should be included to verify whether ProLoRA truly enhances transferability or if a simple weight copy already performs comparably. Consider Table 1, where ProLoRA is only 0.04 away from the LoRA baseline. Is this a substantial improvement, or is it within the range of natural variation?
If the original model (without any adapter) produces similar results, then ProLoRA’s impact may be negligible. Likewise, if directly transferring the source LoRA (despite being trained on a different model) yields comparable or better performance, it would call into question the necessity of the proposed method. These open questions make it difficult to fully assess the contribution of ProLoRA, and addressing them with appropriate baselines would significantly strengthen the paper’s claims. In contrast, Table 2 does include adapter-free baseline models (last two rows), but surprisingly, these baselines achieve higher HPSv2 and LPIPS scores than the fine-tuned LoRA and ProLoRA. This is unexpected - why does the adapted model underperform compared to an adapter-free model? This discrepancy should be addressed, as it calls into question whether adaptation was achieved with LoRA/ProLoRA.

Metric Unclarities: The paper provides a brief explanation of CSD-MMD (lines R189-R200), but fails to specify the reference image sets used for calculation. Was CSD-MMD computed against images from the original model, the LoRA-adapted model, or the training dataset? I assume it was the dataset, but the missing values from Tables 4-5 create ambiguity. Table 3 introduces CLIP-T/I metrics; what are these metrics, exactly? Also, these are not used consistently across the paper. Why are these metrics included only for this experiment? Shouldn’t all tasks and ablations be evaluated using a uniform set of metrics? This inconsistency makes it difficult to compare results across different sections of the paper. The paper would benefit from a clearer and more comprehensive set of baseline comparisons, along with a standardized set of evaluation metrics across experiments. Addressing these concerns would significantly improve the credibility of the claimed contributions.

Methods And Evaluation Criteria: For diffusion models, yes, the proposed method and benchmarks seem appropriate.
However, the generality of the method raises important questions. The approach appears quite broad and potentially applicable beyond diffusion models, much like LoRA itself. This leads to a natural curiosity: how does ProLoRA perform on other tasks or backbones, such as: - Large Language Models (LLMs) for general language understanding, - Vision Transformers (ViTs) for image classification, - Image-to-image generation models, etc. Have the authors attempted applying their method to any of these domains? Even if the results were unsuccessful, discussing why ProLoRA is effective specifically for diffusion models (and not for other tasks/backbones) would be insightful. If ProLoRA has not been tested on other backbones, including such experiments, even as preliminary results, would greatly enhance the paper’s impact.

Theoretical Claims: N/A

Experimental Designs Or Analyses: Yes, please see the comments above.

Supplementary Material: Yes, only the part with extra visualizations (referenced from the main paper).

Relation To Broader Scientific Literature: The proposed method has the potential to impact a broad range of architectures, much like LoRA, given its focus on efficient adaptation. If validated beyond diffusion models, it could contribute to knowledge transfer across widely used LoRA adapters in various domains.

Essential References Not Discussed: The work of Wang et al., 2024 is cited incorrectly in line 070—the reference points to an arXiv version, while a published NeurIPS 2024 version exists. The authors should update the citation to reflect the peer-reviewed conference paper instead of the preprint. I recommend verifying all citations in the paper.

Other Strengths And Weaknesses: Building on my previous comments, the proposed method appears highly general, with significant potential impact beyond diffusion models. Its deterministic, simple, and training-free nature makes it particularly appealing.
However, the paper misses an opportunity to evaluate ProLoRA on other tasks and model backbones, which could provide valuable insights for future research. Expanding the evaluation scope would not only strengthen the contribution but also help position ProLoRA as a more broadly applicable method in the field of adapter transfer and parameter-efficient fine-tuning. Other Comments Or Suggestions: - Typo line 119: “right” —> “left” - Fig 1 is hard to see; I need to zoom in substantially to see any origami evidence. Better to choose another example or modify this one. - Table 2 needs to be re-arranged; it is hard to read, and there is only one cross-transfer experiment. An interesting potential extension of ProLoRA is its use as an initialization method for fine-tuning LoRA on a new model. If accuracy is a priority and re-training LoRA is inevitable, it would be valuable to explore whether initializing LoRA with ProLoRA (instead of the default initialization) improves convergence speed and final performance. Questions For Authors: - What are the CLIP-I and CLIP-T metrics? - Why is LoRA the least performing in the HPSv2 metric in Tab 5? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for valuable feedback and comments.

[C1] The majority of comparisons … strengthen paper’s claims.

[R1] "No LoRA" can degrade performance (e.g., produce blurry outputs in Table 4). We are currently running experiments for Table 1. We will add results of Tables 6 & 7 in the final version. For Table 3 we have added results below, and we observe that “No LoRA” produces poor performance. The following are results on concept customization:

| Method | CLIP-T | CLIP-I | DINOv2 |
|-----------|--------|--------|--------|
| No LoRA | 0.251 | 0.521 | 0.352 |
| LoRA | 0.294 | 0.745 | 0.539 |
| Copy LoRA | 0.300 | 0.719 | 0.475 |
| ProLoRA | 0.287 | 0.737 | 0.501 |

[C2] In contrast, … whether adaptation was achieved with LoRA/ProLoRA.

[R2] There may be cases where the HPSv2 and LPIPS scores are higher for the non-LoRA baseline. HPSv2 measures prompt fidelity, while LPIPS measures diversity. After fine-tuning on a particular style, the model may produce similar-looking images, and hence LPIPS can be lower. Also, prompt fidelity may be reduced due to the presence of a certain style in the generated image. However, DINOv2 is still higher in the cases where the adapter is fine-tuned.

[C3] The paper … Tables 4-5 create ambiguity.

[R3] We apologize for the lack of clarity regarding the CSD-MMD calculation.
- General Use: We use CSD-MMD to compare generated samples from the ProLoRA-transferred model against generated samples from the LoRA-trained model. This helps quantify the similarity of output distributions.
- Table 4 (LCM-LoRA): CSD-MMD is not applicable to Table 4 because this table focuses on LCM-LoRA adaptation for accelerated sampling, not style transfer. CSD-MMD is primarily used to measure style similarity.
- Table 5: In Table 5, CSD-MMD is computed against generated samples from the LoRA-trained model (shown in the first row of the table).

We will clarify these details in the final version of the paper.
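Since [R3] turns on how an MMD-style score compares two sets of generated samples, a minimal sketch may help. Note the assumptions: the paper computes CSD-MMD over CSD (style) embeddings of generated images; here plain random vectors stand in for those embeddings, and a linear kernel is used, under which the estimate reduces to the squared distance between embedding means. The actual kernel used in the paper is not stated in this thread.

```python
import numpy as np

def mmd_linear(x, y):
    """Simple squared-MMD estimate with a linear kernel.

    x: (n, d) embeddings of one sample set; y: (m, d) of the other.
    For a linear kernel this reduces to ||mean(x) - mean(y)||^2, so a
    value near 0 means the two embedding distributions have matching means.
    """
    return (x @ x.T).mean() + (y @ y.T).mean() - 2 * (x @ y.T).mean()

# Stand-ins for style embeddings of generated images (assumed 64-dim).
rng = np.random.default_rng(0)
lora_out = rng.normal(0.0, 1.0, size=(200, 64))      # LoRA-trained model
transfer_ok = rng.normal(0.0, 1.0, size=(200, 64))   # same distribution
transfer_bad = rng.normal(0.5, 1.0, size=(200, 64))  # shifted distribution

print(mmd_linear(lora_out, transfer_ok))   # small: distributions match
print(mmd_linear(lora_out, transfer_bad))  # large: distributions differ
```

A lower score between the transferred adapter's samples and the trained-from-scratch adapter's samples thus indicates a more faithful transfer, which matches how the CSD-MMD tables in this thread read.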
[C4] Table 3 introduces CLIP-T/I metrics, … significantly improve the credibility of claimed contributions.

[R4] The reason for the inconsistency is that our paper considers a range of tasks (style adaptation, concept customization, transfer of acceleration LoRAs), and each of these tasks has established evaluation practices in the literature. CLIP-I [1] and CLIP-T [1], for instance, are standard metrics in concept customization, measuring image and text fidelity, respectively. We will clarify all these details in the paper.

[1] Radford, Alec, et al. "Learning transferable visual models from natural language supervision." ICML 2021.

[C5] For diffusion models, .... enhance the paper’s impact and

[R5] As detailed in our response to reviewer eBAK's comment [C3], we have conducted preliminary experiments applying ProLoRA to the TinyLlama language model on the SamSum and E2E-NLG datasets.

[C6] The work of Wang et al., 2024 … all citations in the paper.

[R6] Thank you for pointing out the incorrect citation. We will update the reference to Wang et al., 2024 to the published NeurIPS 2024 version.

[C7] Building on my previous comments, …. and parameter-efficient fine-tuning.

[R7] We have expanded our evaluation to include language tasks beyond image generation, as detailed in our responses to reviewer qRQW's comment [C5] and reviewer eBAK's comment [C3].

[C8] Typo line 119: “right” —> “left”

[R8] Thank you for catching the typo. We will correct it.

[C9] Fig 1 is hard to see, I need a serious zoom-in to see any origami evidence. Better to choose another example or modify this one.

[R9] We appreciate the feedback … visual differences in the revised version.

[C10] Table 2 need to be re-arranged, it is hard to read it. There is only one cross-transfer experiment

[R10] We appreciate the feedback regarding the readability of Table 2. We will re-arrange the table to improve its clarity and organization.
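To make the CLIP-I/CLIP-T definitions in [R4] concrete, here is a hedged sketch. The real metrics run a CLIP encoder (e.g., ViT-B/32) over images and prompts; the encoder is out of scope here, so the functions below take precomputed embeddings as stand-ins, which is where the actual metric logic (cosine similarity plus averaging) lives.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def clip_i(gen_embs, ref_embs):
    """CLIP-I (image fidelity): mean pairwise cosine similarity between
    CLIP image embeddings of generated images and of reference images."""
    return float(np.mean([[cosine(g, r) for r in ref_embs] for g in gen_embs]))

def clip_t(gen_embs, prompt_embs):
    """CLIP-T (text fidelity): mean cosine similarity between each generated
    image's CLIP embedding and the CLIP text embedding of its prompt."""
    return float(np.mean([cosine(g, t) for g, t in zip(gen_embs, prompt_embs)]))

# Toy check with orthonormal stand-in embeddings.
e = np.eye(4)[:2]
print(clip_t(e, e))  # 1.0: each image perfectly matches its own prompt
```

Higher is better for both; CLIP-I measures whether the subject looks right, CLIP-T whether the prompt was followed.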
[C11] An interesting potential extension of ProLoRA … improves convergence speed and final performance.

[R11] This is a great suggestion! We'll conduct DreamBooth experiments comparing ProLoRA vs. default initialization (measuring [metrics]) and share results in the next round. Does this experiment align with your suggestion?

[C12] What is CLIP-I and CLIP-T metrics?

[R12] CLIP-I and CLIP-T are metrics based on the CLIP model, commonly used to evaluate concept customization. CLIP-I assesses image fidelity, while CLIP-T assesses text fidelity. They are standard metrics in this area.

[C13] Why LoRA ...in Tab 5?

[R13] HPSv2 measures prompt fidelity (image-text alignment). LoRA fine-tuning specializes the model towards a specific concept/style. This specialization leads LoRA to prioritize the learned concept over precise adherence to the full prompt text, resulting in a lower HPSv2 score. Different metrics emphasize different qualities: HPSv2 focuses on prompt following, while others like CSD-MMD focus more on aspects like style consistency, where LoRA could perform better.

**We hope these responses are sufficient to raise the score.**
Summary: This paper introduces ProLoRA, a method for zero-shot transfer of LoRAs between source and target diffusion models. It features a projection technique that transfers both subspace and nullspace components of source LoRAs to target models while preserving generation performance. The method works by identifying similar modules between models, decomposing the LoRA, and projecting the LoRA components into the target model's weight space. Evaluations across datasets and models show comparable performance to training from scratch. ## update after rebuttal The authors addressed my concerns satisfactorily; therefore, I will maintain my conditional Accept as final. Claims And Evidence: - The primary claim that the proposed work enables training-free transfer of LoRA adapters between models is well-supported by both qualitative and quantitative results. - That both subspace and nullspace projections are necessary is demonstrated through ablations and visualizations, showing performance degradation when either component is removed. - Appropriate metrics are convincing for the claim that similar performance is attained against LoRAs trained from scratch and other works. - The generalization claim about different PEFT methods (DoRA, FouRA) is supported. However, fewer experiments are presented compared to the main LoRA evaluations. Methods And Evaluation Criteria: - The methodological approach is sound, with clear mathematical formulations of the projection operations for both subspace and nullspace components. - The evaluation protocol comparing against LoRAs trained from scratch on target models provides a strong baseline for assessing transfer quality. - The chosen metrics for evaluation make sense for measuring image generation quality, diversity, and style transfer fidelity. - The ablation studies are extensive and well-designed to understand the contributions of the proposed method.
Theoretical Claims: I informally read through the formulations in Sections 4.1-4.3, particularly equations (1)-(3), and they look correct to me. The computational complexity analysis of the initial SVD computation is correct. Experimental Designs Or Analyses: - The experiments on style transfer datasets appear sound, with appropriate comparisons and metrics. The experiments on concepts using DreamBooth data are convincing too. Finally, the LCM-LoRA acceleration experiments show the method's versatility. - The comparisons with competing methods (X-adapter, LoRA-X) are fair and highlight ProLoRA's advantages, although more extensive benchmarking would strengthen the claims. Supplementary Material: Yes, I reviewed mainly the qualitative results of the appendix. Relation To Broader Scientific Literature: The paper's contributions are related to transfer methods, including knowledge distillation approaches. It is close to the PEFT literature and LoRA works. LoRA-X is particularly relevant as it shares similar goals but with key methodological differences. The work connects to broader subspace analysis techniques in neural networks, though this connection could be more explicitly developed. Essential References Not Discussed: All essential related works are discussed, to my knowledge. Other Strengths And Weaknesses: ### Strengths - The method addresses a practical problem of reusing existing adapters. Given the current artistic hubs of such adapters, the proposed method can be potentially very useful. - The nullspace component analysis reveals an often-overlooked aspect of LoRA adapters' functionality. - The experiments across different adapter types demonstrate broad applicability. ### Weaknesses - The method still requires computing SVD on both source and target models, which can be computationally expensive for very large models. - The performance varies across different datasets and model pairs, which suggests some limitations in generalizability.
- Following the previous point, the theoretical analysis could be deepened to provide better insights into when and why the method might fail. - The evaluation is limited to text-to-image diffusion models, so testing on other modalities would strengthen the paper's claims. - Only convolution-based architectures were explored. Current state-of-the-art models are mainly transformer-based due to scalability. Other Comments Or Suggestions: - Consider adding more quantitative analysis of when module similarity matters most for successful transfer. - The naming of metrics in tables could be more consistent throughout the paper. - A table summarizing the computational requirements for different model transfers would help readers understand practical implications. Questions For Authors: - How does the performance of ProLoRA degrade when the architectures of the source and target models differ more substantially? For example, would the method work between completely different architectures, such as SD1.5 to SDXL or SDXL to SD3? - Could you provide more details on how the threshold of 0.8 for subspace similarity was selected? How sensitive is the method to this hyperparameter? - How would the proposed method perform if applied iteratively across a chain of models $A \rightarrow B \rightarrow C$ compared to direct transfer $A \rightarrow C$? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for valuable feedback and comments. Below, we provide detailed responses.

[C1] The method still requires computing SVD … for very large models.

[R1] We address this point in our response to Reviewer DiKB's comment [C5].

[C2] The performance varies …. limitations in generalizability. Following previous point, … why the method might fail.

[R2] We attribute performance variations to mismatches in: (1) Subspace Similarity (attention differences) and (2) Architectural Divergence. To address this, we will: (a) incorporate a transferability metric (like LoRA-X Fig. 4) to predict transfer success, (b) analyze failure cases to identify and mitigate limitations, and (c) refine our theoretical analysis to account for subspace similarity/architectural divergence.

[C3] The evaluation is limited ... strengthen the paper claims.

[R3] We thank the reviewer for highlighting the limited evaluation scope. To address this, we have included initial results of applying ProLoRA to the TinyLlama model on the SamSum and E2E-NLG benchmarks. These experiments demonstrate ProLoRA's applicability beyond image generation. Copying does not produce good performance. The first results are on E2E-NLG:

| Method | Adapter | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-LSum |
|---------|-------------|---------|---------|---------|------------|
| ProLoRA | Trained | 0.7882 | 0.6341 | 0.7692 | 0.7634 |
| | Transferred | 0.7881 | 0.6340 | 0.7684 | 0.7642 |
| | Copied | 0.7634 | 0.6123 | 0.7482 | 0.7421 |

The second results are for the SamSum dataset:

| Method | Adapter | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-LSum |
|---------|-------------|---------|---------|---------|------------|
| ProLoRA | Trained | 0.3461 | 0.1596 | 0.2832 | 0.2862 |
| | Transferred | 0.3432 | 0.1546 | 0.2834 | 0.2852 |
| | Copied | 0.3213 | 0.1422 | 0.2623 | 0.2642 |

[C4] Only convolution-based architectures ... due to scalability.
[R4] Our text-to-image models incorporate transformer blocks for attending to text embeddings. Additionally, the table above includes results for language models. These show ProLoRA's applicability to pure transformer-based architectures.

[C5] Consider adding more ... for successful transfer.

[R5] To address the reviewer's request for more quantitative analysis on the impact of module similarity for successful transfer, we wanted to clarify whether analyzing the effect of transferring not only correlated modules (as in Table 5) but also non-correlated modules would suffice. Upon confirmation, we will proceed to perform those experiments.

[C6] The naming of .... throughout the paper.

[R6] We will improve our naming convention in the paper.

[C7] A table... understand practical implications.

[R7] As suggested, please refer to Table 11 of the Appendix. Additionally, the transfer and inference process of X-adapter takes the same 17.1s. Fine-tuning time is expected to be much longer due to large-scale training.

[C8] How does performance of ProLoRA .... a SD1.5 to SDXL or SDXL to SD3?

[R8] While ProLoRA is designed for training-free transfer within the same architectural family, its effectiveness may decrease with greater divergence. Exploring these limits, such as transferring across entirely different architectures, is an interesting future direction.

[C9] Could you provide ... threshold of 0.8 for subspace similarity ...? How sensitive ... hyperparameter?

[R9] The initial threshold of 0.8 for subspace similarity was chosen based on empirical analysis. To assess the sensitivity of ProLoRA to this hyperparameter, we conducted experiments with thresholds of 0.9 and 1.0 when transferring a LoRA from SDv1.5 to Eff v1.0. These initial results suggest that ProLoRA is relatively robust to variations in the threshold. We plan to include more comprehensive results in the final version.
| Method | Dataset | Threshold | CSD-MMD |
|------------------|-------|-------|---------|
| ProLoRA | Blue fire | 0.8 | 0.0025 |
| ProLoRA | Blue fire | 0.9 | 0.0031 |
| ProLoRA | Blue fire | 1.0 | 0.0082 |

[C10] How would proposed method .... compared to direct transfer?

[R10] To assess iterative transfer, we compared chained transfers (SD1.5 -> RV3 -> EffNet v1.0) against direct transfer (SD1.5 -> EffNet v1.0). The results indicate that iterative transfer can degrade performance (on the Origami dataset), potentially due to error accumulation. We will explore this further with more models/datasets in the final version.

| Dataset | CSD-MMD (SD-1.5 -> Eff v1.0 ) | CSD-MMD (SD-1.5 -> RV-3 -> Eff v1.0 ) |
|----------|-------------------------------|---------------------------------------|
| Painting | 0.0026 | 0.0027 |
| Origami | 0.0025 | 0.0045 |
| Bluefire | 0.0025 | 0.0025 |

--- Rebuttal Comment 1.1: Comment: I thank the authors for their work and for addressing my comments. I am satisfied with the response and will keep my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thanks for acknowledging our rebuttal and keeping the score.
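Since [R9] above reports robustness to the subspace-similarity threshold, a sketch of what such a similarity could compute may be useful. The paper's exact definition is not reproduced in this thread, so the following is one plausible instantiation, stated as an assumption: the normalized overlap of the top-k left singular subspaces of a source and a target weight matrix, with a module transferred only when the score clears the threshold (0.8 in the paper).

```python
import numpy as np

def subspace_similarity(w_src, w_tgt, k=8):
    """Normalized overlap of the top-k left singular subspaces of two
    weight matrices: 1.0 for identical subspaces, roughly k/d for random ones."""
    u_s = np.linalg.svd(w_src, full_matrices=False)[0][:, :k]
    u_t = np.linalg.svd(w_tgt, full_matrices=False)[0][:, :k]
    # ||U_s^T U_t||_F^2 equals the sum of squared cosines of principal angles.
    return np.linalg.norm(u_s.T @ u_t) ** 2 / k

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32))
sim_near = subspace_similarity(w, w + 0.01 * rng.normal(size=w.shape))
sim_far = subspace_similarity(w, rng.normal(size=(64, 32)))
print(sim_near, sim_far)
keep_module = sim_near >= 0.8  # hypothetical transfer criterion at the paper's threshold
```

Under this reading, a high threshold simply restricts the transfer to modules whose pretrained weights already span nearly the same subspace in both models.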
Summary: The paper proposes ProLoRA, which can transfer a pre-trained LoRA to another target model without training. This addresses a key constraint in existing methods, where LoRA adapters are trained for specific models, requiring complete retraining for a new model. ProLoRA projects the source LoRA into the target weight space by utilizing subspace and null-space similarities and selectively targeting aligned layers. The proposed method is evaluated on diverse image generation tasks with diffusion models. ## update after rebuttal Thank you to the authors for their response. As most of my concerns have been sufficiently addressed, I am increasing my score to a weak accept. Claims And Evidence: The central claim of this paper is that ProLoRA enables effective zero-shot adaptation of parameter-efficient fine-tuning across different text-to-image diffusion models by critically incorporating both subspace and null space projections. Unlike previous approaches such as LoRA-X, which only considers subspace projection, ProLoRA's key innovation is its comprehensive projection methodology that preserves the full expressiveness of the source LoRA adapter. At the same time, the paper emphasizes that the decomposition and projection process used in ProLoRA can be executed significantly faster than training a new LoRA adapter from scratch on the target model. These claims are supported by the experimental section. Methods And Evaluation Criteria: The method is evaluated on diverse generation tasks using HPSv2, LPIPS, and CSD-MMD scores. Additionally, the DINOv2 score is used to measure the similarity between source and target generations. Wall-clock time is also used to compare time efficiency. Theoretical Claims: The authors provide theoretical insights for their approach, decomposing weights into subspace and null space components. The equations for projecting the source LoRA onto both spaces of the target model (Equations 2 and 3) are well-explained.
While the paper demonstrates empirically that the null space projection is crucial through ablation studies, the theoretical justification for why standard LoRAs affect the null space is somewhat underdeveloped. Experimental Designs Or Analyses: The authors evaluate their methodology using various diffusion models including SDXL, Stable Diffusion v1.5, and SSD-1B, as well as different adapter types such as style, concept, and LCM-LoRA. Their assessment combines both quantitative metrics and qualitative visual examples to demonstrate the effectiveness of their approach. Supplementary Material: Supplementary material is provided after the main paper. Relation To Broader Scientific Literature: Its contribution could be extended to areas other than image generation. Essential References Not Discussed: References are well discussed. Other Strengths And Weaknesses: Strength: - Different from LoRA-X, the method does not require additional training of LoRAs on the source model. - The method additionally leverages nullspace transfer. - The proposed method is evaluated on diverse backbones and transfer scenarios. Weakness: - The technical contribution is very marginal compared to LoRA-X. - The performance appears to be underwhelming. - The baseline for the proposed method is not clear. LoRA-X seems to be the closest work and should be the baseline for all experiments; however, only part of the experiments explore this comparison. - The paper claims the disadvantage of LoRA-X is that it requires pre-training. However, the same holds for ProLoRA, which requires a LoRA trained on a source model. - The full SVD computation has a high computational cost, with minimal benefits over the baselines. Other Comments Or Suggestions: It would be great to provide more analysis on the role of each subspace or null space projection.
While the paper effectively demonstrates their empirical importance, a deeper analytical examination of their distinct contributions would strengthen the theoretical foundation. Additional insights into how each projection component preserves specific visual features or stylistic elements would enhance understanding of the transfer mechanism. Questions For Authors: Please refer to the weakness section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for valuable feedback and comments. Below, we provide detailed responses.

[C0] Its contribution could be extended to other areas other than image generation.

[R0] Please refer to [C3] of Reviewer eBAK.

[C1] The authors provide theoretical insights ... why standard LoRAs affect the null space is somewhat underdeveloped.

[R1] While we appreciate the feedback regarding the theoretical justification for standard LoRAs affecting the null space, we understand the concern as follows: non-square weight matrices have inherent nullspaces. Unlike LoRA-X and SVDiff, standard LoRA fine-tuning doesn't constrain updates to the model's weight subspace; it allows modifications in both the weight subspace and the nullspace, potentially causing unintended effects. This is because standard LoRA lacks the constraint of only optimizing singular values. Therefore, we propose projecting LoRA updates into both the range and null spaces during transfer to mitigate these effects.

[C2] The technical contribution is very marginal compared to LoRA-X.

[R2] We acknowledge that ProLoRA builds upon existing research, including LoRA-X. However, ProLoRA's approach offers distinct advantages that make it more versatile and broadly applicable. LoRA-X, by modifying only the singular values of the pre-trained model weights, effectively restricts the adapter to the weight subspace and ignores the nullspace. This constraint necessitates higher adapter ranks (e.g., 320 for LoRA-X versus 32 for standard LoRA), leading to increased inference computation. Furthermore, LoRA-X adapters are only transferable from other LoRA-X-trained models. ProLoRA, conversely, allows transfer from diverse, pre-existing adapters. Since ProLoRA isn't constrained to the pre-trained weight subspace during training, its updates are decomposed into range and null space components and then projected onto the target modules, enabling greater flexibility.

[C3] The baseline for the proposed method ...
only a part of the experiments explores this comparison.

[R3] We evaluated the performance of a target model using two approaches: first, training a LoRA adapter from scratch with access to the training dataset, and second, transferring a LoRA adapter from a source model to the target model using ProLoRA. The performance of the LoRA adapter trained from scratch serves as an upper-bound baseline, representing the ideal performance we aim to achieve through transfer learning. We included LoRA-X as a baseline for the style dataset in Table 10. A LoRA-X comparison on LCM-LoRA models does not make sense, as we use pretrained acceleration modules. Applying LoRA-X on the DreamBooth dataset for concept customization shows very poor performance and does not converge. This is likely due to the fact that LoRA-X only updates singular values, which is not expressive enough to add new concepts to the model.

[C4] The paper claims the disadvantage of LoRA-X .... ProLoRA that requires LoRA trained on a source model.

[R4] We acknowledge the reviewer's point that ProLoRA, like LoRA-X, requires pre-training on a source model. Our intention wasn't to imply that LoRA-X uniquely requires pre-training. Rather, the key distinction lies in the type of pre-trained adapters that can be leveraged. ProLoRA offers the significant advantage of being able to transfer a variety of readily available (off-the-shelf) pre-trained adapters—including standard LoRA, DoRA, and FouRA—from a source model to different target models. LoRA-X, in contrast, is restricted to transferring only LoRA-X adapters (when available, and requiring fine-tuning on the source model), because it operates solely within the weight subspace, limiting the transfer methodology to that subspace. Therefore, while both require pre-training, ProLoRA's broader compatibility with diverse pre-trained adapters provides greater flexibility and accessibility.

[C5] The full SVD computation has ... benefits over baselines.
[R5] While the initial SVD has a computational cost, it is a one-time investment for the source/target models. ProLoRA amortizes this cost by enabling efficient transfer of diverse pre-trained adapters (style, concepts, LCM) without repeated SVD. Unlike standard LoRA, which requires training adapters from scratch for each task, ProLoRA's one-time SVD is quickly offset by the cumulative savings from multiple adapter transfers.

[C6] It would be great to provide ..... transfer mechanism.

[R6] We appreciate the suggestion for a deeper analysis of the subspace/nullspace projection roles. To effectively address this, could you clarify the desired scope? Are you interested in: (1) layer-wise, (2) attention-type (Q, K, V), or (3) block-level analysis? Also, what type of analysis would be most valuable: (a) feature visualization of how each projection affects visual features/styles, or (b) targeted ablation studies isolating the impact of projections in specific layers/attention types?

**We hope these responses are sufficient to raise the score.**

--- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response. Most of my concerns have been adequately addressed. Regarding [R6], while I do not suggest a specific experimental design, it would be beneficial to incorporate a broader range of perspectives to more clearly illustrate the distinct roles of subspace and nullspace projection. --- Reply to Comment 1.1.1: Comment: Thanks for the clarification. We will add the analyses in the final version. We also add additional experimental results for LoRA-X transfer from SDXL to SSD-1B and compute the CSD-MMD between the adapter when transferred and when trained from scratch.

| Dataset | CSD-MMD |
|-----------|---------|
| Bluefire | 0.0618 |
| Paintings | 0.0391 |
| Origami | 0.0424 |

From the results, we see that the CSD-MMD is higher compared to ProLoRA's, as shown in Table 1 (SSD-1B rows). This suggests that ProLoRA is better at transferring adapters compared to LoRA-X.
We hope this addresses your comments fully and that these results will be considered in making the final decision.
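The nullspace argument in [R1] of the rebuttal above can be made concrete with a small sketch. This is not the paper's projection (Equations 2-3 are not reproduced in this thread); it only illustrates the underlying claim: for a non-square layer, a standard LoRA update B·A generically has a nonzero component outside the column space of the pretrained weights, which a subspace-only transfer such as LoRA-X would discard.

```python
import numpy as np

def split_range_null(delta_w, w):
    """Split an update into its component inside the column (range) space
    of weight matrix w and the residual in the left nullspace, via SVD."""
    u = np.linalg.svd(w, full_matrices=False)[0]
    in_range = u @ (u.T @ delta_w)   # orthogonal projection onto range(w)
    in_null = delta_w - in_range     # the part a subspace-only transfer loses
    return in_range, in_null

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32))                        # non-square pretrained weight
b, a = rng.normal(size=(64, 4)), rng.normal(size=(4, 32))
delta_w = b @ a                                      # rank-4 LoRA-style update
in_range, in_null = split_range_null(delta_w, w)

print(np.linalg.norm(in_null) / np.linalg.norm(delta_w))  # clearly nonzero
```

The two pieces sum back to the original update, so a transfer that handles both components, as ProLoRA claims to, loses nothing of the adapter by construction.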
R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts
Accept (poster)
Summary: The paper introduces R2-T2, a method designed to optimize routing weights in multimodal Mixture-of-Experts (MoE) models during test time. R2-T2 maintains a reference set comprising samples for which the MoE model's outputs are either correct or preferred for each task. When presented with a sample from a new task, R2-T2 first identifies its neighborhood within the reference set by leveraging embeddings generated from a separate embedding model. The neighboring samples are then utilized to predict the routing weights, employing one of three techniques: gradient descent, kernel regression, or mode finding. Evaluated across multiple benchmarks, R2-T2 demonstrates superior performance compared to its MoE backbone. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes. Theoretical Claims: NA Experimental Designs Or Analyses: The experimental design appears to be sound; further questions are in the weaknesses/questions section below. Supplementary Material: Yes. I've checked all of it. Relation To Broader Scientific Literature: It should be of significant interest to the broader scientific community, as the paper addresses the test-time operations of large MoE models—a topic that warrants greater attention in today's research landscape. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: * The paper is well-written, clear, and easy to follow. * It addresses the optimization of test-time routing weights in MoE models, a timely and important research direction with significant practical implications. * The performance improvements achieved by R2-T2 over its MoE backbone are impressive, demonstrating the potential of this framework for real-world applications. Weaknesses: * Concerns about the construction and use of the reference set: * The evaluation primarily focuses on academic benchmarks, where reference sets are constructed using samples of similar types.
However, in more complex real-world scenarios, it may not always be feasible to predefine the most suitable task type before inference begins. In such cases: * It might become necessary to store multiple types of reference sets simultaneously, increasing storage and computational demands. * A filtering mechanism could be required to identify the appropriate reference set before applying R2-T2, adding complexity to the framework. * These factors raise questions about the practical feasibility of the method in real-world settings where task diversity and unpredictability are common. * Additionally, the effectiveness of R2-T2 heavily depends on the choice of neighborhood within the reference set, with only similar samples contributing meaningfully to the optimization process. This raises concerns: * What happens if no sufficiently similar samples exist in the reference set? * What if an incorrect reference set is chosen (e.g., using OCR data for knowledge-based VQA)? Would R2-T2 still maintain its strong performance under these conditions? * Efficiency considerations: * While the authors use FLOPs to measure efficiency, they do not account for the additional storage costs associated with the framework. These include: * The embedding model (7B parameters), which adds significant memory overhead. * The reference sets themselves, which could grow substantially larger than indicated in the paper, especially when accommodating diverse or unpredictable tasks. * Given these factors, the overall efficiency of the method may be less favorable than suggested, potentially limiting its scalability and practicality. Other Comments Or Suggestions: NA Questions For Authors: * The method also shares a lot similarities to RAG methods, which are not discussed in the paper. I wonder what's the authors' take on this matter. * Additionally, given the selected neighborhood, could in-context learning achieve performance comparable at least to mode finding? Code Of Conduct: Affirmed. 
Overall Recommendation: 3
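The summary above describes predicting routing weights from a reference-set neighborhood via one of three techniques; kernel regression, the middle option, can be sketched generically as follows. This is an illustration under stated assumptions (Gaussian kernel, plain Euclidean distances, toy dimensions), not the paper's exact R2-T2 implementation.

```python
import numpy as np

def kernel_regress_routes(q_emb, ref_embs, ref_routes, bandwidth=1.0):
    """Predict expert routing weights for a test sample as the
    kernel-weighted average of its reference neighbors' routing weights."""
    d2 = np.sum((ref_embs - q_emb) ** 2, axis=1)
    k = np.exp(-d2 / (2 * bandwidth ** 2))   # Gaussian kernel weights
    k /= k.sum()
    return k @ ref_routes                    # convex combination of routings

rng = np.random.default_rng(0)
ref_embs = rng.normal(size=(5, 16))                # embeddings of reference samples
ref_routes = rng.dirichlet(np.ones(4), size=5)     # their stored routing weights
query = ref_embs[0] + 0.01 * rng.normal(size=16)   # test sample near reference 0
route = kernel_regress_routes(query, ref_embs, ref_routes, bandwidth=0.5)
```

Because the output is a convex combination of valid routing distributions, it still sums to one, and a query close to a single reference essentially inherits that reference's routing, which is exactly why the choice of neighborhood questioned in this review matters so much.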
Rebuttal 1: Rebuttal: # Response to Reviewer 8cuF Thank you for your detailed feedback! We address your comments below. >**Q1: Concern about reference set construction: Academic benchmarks use predefined sample types, but real-world scenarios may not allow task type selection before inference. It might become necessary to store multiple types of reference sets simultaneously, increasing storage and computational demands.** 1. Storage Cost: We appreciate this concern. However, our reference sets store only lightweight metadata—question text and routing weights—totaling just 3.24 MB in Parquet format, while images are loaded dynamically from Huggingface. 2. Memory Cost: As shown in Reviewer **HT7U Q1**, the additional memory and computational overhead remain minimal. 3. Scalability: We use a lightweight classifier to select a relevant subset of the reference set for each test sample, ensuring that optimization is performed on a much smaller, targeted set. (Please see **Reviewer HT7U Q5**.) >**Q2: A filtering mechanism could be required to identify the appropriate reference set before applying R2-T2, adding complexity to the framework.** Thank you for your question. A similar point was addressed in **Reviewer HT7U Q5** regarding training a classifier. Please refer to that response for details. >**Q3: Additionally, the effectiveness of R2-T2 heavily depends on the choice of neighborhood within the reference set, with only similar samples contributing meaningfully to the optimization process. What happens if no sufficiently similar samples exist in the reference set? What if an incorrect reference set is chosen (e.g., using OCR data for knowledge-based VQA)? Would R2-T2 still maintain its strong performance under these conditions?** Thank you for raising this important point. Please refer to **Reviewer bdGh Q2** regarding the case where no sufficiently similar reference samples exist.
As for mismatched reference sets, we conducted experiments using an OCR-based subset (ST-VQA & DocVQA) as the reference for R2-T2 on SQA-IMG (knowledge-based VQA), yielding the following results:

| | knowledge-based VQA |
| ------------------------------------- | ------------------- |
| Base (MoAI) | 83.5 |
| R2-T2 (MoAI) | 88.3 |
| R2-T2 (Using OCR subset as reference) | 83.8 |

These results highlight that R2-T2 provides significant gains (from 83.5 to 88.3) with a task-relevant reference set but offers minimal improvement (83.8) when the reference is mismatched, emphasizing the critical role of reference set selection.

>**Q4: Efficiency considerations: The framework introduces additional storage costs, particularly due to the embedding model's memory overhead (e.g., 7B parameters).**

Thank you for raising this important point regarding storage and memory overhead. In addition to FLOPs, we now provide GPU memory usage comparisons (see table below).

| | GPU Memory Usage | Average Accuracy |
|---|---|---|
| Base (MoAI) | 18GB | 74.5% |
| R2-T2 (MoAI) with nv_embed_v2 | 27GB | 80.7% |
| R2-T2 (MoAI) with all_mini_v6 | 20GB | 77.5% |
| R2-T2 (MoAI) with Stella-En-1.5B-V5 | 22GB | 78.5% |
| R2-T2 (MoAI) with Gte-Qwen2-7B | 31GB | 78.7% |

Our R2-T2 framework with the nv_embed_v2 model requires 27GB of GPU memory and achieves 80.7% accuracy, while the base MoAI model uses 18GB at 74.5% accuracy. Importantly, smaller embedding models like all_mini_v6 (20GB total, 77.5%) or Stella-En-1.5B-V5 (22GB total, 78.5%) still provide significant accuracy gains with lower memory overhead. This demonstrates a scalable trade-off: users can select larger embeddings for maximum performance or opt for smaller models to balance efficiency and accuracy.

>**Q5: Scalability—The reference sets could grow significantly, especially when handling diverse or unpredictable tasks.**

Thank you for your insightful question. We address scalability by:
1. Using a lightweight classifier to select a relevant subset of the reference set for each test sample, ensuring that optimization is performed on a much smaller and targeted set. (Please see **Reviewer HT7U Q5**)
2. Employing parquet compression techniques and caching strategies to reduce storage and memory consumption effectively. (Please see **Q1**)
3. Leveraging efficient similarity search frameworks like FAISS, which significantly improve the computation speed of neighbor retrieval even when the overall reference set is large. (Please see **Reviewer HT7U Q1**)

>**Q6: Similarities to RAG & in-context learning—The method shares similarities with RAG, and could in-context learning achieve comparable performance?**

Thank you for your question. Please refer to **Reviewer HT7U Q3**, where we discuss both RAG-related considerations and the role of in-context learning in comparison to our approach.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed response. After reviewing the rebuttal and other reviews, I remain positive about the paper overall. Most of my concerns have been addressed; however, two points still stand:
1. The choice of the reference set—while the mismatched reference set does not appear harmful, it may be redundant.
2. The comparison between RAG and ICL—using different backbones makes it challenging to draw clear conclusions.

Despite these lingering concerns, I will maintain my original positive rating. I look forward to seeing future work building on the foundation of R2-T2.

---

Reply to Comment 1.1.1: Comment: # Response to Reviewer 8cuF

Thank you for your quick response to our rebuttal! We are glad to learn that most of your concerns have been addressed by our rebuttal. We hope the following response will resolve the remaining ones.

>**Q1. The choice of the reference set—while the mismatched reference set does not appear harmful, it may be redundant.**

We appreciate the reviewer's insightful comment.
In our original experiments, we did not optimize the compression of the reference set since we wanted to show the generalizability of our method. To further assess the redundancy and size of the reference set, we conducted additional experiments where the reference set is randomly downsampled to 1/200, 1/100, 1/50, 1/10, and 1/2 of the original size. The results are summarized in the table below:

| | Average |
|---|---|
| Base (MoAI, 0 reference) | 74.5% |
| R2-T2 (with 1/200 reference) | 74.6% |
| R2-T2 (with 1/100 reference) | 74.6% |
| R2-T2 (with 1/50 reference) | 74.8% |
| R2-T2 (with 1/10 reference) | 77.5% |
| R2-T2 (with 1/2 reference) | 79.8% |
| R2-T2 (with full reference) | 80.7% |

The new experiments do show that there exists redundancy in the original reference set, as reducing its size to 1/2 does not severely degrade the performance. This demonstrates the robustness and effectiveness of our method when only a smaller reference set is available. This also indicates that our method can be much more efficient than what we reported, if we further compress the reference set. We will study the compression problem in our future work.

>**Q2. The comparison between RAG and ICL—using different backbones makes it challenging to draw clear conclusions. Despite these lingering concerns, I will maintain my original positive rating. I look forward to seeing future work building on the foundation of R2-T2.**

We acknowledge that a strictly fair comparison between R2-T2 and ICL/RAG is challenging due to different backbones: existing VLMs supporting ICL/RAG are not the MoE (vision experts) models required by R2-T2, while MoE VLMs do not support ICL/RAG. ICL/RAG on VLMs is not as common as on LLMs, since ICL/RAG requires VLMs to support interleaved inputs and long context, and it is still an open problem to train a VLM to achieve these capabilities.
In contrast, our method offers a simpler, more straightforward, and efficient alternative that bypasses the need for such extensive training efforts, while still delivering strong performance improvements. We believe this efficiency and ease of integration make our approach a practical solution in scenarios where training a VLM to support ICL/RAG is prohibitively complex. We sincerely appreciate the time and effort you have invested in reviewing our work. We hope that the additional experiments and detailed clarifications have addressed your remaining concerns, and kindly ask if you might reflect this in your final evaluation. Thank you once again for your constructive feedback, which greatly helps us further improve our paper.
Summary: This paper proposes R2-T2, a test-time re-routing method designed to enhance multimodal mixture-of-experts (MoE) models without retraining. It addresses the limitation of suboptimal routing weights produced by pretrained routers, which often fail on complex or out-of-distribution tasks. R2-T2 introduces three strategies (Neighborhood Gradient Descent, Kernel Regression, and Mode Finding) that dynamically update routing weights for each input by referencing “successful” samples, thereby improving expert selection. Extensive experiments on eight diverse benchmarks demonstrate that R2-T2 significantly outperforms baseline MoE models, even approaching an oracle routing upper bound. Notably, it boosts smaller models to rival larger-scale vision-language models, highlighting its cost-effectiveness and scalability. The findings suggest that a well-tuned, training-free test-time adjustment can maximize MoE potential in multimodal reasoning tasks. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: There are no theoretical claims in the paper. Experimental Designs Or Analyses: NA Supplementary Material: No Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: * The proposed method adjusts routing weights dynamically during inference, enabling performance gains without retraining the model. * Consistent gains bring smaller models’ performance close to or even surpassing that of LVLMs. * R2-T2 offers three optimization techniques, giving practitioners multiple avenues to customize the method for specific tasks. Weaknesses: * Reliance on the Reference Set: The method heavily relies on the reference set at test time, raising concerns about generalization and practical utility. Moreover, insufficient experiments have been conducted to explore these issues: 1.
Model generalization to OOD samples: The method assumes that the reference set contains questions similar to the test question. However, for LLMs and LMMs, one of the toughest scenarios is out-of-distribution (OOD) questions. While the method could perform well when similar questions are in the reference set, it may fail if they are not. 2. Computational overhead and scalability: Because the method needs a sufficient number of samples in the reference set to find close matches, a larger set could introduce significant computational costs. The authors should investigate this more thoroughly. * If a question in the reference set closely overlaps with the test question but is of a different type, changing only a few words might not greatly reduce the measured similarity. It remains unclear how the method avoids such mismatches. Other Comments Or Suggestions: NA Questions For Authors: See Weaknesses Part Code Of Conduct: Affirmed. Overall Recommendation: 3
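The similarity concern in the last weakness above can be made concrete with a toy bag-of-words example; real embedding models behave differently, so this is only a hypothetical illustration of why one changed word barely moves a cosine score:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vocabulary: [what, color, is, the, car, cat]
q_test = np.array([1, 1, 1, 1, 1, 0], dtype=float)  # "what color is the car"
q_ref  = np.array([1, 1, 1, 1, 0, 1], dtype=float)  # "what color is the cat"

print(round(cosine(q_test, q_ref), 3))  # 0.8: one word changed, similarity stays high
```

A single swapped word leaves four of five tokens shared, so the score remains far above a typical dissimilarity threshold, which is the mismatch risk the reviewer is pointing at.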
Rebuttal 1: Rebuttal: # Response to Reviewer 1y8A Thank you for your detailed feedback! We address your comments below. > **Q1: Generalization to OOD samples—The method relies on a reference set with similar questions. How does it perform on truly out-of-distribution (OOD) cases?** Thank you for your question. Our approach is designed as a zero-shot reference method. We deliberately do not use any benchmark data for reference, meaning that our test questions are out-of-distribution (OOD) relative to the reference set, and there is no overlap between the benchmarks and the reference set (Please see **Reviewer bdGh Q1**). For questions that are not similar to any reference samples: our Mode Finding strategy (Section 3.3 in the paper) does not strictly rely on identical questions but rather on the proximity of routing weights in the expert space. Even for literally different questions, if their routing weights are close to those of reference samples, it indicates that similar experts are needed. Therefore, as long as the underlying expert requirements are similar, our method generalizes well even to dissimilar cases. >**Q2: Computational overhead and scalability—Larger reference sets may introduce significant computational costs. Have the authors analyzed this in detail?** Thank you for your question. Please refer to **Reviewer 8cuF Q5** for your concern about computational overhead and scalability. >**Q3: Potential mismatches—If a reference question closely overlaps with a test question but belongs to a different type, how does the method avoid incorrect matches?** Thank you for this insightful question. In theory, two questions might have similar surface forms even if they are of different types. However, in practice, VLM questions tend to be short and less complex compared to those in LLMs, and our tasks come with sufficiently detailed descriptions that help disambiguate their semantic intent.
This means that even if a few words are similar, the overall context captured in the embedding still reflects the true task type. Furthermore, in scenarios where subtle differences might lead to confusion, our method can be augmented with a chain-of-thought (CoT) prompt. In this variant, we prompt the model to outline a few high-level reasoning steps, and then we use the resulting CoT output as the embedding for neighbor search. This additional step helps ensure that the underlying reasoning process—and not just the superficial wording—is captured, further reducing the chance of mismatches.
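The routing-weight proximity idea in the Q1 answer above can be sketched as a kernel-weighted update over nearest reference samples; the Gaussian kernel, bandwidth, and random data below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def rerouted_weights(w_test, ref_weights, k=3, bandwidth=0.5):
    """Pull a test sample's routing weights toward its k nearest
    'successful' reference samples in the expert-weight space."""
    dists = np.linalg.norm(ref_weights - w_test, axis=1)
    nn = np.argsort(dists)[:k]                    # k nearest reference samples
    kern = np.exp(-(dists[nn] / bandwidth) ** 2)  # Gaussian kernel weights
    w_new = kern @ ref_weights[nn] / kern.sum()   # kernel-weighted average
    return w_new / w_new.sum()                    # renormalize to a distribution

rng = np.random.default_rng(0)
ref = rng.dirichlet(np.ones(6), size=100)  # routing weights of 100 "successful" samples (6 experts, as in MoAI)
w = rng.dirichlet(np.ones(6))              # the test sample's initial routing weights
w_adj = rerouted_weights(w, ref)
print(w_adj.round(3))                      # adjusted 6-dim routing distribution
```

The point of the sketch is that the match is made in routing-weight space, not question space, so two very different questions that need the same experts still end up neighbors.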
Summary: The paper introduces R2-T2, a test-time re-routing method for multimodal Mixture-of-Experts (MoE) models. The core idea is to optimize routing weights during inference by leveraging reference samples with correct predictions, addressing suboptimal routing in pretrained MoE models. The method is training-free and computationally efficient. Experiments on MoAI and MoVA models across eight benchmarks demonstrate significant performance gains over base models, even surpassing larger VLMs. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The method makes sense for the problem or application at hand. Theoretical Claims: The paper does not make theoretical claims. Experimental Designs Or Analyses: Yes, all is reviewed Supplementary Material: Yes, all is reviewed Relation To Broader Scientific Literature: The work connects to MoE routing, test-time optimization, and multimodal LLMs. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The motivation is reasonable. This paper reveals suboptimal routing during inference due to the fixed, pretrained router and grounds this in empirical evidence. 2. The proposed method is training-free and computationally efficient, requiring no model parameter updates and avoiding the costs of retraining or fine-tuning. The three adopted strategies are lightweight: 3. Performance improvements across diverse tasks on strong baselines, including MoAI and MoVA, are significant. 4. The paper is well-written and easy to understand. Weaknesses: 1. Potential data contamination. The paper uses subsampled reference datasets (e.g., 5,000 samples from VQA-V2 and MathVista) but does not clarify whether these overlap with the evaluation benchmarks (e.g., MMBench, TextVQA). For instance, TextVQA is one of the source datasets of MathVista. 
If test samples from TextVQA are included in the reference set of MathVista, performance gains could be artificially inflated. A discussion on how contamination was avoided is critical for validity. 2. Choice of reference set. Table 1 summarizes the adopted reference and evaluation benchmarks. Does the method degrade when reference samples lack coverage of certain task types (e.g., rare spatial reasoning cases)? How does model performance change when using different reference sets and reference set sizes (e.g., 1K vs. 10K samples)? 3. More evaluation results are needed on important benchmarks, such as MMMU and ChartQA. 4. While the improvements presented are compelling, the significant increase in FLOPs shown in Table 4 raises concerns. Additionally, a comparison of latency is needed. Other Comments Or Suggestions: Please see the Weakness section. Questions For Authors: Please see the Weakness section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response to Reviewer bdGh Thank you for your detailed feedback! We address your comments below. > **Q1: Possible data contamination—subsampled reference datasets (e.g., VQA-V2, MathVista) may overlap with evaluation benchmarks (e.g., MMBench, TextVQA), potentially inflating results. How was this addressed?** We appreciate the reviewer’s concern and have conducted a rigorous analysis to ensure no data contamination. Our process follows a two-step screening approach: 1. Question Similarity Check – We first computed cosine similarity between evaluation benchmark questions and reference set questions using the NV_Embed_V2 embedding model. Samples with a similarity score >0.95 were flagged for further inspection. 2. Image Similarity Check – For flagged cases, we applied CLIP to measure image similarity. Only samples where both question similarity and image similarity exceeded 0.95 were classified as potential overlaps. Through this analysis, we found no overlapping samples between the reference set and evaluation benchmarks. These results confirm that our performance gains are not due to data leakage but stem from the effectiveness of our proposed method. > **Q2: Impact of reference set choice—how does the method perform when reference samples have limited coverage for certain task types (e.g., rare spatial reasoning)? What is the effect of reference set size (e.g., 1K vs. 10K samples)?** We appreciate this insightful question and have conducted further experiments to analyze these factors. 1. Limited Task Coverage: We evaluated R2-T2 on 3DSRBench, a dataset focused on rare spatial reasoning cases. Despite lower reference coverage for these tasks, R2-T2 still delivers a 4.5% improvement over the base model, demonstrating its robustness.

| | 3DSRBench Accuracy |
|---|---|
| Base (MoAI) | 45.2 |
| R2-T2 (MoAI) | 49.7 (+4.5%) |

2.
Reference Set Size: We varied the reference set size and observed that even when reduced to 1/10th of its original size (e.g., 1K instead of 10K samples), R2-T2 still provides a notable boost (+2.9%). However, when reference samples are randomly selected (without ensuring task relevance), the improvement is minimal (+0.3%), underscoring the importance of well-curated reference sets.

| | Average |
|---|---|
| Base (MoAI) | 74.5% |
| 1/10 reference set size | 77.4% |
| Random selection | 74.7% |
| R2-T2 (MoAI) | 80.7% |

These results confirm that while reference set quality and coverage affect performance, R2-T2 remains effective even with smaller but carefully selected reference sets. > **Q3: More evaluation results are needed on important benchmarks, such as MMMU and ChartQA.** Thank you for the suggestion. We have evaluated R2-T2 on MMMU and ChartQA, with results summarized below:

| | MMMU | ChartQA |
|---|---|---|
| Base (MoAI) | 55.7% | 67.4% |
| R2-T2 (MoAI) | 61.3% (+5.6) | 71.6% (+4.2) |

These results demonstrate that R2-T2 consistently improves performance across diverse multimodal tasks, reinforcing its robustness and generalizability. > **Q4: While the improvements presented are compelling, the significant increase in FLOPs shown in Table 4 raises concerns. Additionally, a comparison of latency is needed.** Please refer to **Reviewer HT7U Q1** regarding computational cost. To directly address latency, we conducted experiments on an RTX A6000:

| | Avg. Running Time (per case) |
|---|---|
| Base (MoAI) | 7.8s |
| R2-T2 (MoAI) | 25.6s (+3.3×) |

While R2-T2 increases latency by 3.3×, we believe the substantial accuracy gains justify the trade-off. Moreover, optimizations such as reference set pruning and efficient kNN search can further reduce overhead. --- Rebuttal Comment 1.1: Comment: I appreciate all the additional experiments from the authors. My concerns have been addressed, and I have no more questions. I will maintain my current rating of weak accept.
--- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our additional experiments! We're pleased to hear that our responses have adequately addressed your concerns. We appreciate your valuable feedback throughout the review process!
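The two-step contamination screening described in the Q1 answer of this thread (question-embedding similarity, then image similarity, both thresholded at 0.95) could be organized roughly as below; `embed_text` and `embed_image` are hypothetical stand-ins for the NV_Embed_V2 and CLIP encoders:

```python
import numpy as np

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_overlaps(test_set, ref_set, embed_text, embed_image, thr=0.95):
    """Flag test/reference pairs whose question AND image similarity
    both exceed `thr` — candidates for data contamination."""
    overlaps = []
    for t in test_set:
        for r in ref_set:
            # Step 1: question similarity check (cheap text embeddings)
            if cos_sim(embed_text(t["question"]), embed_text(r["question"])) > thr:
                # Step 2: image similarity check, only for flagged pairs
                if cos_sim(embed_image(t["image"]), embed_image(r["image"])) > thr:
                    overlaps.append((t, r))
    return overlaps

# Toy run with lookup-table "embeddings" just to exercise the logic:
emb = {"q1": np.array([1.0, 0.0]), "q2": np.array([0.0, 1.0])}
data = [{"question": "q1", "image": "q1"}]
ref = [{"question": "q2", "image": "q2"}]
print(find_overlaps(data, ref, lambda q: emb[q], lambda i: emb[i]))  # []
```

Checking images only for text-flagged pairs keeps the expensive second pass proportional to the (small) number of near-duplicate questions rather than the full cross product.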
Summary: The authors introduce test-time re-routing (R2-T2) for vision-language MoEs. R2-T2 adapts the routing weights for each test sample based on similar samples in a reference set. In particular, three strategies with different optimization objectives and neighbor-search spaces have been proposed, with "Neighborhood Gradient Descent" (NGD) being the most performant approach. R2-T2 is evaluated on a variety of vision-language benchmarks. ## update after rebuttal: The authors have addressed most of my concerns related to comparison to other baselines and have clarified the computational cost requirements, and hence I raise my rating to weak accept. Claims And Evidence: The authors back up the claims about their methods by implementing R2-T2 on two VLM MoE models, MoAI and MoVA, and report results on several standard benchmarks. Methods And Evaluation Criteria: Yes Theoretical Claims: There are no theoretical claims or proofs in this paper. Experimental Designs Or Analyses: Please refer to the weaknesses section on fairness of comparisons. Supplementary Material: The supplementary material provides sufficient information about the MoAI model, eval benchmarks, and reference datasets, as well as hyperparameter choices and a few case studies. Relation To Broader Scientific Literature: The paper claims a test-time re-routing procedure for MoEs, and builds the method on top of two models, MoAI and MoVA. These MoEs differ from conventional MoE architectures for transformers and operate at the larger module/model selection level. The routing in these models is limited to a single routing decision and does not have the complexities of dynamic routing in standard MoE architectures with independent routing decisions per layer, leading to $\binom{N}{K}^{L}$ pathways inside the architecture. Essential References Not Discussed: The paper cites related works in the MoE literature, such as GShard and Switch transformers.
However, it does not discuss how the method can be extended beyond the single-layer routing decisions in the models considered. Other Strengths And Weaknesses: The idea of using re-routing at test time for MoEs is interesting and shows the potential of improving MoE performance solely by adjusting the routing weights. However, there are a few weaknesses which limit the applicability of the method in practice and the fairness of the evaluations: - Significant increase in computational and memory costs: One of the major motivations for using MoEs is their sparse selection mechanism, which reduces the amount of activated parameters per sample. The proposed approach significantly increases the FLOPs (~7X), defeating the original purpose of using MoEs. The method also requires relying on external embedding models and retrieving kNN samples from thousands of reference points with correct predictions, followed by iterative updating of routing weights based on neighbor gradients. - The method has only been adapted for a single-level routing decision, which lacks the complexities of current SotA MoE models that leverage independent routing decisions throughout the network, enabling numerous pathways in the architecture compared to the very limited number of pathways possible in the MoAI and MoVA architectures. Extending the solution to work with standard MoE architectures would have a higher impact on the field. The authors need to comment on the added memory and compute costs for scaling to such MoEs. - Evaluation with access to reference samples from the same domain: The framework assumes access to a very large-scale reference set with a massive number of 5000 samples per set. The compiled reference set for general visual understanding alone is 20k samples. R2-T2 retrieves nearest neighbors from this large reference set with similar queries.
Apart from the requirement to store the embeddings of these samples and the expensive kNN operation, the method assumes the user must manually select the type of task they are solving and hence perform the kNN only in the selected subset. Additionally, access to this relevant information at test time makes it unfair to compare against all other methods in Table 3, which do not have access to such information. For example, it is known that access to few-shot examples significantly boosts the performance of models compared to zero-shot evaluation. While all the methods in Table 3 effectively perform 0-shot evaluation, the current method assumes access to thousands of samples from the same task. - While the method could be framed as an Inference-Time compute approach, in which we allow for extra compute during inference (thinking harder) to increase accuracy, it can only be validated as such if the compared methods could also gain from additional compute. Some simple baselines include: 1) using multiple sampling from the base network and performing majority voting, 2) performing multiple noisy sampling from the router followed by majority voting of independent predictions, 3) providing the compared models in Table 3 access to example correct responses as few-shot samples, 4) implementing a simple RAG approach to collect the most relevant samples to the current prompt, followed by few-shot responses, etc. Other Comments Or Suggestions: No Questions For Authors: At present, the user needs to specify the nature of the task they want to solve to use the relevant reference set embedding for test-time adaptation. Is it possible to automatically determine the nature of the task using a classifier? Code Of Conduct: Affirmed. Overall Recommendation: 3
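The second suggested baseline above (noisy sampling from the router followed by majority voting of independent predictions) can be sketched as follows; the model interface, noise scale, and toy model are assumptions rather than any evaluated implementation:

```python
import numpy as np
from collections import Counter

def noisy_routing_vote(model, x, router_logits, n_samples=10, sigma=0.1, seed=0):
    """Run the model several times with Gaussian-perturbed routing
    weights and return the majority-vote answer."""
    rng = np.random.default_rng(seed)
    answers = []
    for _ in range(n_samples):
        noisy = router_logits + rng.normal(0.0, sigma, size=router_logits.shape)
        weights = np.exp(noisy) / np.exp(noisy).sum()  # softmax over experts
        answers.append(model(x, weights))              # forward pass with perturbed routing
    return Counter(answers).most_common(1)[0][0]

# Toy model: answers "A" when expert 0 dominates, else "B"
toy_model = lambda x, w: "A" if w[0] >= w[1] else "B"
logits = np.array([2.0, 0.0])  # expert 0 strongly preferred
print(noisy_routing_vote(toy_model, None, logits))  # "A"
```

With a confident router (large logit gap relative to `sigma`), the vote almost never flips, which matches the intuition that this baseline mostly adds compute rather than new routing information.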
Rebuttal 1: Rebuttal: # Response to Reviewer HT7U Thank you for your detailed feedback! We address your comments below. > **Q1: Significant increase in computational and memory costs.** R2-T2 does not require a significant increase in computation and memory because it only optimizes a low-dimensional routing weight vector (e.g., 6-dim for MoAI). Except for NGD, the other two R2-T2 strategies are gradient-free and do not require backpropagation. Hence, compared to PEFT (LoRA, prompt, or prefix tuning) and re-training the MoE model or routers, R2-T2 is much more efficient. Moreover, R2-T2 significantly improves model accuracy, especially on challenging downstream tasks, making the modest additional cost worthwhile. We can further reduce R2-T2's cost as follows: - The embedding vectors can be pre-computed and cached (less than 1GB), which substantially reduces the runtime overhead during deployment. - We use fast similarity search tools such as FAISS to search the neighbors and compute similarity efficiently. - Our approach is flexible, allowing users to adjust the kNN neighborhood size and the number of iterative steps for a better trade-off between accuracy gains and computational cost. > **Q2: The method has only been adapted for a single-level routing decision, lacking the complexity of SOTA MoE models with independent routing across multiple layers.** Our focus is on visual-centric tasks, where the primary bottleneck of most VLMs is the limited capability of a single visual encoder. In MoE VLMs such as MoAI and MoVA, MoE is applied only at the visual encoder, by replacing it with <10 experts, making a single-level routing decision sufficient for effective feature extraction. Extending to multi-layer routing—common in LLM MoE—would introduce much more computational overhead without addressing the core challenge of constrained visual expert capacity.
> **Q3: The framework assumes access to a large-scale reference set, making comparisons in Table 3 unfair.** Because MoAI and MoVA do not support interleaved input and ICL/RAG, we use Qwen-VL as the base model. 1. For RAG, we retrieve similar reference samples based on embedding similarity and use them as few-shot demonstrations. 2. For ICL, we randomly choose same-task demonstrations with correct answers from the reference set as few-shot demonstrations.

| | RAG | ICL |
|----------------|-----------------|---------------------|
| 0-shot (base) | 61.7% | 61.7% |
| 1-shot | 63.1% (+1.4%) | 62.4% (+0.7%) |
| 2-shots | 63.6% (+1.9%) | 62.7% (+1.0%) |
| 3-shots | 63.9% (+2.2%) | 62.8% (+1.1%) |
| 5-shots | 64.1% (+2.4%) | 62.9% (+1.2%) |

RAG improves only modestly (61.7% → 64.1%) and ICL to 62.9%, while R2-T2 boosts MoAI from 74.5% to 80.7%, showing it leverages the reference set more effectively. > **Q4: Please evaluate additional baselines, such as ensemble voting, noisy router sampling, few-shot prompting, or a simple RAG approach.** We evaluated these baselines: - Ensemble Voting – Enabled dropout during inference and performed 10 forward passes per sample, aggregating predictions via majority voting. - Noisy Routing Ensemble – Added Gaussian noise to the router’s output, ran 10 forward passes with different noise realizations, and applied majority voting. - Few-Shot Prompting – Please see **Q3**. - RAG-style Retrieval – Please see **Q3**.

| | Average |
|---|---|
| Base (MoAI) | 74.5% |
| Multiple Sampling | 74.9% (+0.4%) |
| Noisy Routing Ensemble | 75.4% (+0.9%) |
| R2-T2 (MoAI) | 80.7% (+6.2%) |

As shown, ensemble-based methods provide only marginal gains (≤0.9%), whereas R2-T2 delivers a substantial improvement of +6.2%. These results validate the effectiveness of R2-T2 beyond what additional compute alone can achieve. > **Q5: The user must specify the task type to select the appropriate reference set.
Can this process be automated using a classifier?** Automating task type selection could improve usability and efficiency, so we conducted experiments to explore this approach. We pre-annotated our reference set with three task types (visual understanding, knowledge reasoning, OCR) and extracted 4096-dimensional task embeddings using NV_Embed_V2. Then, we trained a lightweight logistic regression classifier on the embeddings of the reference set, which only introduces $10^4$ – $10^5$ FLOPs per inference. Given a test sample, the model predicts its task type and selects the corresponding reference subset for test-time adaptation. Our results show that this automated selection reduces computational overhead while maintaining accuracy, with only a marginal drop from 80.7% to 80.4%. These findings demonstrate that task classification is a viable strategy for improving efficiency with minimal impact on performance. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and the added comparisons. The authors have addressed most of my concerns and hence I raise my rating to weak accept. --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful feedback and for taking the time to review our work! It’s very encouraging to know that our rebuttal has addressed most of your concerns, and we are grateful that you have raised your rating to a weak accept!
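The lightweight task classifier described in the Q5 answer above (logistic regression over reference-set embeddings that picks the reference subset automatically) might look like the sketch below; the synthetic 16-dimensional clusters stand in for real NV_Embed_V2 features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

TASKS = ["visual_understanding", "knowledge_reasoning", "ocr"]

# Stand-in embeddings: three well-separated clusters, one per task type.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=3 * i, scale=0.5, size=(50, 16)) for i in range(3)])
y = np.repeat(np.arange(3), 50)

# A small linear classifier is cheap at inference time.
clf = LogisticRegression(max_iter=1000).fit(X, y)

def select_reference_subset(test_embedding, reference_subsets):
    """Predict the task type of a test sample and return the matching subset."""
    task = TASKS[clf.predict(test_embedding[None, :])[0]]
    return task, reference_subsets[task]

subsets = {t: f"{t}_reference_set" for t in TASKS}
task, subset = select_reference_subset(rng.normal(loc=6.0, scale=0.5, size=16), subsets)
print(task)  # "ocr": the cluster centered at 6
```

The classifier only has to pick one of a handful of subsets, so a linear model over precomputed embeddings keeps the routing decision negligible next to a VLM forward pass.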
Accelerating Large Language Model Reasoning via Speculative Search
Accept (poster)
Summary: **I am not very familiar with the subarea of this paper, so I am not highly confident in my review. I expect my review not to play a significant role in the decision.** This paper introduces SpecSearch, a method that enables a small model to collaborate with a large model at both the thought and token levels in reasoning. It accelerates tree-search-based (TSB) reasoning methods that use a large LLM. ## update after rebuttal Again, **I am NOT familiar with the subarea of this paper, so I expect my review NOT to play a significant role in the decision.** Claims And Evidence: N/A Methods And Evaluation Criteria: N/A Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: I feel that the writing could be improved. It was difficult for me to understand the framework clearly, and I do feel that the writing makes something overly complicated. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response to Reviewer zUgA We sincerely thank the reviewer for the valuable comments! We address the concerns in detail as follows. We sincerely hope that our response could properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission. ### Weakness 1. > **1. I feel that the writing could be improved. It was difficult for me to understand the framework clearly, and I do feel that the writing makes something overly complicated.** Thank you for the valuable suggestion. We provide a clearer description of our framework below and will incorporate it into the revised version of our paper. - **Search Framework Overview**: Our framework represents **tree nodes** as intermediate reasoning steps (thoughts) and **tree paths** as candidate solutions to multi-step reasoning problems. Each reasoning step comprises a sequence of tokens decoded by a large language model (LLM). The framework consists of three core components: a **Bi-Level Speculative Thought Generator (G)**, a **Thought Evaluator (V)**, and a **Search Algorithm**. Starting from the root node (containing the input context and question), the generator G expands the reasoning tree by producing $N$ candidate thoughts (child nodes). The evaluator V assesses their quality, which then guides the search algorithm. This iterative process builds a reasoning tree, ultimately selecting a final reasoning path. - **Bi-Level Speculative Thought Generator**: At each leaf node, the bi-level speculative thought generator produces $N$ high-quality child thoughts **efficiently**, leveraging a **draft-evaluate-reject-correct** paradigm. This process operates on two levels as follows. (1) **Drafting (Thought-Level)**: A small model rapidly drafts multiple candidate reasoning thoughts. 
(2) **Evaluating (Thought-Level)**: These drafts are scored by the thought evaluator to assess their contextual quality. (3) **Rejection (Thought-Level)**: Thoughts deemed lower in quality than the large model’s outputs are discarded. (4) **Correction (Token-Level)**: Rejected thoughts are refined using a lossless token-level speculative decoding method, ensuring accuracy and robustness. This bi-level design allows SpecSearch to **accelerate generation** significantly while maintaining **high-quality reasoning** throughout the search process. --- Rebuttal Comment 1.1: Comment: Thanks for your clarification. I think the framework is clearer to me now. It would be great to illustrate the high-level picture of your framework in the paper at the very beginning (of the methodology section)! However, as I said in the review, "**I am not very familiar with the subarea of this paper, so I am not highly confident in my review. I expect my review not to play a significant role in the decision.**" So I still don't think I am able to give reliable judgment. Sorry about that. --- Reply to Comment 1.1.1: Comment: Dear Reviewer zUgA, Thank you very much for taking the time to review our paper and provide valuable comments and suggestions. We truly appreciate your effort, especially given your note about not being deeply familiar with this subarea. We are glad that the clarification helped make the framework clearer. Following your suggestion, we will include a high-level illustration at the beginning of the methodology section to better convey the core ideas of our framework to the readers. Once again, thank you for your thoughtful feedback—it has helped us improve the quality and clarity of the paper.
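The draft-evaluate-reject-correct loop summarized in the rebuttal above can be sketched in plain Python. This is a hedged illustration, not the authors' implementation: `small_model`, `large_model`, and `evaluator` are hypothetical callables, the fixed `quality_threshold` stands in for the paper's adaptive estimate of the large model's thought quality, and the correction step here is a plain large-model call rather than the lossless token-level speculative decoding used in the paper.

```python
def generate_children(context, small_model, large_model, evaluator,
                      n_children=4, quality_threshold=0.5):
    """Sketch of the bi-level speculative thought generator.

    small_model / large_model: callables mapping a context string to a
    candidate thought string (hypothetical interfaces).
    evaluator: callable scoring a (context, thought) pair in [0, 1].
    quality_threshold: stand-in for the adaptive estimate of the large
    model's own thought quality used in the paper.
    """
    children = []
    for _ in range(n_children):
        # (1) Drafting (thought level): the small model proposes cheaply.
        draft = small_model(context)
        # (2) Evaluating (thought level): score the draft's contextual quality.
        score = evaluator(context, draft)
        if score >= quality_threshold:
            # (3) Acceptance: keep drafts matching the large model's standard.
            children.append(draft)
        else:
            # (4) Correction (token level): regenerate the rejected thought;
            # the paper does this with token-level speculative decoding
            # rather than the plain large-model call shown here.
            children.append(large_model(context))
    return children
```

The point of the sketch is the control flow: the large model is only invoked for the fraction of drafts that fail verification, which is where the speedup comes from.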
Summary: This paper introduces a new LLM reasoning method via speculative Tree-Search-Based (TSB) reasoning. It involves a quality-preserving rejection mechanism and comes with theoretical guarantees that it maintains reasoning quality comparable to the large model. Experiments on math problems show up to 2–3× speedup over standard autoregressive decoding or token-level speculative decoding, with minimal accuracy drop. ## update after rebuttal I thank the authors for their detailed rebuttal. I maintain my score after reading all the other rebuttal comments. Claims And Evidence: * "A Novel SpecSearch Framework". This claim is substantiated. * "Quality-Preserving Rejection Mechanism". This claim is substantiated. * "Theoretical Guarantee", also substantiated. * "Significant Speedup and Versatility" - Empirical evidence is only given with two mathematical reasoning datasets, MATH and GSM8K; I encourage the authors to also use more, perhaps non-mathematical, reasoning datasets if possible. Methods And Evaluation Criteria: The method appears to be novel and well structured, using a small LM for draft thoughts and a large LM for final verification, coupled with a tree-based or beam-based search and a process reward model for scoring. The chosen evaluation metrics of accuracy and inference latency are appropriate; however, the results would all benefit from error bars across random seeds and more baselines. Theoretical Claims: The paper provides probabilistic bounds showing that, with sufficient sampling and proper thresholds, SpecSearch retains or approximates the large model's accuracy. I did not check the proofs in detail; intuitively they make sense. Experimental Designs Or Analyses: Experiments are valid and use the MATH and GSM8K math reasoning benchmarks. A potential issue with the current setup is that focusing on just these two math datasets leaves the approach's performance on other reasoning domains and more diverse tasks unknown. 
Supplementary Material: Yes, skimmed parts. Relation To Broader Scientific Literature: Speculative decoding and chain-of-thought search approaches (like Tree-of-Thoughts, SEED) are cited, but: * The paper would benefit from a deeper comparison with closely related works like SEED or other structured speculative decoding methods. Essential References Not Discussed: All essential references are discussed. Other Strengths And Weaknesses: Significance: Significant inference speedup without heavily compromising accuracy with the proposed approach. However the paper could benefit from comparisons with newer speculative approaches. Clarity: The paper is well-written and easy to follow. Originality: This approach appears to be novel. Other Comments Or Suggestions: * Compare wall-clock time on real hardware with parallelization overhead considered. * Expand the discussion on failure cases where the small model’s generation misleads the system. Questions For Authors: * How sensitive is speedup to the small draft model’s performance? * Can the authors evaluate on a dataset that is not within the math domain? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Response to Reviewer EXez We sincerely thank the reviewer for the thoughtful and encouraging feedback! We hope our response has addressed your concerns. If so, we would be truly grateful if you would consider raising your score. If not, we welcome any further comments and will continue working diligently to improve our work. > ## Due to limited space, **we provide Tables and Figures in 14604.pdf in [Anonymous Link](https://anonymous.4open.science/r/SpecSearch-Rebuttal-4B78/14604.pdf)**. ### Weakness 1. > **1. Use more, non-mathematical reasoning datasets** We have conducted comprehensive evaluations across **three distinct dataset categories** to rigorously demonstrate the efficiency and generalizability of SpecSearch. Specifically, these include: (1) the **full GSM8K** dataset comprising 1,319 problems; (2) more challenging mathematical reasoning benchmarks, namely the **AIME** and **Olympiad** datasets; and (3) a **code-generation** benchmark. The results show that SpecSearch **consistently** and **significantly surpasses state-of-the-art approaches** across all three dataset categories, achieving speedups ranging from **2.04$\times$** to **2.84$\times$** while maintaining comparable reasoning accuracy. Please refer to **Weakness 1** in **Response to Reviewer cNpL** for detailed results. ### Weakness 2. > **2. Error bars from running across random seeds** We have evaluated SpecSearch and baselines under three random seeds to assess stability and performance. As shown in Figure 1 in 14604.pdf, SpecSearch delivers **consistent reasoning accuracy** and **significantly lower inference latency than baselines across all seeds**. > **3. Comparison with newer speculative approaches** We have compared SpecSearch with **three recent speculative methods**: two advanced speculative decoding approaches—Lookahead [1] and Eagle [2]—and one speculative tree search method, Treebon [3]. 
As shown in Table 5 in 14604.pdf, SpecSearch achieves **1.74× to 2.73× speedups** over these baselines, highlighting its strong acceleration capability while preserving high reasoning quality. [1] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding, ICML24. [2] EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty, ICML24 [3] TreeBoN: Enhancing Inference-Time Alignment with Speculative Tree-Search and Best-of-N Sampling, 2024.10 ### Weakness 3. > **4. A deeper comparison with SEED or other structured speculative decoding (SD) methods.** #### 1) The novelty over SEED and SD We discuss the novelty of SpecSearch compared to Scheduled Speculative Decoding (SEED) and existing SD methods, emphasizing key distinctions in our **bi-level speculative formulation**, **contextual verification**, **quality-preserving rejection strategies**, and **theoretical guarantees for reasoning quality**. Please refer to **Weakness 2** in **Response to Reviewer uX9o** for details. It is worth noting that the key distinction between SEED and standard SD methods lies in its use of **N small models** to **generate N token sequences in parallel at each node** of the reasoning tree, followed by a **Rounds-Scheduled strategy** to coordinate a shared large model for verification. However, in terms of the four aspects—**formulation**, **verification** and **rejection**, and **theoretical guarantees**—SEED remains consistent with SD methods. #### 2) The novelty over a recent structured speculative tree search method We discuss the novelty of SpecSearch compared to Treebon[3], emphasizing key distinctions in terms of **motivation**, **speculative formulation**, **rejection strategies**, and **theoretical guarantees**. Please refer to **Weakness 2** in **Response to Reviewer uX9o** for details. ### Weakness 4. > **5. 
Compare wall-clock time on real hardware with parallelization overhead considered** We measured wall-clock inference latency on a dual NVIDIA A800 GPU setup, including parallelization overhead. As shown in Table 6 in 14604.pdf, **SpecSearch achieves a 2.87$\times$ speedup in total latency**, fully accounting for parallelization costs. ### Weakness 5. > **6. Discussion on failure cases where the small model’s generation misleads the system** We discussed failure cases in Figure 10 in Appendix G.2 in our initial submission. Specifically, in this case, the small model misinterpreted the statement "A was born X years before B," reversing its meaning. Moreover, our **thought evaluator failed to identify this error** and **assigned a high score** of 0.89 to the incorrect reasoning step, ultimately leading to an incorrect final answer. ### Weakness 6. > **7. Speedup sensitivity to the small draft model’s performance** We have investigated SpecSearch’s performance using multiple small draft models. The results in Table 7 in 14604.pdf reveal that **SpecSearch achieves speedups ranging from 2.18$\times$ to 2.87$\times$**, underscoring its **robust acceleration capabilities across diverse small-model settings**.
Summary: This paper proposes SpecSearch to optimize thought generation through strategic collaboration between a small model and a large model at both thought and token levels. This approach efficiently produces high-quality reasoning thoughts. A key feature of SpecSearch is its quality-preserving rejection mechanism, which filters out low-quality thoughts, ensuring that only those meeting the standard of the large model are retained. Experimental results on the Qwen and Llama models show that SpecSearch achieves up to a 2.12× speedup compared to state-of-the-art methods, while maintaining comparable reasoning quality. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: N/A Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: N/A Essential References Not Discussed: no Other Strengths And Weaknesses: Strengths: The paper proposes a bi-level speculative thought generator that combines a small model and a large model at both thought and token levels. This design leverages the small model’s parallel generation efficiency for proposing diverse reasoning thoughts and the large model’s evaluation capability for filtering, achieving up to 2.12× speedup while maintaining comparable reasoning quality. Weaknesses: 1. The experiments are limited to two mathematical reasoning datasets (only subsets of MATH and GSM8K) and structured tasks. The framework’s effectiveness on more complex reasoning tasks (e.g., AIME and Olympiad bench) remains unverified, raising questions about its effectiveness and generalization ability. 2. While the integration of speculative execution with tree-search-based reasoning is practical, the core idea of combining small and large models resembles existing speculative decoding techniques (e.g., token-level draft-then-verify). The claimed "first generalization" of speculative execution to reasoning lacks a clear distinction from prior work [1]. 
[1] Qiu J, Lu Y, Zeng Y, et al. Treebon: Enhancing inference-time alignment with speculative tree-search and best-of-n sampling[J]. arXiv preprint arXiv:2410.16033, 2024. Other Comments Or Suggestions: See weaknesses part. Questions For Authors: See weaknesses part. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: # Response to Reviewer uX9o We sincerely thank the reviewer for the insightful and valuable feedback. We genuinely hope our response has addressed your concerns. If it has, we would be truly grateful if you would consider raising your score. If not, we warmly welcome any further suggestions and will continue doing our best to improve the submission. ### Weakness 1. > **1. The experiments are limited to only subsets of MATH and GSM8K. The framework’s effectiveness on more complex reasoning tasks (e.g., AIME and Olympiad bench) remains unverified.** We have conducted comprehensive evaluations across **three distinct dataset categories** to rigorously demonstrate the efficiency and generalizability of SpecSearch. Specifically, these include: (1) the **full GSM8K** dataset comprising 1,319 problems; (2) more challenging mathematical reasoning benchmarks, namely the **AIME** and **Olympiad** datasets; and (3) a **code-generation** benchmark. The results show that SpecSearch **consistently** and **significantly surpasses state-of-the-art approaches** across all three dataset categories, achieving speedups ranging from **2.04$\times$** to **2.84$\times$** while maintaining comparable reasoning accuracy. Due to limited space, please refer to **Weakness 1** in **Response to Reviewer cNpL** for detailed results. ### Weakness 2. > **2. The core idea of combining small and large models resembles existing speculative decoding (SD) techniques (e.g., token-level draft-then-verify).** We discuss the novelty of SpecSearch compared to existing SD techniques, emphasizing key distinctions in terms of **speculative formulation**, **verification** and **rejection strategies**, and **theoretical guarantees**. We will include the discussion in our revised paper. - **Bi-Level Speculative Formulation**: Unlike existing SD methods focused solely on tokens, SpecSearch treats both high-level thoughts and low-level tokens as bi-level speculative tasks. 
This enables (1) **Structural Alignment** with reasoning frameworks, where thoughts are fundamental units, and (2) **Compatibility** with standard SD methods through low-level token-level speculation. - **Contextual Verification for Higher Acceptance and Speedup**: Unlike existing SD methods that enforce strict token-level alignment, leading to frequent rejections, SpecSearch verifies the **contextual quality** of reasoning thoughts. This allows acceptance of correct but non-aligned outputs, substantially boosting acceptance rates and achieving significant speedups. - **Quality-Preserving Rejection Mechanism**: Unlike token-level rejection in standard SD methods, SpecSearch proposes **quality-preserving thought-level rejection** based on contextual quality. It discards entire thoughts only when their quality is lower than the large model's, ensuring high-quality reasoning throughout decoding. - **Theoretical Guarantee of Reasoning Quality**: While standard SD methods preserve token-level distributions, SpecSearch guarantees that reasoning quality remains comparable to that of the large model's outputs. > **3. The claimed "first generalization" of speculative execution to reasoning lacks a clear distinction from Treebon [1].** ([1] Treebon: Enhancing inference-time alignment with speculative tree-search and best-of-n sampling.) We discuss the novelty of SpecSearch compared to Treebon [1], emphasizing key distinctions in terms of **motivation**, **speculative formulation**, **rejection strategies**, and **theoretical guarantees**. We will include the discussion in our revised paper. - **Distinct Motivation**: Unlike Treebon, which targets accelerating best-of-n sampling through speculative rejection combined with tree search, SpecSearch is the first to properly generalize speculative execution to tree-based LLM reasoning. 
- **Bi-Level Speculative Formulation**: Treebon treats fixed-length token sequences as speculative tasks, while SpecSearch introduces a **flexible bi-level approach**—modeling full reasoning thoughts as high-level tasks and tokens as low-level ones. Unlike Treebon’s fixed-length design, SpecSearch leverages LLMs' reasoning capabilities to generate semantically coherent thoughts of dynamic length. - **Quality-Preserving Rejection Mechanism**: Treebon rejects a fixed proportion of token sequences using a preset threshold. SpecSearch, instead, scores reasoning thoughts and **adaptively rejects those with lower contextual quality based on the large model’s reasoning quality**, enabling finer control and better quality preservation. - **Theoretical Guarantee**: Unlike Treebon, which lacks theoretical guarantees, SpecSearch provides **formal assurance that reasoning quality remains uncompromised**, matching that of the large model's outputs.
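To make the adaptive, quality-preserving rejection rule discussed in the rebuttal above concrete, here is a minimal Python sketch. All names are hypothetical and the estimator is deliberately simplified: the paper's actual quality estimator and its theoretical guarantees are more involved than this exponential moving average.

```python
class AdaptiveRejector:
    """Sketch of quality-preserving thought-level rejection.

    Keeps an exponential moving average of evaluator scores observed on
    large-model thoughts and rejects a small-model draft only when its
    score falls below that running estimate. This contrasts with a
    preset fixed threshold: the bar adapts to the large model's actual
    reasoning quality on the current problem.
    """

    def __init__(self, init_quality=0.5, momentum=0.9):
        self.large_quality = init_quality
        self.momentum = momentum

    def observe_large(self, score):
        # Update the running estimate of the large model's thought quality.
        self.large_quality = (self.momentum * self.large_quality
                              + (1 - self.momentum) * score)

    def accept(self, draft_score):
        # Accept the draft iff it matches the large model's standard.
        return draft_score >= self.large_quality
```

The design choice illustrated here is the one the rebuttal emphasizes against Treebon: rejection is driven by a comparison with the large model's quality rather than by discarding a fixed proportion of candidates.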
Summary: This paper proposes a speculative search framework, that extends speculative decoding framework to reasoning chains. SpecSearch framework works by rejecting and selecting both at thought and token levels. The authors show both performance and speedup improvements. Claims And Evidence: The claim is that SpecSearch - applying speculative decoding at both thought and token level improves reasoning quality and provides speedup. It is backed by evidence across both Qwen and LLama models across both MATH-100 and GSM8K-100 datasets. Methods And Evaluation Criteria: Yes Theoretical Claims: I did not check the theoretical claims. Experimental Designs Or Analyses: The experimental designs and corresponding analyses are sound despite one significant concern. The authors only evaluate the approach on 100 randomly selected subsets of both MATH and GSM8K datasets. While the claims of latency can be adequately claimed, the accuracy claims might not be significant and might need more samples for a convincing argument. Supplementary Material: I did not review the supplementary material Relation To Broader Scientific Literature: The contributions of this paper can be broadly useful. Speculative decoding is a standard technique both in industry and academia, and its extension to reasoning chains, if claims hold, has wide implications. Essential References Not Discussed: None that I am aware of Other Strengths And Weaknesses: (Covered above) Other Comments Or Suggestions: (Covered above) Questions For Authors: (Covered above) Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Response to Reviewer cNpL We sincerely thank the reviewer for the insightful, valuable, and positive comments. We address the concerns in detail as follows. We sincerely hope that our response could properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission. ### Weakness 1. > **1. The authors only evaluate the approach on subsets of both MATH and GSM8K datasets. The accuracy claims might need more samples for a convincing argument.** We have conducted comprehensive evaluations across **three distinct dataset categories** to rigorously demonstrate the efficiency and generalizability of SpecSearch. Specifically, these include: (1) the **full GSM8K** dataset comprising 1,319 problems; (2) more challenging mathematical reasoning benchmarks, namely the **AIME** and **Olympiad** datasets; and (3) a **code-generation** benchmark. As illustrated in the following Table, SpecSearch **consistently** and **significantly surpasses state-of-the-art approaches** across all three dataset categories, achieving speedups ranging from **2.04$\times$** to **2.84$\times$** while maintaining comparable reasoning accuracy. These findings highlight SpecSearch's versatility and robustness, demonstrating substantial improvements in inference speed with minimal or no compromise in accuracy **across diverse tasks**. We will include the results in our revised paper. - **Setup** Throughout our experiments, we utilize quantized versions of Qwen2.5-72B-Instruct and Qwen2.5-7B-Instruct as the large and small language models, respectively. Additionally, we incorporate MATH-psa as the Process Reward Model and employ beam search as the search algorithm. 
- **Results** - **Full GSM8K Dataset (1,319 Problems)**: SpecSearch achieves a substantial **2.84$\times$ speedup** compared to the AR baseline, with only a minimal accuracy reduction of 0.83%. This result highlights SpecSearch’s capability to effectively scale to larger problem sets while preserving high reasoning accuracy. - **High-Difficulty Mathematics (AIME and Olympiad Bench)**: We conduct experiments on the AIME and Olympiad Bench (OE_TO_maths-zh_CEE) datasets. Notably, SpecSearch **maintains identical accuracy** to the SpS method while achieving **speedups of 1.21$\times$ and 1.37$\times$**, respectively. These results demonstrate the method’s effectiveness in handling challenging, competition-level mathematics problems. - **Code Generation (HumanEval)**: To assess SpecSearch beyond mathematical reasoning, we evaluate its performance on the HumanEval code-generation benchmark. The results show that SpecSearch achieves a **2.16$\times$ speedup** over the AR without any reduction in accuracy. Furthermore, it **surpasses the SpS by 1.22% in accuracy** while simultaneously delivering a **1.41$\times$ speedup**. These results underscore SpecSearch's strong generalization capabilities across diverse domains. 
| **Math Dataset** | **GSM8K-1319** | | | |
|:---:|:---:|:---:|:---:|:---:|
| Methods | Reasoning Accuracy (\%) | Average Inference Latency (s) | Speedup (vs AR) | Speedup (vs SpS) |
| AR | 96.66 | 144.63 | NA | 0.48 |
| SpS | 96.66 | 70.04 | 2.06 | NA |
| SpecSearch (Ours) | 95.83 | **50.99** | **2.84** | **1.37** |
| **Math Dataset** | **AIME** | | | |
| Methods | Reasoning Accuracy (\%) | Average Inference Latency (s) | Speedup (vs AR) | Speedup (vs SpS) |
| AR | 16.67 | 562.89 | NA | 0.57 |
| SpS | 13.33 | 318.71 | 1.77 | NA |
| SpecSearch (Ours) | 13.33 | **264.44** | **2.13** | **1.21** |
| **Math Dataset** | **Olympiad Bench** | | | |
| Methods | Reasoning Accuracy (\%) | Average Inference Latency (s) | Speedup (vs AR) | Speedup (vs SpS) |
| AR | 63.75 | 358.44 | NA | 0.67 |
| SpS | 58.75 | 241.80 | 1.48 | NA |
| SpecSearch (Ours) | 58.75 | **176.02** | **2.04** | **1.37** |
| **Coding Dataset** | **HumanEval** | | | |
| Methods | Reasoning Accuracy (\%) | Average Inference Latency (s) | Speedup (vs AR) | Speedup (vs SpS) |
| AR | 85.37 | 342.18 | NA | 0.65 |
| SpS | 84.15 | 223.30 | 1.53 | NA |
| SpecSearch (Ours) | 85.37 | **158.43** | **2.16** | **1.41** |

--- Rebuttal Comment 1.1: Comment: I think this paper is definitely improved with more experiments. I am recommending an accept but with the caveat of me not knowing the literature as well as an expert in this area. --- Reply to Comment 1.1.1: Comment: Dear Reviewer cNpL, Thank you very much for your positive feedback and for recommending our paper for acceptance! We sincerely appreciate your thoughtful comments, and we are especially grateful for your acknowledgment of the improvements brought by the additional experiments. We will include these additional results in the revised version, which we believe will further enhance the clarity and completeness of our work. Once again, thank you for your time and thoughtful review.
Equivariant Neural Tangent Kernels
Accept (poster)
Summary: This paper derives neural tangent kernels (infinite width) for group convolutional networks, and proves an equivalence between data augmentation and equivariance. They provide experimental validations for the finite-width framework. They also test group-equivariant kernels and compare them to non-equivariant ones. ## update after rebuttal I thank the authors for their rebuttal. I will keep my score since Th. 5.2 is about invariance, not equivariance. I feel that a result on equivariance would be needed to recommend strong acceptance, since this paper is titled Equivariant NTK. Claims And Evidence: The claim: “we show an interesting relationship between data augmentation and group convolutional networks. Specifically, we prove that they share the same expected prediction at all training times and even off-manifold” is a bit hard to understand. Specifically: what do the authors mean by “expected” prediction: averaged over initializations? or over the data distribution? Additionally, what is meant by “off-manifold”? In the main text, the authors refer to “ensembles” of neural networks, which I think is related. Could the authors clarify the framework and the claim there? It seems that most results apply to equivariant neural networks with average pooling layers. How about max-pooling layers? Which of the authors’ results hold or do not hold here? If the results are restricted to average pooling, then the claims should be updated accordingly. Methods And Evaluation Criteria: Yes, the datasets and benchmarks are well-suited. Theoretical Claims: For the theorems, not much is said about the number of layers. Does it matter? Do the equivariant and non-equivariant architectures have the same number? In Th. 5.1, is the assumption of “being related with group averaging” restrictive for the equivariant neural network? In Th. 5.2, the N^GC is G-invariant: does the theorem apply for a G-equivariant neural network? Why restrict to invariance here? 
Experimental Designs Or Analyses: I did not see any issue. Supplementary Material: I read through it. Relation To Broader Scientific Literature: Missing Taco Cohen’s dissertation. Could the authors discuss an extension to non-regular group-equivariant NNs (e.g., steerable): would the results generalize to this case: why or why not? How about the equivariant architectures from Finzi et al., A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups? Essential References Not Discussed: Taco Cohen’s dissertation. Other Strengths And Weaknesses: The introduction to the NTK could be tough to follow for someone not familiar with the topic. I would also recommend *not* starting with a reference to the appendix: the main text should be self-contained without readers having to go to the appendix. Example clarification point: why is it called the “frozen” NTK? Other Comments Or Suggestions: Some typos, e.g. We proof -> We prove. Theorem 4.2: if it’s the lifting layer, then we should have (1) and (2) instead of (l) and (l+1)? Eq (8): It cannot be the same rho acting on f or on N(f), since f and N(f) do not have the same support: f’s support is not necessarily G whereas N(f) has support G. Questions For Authors: NA. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Expectation and “off-manifold” We are thankful for pointing out this confusion. The mentioned ensembles are a collection of independently initialized NNs and the average is understood over this family. For considerations about single networks, see heading “Ensembles” in the rebuttal for reviewer hpPR. The presented theorems do not make any particular assumptions over the data distribution. The term “off-manifold” refers to the *manifold hypothesis* of the data distribution. Our presented results hold for arbitrary inputs and hence, also hold off-manifold. We will clarify this in the updated manuscript. ## Average vs. max pooling Indeed our recursion relation in Theorem 4.3 only applies to average pooling. Since the expectation value and the maximum operation do not commute this is already true for MLPs and not a specific property of GCNNs. The average pooling which appears in Theorem 5.2 on the other hand is part of our result: Training an MLP with data augmentation corresponds at infinite width to a GCNN with a group average layer. ## Network depth in Theorems 5.2 and 5.3 Indeed, both Theorem 5.2 and 5.3 hold for arbitrary number of layers. For example, the equivalence between the fully connected network defined in (33) and the corresponding GCNN defined in (34) holds for any number of layers $L$. We will add a clarifying remark. ## Assumption of NTK-relation in Theorem 5.1 Theorem 5.1 yields a condition on the NTKs to relate augmented to non-augmented networks. The group average on the right hand side of (32) is a consequence of the data augmentation. Theorems 5.2 and 5.3 then provide particular architectures that satisfy this condition. ## Extension to equivariance To extend our results to equivariant networks, one would first need to extend Theorem 5.1 to equivariant augmentation. 
This is indeed possible by considering a network $\mathcal{N}$ which maps feature maps $f:X\rightarrow\mathbb{R}^n$ into feature maps $\mathcal{N}(f):G\rightarrow\mathbb{R}^{n'}$ by optimizing $\mathcal{N}(\rho_{\text{reg}}(g)f)$ against targets $\rho_{\text{reg}}(g)\hat{f}$ for all $g\in G$. The result is that the mean predictions agree if the NTKs of the two networks satisfy $$ \Theta^{g,g'}(f,f')=\frac{1}{\mathrm{vol}(G)}\int_{G}\mathrm{d}h\Theta_{\mathrm{aug}}^{hg,g'}(\rho_{\text{reg}}(h)f,f'). $$ In order to find networks whose NTKs satisfy this relation, new layers need to be introduced since in the infinite-width limit, the NTK of MLPs becomes proportional to the unit matrix in the output channels, trivializing the feature map. Here, a separation of the channels into a part which is taken to infinity and a part which is kept finite, e.g. $$ f'_i(g)=\sum\_{j=1}^{n_c}\int\_X\mathrm{d}x\kappa\_{ij}(g,x)f_j(x), \qquad n_c \to \infty, $$ which corresponds to a fully-connected layer at finite width, could be suitable. We haven’t repeated the calculations for this layer, but it would be an interesting extension of our results which we will mention in the manuscript. ## Non-regular representations Our analysis focuses on group convolutions. These can also be used for (scalar) point clouds by defining the input feature map to be $f(x)=\sum_i \delta(x-x_i)f_i$ for features $f_i$ at $x_i$. Then, the GCNN layers are equivariant with respect to transformations of the positions of the input features. However, this does not cover features transforming in vector- or tensor representations. For these, the setup would need to be extended. Although we have not performed this extension, we do not see conceptual roadblocks at the moment. ## Relation to Finzi et al. In contrast to this work, we specialize to the regular representation. This has the advantage that we can use the convolution theorem to compute the kernel recursions in Fourier space. 
Extending our framework to arbitrary representations would be a very interesting direction. A starting point would be to define suitable infinite-width limits of the relevant layers, see e.g. the section “Extension to equivariance” above. ## Introduction to the NTK We acknowledge that NTKs are a topic of substantial technical difficulty. We will revise the background section to make it more accessible, stressing that the additional section in the appendix is optional background material. We will also explain the term “frozen” NTK that was adapted from other references (see e.g. Mario Geiger et al. [J. Stat. Mech. 2020]). It refers to the fact that the NTK becomes time-independent during training in the infinite width limit. ## Further remarks - We will add Taco Cohen’s PhD thesis to the literature review. - Thank you for pointing out the typos, which we will fix in a revised version. - For full generality, we formulated a general layer-$\ell$ lifting layer, because one may include other non-group convolutional layers before as a preprocessing step. - We will clarify the notation in (8). Indeed, the regular representations on both sides act on functions with different domains.
Summary: This paper studies the training dynamics of equivariant neural networks via neural tangent kernels. The authors derive NTKs for group convolutions and nonlinearities, and also consider group convolutions for SO(3) in the Fourier domain (similar to spherical CNNs and G-steerable CNNs). The authors show that non-equivariant models with data augmentation converge to a specific GCNN architecture. Empirical results show that the NTK converges to the analytical expression for GCNNs. They also show that equivariant GCNNs outperform standard MLPs and data augmentation helps the standard MLPs converge to the same performance. Claims And Evidence: Yes the claims are specific and clearly supported mostly with proofs and also with empirical results. Methods And Evaluation Criteria: Yes Theoretical Claims: No, I did not check for correctness of any of the proofs. Experimental Designs Or Analyses: No Supplementary Material: I skimmed over the proofs. Relation To Broader Scientific Literature: The key contributions of this paper are deriving NTKs for group convolutions and using these results to show that a standard CNN on augmented data is equivalent to training an equivariant CNN on unaugmented data. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: - Nice derivations of the NTKs for group convolutions - Practical use case of the derived NTKs for comparison with data augmentation - Experiments seem to practically support the paper contributions/claims Weaknesses: - See questions Other Comments Or Suggestions: None Questions For Authors: - Is the comparison with data augmentation and equivariant networks limited to ensembles only? - As I am not familiar with NTKs, can the NTK for equivariant NNs describe the sample complexity improvements over standard NNs (perhaps the eigenvalues)? 
In the comparison with data augmentation, in my opinion, the greatest benefit of equivariant NNs over data augmentation is that they can achieve better performance with fewer data samples. E.g. a C_4 equivariant network only needs to see one sample while data augmentation with a standard network needs to see all 4 samples. I can understand why data augmentation leads to a specific GCNN, but it would seem that GCNNs would be much more sample efficient. Code Of Conduct: Affirmed. Overall Recommendation: 3
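The reviewer's C_4 example can be made concrete: averaging a non-equivariant model's outputs over the four rotations yields an exactly rotation-invariant predictor, which is the mechanism behind the augmentation/GCNN correspondence. A minimal numpy sketch (the linear model `f` is an arbitrary stand-in, not any architecture from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8 * 8,))  # weights of a toy non-equivariant linear "model"

def f(x):
    # arbitrary non-invariant scalar model on an 8x8 image
    return float(W @ x.ravel())

def f_avg(x):
    # symmetrize over the cyclic rotation group C_4 (0, 90, 180, 270 degrees)
    return np.mean([f(np.rot90(x, k)) for k in range(4)])

x = rng.normal(size=(8, 8))
# f itself is not invariant, but the C_4-averaged predictor is
assert not np.isclose(f(x), f(np.rot90(x)))
assert np.isclose(f_avg(x), f_avg(np.rot90(x)))
```

The averaged predictor sees the same four function values for any rotated input, only in a different order, which is exactly why augmentation over a finite group and group pooling produce the same invariant function.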
Rebuttal 1: Rebuttal:

## Ensembles

Thank you for this important question. Although Theorem 5.1 is formulated in terms of ensembles, the statement in fact also holds for individual models at finite width if the infinite-width NTKs in (32) are replaced by the empirical NTKs
$$
\Theta(x,x')=\left(\frac{\partial\mathcal{N}(x)}{\partial\theta}\right)^\top \left(\frac{\partial\mathcal{N}(x')}{\partial\theta}\right)\,,
$$
as long as they are initialized in such a way that $\mathcal{N}(x)=\mathcal{N}^{\text{aug}}(x)=0$ $\forall x$ at initialization. Theorem 5.2, however, only holds for infinite-width NTKs. Therefore, the combined statement of the equivalence of data augmentation and GCNNs also holds for individual models at infinite width, if $\mathcal{N}^{\text{FC}}(x)=\mathcal{N}^{\text{GC}}(x)=0$ $\forall x$ at initialization. We will add a corresponding comment to the manuscript.

## Sample efficiency

This is indeed an interesting line of further research for which our study lays the groundwork. It is well known, see [arXiv:1912.13053], that the condition number of the NTK is related to the convergence speed of training. One could therefore compare the condition numbers of the equivariant and standard NTKs to deduce insights into sample efficiency. This is however a task of significant technical difficulty (see Appendix B of the publication linked above, which we’d need to generalize to the equivariant case), as it requires the analytical study of the NTKs’ spectra as well as their corresponding phase structure, which warrants follow-up work.
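For readers less familiar with the empirical NTK defined above, it can be evaluated directly from per-parameter gradients. A minimal numpy sketch for a toy one-hidden-layer scalar network (the architecture, width, and inputs are illustrative assumptions, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(1)
d, width = 5, 64
W = rng.normal(size=(width, d)) / np.sqrt(d)    # hidden-layer weights
v = rng.normal(size=(width,)) / np.sqrt(width)  # readout weights

def grad_theta(x):
    # gradient of the scalar network N(x) = v . relu(W x) w.r.t. all parameters
    h = W @ x
    dv = np.maximum(h, 0.0)                      # dN/dv = relu(W x)
    dW = (v * (h > 0.0))[:, None] * x[None, :]   # dN/dW_ij = v_i 1[h_i > 0] x_j
    return np.concatenate([dv, dW.ravel()])

def empirical_ntk(x, xp):
    # Theta(x, x') = (dN(x)/dtheta)^T (dN(x')/dtheta)
    return grad_theta(x) @ grad_theta(xp)

x1, x2 = rng.normal(size=d), rng.normal(size=d)
K = np.array([[empirical_ntk(a, b) for b in (x1, x2)] for a in (x1, x2)])
assert np.isclose(K[0, 1], K[1, 0])             # the kernel is symmetric
assert np.all(np.linalg.eigvalsh(K) >= -1e-9)   # Gram matrix is positive semidefinite
```

Since the kernel is a Gram matrix of gradient vectors, symmetry and positive semidefiniteness hold by construction, at any width.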
Summary: In this work, the authors propose a way to understand the training dynamics of equivariant models by deriving neural tangent kernels for a broad class of equivariant architectures based on group convolutions. For rototranslations in 2D and 3D, the authors show that equivariant NTKs outperform their non-equivariant counterparts as kernel predictors for CIFAR10, MNIST, medical image classification, and property prediction on the QM9 dataset. The theoretical formulation consists of recursive relations for the NTK and the NNGP for group convolutional layers, lifting layers, and group pooling layers. These allow efficient calculation of these kernels for arbitrary group convolutional architectures and thus provide the necessary tools to analytically study their training dynamics in the large width limit. Experimental claim: Networks trained with data augmentation converge to group convolutional networks at infinite width. ## post rebuttal I wish to keep my score after reading the authors' rebuttal. I do not wish to update the score, as the experiments still feel limited (although the authors do provide an additional experiment in the rebuttal). Claims And Evidence: Yes, they are clear. The one mentioned below is unclear, and if the authors could provide clarification, that would be helpful. For the experimental claim: ln 419 (column 2) '.. data augmentation converge to group convolutional networks at infinite width. We verify that this also holds approximately at finite width', but it does not seem fully justified, as it is a much simpler experiment. Methods And Evaluation Criteria: The proposed experiments, datasets, and evaluation criteria make sense for the problem and method proposed. Theoretical Claims: I have looked at the proof for Theorem 5.1 only. It has no issues. Experimental Designs Or Analyses: Yes. See weaknesses. Supplementary Material: Yes, E and F sections.
Relation To Broader Scientific Literature: A few important citations are missed in the paper and thus the connections to them are not made. The key contributions of the paper are related to the broader literature; for example, the connection between spectral components and the NTK has been made in the literature. However, the GP formalism with equivariant NTK is relatively novel. Although the claims in [1] are of a similar flavor to this paper, the theoretical formalism is different.
1. On the Ability of Deep Networks to Learn Symmetries from Data: A Neural Kernel Theory, Perin et al.
Essential References Not Discussed: Missed references:
1. On genuine invariance learning without weight-tying, Moskalev et al. [TAGML, ICML 23]
2. On the Ability of Deep Networks to Learn Symmetries from Data: A Neural Kernel Theory, Perin et al.
3. Data Augmentation vs. Equivariant Networks: A Theory of Generalization on Dynamics Forecasting, Wang et al. [ICML Workshop 22]
4. Fast, Expressive Equivariant Networks through Weight-Sharing in Position-Orientation Space, Bekkers et al. [ICLR 24]
5. Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks, Bordelon et al. [ICML 20]
6. Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks, Canatar et al. [Nat Comms 2021]
Other Strengths And Weaknesses:
## Strengths
- The paper is well motivated theoretically, and the experimental section supports the claims made in the paper.
- The paper is fairly well written.
## Weaknesses
- The experiments comparing data augmentation with finite-width neural networks are only shown for simpler toy datasets like CIFAR10 and MNIST.
- Important citations and connections are missing.
Other Comments Or Suggestions: Additional discussion of and connections to the missed papers in the literature would be useful.
Questions For Authors: 1.
What about comparing data augmentation and finite-width neural networks for more complex tasks like force prediction on QM9? 2. Elaborate on ln 419 (column 2): 'we proved that networks trained with data augmentation converge to group convolutional networks at infinite width. We verify that this also holds approximately at finite width.' Explain what is considered approximate here. 3. ln 422 (column 1), molecular energy regression: We know that energy is a scalar and an invariant quantity for a molecule under SO(3) transformations. Compared to a vector quantity like force prediction/regression, energy is a relatively easier task (as it is a scalar). Comment on the proposed claims in the paper in light of different tasks based on their geometric complexity. Also see Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

## Approximate results at finite width

Our theoretical claims hold for infinitely wide networks and in the ensemble mean, i.e. for infinitely large ensembles (for comments on extensions to single networks see the rebuttal to reviewer hpPR, heading “Ensembles”). In this case, they predict exact agreement between the mean prediction of the invariant GCNN (due to group pooling) and the augmented network throughout training for arbitrary inputs. In our experiments, we test the agreement of the mean predictions for finite ensembles of finite-width networks and find that the predictions align more for larger ensembles. As expected, alignment is imperfect due to finite width; thus we call these results “approximate”. We will clarify this in the revised version.

## Extension to equivariant infinite width regression

An extension of our framework to equivariant tasks like quantum mechanical force field prediction is indeed an interesting point. Using the GCNN layers we analyze, it is straightforward to perform regression on targets that are signals on $\mathrm{SO}(3)$, i.e. $f_\text{target}: \mathrm{SO}(3) \to \mathbb{R}^{n_\text{out}}$, in an equivariant way. Here, equivariance is understood with respect to the regular representation, see eq. (5). An equivariant model that is suitable for regression on vector quantities like forces necessitates an output layer that is equivariant with respect to the defining representation of $\mathrm{SO}(3)$. For this, the presented framework would need to be extended by additional layer types and their corresponding kernel relations. (One approach could be to take the argmax of the signal on $\mathrm{SO}(3)$ at the output layer and apply the resulting rotation to a fixed reference vector.) Furthermore, the infinite-width limit of the non-equivariant networks needs to be taken with care in this case. We have outlined a possible way to achieve this in the reply to reviewer *kzsw* under “Extension to equivariance”.
As mentioned in the discussion by Cohen et al. [ICLR 2018], an extension to e.g. steerable CNNs would also be a suitable approach. We consider this an interesting further research direction and will add a discussion to the manuscript.

## Complexity of the finite width experiments

We recognize the interest in experiments with more demanding tasks. Since our theorems draw a connection between data-augmented and invariant GCNN ensembles, we need to choose an invariant task to test the analytic results numerically. We have now repeated the experiment on a subset of the *NCT-CRC-HE-100K* data set (https://zenodo.org/records/1214456) consisting of histological images. We have trained on down-sampled images with a resolution of `32x32` pixels and produced OOD samples in this input space. The results are shown in https://postimg.cc/hQzcRpkf, which were obtained in the same fashion as Figures 4 and 6. We use the same architecture as for the CIFAR10 experiment apart from twice as many channels in each hidden layer, and we also adapted the learning rate. Each curve is computed 20 times; we plot the means of the metric over the runs and their standard deviations. The larger ensembles reach a validation accuracy of around 80%. The results further support our theoretical claims and we will add them to the manuscript.

## Citations

> 1. On genuine invariance learning without weight-tying, Moskalev et al. [TAGML, ICML 23]
> 2. On the Ability of Deep Networks to Learn Symmetries from Data: A Neural Kernel Theory, Perin et al.
> 3. Data Augmentation vs. Equivariant Networks: A Theory of Generalization on Dynamics Forecasting, Wang et al. [ICML Workshop 22]
> 4. Fast, Expressive Equivariant Networks through Weight-Sharing in Position-Orientation Space, Bekkers et al. [ICLR 24]
> 5. Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks, Bordelon et al. [ICML 20]
> 6. Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks, Canatar et al. [Nat Comms 2021]

We thank the reviewer for suggesting a number of relevant references and will add them to a revised version of the manuscript.

### Relation to the work by Perin et al. [arXiv:2412.11521]

Thank you for pointing out Perin et al. We will include this very recent and interesting contribution in the revised manuscript. As you point out, although they study a similar problem, they focus on a very specific toy problem for which they analyze the spectrum of the NTK in detail. The only equivariant architecture they consider are CNNs. In contrast, our framework captures the learning dynamics of arbitrary group convolutional neural networks.
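The improved agreement for larger ensembles discussed in this rebuttal can be illustrated with a toy noise model (our simplification, not the paper's networks): if each finite-width member predicts the infinite-width mean plus an independent fluctuation, the ensemble mean's deviation shrinks like 1/sqrt(m).

```python
import numpy as np

rng = np.random.default_rng(3)
target = 1.0  # stand-in for the infinite-width mean prediction

def ensemble_deviation(m, trials=500):
    # each member predicts target + independent "finite-width" noise;
    # measure the mean absolute deviation of the ensemble mean from target
    preds = target + rng.normal(scale=0.5, size=(trials, m))
    return np.abs(preds.mean(axis=1) - target).mean()

dev_1, dev_64 = ensemble_deviation(1), ensemble_deviation(64)
# deviations shrink roughly like 1/sqrt(m): m = 64 cuts them by about a factor of 8
assert dev_64 < dev_1 / 4
```

This is only the trivial averaging effect; the nontrivial content of the theorems is that the *limit* of the augmented-ensemble mean equals the GCNN prediction.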
Concentration Distribution Learning from Label Distributions
Accept (poster)
Summary: This paper proposes Concentration Distribution Learning (CDL), a new variant of Label Distribution Learning (LDL). In CDL, in addition to the common label distribution that represents the descriptions of each class, there is a dimension that represents the background information. Based on an assumption about the generation process of the concentration distribution, an estimation method is proposed to determine the concentration distribution. Experimental results validate the effectiveness of the proposed method, and a new CDL dataset is also proposed. Claims And Evidence: The claim that traditional label distribution learning does not take the background into account is novel in the literature and also makes sense. It is reasonable that two images with different backgrounds may have similar label distributions, and traditional label distributions may not describe the images well. Therefore, I think the novelty of the proposed setting is good. Methods And Evaluation Criteria: - The proposed method can learn a good label distribution under the proposed assumption. However, I think that the proposed assumption may be a bit strong. First, it assumes that the background vector is an all-one vector, which may be overly simple, since it forces the background to contribute equally to each class. - Second, it assumes that the concentration distribution is an addition of the previous label distribution and the new concentration distribution. This may be more complex in real applications. I think this point needs to be clarified. Theoretical Claims: No theoretical claims are made in this paper. Experimental Designs Or Analyses: - The experiments consist mainly of two parts. The first part is a synthetic experiment, where traditional LDL experimental datasets are used and the last dimension is considered as a background.
Such an experimental setting may differ from real-world scenarios because the last dimension may not serve as the background and its generation assumption may not be consistent with the assumption proposed in this paper (Eq. (1)). - For the second part, a new CDL dataset is constructed, and it is surprising to see that the results of the proposed method are very similar to the ground-truth label (Fig. 4). I am curious about these good experimental results. Can the authors elaborate more on the success of the proposed method on this dataset? Also, it might be helpful to include more experimental results on more data points. Supplementary Material: I have read the supplementary material. It is good to provide the code of the paper for reproducibility. Relation To Broader Scientific Literature: Not available. Essential References Not Discussed: There are no essential references that need to be discussed. Other Strengths And Weaknesses: ### Strengths - The novelty of the new CDL setting is good and practical for the LDL literature. This may open new research directions for LDL. - The proposed method is simple and effective, validated by extensive experimental results. ### Weaknesses - The writing of the paper needs to be improved. There are some typos and unclear expressions in the paper. For example, the word "are" is missing in line 18 of the abstract. There are some similar problems in the main text, which should be carefully reviewed and revised. - Some equations seem to be redundant. For example, the left column on page 4 contains some equations that can be greatly simplified, since only simple calculations are performed. - The assumption of the background vector generation process is too simple, which may limit its applications in real-world scenarios. Other Comments Or Suggestions: - The writing of the paper can be improved and revised to be clearer.
- More justification of the data generation model and its practicality in real-world applications should be given. - Theoretical analysis of the proposed method can be done to improve the paper. I am willing to increase my score if my concerns can be addressed properly. Questions For Authors: - I have a question about the CDL examples in the introduction. Fig. 1 is quite intuitive: different images with significant differences in the background should have different label distributions. Fig. 2 is not analogous, because the difference, i.e. the expression, may not correspond to the background; it is the main object of the image. Therefore, I am not sure that the paper's core idea, that the background of the image is important, can be applied here. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your time and effort in reviewing our paper. In what follows, we will address your **questions** in detail. **Experimental Designs Or Analyses 1: The traditional LDL experimental datasets may differ from real-world scenarios because the last dimension may not serve as the background and its generation assumption may not be consistent with the assumption proposed in this paper (Eq. (1)).** The paradigm of concentration distribution learning is a new concept, so there are no available CDL datasets. To this end, we construct the first real-world CDL dataset in our paper and conduct experiments on it to verify the effectiveness of our model in excavating the background concentration. By hiding the last dimension of LDL datasets, we transform LDL datasets into CDL datasets. If our model can recover this hidden dimension, which is regarded as the background concentration, its ability to learn concentration distribution can be proved. **Experimental Designs Or Analyses 2: Can the authors elaborate more on the success of the proposed method on the CDL dataset? Also, it might be helpful to include more experimental results on more data points.** Thanks for your advice. In the final version of this paper, we will present more results on the constructed CDL dataset and some other data points. **Weakness 1 \& 2: The writing of the paper needs to be improved. There are some typos and unclear expressions in the paper, and some equations seem to be redundant.** Thanks for pointing out our shortcomings. In the final version of our paper, we will carefully correct these errors and simplify our equations. **Weakness 3: The assumption of the background vector generation process is too simple, which may limit the applications in the real-world scenarios.** The paradigm of concentration distribution learning is a new concept raised by this paper.
Since we have no further prior knowledge about the background concentration distribution pattern, the assumption that it is evenly spread across each class of the real label distribution is general. Furthermore, the experimental results also validate the rationality of this assumption. **Other Comments Or Suggestions 1: The writing of the paper can be improved and revised to be clearer.** Thanks for your advice. This paper will be carefully revised in its final version. **Other Comments Or Suggestions 2: More justification of the data generation model and its practicality in real-world applications should be given.** For the data generation model, please refer to **Weakness 3** of this response. Additionally, in the final version of this paper, we will present more real-world applications of concentration distribution learning. **Other Comments Or Suggestions 3: Theoretical analysis of the proposed method can be done to improve the paper.** Please refer to the response to **Weakness 1** from reviewer 3FJ2. **Question: Fig. 2 is not similar because the difference, i.e. the expression, may not correspond to the background. It is the main object of the image. Therefore, I am not sure that the core idea of the paper that the background of the image is important can be applied.** The "background concentration" is named after the background, but it is an abstract concept and does not actually refer to the background of the image. In the case of facial expression, the background concentration doesn't refer to any expression; it represents the description degree of “no emotion”. In other words, the stronger the emotion, the lower the background concentration that should be assigned.
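One simple reading of the evenly-spread assumption above can be sketched numerically. The constants below (`K`, `b`, `c`) are made up for illustration, and the paper's Eq. (1) may differ in detail:

```python
import numpy as np

# illustrative reading of the evenly-spread assumption: a background
# concentration b contributes b/K to each of the K visible classes
K, b = 4, 0.2
c = np.array([0.1, 0.4, 0.2, 0.1])   # per-class concentrations, summing to 1 - b
d = c + b / K                         # observed label distribution
assert np.isclose(d.sum(), 1.0)

# once b is estimated, the assumption can be inverted to recover c
recovered = d - b / K
assert np.allclose(recovered, c)
```

Under this reading, the uniform background term is the only extra degree of freedom, which is why the assumption is both general and easy to invert once b is known.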
Summary: The paper proposes a new label distribution learning method based on a novel hypothesis. Hypothesis: In existing label distribution learning, there is an issue where the current labels within the label set cannot adequately describe the samples. Based on this hypothesis, the authors suggest that concentration can be used to describe the extent to which existing labels describe the samples, and accordingly augment and expand the label distribution. They then design a label distribution learning algorithm tailored to this hypothesis, setting concentration as a part of all label distributions. Experiments show that the algorithm proposed in the paper performs well on almost all datasets used, achieving the best average ranking, with a quite significant gap in average ranking to the other comparison algorithms. ## update after rebuttal After discussing with the authors, I believe the research problem addressed in this paper is important and the methodology employed demonstrates novelty. However, some aspects of the presentation remain unclear. Taking these factors into consideration, I have decided to raise my score to 3. Claims And Evidence: I raise doubts about the label distribution assumption proposed in the paper. The authors believe that in the existing label distribution learning, the labels in the label space are insufficient to describe the samples, which contradicts the basic assumption in the consensus definition of label distribution learning that ‘the labels in the label space can fully describe each sample.’ The approach taken in this paper is to add an extra ‘concentration’ to each label distribution component, meaning that each label distribution component in the dataset requires an additional quantity to correctly describe each sample.
Therefore, in my opinion, the issue raised by this paper points to the difference between the relative description level of label distribution description and the absolute description level of the original description. Methods And Evaluation Criteria: I am rather skeptical about the significance of comparing the experiments in this paper, which involve label distribution learning with concentration terms, to traditional label distribution datasets that do not include concentration terms. Theoretical Claims: This paper does not provide any proofs or theoretical analysis. Experimental Designs Or Analyses: Yes, I suspect that it is not sufficient to test the effectiveness of the algorithm solely through the Friedman test on the average ranking, as this is influenced by the number of comparison algorithms and the quantity of datasets. Would it be better to add a two-tailed t-test to verify the superiority or inferiority against each comparison algorithm? Supplementary Material: Yes, the code part. Relation To Broader Scientific Literature: This paper points out a new issue related to label distribution learning, which has not been considered before. Essential References Not Discussed: After my review, I believe that the paper has adequately cited and discussed the prior work related to its key contributions. I have not identified any uncited or undiscussed work that is essential for understanding the context or the key contributions of the paper. Other Strengths And Weaknesses: Strengths: 1. The issue addressed in the paper is indeed one of the problems in label distribution learning, namely, the inconsistency in the intensity of the original label descriptions and the label distribution descriptions after normalization. The authors introduce a new concept of concentration in label distribution to address the mismatch in descriptive strength between the relative description level of label distribution and the absolute description level of labels. 2. 
The authors propose a new framework for label distribution learning based on the concept of concentration, offering new methods and perspectives for label distribution learning. Weaknesses: 1. The real-world examples used by the authors to illustrate the role of concentration are inappropriate; both sets of examples could resolve the mismatch in descriptive strength by adding extra labels, which is not convincing. 2. The validation of the algorithm’s performance should not be considered one of the contributions of the entire paper. 3. No theoretical analysis or claims are provided to further validate the robustness and effectiveness of the proposal. Other Comments Or Suggestions: The authors should use appropriate examples in the Introduction section to match the methods of solving the problem with the issues and motivations they propose. The example of emotional intensity used by the authors is a good illustration, as it appropriately reveals that the same label distribution may correspond to completely different levels of emotional intensity. The examples used in the Introduction can also be resolved by adding labels and other means. The authors should also be mindful of the perspective from which they articulate the problem, ensuring that it does not lead to ambiguity for the readers. They should avoid giving the impression that “the issue pointed out by the authors is that the existing labels in the label space of label distribution learning cannot fully describe the samples,” as this would contradict the fundamental assumption of label distribution learning. Questions For Authors: In the example used by the authors in the motivation part, such as the example about the boat, if additional labels, such as “mountain” or “water,” are added to make the label set meet the condition of “being able to fully describe each sample,” does the issue discussed by the authors no longer exist in the example they used? Code Of Conduct: Affirmed. Overall Recommendation: 3
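The Friedman test on average rankings referenced in this review can be sketched from per-dataset ranks. A numpy sketch with made-up ranks (N datasets, k algorithms), using the standard Friedman chi-square formula:

```python
import numpy as np

def friedman_chi2(ranks):
    # ranks: (N datasets) x (k algorithms) matrix of per-dataset ranks 1..k
    N, k = ranks.shape
    R = ranks.mean(axis=0)  # average rank of each algorithm
    return 12.0 * N / (k * (k + 1)) * (np.sum(R**2) - k * (k + 1)**2 / 4.0)

# made-up example: algorithm 0 always ranks first, so separation is maximal
ranks = np.tile([1.0, 2.0, 3.0], (4, 1))
chi2 = friedman_chi2(ranks)
# for perfectly consistent rankings the statistic reaches its maximum N*(k-1)
assert np.isclose(chi2, 4 * (3 - 1))
```

As the review notes, the resulting statistic depends on both N and k, which is why a per-pair test is a useful complement.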
Rebuttal 1: Rebuttal: Thank you for your precious suggestions. After careful consideration, our responses to the **questions** you mentioned are listed as follows. **Claims And Evidence: The authors believe that in the existing label distribution learning, the labels in the label space are insufficient to describe the samples, which contradicts the basic assumption in the consensus definition of label distribution learning.** The assumption that "the labels in the label space can fully describe each sample" is correct in the cases of label distribution learning, which aims to learn the relative description level of each sample. However, in concentration distribution learning, we focus on learning the absolute description level. In these cases, the labels in the label space are insufficient to describe the samples because they overlook the existence of background concentration. In other words, CDL is an extension of LDL, and the two do not contradict each other. **Methods And Evaluation Criteria: I am rather skeptical about the significance of comparing the experiments in this paper, which involve label distribution learning with concentration terms, to traditional label distribution datasets that do not include concentration terms.** The paradigm of concentration distribution learning is a new concept, so there are no available CDL datasets. To this end, we construct the first real-world CDL dataset in our paper and conduct experiments on it to verify the effectiveness of our model in excavating the background concentration. By hiding the last dimension of LDL datasets, we transform LDL datasets into CDL datasets. If our model can recover this hidden dimension, which is regarded as the background concentration, its ability to learn concentration distribution can be proved.
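The LDL-to-CDL conversion described above can be sketched directly: hide the last dimension, renormalize what the learner observes, and keep the hidden dimension as the recovery target. The renormalization step is our reading of the construction, not a quote of the paper's exact protocol:

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.dirichlet(np.ones(5), size=4)   # toy LDL dataset: 4 samples, 5 labels

hidden = D[:, -1]                       # last dimension, treated as background concentration
visible = D[:, :-1]
observed = visible / visible.sum(axis=1, keepdims=True)  # what an LDL learner would see

# a CDL model receives `observed` and should recover `hidden`; once the hidden
# mass is known, the original (unnormalized) degrees are recovered exactly
reconstructed = observed * (1.0 - hidden[:, None])
assert np.allclose(observed.sum(axis=1), 1.0)
assert np.allclose(reconstructed, visible)
```

This also shows why the benchmark is meaningful: the renormalized distributions alone do not determine the hidden mass, so recovering it requires information beyond the relative description levels.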
**Experimental Designs Or Analyses: Would it be better to add a two-tailed t-test to verify the superiority or inferiority against each comparison algorithm?** Thank you for pointing out our shortcoming. The following is the result of the two-tailed t-test. At the significance level of $\alpha=0.01$, the null hypotheses that "there is no significant difference between the ranks of our proposed CDL-LD and the baseline algorithm" are all rejected, which means that the control model CDL-LD is significantly different from all the other baseline algorithms in rank. This result further proves the effectiveness of the proposed model. | p-value | LDLLC | SA-IIS | LCLR | LDLFs | LDLLDM | DLDL | |---------|-----------|-----------|-----------|-----------|-----------|-----------| | CDL-LD | 8.5504e-6 | 2.2121e-4 | 2.9684e-4 | 1.6835e-4 | 1.4473e-4 | 8.7229e-5 | **Weakness 1: The real-world examples used by the authors to illustrate the role of concentration are inappropriate; both sets of examples could resolve the mismatch in descriptive strength by adding extra labels, which is not convincing.** The concept of background concentration cannot be replaced by simply adding extra labels. In Fig. 1(a) and (b), background concentration can be interpreted as the background of the image. However, it's hard to describe the background by adding labels like “mountain” or “water”, since there is much content in the background besides mountains and water. Even if we identified all objects in the background, their label distribution would still be costly and time-consuming to obtain. For the other cases of Fig. 1(c), (d) and Fig. 2, the meaning of background concentration is so abstract that it is impossible to label. So we have no choice but to introduce the concept of background concentration.
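The two-tailed paired t-test reported earlier in this rebuttal can be reproduced from per-dataset ranks. A numpy-only sketch with made-up ranks (`scipy.stats.ttest_rel` would give the p-value directly):

```python
import numpy as np

def paired_t(a, b):
    # paired t statistic: t = mean(d) / (std(d) / sqrt(n)), with n - 1 dof
    d = np.asarray(a, float) - np.asarray(b, float)
    n = d.size
    return d.mean() / (d.std(ddof=1) / np.sqrt(n)), n - 1

# hypothetical per-dataset ranks for two algorithms (not the paper's data)
ranks_ours = [1, 1, 2, 1, 1, 2, 1, 1]
ranks_base = [3, 4, 3, 2, 4, 3, 5, 2]
t, df = paired_t(ranks_ours, ranks_base)
# a large |t| (here, ours ranks consistently lower, i.e. better) yields a
# small two-sided p-value under the t distribution with df degrees of freedom
assert t < 0 and df == 7
```

Pairing by dataset matters here: the two rank sequences come from the same benchmarks, so the test is on the per-dataset differences rather than on two independent samples.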
**Weakness 2: The validation of the algorithm’s performance should not be considered as one of the contributions of the entire paper.** Our contribution is mainly reflected in proposing the concentration distribution learning paradigm, constructing the first real-world CDL dataset, and proposing a model to learn CDs from LDL datasets. We will rearrange the contribution section in the final version of this paper. **Weakness 3: No theoretical analysis or claims are provided to further validate the robustness and effectiveness of the proposal.** Please refer to the response to **Weakness 1** from reviewer 3FJ2. **Other Comments Or Suggestions: The authors should use appropriate examples to match the methods of solving the problem with the issues and motivations they propose, and they should avoid giving the impression that contradicts the fundamental assumption of label distribution learning.** Please refer to **Claims And Evidence** and **Weakness 1** of this response. **Questions For Authors: If additional labels, such as “mountain” or “water,” are added to make the label set meet the condition of “being able to fully describe each sample,” does the issue discussed by the authors no longer exist in the example they used?** Please refer to **Weakness 1** of this response. --- Rebuttal Comment 1.1: Comment: I am glad to see the authors' reply. Firstly, the research motivation of this paper is that the existing LDL framework can only answer questions about relative description degree, but not absolute description degree. On this point, we have reached a consensus, and this work is also very important and interesting. Secondly, for the definition and characterization of the concept of absolute description degree, I think this is where our differences lie.
This paper gives the concept of background concentration and explains it with examples, but in my opinion, it is easy to regard absolute description degree as the proportion of each label in the entire image, and therefore, it is easy to solve by simply adding a “background label”. This solution conflicts with the original assumption of LDL. The authors replied that this background is not simply adding a label (or multiple labels), which I semantically agree with, but in terms of expression and subsequent processing, it can be equivalent to adding a label, just called “background”. Thirdly, for the concept of absolute description degree, especially for some non-image data, I currently have no good definition. I appreciate the authors' attempt on this issue, but this attempt or some descriptions in this paper have not yet been well aligned with the LDL learning framework (because this paper is an extension of LDL, so the basic assumptions of LDL need to be considered), which is my main concern. I look forward to seeing further replies from the authors to discuss this issue, and I will further modify my score based on the discussion results. --- Reply to Comment 1.1.1: Comment: Thank you for actively participating in the discussion and your valuable comments. After careful consideration, we will address your concerns point by point. First, we agree that the basic label distribution learning assumption that “the labels in the label space can fully describe each sample” is correct. Still, it holds only under the ideal circumstances that the complete label space of instances can be found properly. In fact, most existing LDL datasets do not conform to this property because it’s nearly impossible to annotate all the possible labels of an instance in the real world. As you mentioned, we could add a “background label” to extend the existing LDL datasets. But, to the best of our knowledge, **no existing LDL datasets have been annotated with a label called “background”**. 
To this end, estimating the value of the “background label” for an existing LDL dataset becomes an important and interesting problem, **which has not been investigated**. To solve this problem, we propose a new framework called concentration distribution learning (CDL) that can estimate the “background label” for an existing LDL dataset without a “background label”. Then, you mentioned that “in terms of expression and subsequent processing, it can be equivalent to adding a label called background”, which is correct. Since we want to study the background concentration, **we must assign a value** to the background concentration in mathematical expression and processing, which is formally equivalent to adding a label called "background". Nevertheless, we need to clarify that **the value of background concentration is far more general than the description of the label "background"**. In the image examples in Figs. 1 (a) and (b), we could add a label called "background", and in this case, the background concentration value roughly equals the “background label”. However, in the material composition estimation example in Figs. 1 (c) and (d), there is no material called "background". A more striking example (emotion description) is shown in Fig. 2; emotion intensity is obviously not related to "background". So the proposed concentration is a much more general concept than “background label”, which can be well captured by the proposed CDL framework. Furthermore, we want to explain the relevance and difference between LDL and the proposed CDL. LDL illustrates how important the **visible labels** are to corresponding instances. However, the complete label space consists of visible labels and **invisible labels** because it‘s impossible to annotate all the possible labels of an instance. The existing LDL methods only focus on the relative importance among the visible labels, and overlook the proportion of the invisible ones. 
We find out that **ignoring invisible labels can sometimes lead to confusion**. Hence, it’s also necessary to learn their description degree, and the paradigm of concentration distribution learning (CDL) is proposed naturally. The invisible labels may have a clear actual meaning, such as the real background of an image. But in some cases, they can also be **very abstract**, like the description degree of “no emotion” on the human face dataset, so we use “background concentration” as a **universal designation** for the description degree of invisible labels. It's actually a very general concept. By learning an additional background concentration term, CDL addresses the limitation of LDL properly and expands the research depth of LDL. Finally, thank you again for your valuable comments. In the final version of our paper, we will modify the introduction to describe the CDL framework clearly and avoid creating ambiguity for the readers.
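To make the distinction between relative and absolute description degrees concrete, here is a purely illustrative numerical sketch (our own toy example with made-up values, not the paper's algorithm): an LDL annotation fixes only the *relative* degrees of the visible labels, while a hypothetical background concentration `b` fixes the absolute share those labels jointly occupy.

```python
import numpy as np

# Relative description degrees over the visible labels (an LDL annotation);
# by the LDL assumption these sum to 1.
ldl = np.array([0.6, 0.3, 0.1])

# Hypothetical background concentration: the absolute share of the sample
# described by labels *outside* the visible label space.
b = 0.2

# Absolute description degrees of the visible labels then occupy the
# remaining 1 - b of the description mass, with their ratios unchanged.
absolute = (1.0 - b) * ldl

print(absolute)            # absolute degrees of the visible labels
print(absolute.sum() + b)  # total description mass remains 1
```

Two LDL annotations with the same relative vector but different `b` would describe very different samples, which is exactly the information CDL tries to recover.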
Summary: The paper introduces a novel concept called concentration distribution learning (CDL), which extends traditional label distribution learning (LDL) by incorporating a background concentration term. This term represents the absolute description degree of labels not present in the existing label space. The authors propose a method called CDL-LD that uses probabilistic methods and neural networks to learn both label distributions and background concentrations from existing LDL datasets. The paper demonstrates the effectiveness of this approach through extensive experiments on multiple real-world datasets. ## update after rebuttal. Most of my concerns are addressed; the authors have conducted experiments on larger datasets and clarified the unclear parts, thus I have decided to retain my positive score. Claims And Evidence: The authors claim that "excavating the background concentration makes full use of the information in the datasets and benefits the downstream tasks"; however, the experiments are not conducted on several downstream tasks, so I suggest the authors be cautious when making such a statement. Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria in this paper are well-suited for the problem of label distribution learning. Theoretical Claims: No theoretical claim is provided in this paper. Experimental Designs Or Analyses: Yes, I have checked the experimental parts; the experiments in this paper are useful for validating the effectiveness of the proposed method. Supplementary Material: The authors provide the code in the supplementary material; I only read part of it.
Relation To Broader Scientific Literature: The key contributions proposed in this paper are closely related to several areas of the broader scientific literature, particularly in the fields of label distribution learning, facial expression recognition, and emotion recognition. Essential References Not Discussed: Yes, there is one essential reference that should be discussed. [1] "Label distribution changing learning with sample space expanding." Journal of Machine Learning Research 24.36 (2023): 1-48. Other Strengths And Weaknesses: Strengths: 1. The problem studied in this paper is interesting and important in the label distribution learning area. 2. The proposed method is effective according to the experimental results. 3. The authors construct the first real-world concentration distribution dataset. Weaknesses: 1. Some parts of the paper are not easy to understand and need further explanation. See the question parts below. 2. Datasets used in the experiments are small; larger datasets should be considered. Other Comments Or Suggestions: Some parts need further explanation. See the question parts below. Questions For Authors: 1. Why do you choose the Dirichlet distribution (line 149)? Please give the reasons behind this choice. 2. "In Eq. (1), assuming that the background concentration is evenly spread on each class of the real label distribution vector b in probability." Is this assumption reasonable? Maybe it is better to consider uneven spread. 3. Why define Eq. (10) in such a form? Can you make a detailed explanation? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable reviews, and your suggestions will effectively help us improve our work. Below are our responses to your **questions**. **Claims And Evidence: The authors claim that "excavating the background concentration makes full use of the information in the datasets and benefits the downstream tasks", however, the experiments are not conducted on several downstream tasks, I suggest the authors should be cautious when making such a statement.** Thank you for your advice. We think that in the experiments on the real-world CDL dataset, our model excavates the hidden strength of emotions and helps us distinguish the images better, which can also be regarded as a downstream task. We therefore believe it is reasonable to conclude that the proposed model can benefit downstream tasks. **Essential References Not Discussed: There is one essential reference that should be discussed, that is [1] "Label distribution changing learning with sample space expanding." Journal of Machine Learning Research 24.36 (2023): 1-48.** Thanks for pointing out our missing reference. We will add it to the final version of this paper. **Theoretical Claims: No theoretical claim is provided in this paper.** Please refer to the response of **Weakness 1** to reviewer 3FJ2. **Weakness 2: Datasets used in the experiments are small, larger datasets should be considered.** Thank you for your advice. We carry out further experiments on the human gene dataset (with 17,892 instances). The results are listed below, with the best result of each metric shown in boldface. This additional experiment further proves the superiority of the proposed CDL-LD.
| | Cheby↓ | Clark↓ | LCLR | Cosine↑ | |:------:|:----------------:|:----------------:|:----------------:|:----------------:| | CDL-LD | **0.5735±.0090** | **3.6863±.0288** | **1.0830±.0160** | **0.7757±.0025** | | LDLLC | 0.6745±.0039 | 3.9021±.0226 | 2.2932±.0206 | 0.6199±.0179 | | SA-IIS | 0.6948±.0121 | 3.7957±.0359 | 2.1820±.0337 | 0.6114±.0483 | | LCLR | 0.6148±.0178 | 3.5548±.0387 | 2.3030±.0228 | 0.6514±.0150 | | LDLFs | 0.6383±.0467 | 3.7784±.0700 | 2.7793±.0261 | 0.6248±.0140 | | LDLLDM | 0.6549±.0283 | 3.8324±.0307 | 2.4558±.0193 | 0.6719±.0248 | | DLDL | 0.6056±.0259 | 3.7024±.0076 | 2.1174±.0328 | 0.6854±.0144 | **Question 1: Why do you choose the Dirichlet distribution(line 149 ), please give the reasons behind.** Dirichlet distribution is a probability distribution on a multidimensional space and the sum of its components is 1, which makes it suitable for representing the probability distribution of multiple mutually exclusive events. Label distributions represent the probability distribution of multiple labels which are mutually exclusive, so we choose Dirichlet distribution in this paper. **Question 2: In Eq. (1), assuming that the background concentration is evenly spread on each class of the real label distribution vector b in probability." Is this assumption reasonable?** The paradigm of concentration distribution learning is a new concept raised by this paper. Since we have no further prior knowledge about the background concentration distribution pattern, the assumption that it is evenly spread across each class of the real label distribution is general. Furthermore, the experimental results also validate the rationality of this assumption. **Question 3: Why define Eq.(10) in such a form? 
Can you make a detailed explanation?** $\vert\vert\boldsymbol{y}-\boldsymbol{p}\vert\vert_2^2$ is the MSE loss mentioned above in the paper, and $\frac{1}{B(\boldsymbol{\alpha})}\prod^c_{i=1}p_i^{\alpha_i-1}$ is the probability density function of Dirichlet distribution. Integrating the product of these two terms gives the final loss function of the neural network, which aims to minimize the average value of MSE loss over the whole Dirichlet distribution.
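The averaging described here admits a closed form: for $p\sim\mathrm{Dir}(\boldsymbol{\alpha})$ with $S=\sum_i\alpha_i$, we have $\mathbb{E}[(y_i-p_i)^2]=(y_i-\frac{\alpha_i}{S})^2+\frac{\alpha_i(S-\alpha_i)}{S^2(S+1)}$, i.e., the squared bias of the Dirichlet mean plus its per-component variance. This identity can be checked with a quick Monte Carlo sketch (illustrative only; the values of $\boldsymbol{\alpha}$ and $\boldsymbol{y}$ below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([2.0, 5.0, 3.0])  # Dirichlet parameters
y = np.array([0.2, 0.5, 0.3])      # target label distribution
S = alpha.sum()

# Closed form: squared bias of the Dirichlet mean plus its variance.
closed = np.sum((y - alpha / S) ** 2
                + alpha * (S - alpha) / (S**2 * (S + 1)))

# Monte Carlo estimate of E_{p ~ Dir(alpha)} ||y - p||_2^2.
p = rng.dirichlet(alpha, size=200_000)
mc = np.mean(np.sum((y - p) ** 2, axis=1))

print(closed, mc)  # the two values should agree closely
```

Minimizing the closed form over $\boldsymbol{\alpha}$ therefore simultaneously pulls the Dirichlet mean toward $\boldsymbol{y}$ and shrinks its variance, which is the stability property claimed for this loss.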
Summary: This paper proposes Concentration Distribution Learning (CDL), which introduces background concentration to address the limitation of Label Distribution Learning (LDL) in capturing hidden information. The authors designed the CDL-LD model based on the Dirichlet distribution, combining confidence and background concentration to generate concentration distributions. They also constructed the SJA c dataset to validate the approach. Experiments demonstrate that CDL-LD outperforms existing methods across multiple metrics, especially in concentration distribution tasks. Claims And Evidence: 1. Why does the paper assume that each $\alpha_i$ is composed of the sum of the dataset value $e_i$ and the background concentration $u_i$, instead of treating the background as a whole? Specifically, why not define $\alpha_i = e_i$, with the sum of the $\alpha_i$ plus a single overall $u$ equaling 1? The authors' method seems inconsistent with the idea presented in the introduction. 2. Why does the paper assume that the background concentration is evenly spread across each class of the real label distribution? There is neither theoretical nor intuitive justification for this assumption. Methods And Evaluation Criteria: The paper uses MSE loss in a way that seems somewhat unreasonable. Have you tried using CE loss instead? While the idea in the paper is relatively novel, the method is somewhat simplistic. Theoretical Claims: N/A Experimental Designs Or Analyses: The paper provides a detailed introduction to the experimental design and analysis, but the datasets used seem relatively small. Have you tried testing on larger datasets? Supplementary Material: N/A Relation To Broader Scientific Literature: The paper is related to label distribution learning, concentration distribution learning, and facial expression recognition. Essential References Not Discussed: A more comprehensive discussion of related works from the last 2-3 years should be incorporated.
Other Strengths And Weaknesses: Advantages: The paper presents a novel and relatively reasonable approach. The introduction to related work and the explanation of the experimental section are quite detailed. Disadvantages: Some of the theoretical assumptions in the paper lack proper analysis or proof. The datasets used in the experimental section are relatively small, and no experiments have been conducted on larger datasets. Other Comments Or Suggestions: N/A Questions For Authors: The authors should carefully address the above-mentioned issues. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable reviews, and your suggestions will effectively help us improve our work. Below are our responses to your **questions**. **Claims And Evidence 1 \& 2: Why does the paper assume that each $\alpha\_i$ is composed of the sum of the dataset value $e\_i$ and the background concentration $u\_i$, instead of treating the background as a whole?** The paradigm of concentration distribution learning is a new concept raised by this paper. Since we have no further prior knowledge about the background concentration distribution pattern, the assumption that it is evenly spread across each class of the real label distribution is general. Furthermore, the experimental results also validate the rationality of this assumption. By defining $\alpha_i=e_i+u_i$, we make the problem mathematically computable and finally draw the conclusion that all $u_i$ equal one, which indicates that we should treat background concentrations as a whole. In other words, without initially dividing the background concentrations, the value of $u$ cannot be determined. **Methods And Evaluation Criteria 1: The paper uses MSE loss in a way that seems somewhat unreasonable. Have you tried using CE loss instead?** By applying MSE loss, the final loss function can be arranged in the form of Eq. (10) in the paper, which minimizes the predictive squared error ($(y_i-\frac{\alpha_i}{S})^2$) and the variance of the Dirichlet distribution ($\frac{\alpha_i(S-\alpha_i)}{S^2(S+1)}$) simultaneously. Optimizing these two terms makes our experimental results more precise and stable. The CE loss, however, does not have this property. **Relation To Broader Scientific Literature: The paper studies label distribution learning, but only older references are cited.** Thank you for pointing out this shortcoming. We will incorporate some recent works in the final version of our paper.
**Disadvantage 1: Some of the theoretical assumptions in the paper lack proper analysis or proof.** Thank you for pointing out our shortcoming. We apply Rademacher complexity to explain the theoretical bound of our proposed model. Let $\mathcal{H}$ be a family of functions. For any $\delta>0$, with probability at least $1-\delta$, for all $h \in \mathcal{H}$ we have $$ \mathcal{L}(h) \leq \mathcal{L}\_{S}(h)+\widehat{\mathcal{R}}\_{S}(\mathcal{H})+3\sqrt{\frac{\log 2 / \delta}{2 n}}, $$ where $\mathcal{L}(h)$ and $\mathcal{L}\_{S}(h)$ are the generalization risk and empirical risk with respect to $h$, and $\widehat{\mathcal{R}}\_{S}(\mathcal{H})$ is the empirical Rademacher complexity, bounded by $\widehat{\mathcal{R}}\_{S}(\mathcal{H})\leq\mathbb{E}\_{\boldsymbol{\sigma}}\left[ \frac{1}{n} \sum\_{i=1}^{n} \sigma\_{i} \mathcal{L}\_{AMSE}(\mathbf{\alpha}\_i) \right]$ with $\sigma\_{i}\in[0,1]$. $n$ represents the number of instances. Because $\mathcal{L}\_{AMSE}(\mathbf{\alpha})=\sum^c\_{i=1}(y\_i-\frac{\alpha\_i}{S})^2+\frac{\alpha\_i(S-\alpha\_i)}{S^2(S+1)}>0$, we get $$ \mathcal{L}(h)- \mathcal{L}\_{S}(h) \leq \mathbb{E}\_{\boldsymbol{\sigma}}\left[ \frac{1}{n} \sum\_{i=1}^{n} \sigma\_{i} \mathcal{L}\_{AMSE}(\mathbf{\alpha}\_i) \right]+3\sqrt{\frac{\log 2 / \delta}{2 n}} \leq \frac{1}{n} \sum\_{i=1}^{n} \sum^c\_{j=1} \left[(y\_{ij}-\frac{\alpha\_{ij}}{S\_i})^2+\frac{\alpha\_{ij}(S\_i-\alpha\_{ij})}{S\_i^2(S\_i+1)}\right]+3\sqrt{\frac{\log 2 / \delta}{2 n}}, $$ in which $c$ is the number of classes. We assume that the neural network gives $e>0$ for every instance, so $\forall i,j,\ \alpha\_{ij}>1,\ S\_i>c$. So we have $$ \mathcal{L}(h)- \mathcal{L}\_{S}(h) \leq \frac{1}{n} \sum\_{i=1}^{n} \sum^c\_{j=1} \left[(y\_{ij}-\frac{\alpha\_{ij}}{S\_i})^2+\frac{\frac{\alpha\_{ij}}{S\_i}(1-\frac{\alpha\_{ij}}{S\_i})}{S\_i(S\_i+1)}\right]+3\sqrt{\frac{\log 2 / \delta}{2 n}} \leq \frac{1}{n} \sum\_{i=1}^{n} \left[1+\frac{1}{4c(c+1)}\right]+3\sqrt{\frac{\log 2 / \delta}{2 n}}.
$$ When the number of instances $n$ tends to infinity, the bound finally becomes $1+\frac{1}{4c(c+1)}$, which indicates that this bound shrinks when the number of classes $c$ increases. **This conclusion is intuitive because the background concentration tends to zero when $c$ tends to infinity, degrading the CDL problem to a learnable LDL problem.** **Experimental Designs Or Analyses \& Disadvantage 3: The paper provides a detailed introduction to the experimental design and analysis, but the datasets used seem relatively small. Have you tried testing on larger datasets? \& The datasets used in the experimental section are relatively small, and no experiments have been conducted on larger datasets.** Thank you for your advice. We carry out further experiments on the human gene dataset (with 17,892 instances). Due to the character limit, the results are listed in the response of **Weakness 2** to Reviewer 7WaC; they further prove the superiority of the proposed CDL-LD.
Efficient Fine-Grained Guidance for Diffusion Model Based Symbolic Music Generation
Accept (poster)
Summary: This paper proposes a method called Fine-Grained Guidance, FGG, to improve precision and controllability in diffusion-based symbolic music generation, especially in the domain of tonal music where out-of-key notes can be perceived as mistakes. ## update after rebuttal **I raised the score to 4 as the rebuttal addressed my concerns.** Claims And Evidence: - Tables give evidence that out-of-key notes are never generated. However, composers intentionally put these out-of-key notes in to give some tension to the music. Using this FGG is of course beneficial for most of the cases, but it "eliminates" the generation of such tension-imposed music. How can the authors justify this? - I'm not sure if Proposition 2 is tight enough. If it's not tight, then the ground of Section 4 is largely weakened. - Could the authors provide more intuition for Eq. (8)? If (l,h) is in w_K(l), then epsilon will be the maximum value, but I could not easily grasp the intuition of $\frac{1}{\sqrt{1-\alpha_t}}(X_{t,lh}-\frac{\sqrt{\alpha_{t}}}{2})$. If you plug this value into Eq. (6), then the predicted X_0 will be 0.5. Could you explain more about this and the general intuition behind it? - Can you apply this method to non-tonal, avant-garde, or jazz style music? While I appreciate the strong technical contributions of this work, I would like to raise a broader discussion point regarding modality-specific research directions, particularly in the audio domain. Unlike other generative domains such as vision or language, where models are typically expected to generalize across all possible natural images or text, audio — and especially symbolic music — remains heavily siloed. We often see separate models for speech, music, and even for specific genres like pop, classical, or jazz. This raises the question: why is music treated as fundamentally different in its modeling approach compared to image or video generation?
Should we continue to develop highly specialized generative models for narrow musical domains, or should the field aim toward more unified, generalized audio models? The current direction in symbolic music seems to serve a small subset of expert users, and it’s unclear whether this granularity is justified in terms of broader impact or scalability. I fully acknowledge that each modality has unique properties — but from a reviewer’s perspective, this fragmentation affects how we measure and compare contributions across fields. I do not see this as a flaw of the paper per se, but I encourage the authors and the community to consider this broader question of generality, user impact, and the future direction of audio generation research. Methods And Evaluation Criteria: I did not carefully read the experiments. Theoretical Claims: I didn't read the proofs. Experimental Designs Or Analyses: The experimental section looks solid enough. Supplementary Material: I read parts of the Appendix when necessary. Relation To Broader Scientific Literature: Related to the general public in this community who are particularly interested in the music domain (and AI for science, given that they also want to generate samples that meet certain constraints) Essential References Not Discussed: References are sufficient Other Strengths And Weaknesses: - Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's valuable comments. Please allow us to provide responses as follows: **1. Using this FGG is of course beneficial for most of the cases, but it "eliminates" the generation of such tension-imposed music. How can the authors justify this?** We agree that some composers intentionally put out-of-key notes in to add “creativity”. However, it is observed that generative models often fail to create an accommodating context for such “creativity”. In other words, most of the out-of-key notes generated by models cannot add creativity, but instead disrupt harmony. We think the main reason is that out-of-key notes are relatively rare in most datasets, which makes it difficult for the model to learn how to “correctly” use out-of-key notes. Given such a limitation in data, the benefit of avoiding out-of-key notes could outweigh the loss of potential creativity. **2. I'm not sure if Proposition 2 is tight enough. If it's not tight, then the ground of Section 4 is largely weakened.** We also cannot theoretically verify whether it is tight. We would like to clarify the following: If Proposition 2 is not tight—meaning the error probability decays *even slower* than $O(n^{−1/(LH)})$—then the probability of generating "unresolved wrong notes" remains even higher. Therefore, the lack of tightness in Proposition 2 does not undermine the main argument in Section 4. **3. Could the authors provide more intuition of Eq. (8)?** Eq. (8) ensures that: if (l,h) is not an out-of-key position, the predicted $X_0$ will be unchanged; if (l,h) is out-of-key and the predicted $X_0$ exceeds 0.5, then the predicted $X_0$ will be reset to 0.5. This acts as a projection method, enforcing that out-of-key notes cannot be present. The threshold of 0.5 is chosen because, in piano roll quantization, values <= 0.5 mean no note, while values > 0.5 indicate a note.
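A minimal NumPy sketch of this projection, based on our reading of Eqs. (6) and (8) (illustrative only, not the authors' released code): wherever an out-of-key note would appear, $\epsilon$ is overwritten so that the implied $X_0$ sits exactly on the 0.5 no-note boundary.

```python
import numpy as np

def project_epsilon(eps, x_t, alpha_bar_t, out_of_key):
    """Clamp the predicted X_0 to <= 0.5 at out-of-key piano-roll positions."""
    # Standard DDPM estimate of the clean sample (cf. Eq. (6)):
    x0_hat = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps) / np.sqrt(alpha_bar_t)
    # Where an out-of-key note would appear (x0_hat > 0.5), reset eps so
    # that the implied X_0 equals exactly 0.5 (cf. Eq. (8)):
    eps_clamped = (x_t - np.sqrt(alpha_bar_t) * 0.5) / np.sqrt(1.0 - alpha_bar_t)
    return np.where(out_of_key & (x0_hat > 0.5), eps_clamped, eps)

# Toy check on a 2x2 piano-roll patch (values are made up):
rng = np.random.default_rng(0)
x_t = rng.normal(size=(2, 2))
eps = rng.normal(size=(2, 2))
mask = np.array([[True, False], [False, True]])  # out-of-key positions
a_bar = 0.7
eps_new = project_epsilon(eps, x_t, a_bar, mask)
x0_new = (x_t - np.sqrt(1 - a_bar) * eps_new) / np.sqrt(a_bar)
```

Plugging the clamped $\epsilon$ back into Eq. (6) gives $X_0 = 0.5$ by construction, which matches the observation in the review; in-key positions and out-of-key positions already below 0.5 are left untouched.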
Instead of correcting only at the final step, we apply this constraint throughout the sampling process to maintain the flexibility of diffusion models while ensuring harmonic coherence. **4. Why is music treated as fundamentally different compared to image or video generation?** Music generation follows two main approaches: treating music as an image (e.g., piano roll representation) or as language (e.g., sequences of pitch, duration, and timing tokens). However, it has the following challenges: - Precision Sensitivity – In image generation, slightly changing a single pixel usually has minimal impact, whereas one misplaced note in music can significantly disrupt harmony. - Temporal Overlaps – Unlike text, where words follow a sequential order, musical notes often overlap, which leads to challenges when conducting music generation in the “next token predictor” manner. - Genre-Specific Rules – Music follows complex, style-dependent structures, such as tonal harmony in classical music, or intricate rhythms in jazz. These factors make it difficult to develop a unified, generalized model for music generation. For now, the fragmentation in music models is partially due to the complexity of musical structure and the precision required to generate high-quality music. But as generative AI matures, we should aim for models that can adapt dynamically to different musical styles rather than being strictly bound to a single one. In fact, our control method can also be applied in stylized music generation (see the following paragraphs). **5. Does our method have broader impact or scalability?** We think that our proposed method (especially the sampling control part) is generalizable. In our paper, we enforce a "no out-of-key notes" constraint within the sampling process. As shown in the demo page, this method can be applied to generate stylized music with special scales (e.g., Chinese pentatonic scale).
More broadly, our method can be adapted to various musical constraints. Many genres and styles can be defined through rules (e.g., harmonic, rhythmic, or structural constraints), which our sampling method can also incorporate. For example, rhythmic control could be implemented by ensuring that certain time positions must (or must not) contain notes. We can then incorporate this requirement in the sampling process as follows: at each step, we edit $\epsilon$ to project the predicted $X_0$ onto the correct domain where the rhythmic constraint is satisfied. Such a rhythmic control method could be used for generating jazz music, which has special rhythmic requirements.
Summary: This paper presents a fine-grained guidance (FGG) mechanism for improving symbolic music generation using diffusion models. The proposed approach incorporates strict harmonic control by integrating domain knowledge into the generative process, ensuring that generated musical sequences adhere to predefined chord progressions and key signatures. The paper introduces a conditional generation setup, where fine-grained harmonic constraints guide the sampling process, leading to a controlled and structured output. Theoretical justifications are provided, demonstrating the necessity of such structured control in diffusion-based symbolic music generation. To validate the approach, the authors provide both theoretical and empirical evidence, showing that FGG prevents out-of-key generation and enhances structural coherence in generated compositions. The paper also presents an interactive demonstration, showcasing generation capabilities. Claims And Evidence: ### The paper makes three primary claims: 1. The proposed fine-grained guidance mechanism ensures strict in-key generation. 2. Theoretical justifications demonstrate the necessity and effectiveness of FGG. 3. Empirical evaluation supports the effectiveness of FGG in improving symbolic music generation. ### Assessment of Claims and Supporting Evidence 1. Fine-Grained Guidance (FGG) for Strict Harmonic Control - Supported Evidence: The methodology section details how fine-grained constraints are integrated into the generative process, ensuring adherence to predefined harmonic structures. The 0.0% off-key note generation result confirms this strict control. - Evaluation: Well-supported. The claim is effectively demonstrated, as the model enforces harmonic rules explicitly, leading to an expected perfect in-key generation. 2.
Theoretical Justification of FGG’s Necessity and Effectiveness - Supported Evidence: Theoretical formulations, including Proposition 1 and Proposition 2, argue the necessity of structured control in diffusion-based symbolic music generation. - Evaluation: Partially supported. The paper is not self-contained, requiring extensive prior knowledge in statistical methods and diffusion models. Several key derivations rely on unstated assumptions, making it difficult for non-expert readers to follow. - Recommended Improvement: The presentation should be more self-contained, with additional intermediate steps, particularly in the appendix, to improve accessibility. Re-formalizing the problem setup and explicitly deriving key constraints, such as the first constraint over $\epsilon_{l,h}$ in Appendix A.1, would make the proofs much more understandable. 3. Empirical Evaluation Demonstrating Effectiveness - Supported Evidence: The authors conduct experiments using cosine similarity via a VAE latent representation and overlapping area (OA) measures to assess generation quality. - Evaluation: Not convincingly supported. Aside from strict off-key detection, the evaluation metrics lack clear intuition regarding what they measure. The rationale behind using VAE-based cosine similarity and OA instead of direct accuracy or intersection-over-union (IoU) is unclear. - Recommended Improvement: The authors should either justify their choice of evaluation metrics or adopt more interpretable alternatives that directly measure harmonic consistency and generation quality. Methods And Evaluation Criteria: The methodological approach of integrating harmonic constraints in diffusion-based symbolic music generation is novel and well-motivated. However, the evaluation criteria lack clarity and justification. 
The authors use VAE-based cosine similarity and overlapping area (OA) measures to assess performance, but these metrics are not well explained, and their relevance to assessing generation quality is uncertain. Alternative evaluation methods, such as direct accuracy, intersection-over-union (IoU), or out-of-key note ratios beyond 0.0%, could provide more interpretable and robust validation. Moreover, justification for some design choices is lacking, e.g., why did you use negative values for the condition piano roll when no rhythm control is given? Theoretical Claims: See claims and evidence. Experimental Designs Or Analyses: The experimental setup demonstrates the technical feasibility of the approach but fails to convincingly validate the claimed improvements. The 0.0% off-key rate is expected due to strict constraints, and the other evaluation metrics do not provide clear insight into how well the method improves generative quality. The paper would benefit from: - Justifying the choice of VAE-based cosine similarity and OA as evaluation metrics. - Introducing alternative, more interpretable accuracy metrics. - Expanding empirical analysis beyond off-key note prevention. Supplementary Material: I have tried the Huggingface Space to play with the model, which quite consistently showed very nice generations! Relation To Broader Scientific Literature: The progression made prior to this work is insufficiently described in the opening sections of this work. The related work section does name-dropping rather than giving context as to what recent works did to tackle the challenges you specify in the introduction: what is the progression made from one to the other? What key contribution did they introduce? How do they aim to address the gaps you pointed out in the introduction? You should draw out the progression for the reader to better understand the context before diving into the specifics in the next sections.
The current related work serves to specify some differences of the suggested approach from some other prior works, not addressing any of the above. Essential References Not Discussed: - I believe that the related work section needs to be revised as a whole and to draw the progression made in recent years, point out the gaps and how other people tried to address them. - Theoretical derivations could benefit from expanding derivation steps, see comment in Claims And Evidence. - Justification for Empirical evaluation metrics are required. Other Strengths And Weaknesses: ### Strengths: 1. Novel structured control mechanism for enforcing harmonic accuracy. 2. Theoretical rigor in proving the necessity and effectiveness of the approach. 3. Well-illustrated methodology and results. 4. Practical value through an interactive demo. ### Weaknesses: 1. Writing clarity issues: The writing requires significant effort to follow due to structural issues and ambiguous wording. Examples: - The contribution summary should be clearer and more structured. Instead of long paragraphs, each contribution should be presented in a concise sentence. - Related work section lacks coherence, with frequent references to the authors' own work disrupting the logical flow. Example: "We adopt classifier-free guidance..." appears prematurely in Related Work rather than in Methods. 2. Justification for evaluation metrics and design choices: - The authors do not justify why negative values are used for the condition piano roll during training when no rhythm control is given. - The choice of evaluation metrics (VAE-based cosine similarity, OA) lacks clear reasoning. 3. Lack of self-containment: - Key derivations require extensive prior knowledge and are not well explained. - Theoretical justifications, especially in Appendix A.1, should explicitly present intermediate steps and provide clearer intuition behind their design choices. 
Other Comments Or Suggestions: Overall I think that this work is interesting, and shows great performance by imposing domain knowledge constraints on the generative process, and the demo is also very fun to play with and shows very nice generations. That being said, I strongly suggest a careful revision to improve clarity, as I found it difficult to follow key claims throughout the paper. You should be more structured, well-defined, and avoid vague wording. Specifically, related work, background, and method should be revised. Make sure your claims are plainly stated, your design choices justified, and that the problem setup is established before moving on. Although I overall like this work, due to the lack of clarity throughout the paper I find it difficult to confidently give a high ranking at this stage. Please address my concerns, revise your writing, and I shall increase my graded score. Questions For Authors: 1. Why do you use negative values for the condition piano roll when no rhythm control is given? 2. How do you justify your choice of evaluation metrics? Could you give motivation for what they intend to capture and how the score demonstrates that? Could you supply additional, more interpretable metrics of evaluation? 3. Could you elaborate on how prior work in this field addressed the gaps you highlight in the introduction? What gaps remain? Code Of Conduct: Affirmed. Overall Recommendation: 4
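For concreteness on what an overlapping-area style metric computes, here is a minimal sketch (my own reconstruction for illustration; the paper's exact implementation may differ, and the bin count and value range are hypothetical choices) of OA as the overlap between feature histograms of generated and ground-truth segments:

```python
import numpy as np

def overlapping_area(feat_gen, feat_gt, bins=20, value_range=(0, 127)):
    """Overlap between empirical distributions of a scalar feature
    (e.g. pitch, duration, note density) in generated vs. ground-truth
    segments: sum over bins of min(p_gen, p_gt)."""
    h_gen, _ = np.histogram(feat_gen, bins=bins, range=value_range)
    h_gt, _ = np.histogram(feat_gt, bins=bins, range=value_range)
    p = h_gen / h_gen.sum()
    q = h_gt / h_gt.sum()
    return float(np.minimum(p, q).sum())

# Identical pitch samples overlap fully; disjoint ones not at all.
same = overlapping_area([60, 62, 64, 65], [60, 62, 64, 65])
disjoint = overlapping_area([0, 1, 2], [120, 121, 122])
```

Under this reading, OA of 1.0 means the two empirical distributions coincide at this bin resolution, and 0.0 means disjoint support.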
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's suggestions on revising! Due to this year's policy, it is not allowed to upload a revised paper, external links can only contain figures/tables, and the rebuttal has a 5000-character limit. Please allow us to describe a revision plan in what follows; we promise to follow through on it in the next phase. **Responses** 1. To train a model that can handle both rhythm+chord and chord-only conditions, we must distinguish when rhythmic conditions are provided. We achieve this by using negative values to indicate the absence of rhythmic conditions, preventing the model from misinterpreting 0s and 1s as rhythmic constraints applied everywhere. This design choice was determined through empirical experimentation. 2. **Chord progressions** are the harmonic structure of music, and the latent representation better accounts for the degrees of similarity between chords. For features like **pitch, duration, and note density**, we evaluate the **overlapping area** between their distributions in generated and ground-truth segments, to assess how well the generative model captures the statistical properties inherent in the original compositions. We also **added three metrics** to directly measure generation quality: Direct Chord Accuracy, IoU of Chord, and IoU of Piano Roll. New results are shown in [https://drive.google.com/file/d/1IAcAqK4qK4AiQVKWriFSJ91QNhrg5at-/view](https://drive.google.com/file/d/1IAcAqK4qK4AiQVKWriFSJ91QNhrg5at-/view). 3. See the revised related work below. **Revisions in writing** **1. Contribution**: Motivation: We theoretically and empirically characterize the challenge of *precision* in symbolic music generation. Methodology: We incorporate fine-grained harmonic and rhythmic guidance into symbolic music generation with diffusion models. Functionality: The developed model is capable of generating music with high accuracy in pitch and consistent rhythmic patterns that align closely with the user's intent. 
Effectiveness: We provide both theoretical and empirical evidence supporting the effectiveness of our approach. **2. Related Work**: Thank you for your suggestions; we really appreciate the organizing structure for this paragraph that you have provided. We will integrate it as: To leverage well-developed generative models for symbolic music, Huang et al. (2018) introduced a Transformer-based model with a novel relative attention mechanism designed for symbolic music generation. Subsequent works have enhanced the controllability of symbolic music generation by incorporating input conditions. For instance, Huang and Yang (2020) integrated metrical structures to enhance rhythmic coherence, Ren et al. (2020) conditioned on melody and chord progressions for harmonically guided compositions, and Choi et al. (2020) encoded musical style to achieve nuanced harmonic control. These advancements have contributed to more interpretable and user-directed music generation control. To better capture spatio-temporal harmonic structures in music, researchers have adopted diffusion models with various control mechanisms. Min et al. (2023) incorporated control signals tailored to diffusion inputs, enabling control over melody, chords, and texture. Wang et al. (2024) extended this by integrating hierarchical control for full-song generation. To further enhance control, Zhang et al. (2023) and Huang et al. (2024) leveraged the gradual denoising process to refine sampling. Building on these approaches, our work addresses the remaining challenge of precise control in real-time generation. **3. Theory** We will make the theoretical proofs in the appendix more detailed by (1) expanding the derivation steps and (2) restating key formulations from the main text directly in the appendix. We will also simplify Proposition 2 by removing the conditional probability argument, the details of which are in our response to reviewer uD9t. **4. 
Sections 2 and 3.** We will add subtitles to section 2, namely “Data representation of the piano roll” and "Formulation of the diffusion model", at corresponding places. We will delete the third paragraph “The FGG method improves…” of section 3. We will revise the first paragraph of section 3.2 to: > We first provide a rough idea of the harmonic sampling control. To integrate harmonic constraints into our model, we employ temporary tonic key signatures to establish the tonal center. Our sampling control mechanism guides the gradual denoising process to ensure that the final generated notes remain within a specified set of pitch classes. This control mechanism removes or replaces harmonically conflicting notes, maintaining alignment with the temporary tonic key. We will add subtitles or topic sentences “mathematical formulation of the harmonic sampling control”, "preliminaries", "Edit intermediate-step outputs of the sampling process" and “theoretical property of the sampling control” to corresponding places of section 3.2, and do further refinement. --- Rebuttal Comment 1.1: Comment: Thank you for taking the time to address my concerns. The added IoU-based evaluation metrics significantly improve the interpretability of the evaluation and help support the model's harmonic precision and structural accuracy. These additions directly address some of my earlier concerns, and I find them convincing. That being said, I still believe the explanation for the Overlapping Area (OA) metric requires further clarification. It remains unclear what an increase in OA actually indicates. Why is a higher OA necessarily desirable? Is the intention to show that the generated segments better match the distribution of observed features in the test set? And is deviation from the test set distribution necessarily a negative outcome? I'm not fully convinced this is always the case. 
I don't expect the authors to replace the metric, but it would be helpful to clearly articulate what OA is meant to capture, what an increase in its value signifies in practice, and in what sense it reflects an improvement. I also appreciate the clarification regarding the use of negative values for missing rhythmic conditions. However, since the rebuttal mentions that this design was motivated by empirical findings, I believe the paper would be strengthened by briefly including this evidence — for example, noting that removing this distinction results in a drop of X% in chord accuracy or a similar concrete metric. Overall, the revision plan seems promising and addresses many of the concerns I raised. I'd be happy to revisit my score and would feel more comfortable doing so if the revised content becomes available — even via an external link, as was done with the evaluation tables. ----- Edit Apr 6th. I have updated my grade, as I think that you did good work in this rebuttal, and the work should be accepted. That being said, I still think that OA is a vague metric, which does not support comparing your suggested approach against the baselines. As you mentioned yourself, higher is not necessarily better; hence I think this actually serves to confuse the reader rather than emphasize a point you wish to make. I would suggest you consider dropping this metric and using the IoU and accuracy measures alone. Best of luck with your submission. --- Reply to Comment 1.1.1: Comment: Thank you very much for your additional comments and suggestions. Please allow us to provide responses as follows: **1. Explanation about OA** Thank you for your insightful comment. The OA metric is designed to measure the degree of overlap between the distribution of key features in the generated outputs and those in the ground truth. 
A higher OA suggests that the structural patterns of the generated accompaniments better align with the patterns found in human-composed ground-truth accompaniments. In other words, it indicates that the model is more capable of producing a similar range and distribution of musical structures, rather than collapsing into a narrow subset of possibilities or generating unrealistic patterns. We agree with the reviewer that deviation from the ground-truth distribution is not always a negative outcome, especially in creative domains where novelty and innovation are highly valued. We do not claim that maximizing OA is always equivalent to improving artistic quality. Instead, we treat the ground truth as a "reference," and use OA as a complementary evaluation to assess whether the model maintains coverage of plausible structural features — an important aspect of generation quality alongside harmonic precision. We will revise the manuscript to better clarify the purpose, interpretation, and limitations of the OA metric. **2. The use of negative values for missing rhythmic conditions** Thank you for this helpful suggestion. We agree that providing empirical evidence would strengthen the paper. In our experiments, we found that removing the distinction for missing rhythmic conditions — that is, not using negative values — led to an 8%–15% decrease in chord accuracy across different evaluation settings. (For example, direct chord accuracy drops from 0.485 to 0.421, and chord similarity drops from 0.767 to 0.705.) This drop highlights the importance of explicitly encoding missing rhythmic information, as it helps the model better distinguish between different musical contexts. We will include this empirical observation, adding a note to make the motivation for this design choice clearer to readers. **Regarding the revision plan and revised content** We much appreciate the reviewer's recognition of our revision plan and the willingness to spend time reviewing the revised content. 
We would be happy to submit a revised paper, but unfortunately this year's policy only permits the external link to contain tables and figures. We consulted the AC about the possibility of sharing a revised paper and were advised that it is not allowed at this stage. Nevertheless, we want to assure the reviewer that we will carefully implement the planned revisions in the next phase of the process. Thank you again for the valuable comments.
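To make the negative-value design discussed in this thread concrete, here is a minimal sketch (the shapes, the -1 sentinel, and the helper name are my assumptions, not the authors' code) of a rhythm-condition channel that distinguishes "no rhythm control given" from "no onsets allowed":

```python
import numpy as np

TIME_STEPS, N_PITCHES = 64, 128  # 4 bars of 16th notes x MIDI pitches

def rhythm_condition(onset_steps=None):
    """Condition channel for rhythm guidance.

    If onset_steps is None (no rhythm control given), the channel is
    filled with -1: a distinct sentinel, so the model cannot read it as
    a genuine all-zeros mask meaning "no onsets allowed anywhere"."""
    cond = np.full((TIME_STEPS, N_PITCHES), -1.0, dtype=np.float32)
    if onset_steps is not None:
        cond[:] = 0.0
        cond[onset_steps, :] = 1.0  # onsets required at these steps
    return cond

no_control = rhythm_condition()                # all -1: unconditioned
downbeats = rhythm_condition([0, 16, 32, 48])  # onset on each bar's downbeat
```

With a binary-only encoding, the unconditioned case would be indistinguishable from a mask that forbids onsets everywhere, which matches the rebuttal's stated motivation.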
Summary: This paper introduces a Fine-Grained Guidance (FGG) approach for diffusion-based symbolic music generation, addressing precision and controllability challenges. FGG incorporates harmonic and rhythmic constraints during both training and sampling, ensuring generated music aligns with user intent. Theoretical analysis bounds the impact of guidance on learned distributions, while experiments demonstrate reduced out-of-key notes, improved chord similarity, and subjective quality. A demo showcases real-time interactive generation, highlighting practical applicability. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Basically yes. The hypotheses seem a little strong, but the empirical results and ablations support them. Experimental Designs Or Analyses: The experiments are sufficient for this paper. I am satisfied with the chosen baselines (GETMusic, WholeSongGen), where WholeSongGen is a strong baseline. The analysis is firmly organized around its hypothesis. It highlights the chord, pitch, and rhythm attributes, showing a significant result. I would be happy if more experiments could be conducted on OOD datasets, which would show the model's generalization ability in the wild. Note that a subjective study is provided in the appendix. Supplementary Material: I checked the supplementary material, mainly the subjective results, which support the hypothesis well. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: I believe the current references are sufficient under the topic of symbolic diffusion-based piano music generation. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer’s recognition of our work's theory, experiments, and the practicality shown by the interactive demo. Regarding OOD datasets, we unfortunately have not yet found a high-quality dataset other than the one we used. Alternatively, in the numerical experiments, the songs of POP909 dataset were split upfront into training and testing sets, to prevent data-leakage in the training process. Meanwhile, although there is not an appropriate OOD dataset for us to run a thorough experiment, we tried to use our model to generate stylized music pieces in section 2 of our demo page. Those styles did not appear in the POP909 dataset but the generated results seem satisfying, which hopefully serves as an indication of our model’s generalization ability. We would love to thank Reviewer 2Mem again for the comments and suggestions, which are very helpful to us. --- Rebuttal Comment 1.1: Comment: Thanks. I really acknowledge your continuous efforts on this paper. Good luck.
Summary: The two main motivators of this work are (1) the importance of providing user control for symbolic music generation, and (2) specifically the importance of *exact* pitch control, and the particular challenge of achieving this when using an image-based representation (i.e. piano roll). The authors solve this by proposing a conditioning-based approach that allows some harmonic and some rhythmic control. They introduce a piano roll representation of the conditioning signal, and they use these signals both at training time of a conditional diffusion model, and also as a mask-like representation of the constraints to provide further guidance during the sampling process. Essentially—if I have understood correctly—if the constraint says "don't use note X at time T", then at each step of denoising, the sample value of note X at time T is pushed toward zero. They provide a theoretical bound on how much this adjustment can affect the overall resulting distribution. The authors also provide a theoretical argument to try to explain why harmonic precision is hard for uncontrolled (i.e. unconditional) models. They provide experimental results using the POP909 dataset, with piano rolls of size 2x64x128, corresponding to 4 bars of 4/4 time quantized into 16th notes, with one channel for onset and one channel for sustain. They measure both objective (e.g. percentage of out-of-key notes) and subjective quantities. They present an ablation study where (1) the control signal is provided as a conditioning input, but "wrong" notes are removed only after completing the reverse-diffusion process; (2) one study where the control signal is again provided as input but no notes are removed; and (3) one study where an unconditional model is used and no control occurs during the sampling process. They find that providing the conditioning signal (i.e. 
training a conditional model) helps, and providing the conditioning signal together with interventions during the sampling process helps even more. Claims And Evidence: **NOTE**: Since this question requires me to "convert" the standard contributions-based representation into a claim-based representation (some of which are implicit in my reading of the paper), I invite the authors to restate their claims explicitly if I have misrepresented anything here. **Claim**: The authors provide statistical theory evidence to characterize "the precision challenge" in symbolic music generation. (e.g. see Section 4). **Assessment**: I think I like the provided reasoning/argument, in as much as I followed it. However, I found parts of the argument hard to follow (i.e. unclear), perhaps unnecessarily so (see "strengths/weaknesses" below). **Claim**: The proposed model allows future users precise control over harmonic and rhythmic components of symbolic music generation. (e.g. see Section 3, first few paragraphs). **Assessment**: This claim fails to highlight that of course it can only do this in as much as the conditioning representation can support the desired guidance. I think the conditioning representation is reasonable and good (it makes sense to represent constraints in a piano roll format!) but also it is quite limited. For example, it does not allow probabilistic constraints (as in the demo link earlier). As other examples, one could certainly imagine other kinds of rhythmic constraints as well, and there are also different ways of specifying harmonic constraints too. But one paper cannot provide all possible forms of control! **Easy Fix**: This could be fixed with a clear and explicit discussion of limitations, which I believe are significant. **Claim**: The proposed controlled diffusion model is capable of generating music with high accuracy and consistency with respect to harmonic and rhythmic guidance, despite limited training data. (e.g. 
see Section 1, ‘methodology’ and ‘effectiveness’ paragraphs). **Assessment**: I believe this claim is basically supported by clear evidence. **Claim**: The model supports this control even when the controls push the music towards an out-of-sample style. (e.g. see Section 1, ‘methodology’ paragraph). **Assessment**: This depends on what one means by “out-of-sample”. If one thinks of the training+sampling procedures as teaching+enforcing where to place or not place notes, based on the conditioning signal, then I think that providing a slightly different subset of notes (i.e. a different scale or chord from what is used in the training signal) is not necessarily an out-of-sample task. In particular, the adjusted sampling procedure will absolutely guarantee that only the “allowed” notes will be used (i.e. the others will be removed) so if the set of “allowed” notes is very far from anything seen in the training data, then the generated samples will still satisfy the constraints; the only remaining question is whether the generated samples sound good (and perhaps subquestions such as: how much did the conditional training alone help in those particular cases, versus how much was the sample-editing procedure required?). **Fix**: Either justify exactly what is meant by out-of-distribution, and answer my questions above, or remove/carefully-qualify this claim (at no great cost to the paper, in my opinion). **Claim**: “We have published a demo page to showcase performances, as one of the first in the symbolic music literature’s demo pages that enables real-time interactive generation.” [L 32-34]. **Assessment**: (1) “interactive”: The demo allows the user to switch between 4 presets (and regenerate multiple times for each one). This is good (far, far better than no demo!), but when I read about an interactive demo, I was excited to try inputting more complex chords and melodies myself, and melodies that include, e.g. 
less harmonically obvious notes, and see how the system sounds. I do think it’s OK to call it an interactive demo, though. (2) “one of the first in the symbolic music literature’s demo pages”: That reads to me like a significant exaggeration. For example, there are over 15 interactive musical demos on this page alone (https://magenta.tensorflow.org/demos/web/ , which includes [1]) many of which allow the user to input, e.g. melodies and constraints (i.e. more than just from a dropdown of 4 choices). An effective demo where the user provided melodic contour constraints in real-time was presented in [2]. There are many other online MIDI generation demos as well, e.g. a transformer model here: https://huggingface.co/spaces/skytnt/midi-composer, and many others are available as well. Incidentally, OpenAI’s MuseNet was available interactively as well a few years ago, although it’s not available anymore, so it’s understandable if the authors had not come across it. That said, I think that providing any kind of user interaction is important and commendable and in fact should generally be expected for generative models. The issue here is simply that the claim is incorrect and can be easily fixed. **[1]** Roberts, A. and Engel, J. and Hawthorne, C. and Simon, I. and Waite, E. and Oore, S. and Jaques, N. and Resnick, C. and Eck, D., “Interactive Musical Improvisation with Magenta”, NeurIPS 2016 (Demo). **[2]** Donahue, C, and Simon, I., and Dieleman, S., “Piano Genie”, IUI 2019 (see https://www.i-am.ai/piano-genie.html for an online demo) Methods And Evaluation Criteria: Yes, the proposed methods and evaluations do make sense, generally speaking, especially in relation to the conventions in this field. Also: No, the conventional evaluations do not necessarily make sense, but the current authors are not responsible for that. However, it would be helpful to mention potential limitations associated with the evaluation criteria and dataset. 
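As a side note for readers, the 2x64x128 piano-roll representation summarized at the top of this review (onset and sustain channels, 4 bars of 4/4 quantized to 16th notes) can be sketched roughly as follows; the helper name and the convention that the sustain channel covers the onset step are my assumptions, not details confirmed by the paper:

```python
import numpy as np

def to_piano_roll(notes, time_steps=64, n_pitches=128):
    """notes: list of (midi_pitch, start_step, duration_in_steps).
    Returns a (2, time_steps, n_pitches) array:
    channel 0 marks note onsets, channel 1 marks sustained steps."""
    roll = np.zeros((2, time_steps, n_pitches), dtype=np.float32)
    for pitch, start, dur in notes:
        roll[0, start, pitch] = 1.0              # onset
        roll[1, start:start + dur, pitch] = 1.0  # sustain (incl. onset step)
    return roll

# A C-major arpeggio of quarter notes (4 sixteenth-note steps each).
roll = to_piano_roll([(60, 0, 4), (64, 4, 4), (67, 8, 4)])
```

Separating onsets from sustains lets the representation distinguish one long note from several repeated short ones at the same pitch.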
I found the demo page to be very helpful. I would like to see more examples and information on that page (and/or in the appendix, whichever is easier): * to help make sure I understand how the accompaniment was generated for the sample melodies, I would like to see the constraint matrices (i.e. in “piano roll format”) that were used to generate those accompaniments. * it would be interesting and helpful to see some samples generated for each of the ablative conditions * for the sample melodies in the demo page, how were these melodies obtained? In particular, one of them sounds almost as though it was taken from an existing piece (from the dataset? ), and the melody was “extracted” by hand– is that the case, or is that just a coincidence? **Baselines**. The authors have essentially posed/re-framed the symbolic-control problem as an image in-painting problem, where certain pixels have constrained values. The in-filling literature for images is extensive, so I would expect there to be at least one baseline from that literature that would be reasonable to apply here. Could the authors please respond to this? Theoretical Claims: No; I looked at parts of the proofs, but I did not check them line-by-line. Experimental Designs Or Analyses: Yes, the experimental designs seemed fairly reasonable. For the ablations, the authors mentioned a comparison to a “simple rule-based post-sample editing”, but I don’t think I saw this in Table 2. Am I missing something? (unless this refers to the case where the *conditional*-trained model is still used, but the editing only happens once at the end, i.e. “training control edit after sampling”). I assumed “simple rule-based post-sample editing” meant an *unconditional* model, but with an edit applied after sampling. Now I’m thinking maybe it means a conditional model after all– clarification on this would be welcome. 
In 5.1.5, regarding the ablations, the authors write "In contrast, the latter employs a brute-force editing approach that disrupts the generated samples, affecting local melodic lines and rhythmic patterns. The numerical results further validate this analysis." However, as far as I can tell, the numbers in Row 1 and Row 2 of Table 2 look nearly identical. E.g. OA(pitch) is 0.628 +/- 0.005 versus 0.624 +/- 0.005. I do believe that the brute-force editing might disrupt the generated samples, affecting melodic lines, but to me, this absolutely demands qualitative listening samples because (if I've understood correctly) the numerical results do *not* validate this analysis. This also points to shortcomings in the standard approaches for evaluating this kind of work (see my earlier comment in the Methods & Evaluations section about limitations of evaluations). An ablation or comparison that would have been interesting is to simply zero out—at every step of the sampling process—those pixels/cells of the piano roll where there is not "supposed" to be a note. I wonder if this would capture the main benefits of the current proposed method, in that it would reduce the possibility of "wrong notes" directly at every iteration of the sampling process. I am not requesting that the authors do this, but if it's feasible, then I would certainly be very interested to see (and hear) the results of such a modified sampling process. I realize it might be theoretically less sound, but would still be an interesting comparison point. (Or, again, am I misunderstanding something?) Related, see Q2 below. Incidentally, another related comparison could be some equivalent of the 'MIDI scale' function that Ableton provides, i.e. just "round notes up/down" to the nearest allowable note that satisfies the constraints. Supplementary Material: I reviewed Section B.3 (including Algorithm 2 for DDPM sampling with fine-grained textural guidance), along with Fig 4, Sections C, D, E, H. 
I also skimmed through all the other sections of the supplementary material. Relation To Broader Scientific Literature: Some parts of the paper's motivation relate to challenges that are specific to diffusion on piano rolls, not to symbolic music generation in general. For example, try the demo at https://magenta.tensorflow.org/demos/performance_rnn/ to see that it is possible to use a simple language model to enforce certain precise harmonic controls easily and effectively (i.e. probability distribution over pitch classes). Of course language models have their own challenges that diffusion models don’t face. For a more detailed discussion and evaluation of conditioning MIDI-based language models with a variety of controls, see for example [3]. **[3]** Nicholas Meade, Nicholas Barreyre, Scott C Lowe, Sageev Oore, “Exploring Conditioning for Generative Music Systems with Human-Interpretable Controls”, Int’l Conf on Computational Creativity (ICCC) 2019 Essential References Not Discussed: Could optionally consider including any of the references mentioned above. **Image inpainting**. A significant area of related work is image inpainting/infilling, since a premise of this paper is to convert symbolic control into an infilling task. (See also my comment on Baselines in the section above on Experimental Designs.) Some of these papers could be discussed explicitly. Other Strengths And Weaknesses: I appreciated occasional well-articulated observations throughout the paper (e.g. in the introduction regarding common limitations, precision demands for music generation; also at the end of the appendix, etc). As one example: (Sec3): "One challenge of symbolic music generation involves the high-precision requirement in harmony. Unlike image generation, where a slightly misplaced pixel may not significantly affect the overall image quality, an `inaccurately' generated musical note can drastically disrupt the harmony, affecting the quality of a piece." Absolutely! 
In Section 4, the authors write: "We provide an intuitive explanation under the statistical convergence framework." Personally, I found the explanation, including Proposition 2, highly unintuitive (or at least unclear). Once I spent time parsing it, I did appreciate the argument (assuming I understood it correctly). Some intuition and clarity would help. Quantizing to 16th notes and {1,0} note indicators means ignoring both velocity (i.e. dynamics, rhythmic accents) and a large class of rhythms (e.g. triplets, swing, other groupings). Also, if I understood correctly, it requires the data to be "beat-aligned"; e.g. would this representation allow ingesting sophisticated but unquantized performance data which does not have barline/beat information (e.g. the MAESTRO dataset)? Quantization is OK in the sense that simplifications need to be made to get ML systems to work, but it can also be a significant limitation that needs to be addressed as such. How complicated would it be to "scale" up to incorporate some of these aspects? How much is lost by not incorporating these? Again, I do understand the need to simplify; it is simply important to be clear and thoughtful about the extent and impact of the simplification. **Lack of Limitations.** This paper is missing almost any discussion of limitations whatsoever. I would like to see such a discussion added, either in one place and/or throughout the paper as appropriate. **Overall**: This is an interesting paper and I look forward to seeing the authors' response and any discussion. Other Comments Or Suggestions: My current score is a placeholder. If my concerns are addressed then I will consider raising my mark. Questions For Authors: (See any questions above). Also: Q1. I am a bit confused about the Dorian mode generation. 
If harmonic constraints are implemented as “sets” of eligible notes, then how does this allow generation in specific modes that is any different from a major key? For example, G major consists of exactly the same set of notes as A dorian, B phrygian, etc. So in the demo page, which shows A dorian, how is this different from generating in G major? I do agree that this example has a bit of a dorian sound to it, but what is causing that? Presumably sometimes you would have just got something that clearly sounds in G major, right? Q2. Essentially, this problem appears to be an instance of inpainting where parts of the image are known and other parts need to be filled-in. This has been studied extensively. Why ensure Roll[time,pitch] <=0.5 rather than ensure Roll[time,pitch] == 0? I.e. why not explore an inpainting method to fill in the remainder of the piano-roll by grounding the Roll[time,pitch]==0 wherever required? Q3. Out of curiosity: In general, the data-distribution is normalized to [-1,1] before training a diffusion model. So, I was just wondering if the equations use [0,1] data for notational convenience or did the authors actually use [0,1] data for training? Code Of Conduct: Affirmed. Overall Recommendation: 3
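A schematic of the per-step sampling edit discussed in this review (disallowed cells pushed toward zero at every denoising step, cf. Q2's Roll[time, pitch] constraint) might look like the following. This is my illustration only: the toy denoiser stands in for the trained model, the clamp rule is a simplification, and none of it is the authors' Algorithm 2.

```python
import numpy as np

def sample_with_pitch_guidance(denoise_step, allowed, steps=50, shape=(64, 128), seed=0):
    """Reverse-diffusion loop with guidance, schematically: after every
    denoising step, cells at disallowed (time, pitch) positions are
    pushed back toward zero (positive values clamped), so no forbidden
    note survives to the final sample."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)
    for t in reversed(range(steps)):
        x = denoise_step(x, t)
        x = np.where(allowed, x, np.minimum(x, 0.0))  # edit intermediate output
    return x

# Allow only C-major pitch classes at every time step.
PCS_C_MAJOR = {0, 2, 4, 5, 7, 9, 11}
allowed = np.tile(np.array([p % 12 in PCS_C_MAJOR for p in range(128)]), (64, 1))

# Toy "denoiser" that shrinks toward a fixed positive target roll.
target = np.random.default_rng(1).uniform(0.2, 1.0, (64, 128))
sample = sample_with_pitch_guidance(lambda x, t: 0.9 * x + 0.1 * target, allowed)
```

Because the clamp is the last operation of the loop, every disallowed cell ends at or below zero regardless of what the denoiser proposed, which is the hard-guarantee property the review describes.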
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's suggestions on revising! Due to this year's policy, it is not allowed to upload a revised paper, external links can only contain figures/tables, and the rebuttal has a 5000-character limit. Please allow us to describe a revision plan in what follows; we promise to follow through on it in the next phase. **Responses** 1. To enforce Dorian, we restrict pitch classes to the Dorian scale, and additionally use the Am-Em-C-D chord progression for shaping. The combination of the two makes it different from G major. 2. We added a comparison with the inpainting baseline (see **4. Additional Experiments and Samples** below). We did not initially frame our methodology from an inpainting perspective. Generating with minor corrections looks more efficient to us, since a well-trained model aligns with chords, yielding only 2% incorrect notes. In contrast, inpainting adds complexity to the model, while much of the Roll[time,pitch]=0 in the input might be redundant. 3. We actually use [0,1] data for training. 4. "Simple rule-based post-sample editing" refers to "training control edit after sampling". **Revisions** **1. Additional related work.** We will add two sections, one on precise control over symbolic music generation with other generative models, the other on image inpainting. **2. Modified claims.** - We will explain that the user-specified control refers specifically to user-designed control within the constrained piano roll format. - We will remove the misleading word "out-of-sample" and replace it with the statement "our method can shape the output towards a specific tonal quality." - We will describe the demo page as "a demo page to showcase performances, which enables real-time generation", removing "one of the first". **3. Theory.** We have decided to remove the argument regarding the conditional probability $\widehat{P}(\bar{R}|O)$, which seems too intricate. 
Further, the introduction of proposition 2 (after the discussion of out-of-key notes and resolutions and before proposition 2) will be:

>We provide an explanation using statistical reasoning. Consider a piano roll segment, represented as a random variable $M$. Suppose we are interested in whether this segment contains an out-of-key note (denoted as event $\{O\}$) and whether that note is eventually resolved within the segment (denoted as event $\{R,O\}$). In our training data, almost every out-of-key note is resolved, meaning the probability of an unresolved out-of-key note is close to 0, i.e., $P(\bar{R},O)\approx 0$.

>Now, we examine the probability in the generated music. The key question then is whether the generative model also learns to keep $\widehat{P}(\bar{R},O)$ small. The following proposition 2... (same as in manuscript).

**4. Additional Experiments and Samples.**

In numerical experiments, we have added a baseline named **inpainting**, in which we treat the pixels where there should not be a note as known (value should be 0) and let the model inpaint the remaining parts. To do this, we add a mask to the model inputs and train an inpainting model. We also added **rounding the out-of-key notes to the nearest allowable note** in our ablation study. We also added more interpretable metrics. Results are shown in [https://drive.google.com/file/d/1IAcAqK4qK4AiQVKWriFSJ91QNhrg5at-/view](https://drive.google.com/file/d/1IAcAqK4qK4AiQVKWriFSJ91QNhrg5at-/view). Samples generated from ablation conditions are **added to Section 3 of the demo page**. Across all ablations, we observed occasional occurrences of excessively high-pitched notes and overly dense note clusters.

**5. Discussion of limitation:** The 16th-note quantization follows prior work (Wang et al., 2024), which admittedly reduces rhythmic flexibility and cannot ingest data without beat information.
A potential improvement is integrating our pitch class control method with (Huang et al., 2024), which adds a dynamic dimension and uses 10ms time quantization for greater rhythmic flexibility. Another key limitation is the control format. Our method supports pitch class and rhythmic control in the piano roll representation, but does not accommodate more abstract forms or probabilistic control. Additionally, the evaluation methods and datasets present challenges in accurately assessing generated music quality. Since music evaluation is inherently detailed and partly subjective, the metrics used in this work have fundamental limitations in measuring quality improvement.

**6. More explanations of the demo page**

In Section 2 of the demo page, the chord conditions are converted to condition matrices exactly following Figure 2 of the paper, the melody conditions are provided in an additional channel, also in the form of a piano roll, and we did not use a rhythmic constraint in the generation. We will add a section in the appendix to provide a description of the matrix. Sample melodies are either randomly picked from the test set of POP909 or extracted by hand from some of our favorite existing pieces.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their thorough rebuttal and for the additional experiments they have run and presented, and their discussion of limitations. I appreciate this effort, I find it helpful, and as I indicate below, I feel it improves the paper. **Based on the direction of their rebuttal so far, and assuming continued responsiveness, I am raising my score to a 3.** I am also assuming that the promised revisions will be made (unless I explicitly indicate otherwise, e.g. see point (4) below). My additional questions/comments are below:

1. **Dorian explanation**: OK, this almost made sense, thank you, but I have a followup question: "restrict pitch classes to Dorian" --> this makes sense.
"additionally use the Am-Em-C-D chord progression for shaping" --> is chord progression 'shaping' implemented differently from pitch class restrictions? Is this done with some additional conditioning? Or do you mean that you gave the Am/Em/C/D chord progression as restrictions in the piano roll, and that this naturally also limits the pitches to the Dorian, since each of those chords is inside (i.e. a subset of) the A dorian scale? I think all of this relates to the discussion in Sec 3.2 about $\mathcal{C}$ and $\mathcal{K}$, footnote (4), appendix D.1, etc. I had thought I understood fairly well what was happening, but now I am slightly puzzled. To help me understand, could you answer the following: note that C-major triad can be the $I$ chord of C major, or the $V$ chord of F major, and might be played differently in each of those cases. But specifying the scale is not as "specific" as specifying the chord. Exactly how do you specify both the scale (which implies the "wrong" notes) and the chord (which implies the "important" notes)? Please make sure that in the final version, you would clarify all of this somewhere (could be appendix), and refer to it appropriately, e.g. when describing the results on Dorian-mode control. *Last-minute edit*: Re-reading parts of the paper again, I think (?) I finally understood: The *training* process allows **chord**-conditioning at the input level (e.g. "focus on these 4 notes"), whereas the *sampling* process allows **scale**-constraints applied at the output (i.e. "set these out-of-key notes closer to zero at each reverse diffusion step"). *[2nd EDIT after posting: And critically, the harmony constraint matrix used as conditioning input does not need to be the same as the out-of-key constraint matrix used at sampling! and this is what footnote(4) was about?]* Are these [two] edits correct? If so, the paper will be stronger if this simple (but effective!) concept is presented more clearly. 2. 
**Comparisons.** If there are indeed distinct chord- and scale-conditioning mechanisms (that I didn't previously understand), then are the baseline comparisons still "fair"? They might be; I just would like to hear the author's view on this. E.g. if inpainting is a baseline for scale-conditioning, then shouldn't the inpainting also allow chord conditioning? (or maybe it already does..?) I should reiterate: I really appreciate that the authors added the in-filling baseline in the first place. Even though this baseline turns out to be relatively strong, to me this still strengthens the paper. 3. **Simplification?** Did the authors ever try simply zeroing out the constrained notes at every sample step, rather than gradually "reducing" them (i.e. using eq (8) in eq (2))? My guess would be that it would work almost exactly as well, and be simpler. Proposition 1 would risk seeming slightly less relevant, but it would still provide an interesting approximate theoretical justification. 4. **Regarding probability of out-of-key notes**: I want to clarify that I leave it completely up to the authors to decide what/how to include or not include on this point. Their proposed revision is good too. Like I said in the review: I did (truly) appreciate the argument, *once I parsed it*. I just got the sense there might be simpler ways to explain it. I.e. I don't want to risk weakening the paper by insisting that the authors remove an intuitive argument that they feel is a contribution in itself. No response needed on this point: I leave this entirely up to the authors in their final version. --- Reply to Comment 1.1.1: Comment: Thank you very much for your comments and suggestions. Please allow us to provide response as follows: 1. **Dorian explanation.** Yes, your two edits are absolutely right — the *chord* **input conditioning matrix** is distinct from the *out-of-key* **sampling constraint matrix**. The input conditioning matrix specifies the intended chord for each measure. 
Specifically, the chord is encoded into a matrix that highlights the chord tones — the pitch classes that constitute the given chord. In contrast, the sampling constraint matrix is designed in parallel with the conditioning matrix to help regulate the output. Let’s illustrate this with an example:

- Suppose we are generating two measures, the first in C major and the second in C minor. In the input conditioning matrix, the time span corresponding to the first measure [0, T/2) will highlight the chord tones of C major: C, E, and G. The second measure [T/2, T] will then highlight the chord tones of C minor: C, E♭, and G.
- For one plausible version of the sampling constraint matrix, the first measure allows all pitch classes in the C major scale (C, D, E, F, G, A, B), while suppressing pitches outside the scale: C♯, D♯, F♯, G♯, and A♯. The second measure, being in C minor, allows pitches in the C natural minor scale (C, D, E♭, F, G, A♭, B♭) and suppresses the out-of-scale tones C♯, E, F♯, A, and B.

The natural question that arises is: *how should the sampling constraint matrix be derived from the conditioning matrix?* This remains a very open design decision and can be chosen by the user depending on the musical goals. In our demonstration and experiments (except for the Dorian and Chinese style samples), we restrict the harmonic vocabulary to major, minor, and dominant seventh chords. The constraint matrix is then aligned as follows:

- A major chord is associated with the corresponding major scale,
- A minor chord with the natural minor scale,
- A dominant seventh chord with the major scale plus the minor seventh (e.g., for C7: C, D, E, F, G, A, B♭).

To explain, in this correspondence, we take the “key” (coming from the term "out-of-key") as the “*temporary* tonic key” implied by the current chord. It would be interesting to try inferring key constraints from consecutive chords!
As for the mode-specific samples, the sampling constraint matrix would be the *intersection* of the notes allowed by the temporary tonic key and the notes allowed in the style-specific scale. If the chord is A minor and the scale is A-Dorian, we allow pitches in the intersection of the A minor scale (A-B-C-D-E-F-G♯) and the A-Dorian scale (A-B-C-D-E-F♯-G), which is A-B-C-D-E. Thank you again for highlighting these points of confusion. We will revise the text to better distinguish between the chord conditioning matrix and the out-of-key sampling constraint matrix. Additionally, we will add an appendix section detailing how the Dorian samples were generated. Specifically, we will provide the chord conditioning matrix and the sampling constraint matrix.

2. **Comparisons.** The chord condition is also allowed for all the baselines (including WholeSongGen, GETMusic, the inpainting method, as well as those ablation studies with training control), so the comparison is relevant and “fair”. Specifically, for the inpainting baseline, we provide the model with both the chord condition and the scale condition (which serves as the mask for inpainting) as input.

3. **Simplification.** Thank you very much for your advice. We have now added an additional experiment where we zero out the constrained notes. Specifically, in each sampling step, we reset the value of the predicted $X_0$ at out-of-key positions to 0. The results are shown in [https://drive.google.com/file/d/1xMHxW0bNQivPocYgwf84aQOER0-wOSOc/view?usp=sharing](https://drive.google.com/file/d/1xMHxW0bNQivPocYgwf84aQOER0-wOSOc/view?usp=sharing). The results are close to those of our original method. Although this method does not reduce the computational cost much, we do agree that it is simpler in terms of formulation. In fact, “zeroing out” the constrained notes is also theoretically compatible with our framework, grounded in “projection”.
To explain, zeroing out can be viewed as “projecting the predicted $X_0$ to the set $\{0\}$” at the out-of-key positions, while our method is “projecting the predicted $X_0$ to the set $(-\infty,0.5]$”. We will add a discussion regarding this to our manuscript.

4. **Probability of out-of-key notes.** Thank you very much for your thoughtful consideration and explanation. We will organize the content according to space constraints and the overall readability of the paper. It might also be logically more coherent to first introduce $\widehat{P}(\bar{R}, O)$ and proposition 2, and then enhance our argument by discussing the conditional probability $\widehat{P}(\bar{R}|O)$.
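To make the distinction between the two matrices and the zeroing-out projection concrete, here is a minimal Python sketch for the two-measure C major / C minor example above. This is our own illustration, not the actual implementation: the array shape `(T, P)`, the octave-folded 12-pitch-class layout, and the function names `fill` and `project_zero` are all chosen here for exposition.

```python
# Minimal sketch (illustration only, not the paper's code) of the chord
# conditioning matrix vs. the out-of-key sampling constraint matrix,
# plus the "zeroing out" projection discussed in point 3.
import numpy as np

T, P = 32, 12  # time steps and octave-folded pitch classes (arbitrary sizes)

C_MAJOR_CHORD = [0, 4, 7]                    # C, E, G
C_MINOR_CHORD = [0, 3, 7]                    # C, Eb, G
C_MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]       # C, D, E, F, G, A, B
C_NAT_MINOR_SCALE = [0, 2, 3, 5, 7, 8, 10]   # C, D, Eb, F, G, Ab, Bb

def fill(mat, t0, t1, pitch_classes):
    """Mark the given pitch classes as active over the time span [t0, t1)."""
    mat[t0:t1, pitch_classes] = 1.0

# Conditioning matrix: highlights chord tones, measure by measure.
cond = np.zeros((T, P))
fill(cond, 0, T // 2, C_MAJOR_CHORD)
fill(cond, T // 2, T, C_MINOR_CHORD)

# Sampling constraint matrix: 1 where the temporary tonic key allows a pitch.
allow = np.zeros((T, P))
fill(allow, 0, T // 2, C_MAJOR_SCALE)
fill(allow, T // 2, T, C_NAT_MINOR_SCALE)

# "Zeroing out" variant (point 3): project the predicted X_0 to 0 at
# out-of-key positions in each reverse-diffusion step.
def project_zero(x0_pred, allow):
    return np.where(allow > 0, x0_pred, 0.0)

# Mode-specific case: intersect the temporary key with the style scale,
# e.g. the A minor scale listed above intersected with A-Dorian.
A_MINOR = {9, 11, 0, 2, 4, 5, 8}   # A, B, C, D, E, F, G# (as listed above)
A_DORIAN = {9, 11, 0, 2, 4, 6, 7}  # A, B, C, D, E, F#, G
print(sorted(A_MINOR & A_DORIAN))  # [0, 2, 4, 9, 11] = C, D, E, A, B
```

The intersection `[0, 2, 4, 9, 11]` contains exactly the pitch classes A, B, C, D, E, matching the reply above.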
Empowering World Models with Reflection for Embodied Video Prediction
Accept (poster)
Summary: This paper proposes a world model based on video prediction. The world model is specifically designed for embodied AI (manipulation, more specifically). The authors design a novel strategy, termed Reflection of Generation (RoG), to leverage VLM and video generation models to serve as a world model. Besides, this paper also introduces a benchmark to evaluate embodied world models. Basically this is a good submission, with reasonable methodology and extensive experiments. **Update after rebuttal**: I appreciate the authors' detailed response. The response has addressed most of my concerns, including the paper organization, video duration and open source. After reading the rebuttal and the comments from other reviewers, I choose to keep my original score, i.e., weak accept. Claims And Evidence: Yes, the claims are basically supported. Methods And Evaluation Criteria: Yes, this paper proposes a benchmark for evaluation. Theoretical Claims: Yes, I checked the algorithm of the paper, which is reasonable. Experimental Designs Or Analyses: Yes, I checked the benchmark design. Supplementary Material: Yes, I checked the supplementary video and the code. Relation To Broader Scientific Literature: The key contributions are related to the world model, which is a hot topic in embodied AI. Essential References Not Discussed: I do not find any. Other Strengths And Weaknesses: Strengths: 1. The RoG method is interesting and novel to me. 2. The inclusion of a benchmark is valuable for evaluating the method's performance. 3. The paper provides comprehensive experimental results, effectively demonstrating the method's effectiveness. Weaknesses: 1. I'm a little confused by the overall organization of the paper. The benchmark should be presented to evaluate the baseline methods and the proposed method. However, now the benchmark is presented in Sec.4, and the basic method is split across Sec.3 and Sec.5, which is strange. 
Besides, Sec.6 describes various details about the evaluation; I'm confused about the difference between this part and Sec.4 (and Appendix C). 2. The video prediction results are a bit short (only 2s for most videos). I'm curious about the results for longer prediction. 3. The video prediction results exhibit noticeable artifacts, e.g., "move brown chip bag near blue chip bag_frame_0_sample0.mp4" in the supplementary. Other Comments Or Suggestions: None Questions For Authors: 1. Will you open source the code? The supplementary code is "NOT RUNNABLE YET". Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response to Reviewer cisM We thank the reviewer for recognizing the novelty of RoG, the value of our benchmark, and the thoroughness of our experimental results. Your constructive feedback helps us further improve the clarity and rigor of the paper. --- ## 1. Paper Organization and Benchmark Presentation Thank you for pointing out the confusion regarding the structure of the paper. As our primary contribution lies in the design of a reasoning framework akin to Chain of Thought (CoT) [1], we initially organized the paper to mirror this inspiration. Specifically, we introduced the system framework (RoG) first, followed by the dataset and benchmark (Sec.4), and then the model (Sec.5), to reflect the flow of reasoning. However, we agree that the current structure may appear unconventional and potentially confusing. Upon your suggestion, we have revised the organization for better clarity and coherence. In the updated version, we will make the following changes: - Move Section 4.1 (Task Decomposition) to the end of Section 3. - Swap the order of Section 4 (Benchmark) and Section 5 (Model). The revised structure is now: - RoG Framework: Task Definition, Task Decomposition - Model: VLM, LLM, AR Generation - Dataset and Benchmark Design - Experiments We believe this reorganization better reflects the logical flow from problem formulation to model design, and then to evaluation. Furthermore, due to the complexity and scale of the foundation models used, we include extensive implementation, fine-tuning, and ablation details in the Appendix. --- ## 2. Short Video Duration in Prediction Results Thank you for raising this point. 
To address this concern, we have provided additional longer video prediction results (8+ seconds) on our anonymous project website under the section "Multi-Round RoG Longer Video Generation": https://sites.google.com/view/icml-eva#h.5u9bchnr2e41 These extended demos demonstrate the model's ability to perform multi-round robotic interaction tasks over longer horizons. We hope this provides a clearer view of the model's temporal reasoning and generation capabilities.

---

## 3. Visual Artifacts in Generated Videos

We appreciate your observation regarding visual artifacts. All evaluation cases were randomly selected to ensure fairness and reproducibility. While some artifacts are indeed present, such as in "move brown chip bag near blue chip bag_frame_0_sample0.mp4", we note that this challenge is not unique to our method. Object consistency in generation, especially for deformable objects (e.g., chip bags), remains an open and challenging problem in the video prediction community [2,3]. Our focus in this work is on generating coherent and temporally consistent robotic behaviors, especially in multi-step tasks. That said, we acknowledge the importance of improving object-level consistency. As part of our future work, we plan to:

- Introduce spatial consistency constraints during the understanding phase.
- Integrate additional consistency modules in the video generation phase, particularly for egocentric views (e.g., robot wrist cameras) to better capture fine-grained object interactions.

## Question: Open Source

Yes, we plan to open-source both the code and the benchmark data. The current supplementary code is marked as "NOT RUNNABLE YET" because it is still in the experimental stage and tightly coupled with our internal development environment. 
To ensure broader usability and reproducibility, we are actively working on: - Cleaning and refactoring the codebase - Packaging the runnable demo - Providing environment setup scripts and installation instructions - Writing a detailed user manual We are committed to releasing a public version shortly after the review process, in line with community standards for reproducibility. We thank the reviewer once again for highlighting this important area for improvement. --- # References [1] Wei J, Wang X, Schuurmans D, et al. Chain-of-thought prompting elicits reasoning in large language models[J]. Advances in neural information processing systems, 2022, 35: 24824-24837. [2] Ren, Weiming, et al. "Consisti2v: Enhancing visual consistency for image-to-video generation." arXiv preprint arXiv:2402.04324 (2024). [3] Xu, Ziyi, et al. "AnchorCrafter: Animate CyberAnchors Saling Your Products via Human-Object Interacting Video Generation." arXiv preprint arXiv:2411.17383 (2024). --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response. The response has addressed most of my concerns, including the paper organization, video duration and open source. I will keep my original score, i.e., weak accept. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful review and for highlighting the strengths of our work. Your suggestions have been particularly helpful in improving the clarity and organization of the paper. We are revising the manuscript accordingly and working on releasing a clean, runnable version of the code. We truly appreciate your feedback and support — it has definitely helped us improve the quality of our work. All authors
Summary: In this work, the authors propose a Reflection of Generation (RoG) solution to enhance video prediction in embodied scenarios. To achieve it, they further introduce an Embodied Video Anticipation Benchmark (EVA-Bench) and an Embodied Video Anticipator (EVA) model. Claims And Evidence: Most claims are supported by clear evidence. Methods And Evaluation Criteria: The methods and evaluation basically make sense for video prediction within the proposed RoG style. Theoretical Claims: No theoretical claims are introduced in the paper. Experimental Designs Or Analyses: The experiments are basically sufficient to support the design. Supplementary Material: I check the supp doc which contains some codes and videos. Relation To Broader Scientific Literature: The paper provides a possibility for unifying visual understanding and generation tasks. Essential References Not Discussed: The model style is similar to [Xiang, et al. Pandora: Towards General World Model with Natural Language Actions and Video States]. Please further make the comparison to show the key difference. Other Strengths And Weaknesses: There are three contributions including the RoG solution style, EVA-Bench, and the EVA model. However, such a style and model have been partially investigated in [Xiang, et al. Pandora: Towards General World Model with Natural Language Actions and Video States]. This would reduce the importance of the RoG and EVA contributions in this paper, especially since the model structure is similar to Pandora. Please clarify the key differences. Other Comments Or Suggestions: Please see the weakness. Questions For Authors: Please see the weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Rebuttal to Reviewer aEuy

We sincerely thank the reviewer for the insightful comments and for highlighting the relevance of Pandora. We also appreciate your recognition of the strengths of our work, such as “most claims are supported by clear evidence” and “the methods and evaluation basically make sense for video prediction.” Although both EVA and Pandora share the general goal of integrating vision and language for video generation, we differ substantially in purpose, methodology, and impact. We will explicitly include a comparison with Pandora in the revised manuscript. Below, we provide a point-by-point response to clarify the key differences between EVA and Pandora and why these distinctions are essential.

## 1. Conceptual Difference (RoG Solution Style vs. Pandora’s Approach):

The primary contribution of EVA is the Reflection of Generation (RoG) strategy, which introduces a structured thinking paradigm for unified understanding-generation tasks. RoG is analogous to Chain-of-Thought (CoT) reasoning in LLMs, enabling stepwise reasoning and decision-making in multimodal models. Unlike Pandora, which is primarily a multimodal-to-video generation model, RoG is a generalizable strategic framework that can be applied to any unified understanding-generation model, including Pandora or other LLM+VGM frameworks (Tab. 2 & 3; we also newly add an EVA(Qwen) version in https://openreview.net/forum?id=onumui0nHi&noteId=cunTZeC5Vq ). As shown in our experiments (e.g., Table 2), we demonstrate how RoG enhances reasoning capabilities across different models.

## 2. Benchmarking Contribution (EVA-Bench vs. Pandora’s Dataset):

EVA-Bench is specifically designed to evaluate both visual understanding and generation abilities in embodied scenarios, which include multiple text metrics (Tab. 1), video metrics like goal completion estimation (Tab. 2, L.1104, Fig. 12), and multimodal generation and reasoning metrics (Tab.). 
Pandora’s dataset, although it includes some robot-related videos, only has a visual evaluation. EVA-Bench incorporates manual caption adjustment, case separation, and task-specific design to ensure fine-grained evaluation. Additionally, in generation evaluation, we introduce goal-conditioned estimation metrics, which are crucial for benchmarking embodied AI tasks. This makes EVA-Bench distinct from Pandora’s evaluation setup.

## 3. Model Difference (EVA Model vs. Pandora’s Model):

While EVA and Pandora share the general idea of leveraging multimodal foundational models (ChatUniVi+Dynamicrafter), our approach emphasizes reasoning-driven generation rather than direct multimodal-to-video synthesis. Therefore, our training stages and alignment targets, including loss design, are different from Pandora's.

1. **Input-Output Difference (QA-Text&Video vs. Text-Video):** A key distinction between EVA and Pandora lies in input and output. EVA allows Question-Video input and produces Answer-Video as the response, while Pandora only accepts text instructions as input, and its text output simply copies the input instruction. This dual-modality output in EVA is crucial for complex reasoning tasks, as it allows the model to explicitly describe, explain, and refine its thought process before generating video sequences. By incorporating textual reasoning alongside video synthesis, EVA significantly enhances its ability to handle complex multi-step tasks, making it more suitable for embodied AI and decision-making applications.
2. **Differences in Training Data and Strategies:** As detailed in the Appendix, EVA employs a unique instruction-tuning approach for text-video interleaved generation. We carefully design training data that spans four meta-tasks under the RoG framework, ensuring comprehensive coverage of embodied task reasoning. During training, we first used mixed data to align the model, similar to Pandora. Then we performed one more step: QA instruction tuning. 
The second step differs from Pandora’s data construction, which does not explicitly incorporate interleaved text-video reasoning in the same structured way.
3. **Video-to-Action Module:** Pandora is a general video generation world model, while EVA also includes a base video-to-action head that can further translate video generation into robot actions (Table 5), connecting the generation model with the real world. With EVA, we can achieve end-to-end QA-to-robot manipulation tasks, which are also far different from Pandora (L. 450).

In summary, while EVA and Pandora share a broad vision, our work introduces a novel reasoning strategy (RoG), a dedicated embodied AI benchmark (EVA-Bench), and a structured training approach tailored for reasoning-enhanced generation. These distinctions ensure that EVA provides unique contributions beyond Pandora. We include the data and benchmark format on our anonymous pages for a more detailed and straightforward comparison. We appreciate the reviewer’s feedback and will further clarify these differences in the revised manuscript.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed feedback. It has addressed the main concerns. I keep my original rating of Weak Accept.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer aEuy, Thank you for acknowledging our rebuttal and for your thoughtful comments. Your suggestions were very helpful in clarifying our contributions, especially the comparison between EVA and Pandora. We appreciate your continued support and the maintained "Weak Accept" rating. We will reflect your feedback in the final version. Best regards, The Authors
Summary: This paper introduces Reflection of Generation, a set of reasoning strategies aimed at improving video generation models for multi-step predictions and Out-of-Distribution scenarios. To support RoG, the authors propose the Embodied Video Anticipation Benchmark to evaluate world models across diverse tasks, and they develop Embodied Video Anticipator , a model that leverages multistage training and autoregressive strategies for high-fidelity video generation. Experimental results demonstrate EVA's effectiveness in video prediction and robotics. Update after rebuttal: Thanks for the detailed rebuttal. I appreciate the authors' efforts in addressing my concerns and helping me better understand the work. Therefore, I will maintain my original rating Weak Accept. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A. Experimental Designs Or Analyses: Yes. See "Weaknesses" for issues. Supplementary Material: Yes. Almost all of them. Relation To Broader Scientific Literature: No. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength: 1. The proposed RoG strategy enhances video prediction by integrating intermediate reasoning steps, enabling self-correction and improved generalization in embodied scenarios. 2. The introduction of EVA-Bench provides a comprehensive and standardized evaluation framework for world models, assessing both understanding and generation capabilities across diverse in-domain and OOD embodied tasks. 3. Extensive experiments demonstrate the effectiveness of EVA in various downstream tasks. Weakness: 1. The paper does not provide a quantitative comparison of long video generation. How does EVA mitigate the potential issue of accumulated errors in long video generation using Autoregressive Frame Extension? 2. In Table 1, why does the In-Domain experiment only fine-tune ChatUniVi, while the stronger Qwen2 model is not included in the experiment? 3. 
In Table 3, have the baseline methods, LLAVA-O+EVA-Gen and Qwen2+EVA-Gen, been trained? If not, the comparison may not be fair. Minor typos: 1. Line 340: "ChatUniVi-loRA" should be corrected to "ChatUniVi-LoRA." 2. Table 3: Incorrectly bolded values. Other Comments Or Suggestions: See Weaknesses. Questions For Authors: Why is the task of generating future frames considered significant? Research in simulation environments or even on real-world machines may hold greater practical relevance. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Rebuttal to Reviewer wGih

We sincerely thank the reviewer for the thoughtful feedback and recognition of our contributions. Below, we address the key concerns raised:

## 1: Accumulated Errors in Long Video Generation

We appreciate the concern regarding error accumulation in long-horizon video generation with autoregressive frame extension. To mitigate this, we adopt the following strategies:

1. Randomized Start Frame Conditioning: During training, we randomly sample starting frames to improve generalization across temporal contexts.
2. Contextual Inference: During inference, while we typically extend from the last frame, we also support using the last 4–8 frames as context, which enhances temporal consistency.
3. Frame Stride (FS) Hyperparameter: We empirically adjust the FS to limit the final length, reducing the risk of error propagation on very long videos (>15s).

Robotic video generation poses greater challenges than generic video synthesis, due to the need for object consistency, goal completion, and temporal reasoning. Our proposed RoG mechanism, by incorporating intermediate reasoning from a VLM, adaptively selects high-quality generations and provides a self-corrective feedback loop. This significantly enhances long-term coherence without requiring frame-level supervision.

## 2: Why Only Fine-Tune ChatUniVi in Table 1 Instead of Qwen2

Thank you for this valuable observation. As shown in Table 3, ChatUniVi+EVA-Gen slightly outperforms Qwen2+EVA-Gen in terms of GCE, particularly in zero-shot settings. This motivated our decision to select ChatUniVi as the default model for fine-tuning in Table 1. Moreover, as discussed in Line 710, we aim to explore the benefits of fully pretraining a VLM before integration. In practice, ChatUniVi offers a more accessible codebase and better support for multi-stage tuning, thus making it more suitable for our current RoG-focused evaluation. 
We fine-tune Qwen2 directly on the EVA dataset. Without multi-stage alignment and mixture-data pretraining, Qwen2 full-parameter fine-tuning on 300k data (Qwen2-FP) or 50k data (Qwen2-FP-50k) overfits and shows suboptimal generalization (weak CLIP and GPT-4o scores). Therefore, we still recommend a multi-stage training strategy for stronger VLMs to fully leverage RoG’s potential.

|Model|In-Domain|BLEU-1 ↑|METEOR ↑|R-L ↑|CIDEr ↑|SPICE ↑|CLIP ↓|GPT-4o ↑|
|----|----|----|----|----|----|----|----|----|
|ChatUniVi| × |0.0969|0.0640|0.1497|0.0427|0.0636|27.49|9.03|
|Qwen2| × |0.2484|0.1434|0.3255|0.8914|0.2839|28.98|29.58|
|Qwen2-FP-50k| ✓ |0.4189|0.2424|0.5005|2.2336|0.4404|26.73|27.19|
|Qwen2-FP| ✓ |0.4833|0.2443|0.5129|2.7301|0.5363|26.39|24.12|
|ChatUniVi-LoRA| ✓ |0.3007|0.1054|0.3268|0.8245|0.2213|24.89|31.94|
|ChatUniVi-FP| ✓ |0.4105|0.1809|0.4416|1.9012|0.3414|25.36|38.46|

## 3: Baseline Training Status in Table 3

Thank you for pointing this out. We clarify that the baselines in Table 3 were trained to ensure fairness:

- Qwen2-FP + EVA-Gen, noted as EVA(Qwen), was fine-tuned on the same 300k downstream data as indicated in the method section.
- EVA(Qwen)-2Stage performs better than Qwen2: this supports our broader claim that RoG is a model-agnostic reasoning framework: different VLMs can benefit from being embedded into the RoG pipeline for robotic video generation.
- EVA is still SOTA: multi-stage alignments are important. 
|Task|Model|EVAS-L ↑|EVAS-V ↑|EVA-Score ↑|
|-----|-----|-----|-----|------|
|**HOW-TO**|EVA(Qwen)-2Stage|88.49|71.72|80.11|
| |Qwen2+EVA-Gen|41.54|69.34|55.44|
| |EVA-2Stage|85.5|73.32|79.42|
|**Next-Step**|EVA(Qwen)-2Stage|75.54|63.83|69.69|
| |Qwen2+EVA-Gen|42.99|60.11|51.55|
| |EVA-2Stage|73.02|64.46|68.74|

## Question: Why Video Generation?

Video generation enables us to take hypothetical actions without affecting the real environment, allowing low-cost trial and error [1]. Compared to simulators, video generation doesn't need detailed modeling. Unlike real-world testing, it's more scalable and cost-efficient. Therefore, it can offer a promising foundation for building scalable and generalizable robotic systems in the real world.

[1] Ding J, Zhang Y, Shang Y, et al. Understanding World or Predicting Future? A Comprehensive Survey of World Models[J]. arXiv preprint arXiv:2411.14499, 2024.

## Summary

Qwen2 is a stronger foundation model, and our supplementary results show that EVA based on Qwen2 also performs well. However, directly fine-tuning such large models can lead to catastrophic forgetting without multi-stage alignment. This does not affect our main claim: RoG is a model-agnostic framework that consistently improves temporal coherence and goal completion in robotic video generation. We also highlight the importance of hybrid data and staged training to fully unlock model potential. As more powerful VLMs emerge, we believe EVA combined with RoG will continue to advance the field of video prediction.
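To make the RoG generate-score-select loop described in this rebuttal concrete, here is a minimal Python sketch of per-step candidate selection; it is illustrative only, and `generate_candidates` and `vlm_score` are hypothetical mocks standing in for the video generator and the VLM-based scorer:

```python
import random

def vlm_score(clip):
    # Hypothetical stand-in for the VLM's goal-completion score of a clip;
    # here a "clip" is just a list of mock frame values.
    return sum(clip) / len(clip)

def generate_candidates(context, n_candidates=4, clip_len=8):
    # Hypothetical stand-in for the video generator: each candidate extends
    # the last 4 context frames with clip_len new (mock) frames.
    rng = random.Random(len(context))
    return [context[-4:] + [rng.random() for _ in range(clip_len)]
            for _ in range(n_candidates)]

def rog_extend(context, horizon=3):
    """Reflection-of-Generation loop (sketch): at each step, sample several
    continuations, let the VLM score them, and keep only the best one."""
    video = list(context)
    for _ in range(horizon):
        candidates = generate_candidates(video)
        best = max(candidates, key=vlm_score)  # self-corrective selection
        video.extend(best[4:])                 # append only the new frames
    return video

video = rog_extend([0.1, 0.2, 0.3, 0.4], horizon=3)
print(len(video))  # 4 context frames + 3 * 8 generated frames = 28
```

In EVA itself, the scoring step comes from the VLM's intermediate reasoning about goal completion rather than a scalar heuristic; the sketch only shows how selecting among candidates at every step can limit error accumulation over long horizons.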
Summary: This paper introduces Reflection of Generation (RoG), an intermediate reasoning strategy that enhances video prediction by combining pre-trained vision-language and video generation models as a world model. It also introduces the Embodied Video Anticipation Benchmark (EVA-Bench) to evaluate these models across diverse tasks and OOD settings. Based on RoG, the authors develop the Embodied Video Anticipator (EVA), a multistage-trained model that generates high-fidelity frames and adapts to long video sequences. Experiments show EVA's effectiveness in video generation and robotics, advancing large-scale pre-trained models for real-world applications.
Claims And Evidence: The paper provides extensive experimental validation for its proposed methods, including comparisons across multiple baselines on diverse benchmarks. The introduction of EVA-Bench strengthens the evaluation by systematically assessing both in-domain and out-of-distribution (OOD) performance. However, while the empirical results demonstrate improvements, some claims about the necessity of Reflection-of-Generation (RoG) for long-horizon video prediction could be better substantiated with ablation studies isolating its impact.
Methods And Evaluation Criteria: The proposed Reflection-of-Generation (RoG) approach aligns well with the challenge of making video prediction models more robust and adaptive in embodied scenarios. EVA-Bench provides a well-structured evaluation framework, covering multiple levels of task complexity, including goal completion and adaptive reasoning. While the benchmarks are relevant and diverse, additional real-world deployment results could further validate the applicability of the method beyond simulated environments.
Theoretical Claims: No theoretical claims are required for review in this work.
Experimental Designs Or Analyses: The experimental design is generally sound, with clear task decomposition and evaluation metrics, particularly in EVA-Bench, which assesses both understanding and generation. The comparisons against baselines are well-structured, demonstrating EVA’s advantages in long-horizon video prediction, but some evaluations, such as the role of RoG in performance gains, could benefit from stronger ablations. Supplementary Material: Yes Relation To Broader Scientific Literature: The paper builds on prior work in world models and video prediction, integrating ideas from pretrained vision-language models (VLMs) and autoregressive video generation to improve long-horizon consistency. Essential References Not Discussed: Not found Other Strengths And Weaknesses: Strengths 1. The Reflection-of-Generation mechanism is an innovative approach that introduces intermediate reasoning steps into video generation, allowing for self-correction and adaptive long-horizon forecasting. This is a meaningful departure from traditional video diffusion models, which often struggle with consistency over extended sequences. 2. The introduction of EVA-Bench provides a well-structured evaluation framework that assesses both in-domain and out-of-distribution generalization, addressing a critical gap in embodied AI evaluation. The decomposition of tasks into action description, task completion verification, and next-step prediction enhances the clarity of model assessment. 3. The EVA model demonstrates state-of-the-art performance across multiple video generation and robotics applications, showing robust transferability to real-world scenarios. By evaluating EVA on robot simulation and embodied interaction datasets, the paper strengthens its claims of applicability beyond purely synthetic settings. Weakness 1. While the paper presents quantitative improvements, it lacks fine-grained ablation studies isolating the contribution of RoG beyond basic comparisons. 
A more detailed breakdown of RoG’s impact (e.g., performance with vs. without intermediate reasoning steps) would better justify its necessity. 2. Some methodological details, particularly regarding the training setup for EVA and the specific architecture choices, could be better documented for reproducibility. While EVA-Bench is a strong contribution, it would benefit from open-source code or dataset access, enabling broader validation by the research community. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response to Reviewer 3Yfm

We thank the reviewer for their thoughtful and encouraging feedback. We are glad that the reviewer appreciated the novelty of our Reflection-of-Generation (RoG) mechanism, the well-structured EVA-Bench, and the demonstrated real-world applicability of our EVA model. Below, we address the two main concerns raised point by point:

---

## 1. Insufficient Ablation on the Reflection-of-Generation (RoG) Mechanism

We appreciate the reviewer's point regarding the need for more fine-grained ablation studies to isolate the contribution of RoG. To maintain clarity in the main paper, we had streamlined the ablation findings. In the revision, we will expand the experimental section with a dedicated subsection for RoG ablations. Below, we summarize the key findings:

1. **In the Finish-Thinking Video Generation task (Tab. 2)**, we conducted ablations by applying RoG using different vision-language models (VLMs) on the same fine-tuned generation model (EVA-Gen). The GCE scores consistently improve with RoG, even when using non-finetuned VLMs, demonstrating that RoG—by introducing intermediate reasoning—can effectively enhance task completion.
2. **In robotic task scenarios (Tab. 4 and 5)**, RoG-supervised generation significantly boosts task success rates:
   - In-domain tasks: While the baseline's performance is already strong, RoG shows benefits in longer-horizon tasks such as "place into", increasing the number of successful executions (Tab. 4).
   - Out-of-distribution (OOD) tasks: RoG leads to over 2x improvement in success cases compared to the baseline version without RoG (L.430). This supports our claim that RoG enables adaptive correction and robustness in unfamiliar scenarios.

These studies support the necessity and effectiveness of the RoG mechanism. We will explicitly restructure and expand the ablation section in the revision.

---

## 2.
Reproducibility and EVA-Bench Availability

Thank you for pointing this out. We fully agree with the reviewer that reproducibility and accessibility are crucial for community adoption and validation. **We will open-source the code and have uploaded a preview version during the rebuttal.** We include part of the data and some key code in an anonymous link: https://sites.google.com/view/icml-eva#h.w1phh0qv55ho

We have taken concrete steps to address these concerns as follows:
- We will reorganize the appendix for a clearer description and merge important details into the main paper. Additional information in the appendix includes: the model architecture in Sec. "A. Appendix: Model Architecture and Training" (L.660), the VGM and the fine-tuning method Ensemble-LoRA in Sec. A.3 (L.740), training details of the VLM in Sec. A.5 (L.815), and the detailed model architecture and hyperparameters in L.774 and Tab. 7. The dataset construction and most construction prompts are also included in Fig. 11 and Tabs. 12–18. As we noted in L.710, a VLM fully fine-tuned on mixture data performs better than one fine-tuned directly from a pretrained backbone; therefore, we chose DynamiCrafter and ChatUniVi, since they have good open-source codebases with comprehensive scripts for multi-stage training.

Regarding EVA-Bench, we commit to releasing:
- A subset of curated data examples on our project page
- Data processing scripts and annotations
- Download instructions for third-party datasets used (subject to licensing constraints)

As noted in L.817, we outlined multiple stages in the experimental pipeline, but we agree that more granularity is needed. We will expand this section to provide clearer information for the research community.

---

We again thank the reviewer for the insightful feedback, which has helped strengthen the paper.

---

Rebuttal Comment 1.1: Comment: My concerns are well addressed. Therefore I will keep my original rating and recommend this work for acceptance.
---

Reply to Comment 1.1.1: Comment: Dear Reviewer 3Yfm,

We sincerely thank you for carefully reading our response and for the constructive initial feedback. We are glad that the clarifications regarding the RoG ablation study and EVA-Bench reproducibility addressed your concerns. **We appreciate your continued recommendation for acceptance** and your recognition of the contributions of our work. Your feedback has been very helpful in improving the clarity and rigor of the paper. Thank you again for your time and support!

All Authors
Enhancing Pruned Models by Input Compensation
Reject
Summary: This paper proposes a method called input compensation (IC) for enhancing pruned models by adjusting the input to compensate for the removed weights. IC is designed in the input space and is orthogonal to existing pruning methods designed in the parameter space. Empirically, IC can be combined with existing pruning methods.
Claims And Evidence: yes.
Methods And Evaluation Criteria: yes.
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: In Section 5 of the paper, there is relevant experimental analysis provided.
Supplementary Material: Yes, it is located in the Appendix section of the paper.
Relation To Broader Scientific Literature: The primary contribution of this paper lies in the proposal of a novel method that can be integrated with existing pruning techniques. This method demonstrates the capability to enhance the performance of pruning approaches, particularly under conditions of high sparsity.
Essential References Not Discussed: The paper discusses that input compensation in the field of pruning is relatively novel. However, should there not also be a discussion on research regarding input compensation outside the realm of pruning, at least from a methodological perspective?
Other Strengths And Weaknesses: 1. The methodology presented in this article demonstrates input compensation applied to a model that has already been pruned. However, is it feasible to concurrently train the required K and V for "input compensation" during the pruning process itself? This aspect warrants a thorough discussion both methodologically and experimentally. 2. Outside the domain of pruning, should there not also be a discourse on the research pertaining to input compensation, at least from a methodological standpoint? 3. While we observe significant performance enhancements due to input compensation, does it also contribute to an improvement in the degree of sparsity? This is an area that merits further investigation.
Other Comments Or Suggestions: The schematic diagram of the methodology (Figure 1) appears to be somewhat oversimplified and would benefit from a more detailed representation. Questions For Authors: Please see the "Other Strengths And Weaknesses" part. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer QATu,

We sincerely thank you for your positive rating, thoughtful review, and valuable suggestions that have helped us improve our paper. We have carefully addressed your concerns as follows. If you have any other concerns or questions, please let us know. We are more than happy to address them and further improve our work.

Best, Authors

---

> **Q1.**
> Discussion on research regarding input compensation outside the realm of pruning

**A1.** Thank you for this insightful suggestion. Input compensation (IC) has been studied in other domains, particularly in control systems and signal processing.
- **Control Systems**: As discussed in the last paragraph of Related Work, IC is a well-established technique in control theory [1-2] to adjust control signals for reducing the influence of potential disturbances.
- **Signal Processing**: IC is also relevant to pre-emphasis in signal processing [3], which modifies input signals to counteract the effects of noise and attenuation that can occur in communication channels, especially in analog transmission systems.

We will discuss these connections more explicitly in our revised paper, which will help us better position our contribution within the landscape of input compensation techniques across different fields.

**References**
[1] Automatic control systems. 1995.
[2] Feedback control of dynamic systems. 2002.
[3] Pre-emphasis and speech recognition. 1995.

---

> **Q2.**
> However, is it feasible to concurrently train the required K and V for "input compensation" during the pruning process itself? This aspect warrants a thorough discussion both methodologically and experimentally.

**A2.** Thank you for your insightful suggestion. We conducted an ablation study (using ten image classification tasks and the CLIP ViT-B/32 model, as in Section 5.1) to explore this variant of IC. As pruning is a non-differentiable process, IC cannot be trained jointly with the pruning process directly.
We instead study the performance of jointly training $(K,V)$ with the retained weights after pruning. In our study, we compare two approaches:
- **Pruning $\to$ IC (separate training)**, where we first train the retained weights and then learn the $(K,V)$.
- **Pruning $+$ IC (joint training)**, where we jointly train the retained weights and $(K,V)$.

Table R1 (https://www.dropbox.com/scl/fi/n3qcoi1ckwz4x2mlw0vbk/results.pdf?rlkey=8mt0tzz1f1ikzaa1hzuzns0lc&st=ldzypniu&dl=0) shows that **joint training consistently surpasses separate training** across different sparsity patterns. This finding underscores the effectiveness of the joint training strategy, which allows for more cohesive optimization, leading to enhanced performance.

---

> **Q3.**
> a discourse on the research pertaining to input compensation

**A3.** See our reply to Q1.

---

> **Q4.**
> does IC also contribute to an improvement in the degree of sparsity?

**A4.** Thank you for your insightful question. IC not only enhances performance at a given sparsity level but can also **enable existing pruning methods to achieve higher sparsity without significant performance degradation.** As shown in Figure 3, Magnitude+IC with 60% sparsity performs better than Magnitude with 50% sparsity, while Magnitude+IC with 50% sparsity performs comparably to Magnitude with 40% sparsity. This means IC effectively allows an additional 10% of weights to be pruned while still maintaining higher/comparable performance.

Figure 3 also indicates that **IC improves the tradeoff curve between sparsity and performance**. For any target performance level, a model with IC can achieve that performance at a higher sparsity level than without IC. By enabling higher effective sparsity, IC allows practitioners to deploy smaller models without sacrificing as much performance as traditional pruning methods would require. We will add the above discussion to the revised paper.
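To illustrate the $(K,V)$ compensation pool discussed in Q2, here is a minimal numeric sketch of query-dependent input compensation via attention. The dimensions are toy-sized, an identity function stands in for the pre-trained encoder, and the pool entries are fixed by hand rather than learned, so this is a mechanism sketch, not the actual IC implementation:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

def input_compensation(x, encode, pool_K, pool_V):
    """Toy sketch: encode the input into a query, attend over a (K, V)
    compensation pool, and add the attention-weighted values to the input."""
    q = encode(x)                                    # query from the encoder
    scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in pool_K]
    w = softmax(scores)                              # attention weights over the pool
    delta = [sum(w[j] * pool_V[j][d] for j in range(len(pool_V)))
             for d in range(len(x))]                 # compensation Delta_x
    return [xi + di for xi, di in zip(x, delta)]     # compensated input x + Delta_x

# Toy setting: identity "encoder", a pool with two (key, value) entries.
encode = lambda v: v
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[0.5, -0.5], [-0.5, 0.5]]
x_comp = input_compensation([2.0, 0.0], encode, K, V)
print([round(v, 3) for v in x_comp])  # -> [2.381, -0.381]
```

In the actual method the query comes from a pre-trained encoder and $(K,V)$ are trained on calibration data; the sketch only shows how attention weights turn a query into an additive input adjustment.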
---

> **Q5.**
> improve the diagram of the methodology (Figure 1)

**A5.** Thank you for this valuable suggestion. We have created a more comprehensive diagram (https://www.dropbox.com/scl/fi/n3qcoi1ckwz4x2mlw0vbk/results.pdf?rlkey=8mt0tzz1f1ikzaa1hzuzns0lc&st=ldzypniu&dl=0) with the following enhancements:
1. **Detailed Attention Mechanism**: The revised figure explicitly shows how the query-dependent compensation is generated from the query and how it interacts with the key-value pairs in the compensation pool through the attention mechanism.
2. **Encoder is part of the pruned model**: The revised figure explicitly shows that the encoder is part of the pruned model.

This enhanced diagram will be included in the revised paper to provide readers with a more intuitive understanding of our IC methodology.

---

Rebuttal Comment 1.1: Comment: Thank you for the authors' response. I will maintain my current score.

---

Reply to Comment 1.1.1: Comment: Thank you for your follow-up comment and for keeping your score positive. We are glad that our response has resolved your initial concerns.
Summary: The paper proposes an input compensation approach for pruning, which reformulates weight tuning as adaptive input modifications. Specifically, the method begins with the dual problem of weight compensation and approximates input compensation using a pre-trained encoder and attention-based computations. Experimental results demonstrate improvements over existing pruning methods across multiple tasks.
Claims And Evidence: 1. The claim regarding efficiency should be reconsidered. In the first paragraph of the Introduction, the authors briefly discuss existing efficiency-focused methods, including distillation, quantization, and pruning. However, pruning itself often requires parameter tuning and can be hardware-unfriendly, particularly in the case of unstructured pruning [1]. Therefore, this argument should be presented with greater caution. 2. The assertion that the proposed method is orthogonal to existing approaches may not be entirely accurate, especially when considering the approximation target. See the Methods and Evaluation Criteria part for more details. [1] Fang, Gongfan, et al. "Depgraph: Towards any structural pruning." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023.
Methods And Evaluation Criteria: The rationale behind the effectiveness of input compensation remains unclear. From the perspective of pruning error, input compensation is the dual of weight compensation, meaning both address the same error. In the linear case, they are theoretically equivalent. However, there is no evidence demonstrating why input compensation performs better or how it aids weight compensation—whether due to optimization challenges or other factors. Furthermore, when extending input compensation to nonlinear models, approximation errors arise. Given this, input compensation does not appear to be truly "orthogonal" to existing methods but rather a variant that modifies the optimization parameters.
Theoretical Claims: No theoretical claim. Experimental Designs Or Analyses: ## Strength ## 1. The results on language tasks are strong. Since the baselines are specifically designed for LLMs, the comparison appears fair and demonstrates that the proposed method can sometimes outperform weight compensation empirically. ## Weakness ## 1. Since input compensation is a variant of weight compensation, the authors should provide an error analysis for pruning to justify its effectiveness, rather than solely reporting accuracy improvements. 2. The experiments focus on computer vision tasks (image classification and generation), yet the baselines used for comparison—SparseGPT and Wanda—are designed for language models. Given that weight and activation distributions differ across domains, the optimal hyperparameters for these baselines may also vary. This discrepancy raises concerns about the fairness of the comparisons, as the baselines may not be evaluated under their best settings. 3. In the image classification tasks, the CLIP image encoder is used as the encoder in Figure 1. Although it is pruned, the associated computational budget should not be overlooked when compared to standard pruning methods. The authors should provide more details on this aspect to clarify its impact. Supplementary Material: Briefly reviewed. Relation To Broader Scientific Literature: None Essential References Not Discussed: None Other Strengths And Weaknesses: ## Strength ## The paper is well-written, with a clear presentation of the motivation, methodology, and experiments, making it easy to follow. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer aNPP, Thank you for your time and effort in reviewing our paper. We have carefully addressed your concerns and hope you are satisfied with our responses. **If you have any further questions/concerns, please let us know** and we are more than happy to address them. Best, Authors --- > **Q1.** > The claim regarding efficiency should be reconsidered. **A1.** Thank you for this important point. We agree that pruning itself has limitations: Though pruning (especially unstructured pruning) reduces parameter count, it may not always improve hardware efficiency since some general-purpose devices cannot efficiently process sparse computations. We will do the following to improve the writing: - Remove the sentence "Quantization requires specialized hardware support, while distillation requires extensive retraining". - Provide a more balanced discussion of different efficiency-focused methods. - Add a limitation section to discuss the hardware compatibility issues. We believe these revisions will present a more accurate view of efficiency-focused methods. --- > **Q2.** > The assertion that the proposed method is orthogonal to existing approaches may not be entirely accurate > >The rationale behind the effectiveness of input compensation remains unclear. > >input compensation does not appear to be truly "orthogonal" to existing methods but rather a variant that modifies the optimization parameters. > > Since input compensation is a variant of weight compensation, the authors should provide an error analysis for pruning to justify its effectiveness, rather than solely reporting accuracy improvements. **A2.** We believe there is a **misunderstanding regarding orthogonality**. When we say IC is "orthogonal" to existing pruning methods, we mean: - **Different Spaces**: IC operates in the input space by adjusting inputs, while existing methods based on weight compensation operate in the parameter space by adjusting weights. 
- **Complementary Approaches**: Because they work in different spaces, IC can be combined with existing pruning methods (which are based on weight compensation) to improve their performance.

Hence, **IC is not a variant of weight compensation.**

**Testing Accuracy is a Reasonable Metric.** Note that both IC and weight compensation fundamentally aim to minimize the output deviation caused by weight removal. Testing accuracy directly quantifies this deviation and is the standard evaluation metric in the pruning literature. Hence, we believe testing accuracy is a reasonable metric to reflect the pruning performance.

**Effectiveness of IC.** Since IC and weight compensation are complementary approaches, the effectiveness of IC can be verified by comparing "weight compensation + IC" against "weight compensation alone." Our extensive experiments (Tables 1-5) consistently show that SparseGPT+IC outperforms SparseGPT (a weight compensation method) across various sparsity patterns and tasks, confirming the effectiveness of IC.

---

> **Q3.**
> fairness of the comparisons

**A3.** Thank you for this concern. We would like to argue that the comparison is fair for the following reasons:
- **Careful Hyperparameter Tuning**: (i) Magnitude and Wanda have no hyperparameters. (ii) We have carefully tuned the hyperparameters of SparseGPT specifically for the vision models. Indeed, the performance of SparseGPT is insensitive to its hyperparameters (Hessian dampening and mask-selection block size).
- **Wanda and SparseGPT are Competitive**: Note that our CV experiments are based on ViT models, which are transformers like LLMs. Hence, Wanda and SparseGPT, which were initially designed for LLMs, are still SOTA baselines for CV tasks. As shown in Tables 1 and 2, Wanda and SparseGPT achieve much higher accuracy than Magnitude Pruning and are thus very competitive.
- **Comprehensive Evaluation Across Domains**: Besides CV tasks, we also extensively evaluated our method on NLP tasks (Tables 3 and 4), which shows that IC consistently improves performance across both CV and NLP domains. In the revised paper, we will include the above discussion to address this concern. --- > **Q4.** > computation cost of the encoder. **A4.** Thank you for this question. We have discussed the computation cost in our submission (last paragraph, Page 7). Our method **increases FLOPs by only 1%** (from 305G to 309G) compared to existing pruning methods. For a detailed breakdown, we provide Table R1 (https://www.dropbox.com/scl/fi/le76r4j7kdq9ixvzzd0rk/results_flops.pdf?rlkey=75b98wu5xaoe3hkqy9mnik6ut&st=cy9k8676&dl=0) to compare Magnitude and Magnitude+IC FLOPs by module. **Given the substantial performance improvements (up to 33.9% accuracy increase in Table 1), this minor computational overhead from constructing compensation is worthwhile**. We will include this detailed analysis in the revised paper. --- Rebuttal Comment 1.1: Comment: Thank you for the authors’ response. I still have two concerns: 1. Regarding orthogonality, while I agree that the proposed method can be combined with weight compensation, if both techniques address the same source of error, it is important to include experiments demonstrating how input compensation and weight compensation interact to reduce errors. Specifically, I would like to see some analysis of pruning error—even on simpler models—to better understand the contribution of each component. 2. The table provided indicates that language processing is the primary computational bottleneck compared to the vision model. However, for image classification tasks, prompts are typically short, and in many cases, they can be cached, even if prompt processing is expensive. Could the authors provide more details about how the computational evaluation was conducted? 
As these concerns remain unresolved, I will maintain my current rating. --- Update: Thanks for the reply. I will raise my rating to 3. --- Reply to Comment 1.1.1: Comment: Thank you for your further comments. We address your concerns as follows. --- > **Q5.** > ... include experiments demonstrating how input compensation and weight compensation interact to reduce errors. ... some analysis of pruning error. **A5.** We understand "pruning error" to refer to the discrepancy between the output of the dense model $\mathcal{F}(\cdot; W)$ and the pruned model $\mathcal{F}(\cdot; \hat{W})$, rather than the parameter distance $\\|W-\hat{W}\\|^2$. This is because minimizing the output error is the ultimate goal of pruning methods, while minimizing $\\| W-\hat{W}\\|^2$ can be achieved through simple Magnitude Pruning. (Remarks: If we have misunderstood the definition of the pruning error, please let us know.) To address this concern, we analyzed the **KL divergence** $\text{KL}(\mathcal{F}(\cdot; W) \\| \mathcal{F}(\cdot; \hat{W}) )$ between the output probability distributions of the dense model and the pruned model for the image classification task using ViT-B/32. As shown in Table R4 (https://www.dropbox.com/scl/fi/le76r4j7kdq9ixvzzd0rk/results_flops.pdf?rlkey=75b98wu5xaoe3hkqy9mnik6ut&st=cy9k8676&dl=0), our analysis reveals two insights: - **IC provides consistent benefits across methods**: IC consistently reduces KL divergence for existing methods across different sparsity patterns, demonstrating the effectiveness of IC. - **Weight and input compensation are complementary**: The combination of weight compensation (SparseGPT) and IC achieves the lowest KL divergence, showing that weight compensation and IC are complementary. --- > **Q6.** > for image classification tasks, prompts are typically short, and in many cases, they can be cached, even if prompt processing is expensive. Could the authors provide more details about how the computational evaluation was conducted? 
**A6.** Thank you for this insightful question. We address this concern from two perspectives. ### **(1) Prompt caching considerations in CLIP models** We clarify how a CLIP model predicts image classes. 1. Given an image $x$ (for IC, $x\gets x + \Delta_x$), an image encoder $\mathcal{T}\_{\text{image}}$ computes its image embedding $\mathbf{e}\_x = \mathcal{T}\_{\text{image}}(x)$. 2. For $N$ classes with prompts $\\{\mathbf{p}\_i\\}\_{i=1}^N$, where $\mathbf{p}\_i=\text{``This is a photo of a \\{the i-th class-name\\}''}$ is the prompt for the $i$-th class. A text encoder $\mathcal{T}\_{\text{text}}$ computes text embeddings $\mathbf{t}\_i=\mathcal{T}\_{\text{text}}(\mathbf{p}\_i), i=1,\dots, N$. 3. The model predicts the class with highest cosine similarity $\frac{\mathbf{e}_x^\top \mathbf{t}_i}{\\|\mathbf{e}_x\\|\\|\mathbf{t}_i\\|}, i=1,\dots, N$. All computational costs are measured by the `FlopAnalyzer` in MMEngine (https://mmengine.readthedocs.io/en/latest/api/generated/mmengine.analysis.FlopAnalyzer.html). **We agree that for query-independent prompts, text embeddings at step 2 can be cached.** **However, caching is infeasible for the query-dependent prompts**, which contain query-specific information (e.g., $\mathbf{p}_i^{(x)}=[v_1(x), \dots, v_k(x), \text{This is a photo of a \\{the i-th class name\\}}]$, where $v_1(x), \dots, v_k(x)$ are discrete/continuous tokens depend on $x$). These dynamic prompts have demonstrated superior performance over query-independent prompts in recent work [1-3]. ### **(2) Lightweight encoder implementation for IC** To further address this concern, we demonstrated that IC can be implemented with minimal computational overhead by reusing a very lightweight submodule of the pruned model: - **For image classification tasks**: We conducted an additional experiment by using only the **first convolutional layer of the image encoder of CLIP-ViT-B/32** as the encoder $\mathcal{E}$ of IC (denoted by "IC (conv1)"). 
As shown in Tables R1 and R2 (attached in the above anonymous link), compared with Magnitude, Magnitude+IC (conv1) incurs just **0.1G FLOPs (only 0.03% increase)** while improving accuracy by 68.6% (from 37.3% to 62.9%). - **For the language modeling tasks** in Section 5.2: We have adopted only **the input embedding layer of the language model** as the encoder (Line 761). Table R3 (attached in the above anonymous link) shows that IC adds just **0.75% computational overhead** while reducing the perplexity significantly by more than 5.5 points (Table 3). These results demonstrate that IC can achieve large performance improvements with almost no additional computational cost, making it highly practical for real-world applications. If you have any further questions/concerns, please **update the previous comment** to let us know and we are more than happy to address them. --- **References** [1] Conditional Prompt Learning for Vision-Language Models. CVPR 2022 [2] Learning to Prompt for Vision-Language Models. IJCV 2022 [3] Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts. ICCV 2023 Workshop --- Update: Thank you for raising the score!
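As a numeric illustration of step 3 of the CLIP prediction procedure described in A6 (the embeddings below are toy values, not real CLIP outputs; in practice the image and text embeddings come from the CLIP encoders, and text embeddings can be cached when prompts are query-independent):

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def clip_predict(image_embedding, text_embeddings):
    """Pick the class whose text embedding has the highest cosine
    similarity to the image embedding (step 3 of the procedure)."""
    sims = [cosine(image_embedding, t) for t in text_embeddings]
    return max(range(len(sims)), key=sims.__getitem__)

# Toy embeddings for three classes; the image embedding is closest to class 1.
e_x = [0.1, 0.9]
T = [[1.0, 0.0], [0.2, 1.0], [-1.0, 0.0]]
print(clip_predict(e_x, T))  # -> 1
```

Under IC, only `e_x` changes (it is computed from the compensated input $x + \Delta_x$), so the per-query cost added on top of standard prediction is the compensation itself, which is why the FLOPs overhead reported above stays small.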
Summary: The work, Enhancing Pruned Models by Input Compensation, proposes a new fine-tuning method where, instead of compensating the retained parameters in compressed neural network models, the work introduces input compensation, adjusting inputs to compensate for the removed parameters and fine-tune the pruned models for better classification performance. In particular, the authors introduce a framework with a pre-trained encoder and a learnable compensation pool to learn input compensation. Once the compensation pool is trained, it can be used to fine-tune the pruned models to improve classification performance.

Claims And Evidence: The work provides convincing and supportive framework figures, mathematical equations, and a model-training algorithm to support its claims. This work aims to determine an input compensation such that the output of pruned models is close to the output of the corresponding dense model.

Methods And Evaluation Criteria: The proposed method addresses a central research problem in model compression. Instead of fine-tuning the retained parameters in the pruned model, this work aims to learn input compensation to reduce the performance gap between the pruned and dense models with the proposed framework. The evaluation criteria are clear enough. The authors evaluate the proposed input compensation framework on different datasets and network architectures, including foundation models. Therefore, the evaluation datasets are good enough for this work.

Theoretical Claims: The work does not provide theoretical claims as evidence for the proposed method.

Experimental Designs Or Analyses: The authors evaluate the proposed framework and other existing frameworks on multiple standard image classification benchmarks such as CIFAR-10, CIFAR-100, and SUN. Therefore, the experimental designs are convincing. Additionally, the experimentation includes varying ablation studies to empirically prove the success of the proposed input compensation framework.
Supplementary Material: I have read the supplementary material in this manuscript. In particular, one section in the supplementary material performs experiments with and without input compensation for pruned models under different sparsities. This experiment shows the effectiveness of the proposed framework for the pruned models.

Relation To Broader Scientific Literature: This work's key contributions are highly relevant to prior scientific findings in efficient deep neural networks and neural network architecture optimization for edge device applications. The proposed framework addresses an essential research problem - fine-tuning the pruned model effectively and efficiently - in model compression-related research.

Essential References Not Discussed: The work mentions essential model compression techniques, including pruning, quantization, and knowledge distillation. The work also cites references on prompting for transformer-based models. Finally, the work describes the background of input compensation in control systems; inspired by it, the authors leverage this idea for model compression.

Other Strengths And Weaknesses:

Strengths:
1. The paper is well-written. Readers can easily understand the proposed framework and its differences from the others.
2. The proposed framework could be used for other network architectures, such as transformer-based models.

Weaknesses:
1. It would be nice if the authors could provide theoretical claims for the proposed framework.
2. It would be nice if the authors could test their method on large-scale image classification datasets.

Other Comments Or Suggestions: No extra comments and suggestions for this work.

Questions For Authors: I do not have other questions for the authors.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Dear Reviewer MNqZ,

We sincerely thank you for your positive rating, thoughtful review, and valuable suggestions that have helped us improve our paper. We have carefully addressed your concerns as follows. If you have any other concerns or questions, please let us know. We are more than happy to address them and further improve our work.

Best,
Authors

---

> **Q1.** It would be nice if the authors could have theoretical claims for the proposed framework.

**A1.** Thank you for your valuable suggestion. We agree that strengthening the theoretical foundations would enhance our paper. We have provided a theoretical analysis for linear models in Section 4.1, where we show **the duality between input compensation and weight compensation**: For a linear layer with output $Y = XW$, suppose the weight matrix $W$ can be approximated as $W \approx S + AB^\top$, where $S$ is a sparse matrix and $AB^\top$ is a low-rank matrix. This leads to

$$Y = XW \approx X(S + AB^\top) = (X + XA\hat{B})S,$$

where $\hat{B}\equiv B^\top(S^\top S)^{-1}S^\top$ (assuming $S^\top S$ is invertible). This equivalence shows that adjusting the input (adding $XA\hat{B}$ to $X$) can have similar effects as adjusting the weights (adding $AB^\top$ to $S$) in linear models.

In the revised paper, we will strengthen our theoretical analysis by providing a formal theorem and proof for the equivalence between input compensation and weight compensation in linear models. However, **extending this analysis to non-linear models is non-trivial** and requires additional theoretical work. We leave this as a future research direction.

---

> **Q2.** It would be nice if the authors could test their method on large-scale image classification datasets.

**A2.** We are grateful for your valuable suggestion. We conducted additional experiments on **ImageNet**, a **large-scale** image classification dataset with 1,000 classes and over 1.2 million training images.
Table R1 (https://www.dropbox.com/scl/fi/8srj7o2yacxmxtg9j75j7/results_on_imagenet.pdf?rlkey=krue8mns0ues2ns9uulcxlnae&st=fhqqqoj5&dl=0) compares the testing accuracy of different methods using CLIP ViT-B/32. As can be seen, **our IC method consistently increases the testing accuracy of existing pruning methods** across different sparsity patterns on ImageNet:

- At 50% unstructured sparsity, IC improves Magnitude Pruning from 19.6% to 41.0% accuracy (+21.4%), Wanda from 38.3% to 51.0% (+12.7%), and SparseGPT from 47.7% to 52.9% (+5.2%).
- For the more challenging structured sparsity patterns (4:8 and 2:4), IC shows even more significant improvements. For example, with the 2:4 pattern, IC improves Magnitude Pruning from 7.2% to 32.5% (+25.3%), Wanda from 11.6% to 36.1% (+24.5%), and SparseGPT from 36.0% to 48.7% (+12.7%).

These results on ImageNet further validate that **our IC method effectively enhances model performance on large-scale image classification tasks**.
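The linear-model duality stated in A1 above can be checked numerically. This is a small sketch with random matrices: `S` is dense here, standing in for the sparse retained weights, and we assume `S` has full column rank so that $S^\top S$ is invertible.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_in, d_out, r = 5, 6, 4, 2

X = rng.normal(size=(n, d_in))
S = rng.normal(size=(d_in, d_out))   # stand-in for the sparse retained weights
A = rng.normal(size=(d_in, r))
B = rng.normal(size=(d_out, r))      # W ~ S + A @ B.T (low-rank correction)

# Input compensation dual to the weight correction:
# B_hat = B^T (S^T S)^{-1} S^T, so that (X A B_hat) S = X A B^T.
B_hat = B.T @ np.linalg.inv(S.T @ S) @ S.T
delta_X = X @ A @ B_hat

lhs = (X + delta_X) @ S              # compensated input through pruned weights
rhs = X @ S + X @ A @ B.T            # original input through corrected weights
print(np.allclose(lhs, rhs))         # True
```

The check confirms that, for a linear layer, adding `delta_X` to the input reproduces exactly the effect of adding the low-rank weight correction.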
Summary: The paper introduces a novel post-pruning algorithm that enhances pruned models by leveraging input compensation (IC) instead of traditional weight updates. This approach is compatible with any pruning method. Through extensive experiments on ViT, LLaMA, and DDPM, the study demonstrates that the proposed attention-based IC design effectively learns and applies input compensation, leading to significant performance improvements across diverse tasks and pruning strategies.

Claims And Evidence: yes

Methods And Evaluation Criteria: yes

Theoretical Claims: This method is inspired by input compensation in linear models and introduces an equivalence between IC and weight updates in theory. This part appears reasonable. However, real-world models are not purely linear, and there is no theoretical guarantee for the effectiveness of IC in general cases. Since pruning algorithms are typically heuristic, the absence of a formal theoretical guarantee is acceptable in this context.

Experimental Designs Or Analyses: Input compensation is an interesting and novel approach. However, I believe it requires a more thorough analysis compared to traditional weight update algorithms.

Q1: When comparing pruning methods with and without IC, the baselines include fine-tuning to recover performance loss. Could you provide details on the fine-tuning process for these methods, including the number of epochs and datasets used?

Q2: In the ViT experiments, was ViT fine-tuned separately for each subtask? Additionally, was a single IC trained across all subtasks, or was a unique IC trained for each subtask?

Q3: This method is inspired by the theoretical equivalence between input compensation and weight compensation and is trained extensively to learn IC. In essence, it seems to distill weight compensation into input compensation. However, it remains unclear whether this approach provides an advantage over pruning combined with LoRA fine-tuning.
To clarify this, it would be beneficial to conduct experiments using LoRA with a comparable number of additional parameters and directly compare its effectiveness with IC.

Supplementary Material: Yes. I have reviewed all the additional experiments and their analysis. I have no questions about this part.

Relation To Broader Scientific Literature: This paper aligns with research on how data perturbation affects model performance. From this perspective, it relates to prompt tuning and adversarial attacks, as both modify inputs to influence model behavior. Prompt tuning optimizes inputs to guide responses, while adversarial attacks introduce perturbations to manipulate predictions. Unlike adversarial attacks, IC enhances pruned models by compensating for lost weights. Additionally, IC offers an alternative to weight updates in model compression, complementing existing pruning strategies and aligning with efficient adaptation methods like LoRA.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:

Strengths: The paper is well-written and well-organized, with solid experiments demonstrating the effectiveness of IC in post-pruning.

Weaknesses: As noted in the questions above, the advantage of IC over pruning combined with LoRA remains unclear. A direct comparison would strengthen the paper. I will raise my score if the authors show their advantage over LoRA + FT.

Other Comments Or Suggestions: None.

Questions For Authors: See above questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Dear Reviewer Wf6n,

We sincerely thank you for your thoughtful review and valuable suggestions that enhanced our paper. We have carefully addressed your concerns as follows. **If you have any other concerns or questions, please let us know.** We are more than happy to address them and further improve our work.

Best,
Authors

---

> **Q1.** No theoretical guarantee for the effectiveness of IC in general cases.

**A1.** Thank you for your valuable question. We agree that our IC method has a **limitation**: it lacks a theoretical guarantee for non-linear models. Though we have provided a theoretical analysis for linear models in our paper, **extending this analysis to non-linear models remains a challenging open problem that we leave as a future research direction**. Despite this theoretical limitation, our approach follows the practical tradition of many successful pruning methods that are primarily heuristic in nature. The empirical results across various tasks (image classification, language modeling, and image generation) demonstrate that IC effectively boosts the performance of pruned models in practice. **These consistent performance improvements across different model architectures and tasks provide strong evidence for the practical utility of our method, even in the absence of complete theoretical guarantees for non-linear cases**.

---

> **Q2.** Could you provide details on the fine-tuning process for these methods, including the number of epochs and datasets used?

**A2.** Thank you for your question about the fine-tuning process (i.e., sparse retraining). For all methods, we followed a **consistent** fine-tuning protocol to ensure fair comparison:

- Datasets: The fine-tuning is performed on the **same** training datasets (i.e., the ten datasets in Section 5.1) used for learning the IC.
- Number of epochs: We retrain the retained parameters for 3 epochs for **all** methods.
We observed that performance typically **saturates** after 2 epochs, with minimal gains from additional training.

- Optimizer: For all methods, we adopt the AdamW optimizer with a learning rate of 0.000001, weight decay of 0.01, and a batch size of 128.

This consistent protocol ensures that any performance improvements observed when combining pruning methods with IC can be attributed to the effectiveness of our approach rather than differences in the fine-tuning process.

---

> **Q3.** In the ViT experiments, (Q3-A) was ViT fine-tuned separately for each subtask? (Q3-B) Additionally, was a single IC trained across all subtasks, or was a unique IC trained for each subtask?

**A3.** Thank you for your questions.

_(Q3-A)_ **No**, the ViT model was not fine-tuned separately for each subtask. Instead, we fine-tuned a single ViT model across all ten subtasks. This multi-task approach is **more parameter-efficient**, as we maintain only one model instead of ten separate models, and better reflects real-world deployment scenarios where a single pruned model needs to handle various tasks.

_(Q3-B)_ **Yes**, a single IC was trained across all subtasks. All subtasks share the same compensation pool, which provides two benefits:

- **Parameter efficiency**: Using a shared compensation pool requires far fewer parameters compared with training separate ICs for each subtask.
- **Knowledge sharing**: As shown in Figure 6, different subtasks can share the same components of the compensation pool, demonstrating effective knowledge sharing.

---

> **Q4.** However, it remains unclear whether this approach provides an advantage over pruning combined with LoRA fine-tuning. To clarify this, it would be beneficial to conduct experiments using LoRA with a comparable number of additional parameters and directly compare its effectiveness with IC.

**A4.** Thank you for your insightful suggestion.
To address your concern, we conducted additional language modeling experiments comparing IC with LoRA fine-tuning on pruned LLMs, using approximately the same number of additional parameters as our IC method. For the LoRA implementation, we use a rank of 16 for both LLaMA-1 and LLaMA-2. To maintain **the same number of parameters (only 262K)** as our IC method, LoRA is applied only to the first layer of the LLM, which our ablation studies showed to be the most effective configuration.

Table R1 (https://www.dropbox.com/scl/fi/x9ni5gmmetxkog2dslsac/results_lora.pdf?rlkey=jdzet17nezcsoxtrfxbyr888g&st=1pyx8l6t&dl=0) shows that IC consistently outperforms LoRA when combined with various pruning methods (Magnitude Pruning, Wanda, and SparseGPT) across both LLaMA-1 and LLaMA-2 models with the same parameter budget. This suggests that, **when using extremely few trainable parameters, IC is more effective than LoRA**. We will add this experiment to the revised paper.

---

> **Q5.** The advantage of IC over pruning combined with LoRA remains unclear.

**A5.** See our reply to Q4.

---

Rebuttal Comment 1.1:

Comment: Thank you for the clarifications; they have effectively addressed my concerns. Notably, the fact that IC outperforms LoRA highlights the potential advantages of input compensation over weight compensation. I have raised my score.

---

Reply to Comment 1.1.1:

Comment: Thank you for your further comments and for raising your score. We are glad that our reply and additional experiments have resolved your concerns.
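As a rough sanity check of the 262K LoRA budget quoted above, here is a back-of-the-envelope sketch. The hidden size of 4096 (LLaMA-7B) and the choice of two adapted square projection matrices are our assumptions; the rebuttal does not specify exactly which modules of the first layer were adapted.

```python
def lora_param_count(d_in, d_out, rank):
    # A LoRA adapter for a d_in x d_out linear layer adds two small factors:
    # A (d_in x rank) and B (rank x d_out), i.e. rank * (d_in + d_out) params.
    return rank * (d_in + d_out)

hidden = 4096                                             # assumed LLaMA hidden size
per_projection = lora_param_count(hidden, hidden, rank=16)  # 131,072 per projection
total = 2 * per_projection                                  # two projections assumed
print(total)  # 262,144 ~ the 262K budget quoted in the rebuttal
```

Under these assumptions the count lands exactly on 262,144 parameters, which is consistent with restricting rank-16 adapters to a single transformer block.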
LensLLM: Unveiling Fine-Tuning Dynamics for LLM Selection
Accept (poster)
Summary: The paper introduces LensLLM, a novel framework for selecting Large Language Models (LLMs) by analyzing their fine-tuning dynamics. The authors propose a Hessian-based PAC-Bayes generalization bound to model the transition phases in fine-tuning, aiming to improve the efficiency and accuracy of model selection. The work also incorporates a Neural Tangent Kernel (NTK)-based Rectified Scaling Model to predict performance across diverse tasks. Empirical results on large-scale benchmarks demonstrate LensLLM's superiority over existing methods. It is indeed very novel, as claimed in the paper, but I really feel confused when reading the paper. Maybe I lack some important background knowledge, like that in Lin et al. 2024, but I believe a good paper should be self-contained. I would be very happy to increase my evaluation if the authors could kindly help me understand this paper better (especially the theory part).

Claims And Evidence: Not quite. The main concern for the theoretical results of this paper is the assumptions it makes. I am not sure to what extent these assumptions could describe the transformer's behavior. The analysis of this paper heavily relies on the generalization bound provided in Ju et al. 2023 and the scaling law provided in Lin et al. 2024. But will they precisely describe LLM behavior? The experiments provided in Ju et al. 2023 only consider image classification tasks, which is quite different from an LLM's auto-regressive training. Compared with Lin et al. 2024, the experiments in this paper do not seem sufficient. IMO, extending this framework to LLM fine-tuning needs more justification.

Methods And Evaluation Criteria: Partly. The evaluations made by this paper are similar to Lin et al. 2024, but with fewer experimental settings. However, since the proposed methods perform pretty well, the paper still has big potential.

Theoretical Claims: Mostly.

1.
The analysis of the paper heavily depends on Theorem 1, which is a generalization bound for LLM fine-tuning. I am not quite sure whether the bound is tight enough to claim that model A is better than model B because its generalization bound is smaller.

2. It is a bit hard for me to understand the definition of the feature vector x. What would x for GPT-2 look like? What is the difference between x for GPT-2 and T5? Plus, the notation x is also used in Assumptions 1-3 in Section 3.1. Are they the same x?

3. In Equation 8, I cannot understand what x and x' represent. Are they some features of pre-training and fine-tuning data samples? If not, and following the definition of x as aforementioned, then why do we call a transformer model f(_, _)? Do we need to provide a vector x with the model size, architecture, etc., to an LLM?

Experimental Designs Or Analyses: Yes. The experiments look pretty good.

Supplementary Material: I've tried to understand the theory better by reading the appendix.

Relation To Broader Scientific Literature: The paper did a good job summarizing and discussing related works in this field.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

1. Theoretical Contributions: The introduction of a Hessian-based PAC-Bayes generalization bound provides a solid theoretical foundation for understanding LLM fine-tuning dynamics, although it is a bit hard for me to understand.
2. Efficient Model Selection: The NTK-based scaling model offers a computationally efficient alternative to exhaustive fine-tuning by leveraging pre-trained model properties.
3. Empirical Validation: The framework is tested on diverse datasets and multiple model architectures, demonstrating consistent improvements over many baselines.
4. Open-Sourced Implementation: By providing an open-source implementation, the work promotes transparency and reproducibility in the field.

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for your insightful questions. The following are our answers to your concerns.

Q1: "The analysis of the paper heavily depends on Theorem 1, which is a Generalization bound of LLM's finetuning. I am not quite sure whether the bound is tight enough to claim that model A is better than model B since its generalization bound is smaller."

A1: We would like to clarify that Theorem 1 is not used for direct model comparison. Instead, its primary purpose is to fit the transition phases and identify the transition point in the fine-tuning dynamics. Once this transition point is determined, we apply our regression model to predict performance and compare the predicted loss values for model selection. Thus, model selection is based on the predicted loss rather than on a smaller generalization bound.

Q2: "It is a bit hard for me to understand the definition of the feature vector x. What would x for GPT2 look like? What is the difference between x for GPT2 and T5? Plus, the notation x is also used in Assumption 1-3 in Section 3.1. Are they the same x?"

A2:

a. In our framework, $x$ represents the features from the input space, which are used to characterize the fine-tuning data. These features encapsulate relevant properties of the input samples that influence the fine-tuning process. For specific models:
- For GPT-2, $x$ corresponds to features extracted from its auto-regressive training setup, where the inputs are sequences of tokens with causal masking.
- For T5, $x$ represents features from its encoder-decoder input format, where the model processes full input sequences bidirectionally before generating outputs.

b. The key difference is that T5 operates on a denoising objective rather than strict left-to-right token prediction like GPT-2.

c. Yes, $x$ in Assumptions 1-3 refers to the same definition as for GPT-2 and T5.

Q3: "In Equation 8, I cannot understand what x and x' represent for.
Are they some features of pre-training and fine-tuning data samples? If not, and following the definition of x as aforementioned, then why do we call a transformer model f(_,_)? Do we need to provide a vector $x$ with the model size, architecture, etc., to a LLM?"

A3:

a. Yes, as we clarify just under Equation 8, $x$ and $x'$ refer to input feature representations from the pre-training and fine-tuning data, respectively.

b. No, we do not need to provide a vector $x$ with the model size, architecture, etc., to an LLM directly. For this part, we provide the details of the NTK matrix extracted from the pre-training and fine-tuning stages, and then pass this information to our proposed scaling law model:

$$ L(D) = \frac{B}{F(\Theta, t) + D^\beta} + E $$

Overall, we are using information from the pre-training and fine-tuning stages of LLMs to help us find the transition pattern and make better predictions of performance. For the architecture of our model, please refer to the pseudo-code in Section 3.2.

Q4: "The analysis of this paper heavily relies on the generalization bound provided in Ju et al. 2023 and the scaling law provided Lin et al 2024. But will they precisely describe LLM's behavior? The experiments provided in Ju et al. 2023 only consider image classification tasks, which is quite different from LLM's auto-regressive training. Compared with Lin et al. 2024, the experiments in this paper do not seem sufficient. IMO, extending this framework to LLMs' finetuning needs more justifications."

A4: We would like to clarify that our work fundamentally differs from (Ju et al., 2023) and (Lin et al., 2024) as follows:

1. Framework difference from Ju et al. (2023): Ju et al. focus on image classification, but we incorporate transformer-specific elements—such as attention mechanisms, layer normalization, and residual connections—into the PAC-Bayesian generalization bound (as detailed in Appendix A), which are not addressed in Ju et al.

2.
Contribution difference from Lin et al. (2024): Lin et al. empirically capture the pre-power and power phases based on heuristic scaling laws derived from observational data. In contrast, our work rigorously verifies these phases through a theoretical framework, specifically by deriving the Hessian-based PAC-Bayes generalization bound. This theoretical foundation enables us to develop a theory-grounded scaling law that precisely describes LLM behavior.

We acknowledge that our current computational resources (using a single A100-80G) limit our ability to extend experiments to larger models, and we plan to explore this in future work. Additionally, we have conducted further tests on the robustness of our method to hyperparameters; due to space limitations, please refer to the rebuttal for Reviewer 2LaB for more details.

Please let us know if there are any comments or insights you'd like to explore further!

---

Rebuttal Comment 1.1:

Comment: Thanks very much for the authors' response, which helps me a lot in understanding the paper. So maybe consider merging some of the explanations into the main context in the next version? I guess those clarifications would help readers not that familiar with this field a lot. The discussions on the differences with the other two papers are also helpful. I would increase my evaluation to 3 accordingly.

---

Reply to Comment 1.1.1:

Comment: Thank you for your valuable comments and support! We will ensure that the revised manuscript includes these explanations as well as the clarifications of differences from the referenced papers. We appreciate your insightful input on improving our paper.
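The scaling-law form $L(D) = B/(F(\Theta, t) + D^\beta) + E$ quoted in A3 can be explored with a small sketch. All parameter values below are illustrative (not fitted to any real model), and $F(\Theta, t)$ is treated as a single scalar; the sketch just shows the two regimes the rebuttal describes.

```python
import numpy as np

def rectified_loss(D, B, F, beta, E):
    # L(D) = B / (F + D**beta) + E, with F(Theta, t) treated as a scalar F here.
    return B / (F + D ** beta) + E

# Illustrative (made-up) parameters.
B, F, beta, E = 50.0, 100.0, 0.6, 1.5
D = np.logspace(1, 6, 6)              # 10 ... 1e6 fine-tuning examples
L = rectified_loss(D, B, F, beta, E)

# Pre-power phase: for small D, F dominates and the loss plateaus near B/F + E.
# Power phase: for large D, D**beta dominates and the loss decays toward E.
print(L[0], L[-1])
```

The transition point between the two phases is roughly where $D^\beta$ overtakes $F$, which is the quantity the NTK-derived $F(\Theta, t)$ is meant to predict per model.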
Summary: The paper proposes a framework, called **LensLLM**, for predicting and selecting the best large language model (LLM) to fine-tune under computationally constrained scenarios. It introduces a theoretical foundation using a Hessian-based PAC-Bayes generalization bound to illustrate how fine-tuning progresses through a "pre-power" phase (where performance improves slowly under low-data regimes) and a "power" phase (where the model follows a more predictable scaling law as dataset size increases). Building on this, the authors design LensLLM—a rectified scaling approach that integrates Neural Tangent Kernel (NTK) concepts with the dynamics revealed by their theoretical analysis. Their experiments suggest that LensLLM not only achieves higher accuracy than existing model-selection methods (e.g., rectified scaling laws, zero-shot, or heuristic-based metrics) but also significantly reduces computation time by progressively sampling smaller portions of the dataset for predictions. Empirical results on FLAN, Wikitext, and Gigaword benchmarks indicate strong correlation and ranking ability while cutting fine-tuning FLOPs by more than half.

## update after rebuttal

I have no further questions and provide my final rating based on the overall assessment of the paper. A higher rating was not given due to the paper's limited contribution in comparison to the rectified scaling law.

Claims And Evidence:

**Claim:** The authors claim that modeling fine-tuning with a Hessian-based PAC-Bayes approach clarifies how model performance transitions from "pre-power" to "power" phases.
- **Evidence:** They provide theoretical reasoning (an extension of the PAC-Bayes style bound) and highlight how truncated Hessian values decrease as more data is used, driving the phase transition. This part is largely conceptual, building on known results and augmenting them for large-scale transformer architectures.
**Claim:** The paper asserts that LensLLM achieves up to 91.1% accuracy in ranking the best fine-tuned model across multiple tasks.
- **Evidence:** The authors compare their method's selection performance (via Pearson correlation and relative accuracy) against five baselines. The results are consistent across three datasets and multiple model families, demonstrating a clear improvement. The reported metrics show healthy margins over competing methods.

**Claim:** The authors state that LensLLM reduces computational costs by up to 88.5% relative to fully tuning every model on the entire dataset.
- **Evidence:** They provide FLOP-based calculations for each approach (e.g., full fine-tuning vs. partial fine-tuning vs. LensLLM's iterative sampling). The step-wise sampling procedure indeed appears to require fewer training passes than a naive "train everything fully" approach. The calculations and comparisons are largely in line with standard estimates of training costs.

Overall, the evidence for these claims seems credible, supported by both theoretical discussion and consistent empirical demonstrations across different tasks and architectures.

Methods And Evaluation Criteria:

- The proposed method uses an NTK-based scaling law and Hessian insights to predict final fine-tuned performance from partial data. This approach is well-motivated: the paper grounds it in the theoretical transition between small- and large-data regimes, something conventional scaling laws often ignore.
- The evaluation criteria focus on ranking accuracy (Pearson correlation) and closeness to the best possible model selection (relative accuracy). These metrics are sensible for comparing how well each technique picks the top-performing model.
- The authors also assess computational overhead by measuring FLOPs, which is a standard practice for methods that claim efficiency improvements.

Overall, the proposed metrics and methods are appropriate for the problem of model selection.
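The two selection metrics mentioned above can be sketched as follows: Pearson correlation between predicted and actual losses, plus one plausible form of relative accuracy. The paper's exact relative-accuracy definition may differ, and all loss values here are made up for illustration.

```python
import numpy as np

def pearson(x, y):
    # Pearson correlation between predicted and actual fine-tuned losses.
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

def relative_accuracy(predicted, actual):
    # One illustrative variant: how close is the actual loss of the model we
    # pick (best *predicted*) to the true best, scaled to [0, 1].
    predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
    picked = actual[int(np.argmin(predicted))]
    best, worst = actual.min(), actual.max()
    return float((worst - picked) / (worst - best))

# Made-up final losses for five candidate models.
actual    = [2.10, 1.95, 2.40, 1.80, 2.25]
predicted = [2.05, 2.00, 2.35, 1.85, 2.30]
print(pearson(predicted, actual), relative_accuracy(predicted, actual))
```

A selection method can score perfectly on relative accuracy (it picks the true best model) while its Pearson correlation reflects how well it orders the whole candidate pool, which is why the paper reports both.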
Theoretical Claims:

- The main theoretical contribution is an extended PAC-Bayes generalization bound that incorporates Hessians for large-scale Transformers. The bound is used to explain the emerging "pre-power" and "power" phases when fine-tuning on increasingly large datasets.
- While the proofs for these claims are only summarized in the main text, the logic appears sound and consistent with prior work on PAC-Bayes bounds and Hessian-based approaches. The bounding technique used is reminiscent of standard expansions from prior bounding theorems, now customized to highlight transitions in the Hessian norm.
- No glaring issues stand out in the conceptual extension of the Hessian-based approach, though a deeper reading of the full formal derivations (in the appendices) would be needed to confirm all details (which I did not check very carefully). At a high level, the argument is plausible and well-motivated.

Experimental Designs Or Analyses:

- **Study design:** The authors systematically vary the amount of training data by doubling from a small subset up to a relatively large subset, documenting the test loss at each point.
- **Comparisons:** They benchmark LensLLM against five methods (including strong baselines such as rectified scaling law, zero-shot performance, and subset tuning).
- **Potential concerns:**
  - The paper focuses on classical NLP tasks (summarization, language modeling, etc.). While these tasks are relevant, additional tasks (like natural language inference or reasoning-intensive tasks) might further validate generalization. Also, I notice that this paper's experimental design is mainly from the rectified scaling law (Lin et al. 2024), while WMT19 is replaced with Wikitext; is there any concern or rationale for this?
  - The discussion does not explicitly detail potential hyperparameter differences between models in the direct comparisons, although they do mention controlling for the number of epochs/steps and compute.
Some clarity on controlling possible confounders (like different training schedules) would strengthen the claims.

Supplementary Material:

- The paper references additional proofs and some extended results in the appendices.
- It also provides a link to their open-sourced code.
- Due to time limitations, I didn't examine the proofs very carefully.

Relation To Broader Scientific Literature:

- This work builds on lines of research on scaling laws (e.g., Kaplan et al. 2020) and PAC-Bayes-based analyses of neural networks.
- While the application is specifically targeted at large language model selection, the theoretical perspective on how Hessians and training data size interplay may also be relevant to general deep network analysis.
- The approach also resonates with the established tradition of sub-model selection or partial fine-tuning to reduce compute, but it is distinguished by an explicit theoretical lens on the pre-power vs. power regime.

Overall, I think this paper is heavily based on the rectified scaling law (RSL). The method (the law it fits) differs only in Eq. 10, where RSL uses a parameter fitted from data (namely, the "pre-learned data size"), while this paper uses an NTK-based test loss function on transformers. The theoretical analysis of the phase transition is also a new contribution over RSL (though this may need extra examination by other reviewers).

Essential References Not Discussed: I don't see any missing references.

Other Strengths And Weaknesses:

- **Strengths**: The paper offers a well-structured theoretical approach that is rare in the realm of purely empirical LLM selection.
- **Weaknesses**:
  1. The approach might be difficult to implement for extremely large models (i.e., 30B+ parameters) unless the user has the necessary partial fine-tuning infrastructure. The authors mention partial subsets to reduce cost, but the feasibility at truly massive scales may need more real-world demonstration.
The sub-fine-tuning based model selection is still costly. 2. The paper could clarify hyperparameter control across different candidate models to ensure consistent comparisons (especially if some models are more sensitive to learning rate than others). 3. The approach might rely on the assumption that Hessians remain relatively stable for a given model family. Future expansions might check whether or not modifications in architecture or pre-training domain strongly shift Hessian-based bounds. 4. The primary concern is that the paper offers only an incremental improvement over rectified scaling law. While the proposed method achieves higher accuracy, the improvement is not substantial. The study would benefit from providing deeper insights into model selection beyond methodological refinement. Other Comments Or Suggestions: - It would be useful to see how robust the final ranking is under mild variations of hyperparameters or partial training steps, to confirm that results aren’t dependent on a very specific training schedule. Questions For Authors: - **Phase Transition Sensitivity**: How robust is the identified transition point between pre-power and power phases to hyperparameter choices (e.g., learning rate, batch size, sequence length)? Do small changes in these settings significantly shift the transition point or degrade predictive accuracy? - **Scalability**: For extremely large models (tens of billions of parameters or more), do you expect Hessian approximations to remain stable in practice? Or is there a risk that computing these approximations or performing partial fine-tuning becomes infeasible? - **Architectural Variations**: How would a significantly modified Transformer architecture (e.g., MoE, encoder-decoder) impact lens-based predictions? Would your approach require re-fitting theoretical parameters for such architectures? - **Experimental Details**: The details of the model selection experiments are not entirely clear. 
What is the exact set of models used? It appears that the models tested are generally not very large—could you clarify why? Additionally, do any of the fine-tuning results come directly from the rectified scaling law, or were they trained independently? If the latter, what are the fine-tuning details, including software and hardware specifications? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the insightful questions. The following are our answers to your concerns.

Q1: How robust is the identified transition point between pre-power and power phases to hyperparameter choices (e.g., learning rate, batch size, sequence length)? Do small changes in these settings significantly shift the transition point or degrade predictive accuracy?

A1: a. We would like to point out that the transition point is derived from the fine-tuning results as illustrated in the pseudo-code in Section 3.2, so its stability is closely linked to the robustness of the fine-tuning process. We conducted additional experiments on FLAN with the following settings to test the robustness of fine-tuning:
- Learning rates in (3e−5, 1e−4, 3e−4, 1e−3)
- Batch sizes in (64, 128, 256)
- Average input sequence lengths in (18, 20, 22)

Table: Variance of fine-tuning results on FLAN.

|Model|Variance|
|-|-|
|OPT-6.7B|0.0016|
|T5-Base|0.0022|
|Cerebras-1.3B|0.0012|
|MT5-Large|0.0023|
|BART-Large|0.0042|
|GPT-2|0.0038|
|LaMini-774M|0.0026|

The small variance, ranging from 0.0012 to 0.0042, demonstrates the robustness of the fine-tuning process, thereby supporting the stability of the identified transition point.

b. We illustrated the robustness of our method to hyperparameters, including the regression threshold, stop threshold, learning rate, batch size, and sequence length. Due to space constraints, please refer to the rebuttal of reviewer 2LaB.

Q2: For extremely large models, do you expect Hessian approximations to remain stable in practice? Or is there a risk that computing these approximations or performing partial fine-tuning becomes infeasible?

A2: We would like to point out that our approach does not rely on explicitly computing Hessian approximations for predictive modeling. Instead, our scaling law prediction model is based on extracting the Neural Tangent Kernel (NTK) matrix, which effectively captures interactions between the data and model features.
This allows us to make predictions about fine-tuning behavior without requiring direct Hessian computation. Thus, the Hessian is primarily used in our work to establish a theoretical Bayesian bound, which helps verify the pre-power and power phases from a theoretical perspective. In this case, we leverage general properties of the Hessian rather than computing it explicitly.

Q3: How would a significantly modified Transformer architecture (e.g., MoE, encoder-decoder) impact lens-based predictions? Would your approach require re-fitting theoretical parameters for such architectures?

A3: We would like to point out that we have evaluated both decoder-only (e.g., OPT, GPT-2, LaMini, Cerebras) and encoder-decoder (e.g., T5, mT5, BART) models without re-fitting theoretical parameters in our experiments, demonstrating the consistency of our approach across these architectures. However, for architectures like MoE—where only a subset of parameters is active per forward pass—the effective parameter count is lower. This requires re-fitting the scaling law parameters in our NTK-based framework, specifically by adjusting the effective network width and recalibrating the scaling constants and exponents to account for the sparsity and computational cost. Due to resource constraints, we have not yet conducted experiments on MoE models, but this remains a promising direction for future work.

Q4: What is the exact set of models used? It appears that the models tested are generally not very large—could you clarify why? Additionally, do any of the fine-tuning results come directly from the rectified scaling law, or were they trained independently? If the latter, what are the fine-tuning details, including software and hardware specifications?

A4: a. As shown in Table 2, the following is our model set:

|Model Set|
|-|
|OPT-350M, 1.3B, 6.7B|
|T5-Small, Base|
|Cerebras-256M, 1.3B|
|MT5-Base, Large|
|BART-Base, Large|
|GPT-2|
|LaMini-124M, 774M|

b.
Due to limited access to high-end GPUs (we only used a single A100-80G), we were unable to extend experiments to larger models.

c. The fine-tuning results were all obtained by training the models ourselves; the software and hardware details are as follows:
1. Software: Fine-tuning was conducted using PyTorch with the Hugging Face Transformers library. We use the AdamW optimizer with a weight decay of 0.01.
2. Hardware: Experiments were conducted on a single A100-80G.

We will clarify the above points in the revised manuscript.

A5: We would like to clarify that replacing WMT19 with Wikitext was due to computational constraints, as WMT19 is 20 times larger than FLAN and Gigaword. Additional experiments conducted on a randomly selected 1/20-size subset of WMT19 yielded results consistent with our original results.

Table: Model selection performance of our model and the Rectified Scaling Law

|Metric/Method|LensLLM|Rectified Scaling Law|
|-|-|-|
|PearCorr|85.7|79.3|
|RelAcc|90.2|89.0|

Please let us know if there are any comments or insights you'd like to explore further!

---

Rebuttal Comment 1.1: Comment: Thank you. I have no further questions. This is my final rating based on the overall assessment of the paper.
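The robustness check reported in A1 of the rebuttal above (fine-tuning under a small hyperparameter grid and reporting the variance of the resulting test losses) can be sketched as follows; `fine_tune_loss` is a hypothetical stand-in for a real fine-tuning run, and its toy numbers are illustrative only:

```python
from itertools import product
from statistics import pvariance

def fine_tune_loss(lr, batch_size, seq_len):
    """Hypothetical stand-in for a full fine-tuning run that returns the
    final test loss; a deterministic toy function is used here."""
    return 1.5 + 0.02 * (lr * 1e4) + 1e-4 * batch_size + 1e-3 * seq_len

# the hyperparameter grid used in the rebuttal's robustness check
grid = list(product(
    [3e-5, 1e-4, 3e-4, 1e-3],  # learning rates
    [64, 128, 256],            # batch sizes
    [18, 20, 22],              # average input sequence lengths
))

losses = [fine_tune_loss(lr, bs, sl) for lr, bs, sl in grid]
variance = pvariance(losses)  # a small variance supports a stable transition point
```

With a real training loop in place of the stand-in, a per-model variance as small as those in the rebuttal's table would indicate that the transition point is insensitive to these settings.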
Summary: The paper first derives a Hessian-based PAC-Bayes generalization bound that provides deep insight into the fine-tuning dynamics of large language models. It then introduces LENSLLM—a Rectified Scaling Model based on the Neural Tangent Kernel (NTK)—which demonstrates impressive accuracy in predicting performance across a wide range of tasks while maintaining computational efficiency. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: The theoretical part looks correct and solid. Experimental Designs Or Analyses: The experimental part and analyses are comprehensive, and the performance comparison looks good. However, the analysis section requires improvement. For example, it should assess the effectiveness of the stop threshold ($\tau$) as well as the computational cost for various model sizes ($M$). Supplementary Material: Yes, the theoretical proof part. Relation To Broader Scientific Literature: The paper improves the performance of the Rectified Scaling Model and also makes a theoretical contribution via the PAC-Bayes generalization bound. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Overall, the paper looks solid in both theoretical and experimental results. I only have some minor questions. 1. Could you please clarify the architecture of your regression model? 2. If I understood correctly, your experimental setup involves randomly selecting all datasets for the training set. Is it feasible to train the regression model on one dataset and test it on another? 3. Could you please add the Rectified Scaling Law's performance to Figure 4? 4. Could you discuss whether your proposed method remains effective when the test model is not included in the training set? 5. Could you provide some discussion on how to extend your method to vision-language model selection? Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your insightful questions. The following are our answers to your concerns.

Q1: Could you please clarify the architecture of your regression model?

A1: As illustrated in Section 3.2, our regression model is constructed based on the NTK matrix as follows:
$$ L(D) = \frac{B}{F(\Theta, t) + D^\beta} + E $$
where
- $F(\Theta, t)$ is the adapted NTK-based test loss function on the transformer,
- $D$ is the number of training data,
- $\beta$ denotes the learning difficulty,
- $B$ adjusts the initial test loss,
- $E$ denotes the optimal loss of the model given an infinite amount of data.

They are all model/task-dependent, and we estimate $\{B, E, \beta, t\}$ for each model by minimizing the loss function:
$$ \min_{B,E,\beta,t} \sum_{i} \left( \text{LSE}\big(\log B - \log(F(\Theta, t) + D_i^\beta),\ \log E\big) - \log L(D_i) \right)^2 $$
where $L(D_i)$ denotes the test loss of fine-tuning on the data size $D_i$, and LSE denotes the log-sum-exp operator.

Q2: Is it feasible to train the regression model on one dataset and test it on another?

A2: In our current approach, we leverage the NTK-based Rectified Scaling Model to identify a transition point in the fine-tuning process, after which we perform regression on the loss trajectory. This regression model is then used to predict performance, which assists in model selection, particularly in resource-constrained scenarios. Cross-dataset validation, however, poses a significant challenge. The primary difficulty is that the transition point identified on the training dataset may not align with that of a test dataset, due to differences in data distributions and task characteristics. This discrepancy introduces additional challenges to theoretically understanding the scaling behaviors of LLMs in fine-tuning. This is beyond the scope of this work, and we would like to leave it as future work.

Q3: Could you please add Rectified Scaling Law performance in Figure 4.
A3: We would like to note that we have already included the performance of the Rectified Scaling Law in Figure 4 (https://anonymous.4open.science/r/LENSLLM-3B1E/Revised%20plot4.png) and will ensure that it appears in the revised manuscript.

Q4: Could you discuss whether your proposed method remains effective when the test model is not included in the training set?

A4: While our experiments primarily focus on evaluating performance within the same set of candidate models, the NTK foundation capturing the dynamic behaviors of LLMs during fine-tuning suggests its generalization ability to unseen models, especially those with similar architectural properties and scaling behaviors. To validate this, we conducted additional experiments on the FLAN dataset using LaMini-GPT-774M, GPT2, and BART-large as test models, with LaMini-GPT-124M serving as the training model.

Table: RMSE between predicted and actual test losses

|Model|RMSE|
|-|-|
|LaMini-GPT-774M|1.23|
|GPT2|1.55|
|BART-large|5.31|

The results indicate that our method remains effective when the test model shares similar architectural properties and scaling behaviors with the training model (e.g., LaMini-GPT-124M vs. LaMini-GPT-774M and LaMini-GPT-124M vs. GPT2), which is further supported by our theoretical foundation. Please let us know if there are any comments or insights you'd like to explore further!

Q5: Could you provide some discussion on how to extend your method to vision-language model selection?

A5: Our current approach models fine-tuning dynamics using the NTK matrix and scaling laws to capture training dynamics and Hessian properties in language models. Extending this framework to vision-language models presents several challenges:
1. Cross-Modal NTK Formulation: The NTK must be adapted to capture interactions between tokenized image representations (e.g., VQ-VAE or CLIP features) and textual tokens, reflecting joint feature spaces and inter-modal attention.
2.
Modality-Specific Scaling: Vision and language modalities have different scaling behaviors, requiring recalibration of the scaling laws to account for distinct gradients and effective parameter counts, as well as synergy and competition between modalities (Aghajanyan et al., 2023).
3. Theoretical Bound Adjustments: Our current PAC-Bayesian and NTK-based bounds are tailored to language models; for vision-language models, these would need to be re-derived or adjusted to include modality-specific properties.

Due to these complexities, extending our method to vision-language model selection is non-trivial and beyond the scope of this work, but it remains a promising direction for future research. Please let us know if there are any comments or insights you'd like to explore further!

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed response. It addresses all my concerns. I will keep my score.

---

Reply to Comment 1.1.1: Comment: Thank you for your valuable comments and support!
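The curve fit described in A1 of the rebuttal above can be sketched as a least-squares regression; `F_theta` below is a hypothetical constant standing in for the NTK-based term $F(\Theta, t)$, and the loss values are synthetic, for illustration only:

```python
import numpy as np
from scipy.optimize import least_squares

def predict_log_loss(params, D, F_theta):
    # log L(D) = LSE(log B - log(F(Theta, t) + D^beta), log E)
    log_B, log_E, beta = params
    return np.logaddexp(log_B - np.log(F_theta + D ** beta), log_E)

# synthetic fine-tuning losses for illustration (not the paper's data)
D = np.array([200.0, 800.0, 3200.0, 12800.0, 51200.0, 204800.0])
F_theta = 50.0  # assumed fixed NTK-based term for this sketch
rng = np.random.default_rng(0)
observed_log_loss = (
    predict_log_loss((np.log(40.0), np.log(1.5), 0.6), D, F_theta)
    + 0.005 * rng.standard_normal(D.size)
)

# estimate {B, E, beta} by minimizing the squared log-loss residuals
fit = least_squares(
    lambda p: predict_log_loss(p, D, F_theta) - observed_log_loss,
    x0=(np.log(10.0), np.log(1.0), 0.5),
)
B, E, beta = np.exp(fit.x[0]), np.exp(fit.x[1]), fit.x[2]
```

Once fitted on a few small fine-tuning runs, evaluating `predict_log_loss` at a larger `D` gives the predicted test loss that supports model ranking.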
Summary: LensLLM introduces a novel theoretical framework that addresses the fundamental challenge of efficient Large Language Model selection through the lens of fine-tuning dynamics. The paper develops a rigorous Hessian-based PAC-Bayes generalization bound that characterizes two distinct phases in LLM fine-tuning: a "pre-power phase" in low-data regimes where performance improves slowly due to high Hessian values and parameter sensitivity, and a "power phase" where improvements follow predictable power-law scaling with enhanced stability. The authors implement this theoretical insight through a Neural Tangent Kernel (NTK)-based Rectified Scaling Model that accurately predicts model performance across diverse tasks while maintaining computational efficiency. Empirical evaluation across three benchmarks (FLAN, Wikitext, and Gigaword) demonstrates the framework's effectiveness, achieving up to 91.1% relative accuracy and 85.8% Pearson correlation while reducing computational costs by up to 88.5% compared to full fine-tuning approaches. The work establishes a new foundation for understanding LLM generalization during fine-tuning and provides practitioners with a principled approach to model selection under computational constraints. Claims And Evidence: The claims are well-supported. Methods And Evaluation Criteria: Evaluation criteria makes sense. Theoretical Claims: I checked the correctness of Theorem 2, Corollary 1, and Proposition 1. Experimental Designs Or Analyses: The paper presents interesting findings, though there are several areas where the experimental design could be strengthened in future work. The data sampling strategy, which created smaller datasets by "randomly sampling examples ranging from 200 to 1,638,400," would benefit from additional details about whether multiple random samples were used, if sampling preserved the original data distribution, and how many repeated trials were conducted. 
Similarly, including confidence intervals or statistical significance tests would provide stronger support for the performance differences observed between the method and the baselines. The paper could also benefit from expanded analysis of hyperparameter sensitivity. While Algorithm 1 references a regression threshold γ and stop threshold τ, it would be useful to examine how results might vary with different parameter values. An ablation study would help demonstrate the robustness of this promising approach across different conditions. Despite these limitations, the core methodology seems sound, and addressing these experimental design considerations would further validate the paper's contributions. Supplementary Material: I looked at the proofs in the supplementary material. Relation To Broader Scientific Literature: The proposed LensLLM framework expands upon existing theoretical foundations in machine learning, presenting significant advancements through three primary contributions. Building upon McAllester's foundational PAC-Bayesian theory and Ju et al.'s generalization bounds, this work extends these theoretical constructs to transformer architectures by developing a Hessian-based generalization bound that accounts for transformer-specific elements such as attention mechanisms and layer normalization. This theoretical contribution is notable because it addresses the challenge of analyzing complex transformer architectures that previous theoretical frameworks were not designed to accommodate. In addition, the identification of distinct "pre-power" and "power" phases in fine-tuning enhances our understanding of scaling laws beyond the established work of Kaplan et al. and Hernandez et al., offering a theoretical explanation for empirically observed behaviors in low-data regimes that previous methods could not adequately characterize.
On the practical front, the paper's NTK-based Rectified Scaling Model demonstrates the application of NTK theory to finite-width transformers for performance prediction. This model represents a significant improvement over previous approaches such as Lin et al.'s Rectified Scaling Law or You et al.'s LogME, which primarily focused on empirical scaling or feature similarity without capturing the dynamic nature of fine-tuning. The progressive sampling strategy employed achieves substantial computational efficiency improvements compared to existing methods like Kaplun et al.'s SubTuning, effectively addressing the increasingly important challenge of resource-efficient LLM deployment. This combination of theoretical depth and practical efficiency addresses the growing need for principled model selection. Essential References Not Discussed: n/a Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: n/a Questions For Authors: 1. How sensitive is the method to hyperparameter choices, particularly in the algorithm's stopping criteria? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Q1: How sensitive is the method to hyperparameter choices, particularly in the algorithm's stopping criteria?

A1: Thank you for your question.

a. We performed ablation studies to assess the sensitivity of our method to the stopping criteria—specifically, the regression threshold ($\gamma$) and the stop threshold ($\tau$). The following tables summarize the impact of varying these parameters on the Pearson correlation across three datasets:

Table 1: Impact of $\gamma$ (rows) and $\tau$ (columns) on PearCorr on FLAN

|$\gamma$ \ $\tau$|1|2|3|4|5|
|-|-|-|-|-|-|
|3|78.41|78.40|78.42|78.43|78.31|
|4|78.32|78.39|78.36|78.40|78.40|
|5|78.39|78.41|78.40|78.39|78.34|

Table 2: Impact of $\gamma$ (rows) and $\tau$ (columns) on PearCorr on Gigaword

|$\gamma$ \ $\tau$|1|2|3|4|5|
|-|-|-|-|-|-|
|3|85.74|85.74|85.80|85.71|85.66|
|4|85.62|85.71|85.75|85.79|85.66|
|5|85.64|85.69|85.66|85.76|85.72|

Table 3: Impact of $\gamma$ (rows) and $\tau$ (columns) on PearCorr on Wikitext

|$\gamma$ \ $\tau$|1|2|3|4|5|
|-|-|-|-|-|-|
|3|82.47|82.58|82.47|82.48|82.44|
|4|82.49|82.51|82.48|82.49|82.54|
|5|82.61|82.50|82.46|82.57|82.53|

Observations:
1. FLAN: Pearson correlations remain stable (approximately 78.31 to 78.43) across different values of $\gamma$ and $\tau$.
2. Gigaword: The correlation values are consistent, ranging from about 85.66 to 85.79.
3. Wikitext: A slight variation is observed, with correlations fluctuating between roughly 82.44 and 82.61.

Overall, the minor fluctuations in Pearson correlation across different $\gamma$ and $\tau$ values indicate that our method is robust with respect to the stopping criteria.

b. We also conducted additional experiments to test the sensitivity to hyperparameters in the fine-tuning process—specifically, learning rates, batch sizes, and average input sequence length.
The impacts of learning rates and batch sizes on the Pearson correlation across the three datasets are summarized below:

Table 4: Impact of Learning Rate (columns) and Batch Size (rows) on PearCorr on FLAN

|Batch size \ Learning rate|3e−5|1e−4|3e−4|1e−3|
|-|-|-|-|-|
|64|78.36|78.41|78.40|78.39|
|128|78.32|78.34|78.43|78.36|
|256|78.37|78.36|78.36|78.34|

Table 5: Impact of Learning Rate (columns) and Batch Size (rows) on PearCorr on Gigaword

|Batch size \ Learning rate|3e−5|1e−4|3e−4|1e−3|
|-|-|-|-|-|
|64|85.74|85.73|85.74|85.75|
|128|85.74|85.79|85.77|85.76|
|256|85.69|85.72|85.71|85.69|

Table 6: Impact of Learning Rate (columns) and Batch Size (rows) on PearCorr on Wikitext

|Batch size \ Learning rate|3e−5|1e−4|3e−4|1e−3|
|-|-|-|-|-|
|64|82.60|82.61|82.60|82.60|
|128|82.54|82.53|82.55|82.55|
|256|82.51|82.51|82.51|82.50|

Observations:
1. FLAN: Pearson correlations remain stable (approximately 78.32 to 78.43).
2. Gigaword: The correlation values are consistent, ranging from about 85.69 to 85.79.
3. Wikitext: A slight variation is observed, with correlations fluctuating between roughly 82.50 and 82.61.

Overall, our method is robust with respect to the learning rates and batch sizes.

Due to time constraints, we evaluated only the average input sequence length on FLAN, while keeping the optimal learning rate (3e-4) and batch size (128) as determined earlier. In FLAN, the overall average input sequence length is 20. To test the effect of altering this average, we removed either the shortest or longest sequences to adjust the average to 18 and 22, respectively.

Table 7: Impact of Average Input Sequence Length on FLAN

|Metric/Average Input Sequence Length|18|20|22|
|-|-|-|-|
|PearCorr|77.39|78.14|76.89|
|RelAcc|87.86|88.88|87.91|

We observe that deviations—either shorter (18) or longer (22)—lead to lower Pearson correlation and relative accuracy; however, the performance gap is not large, which suggests that while there is some sensitivity to sequence length, the model remains reasonably robust.
Please let us know if there are any comments or insights you'd like to explore further!

---

Rebuttal Comment 1.1: Comment: I appreciate the detailed rebuttal and recommend the paper's acceptance. It provides theoretical and empirical insight.

---

Reply to Comment 1.1.1: Comment: Thank you for your insightful comments and continued support! We will include the analysis of hyperparameter sensitivity in the revised manuscript. We truly appreciate the time and effort you have devoted to reviewing our work.
Privacy-Shielded Image Compression: Defending Against Exploitation from Vision-Language Pretrained Models
Accept (poster)
Summary: This paper proposes a novel Privacy-Shielded Image Compression (PSIC) method aimed at protecting images from being exploited by Vision-Language Pretrained (VLP) models. The PSIC framework integrates an adaptive multi-objective optimization strategy that balances perceptual quality and encryption effectiveness. A Conditional Latent Trigger Generation (CLTG) module is introduced to generate different decoding options from a single bitstream, while the Uncertainty-Aware Encryption-Oriented (UAEO) optimization function is used to maximize encryption efficiency against VLP models. Experimental results indicate that PSIC significantly degrades the interpretability of compressed images by VLP models while maintaining visual quality for human perception. Claims And Evidence: The claim: "The proposed PSIC scheme is plug-and-play and can be seamlessly integrated into most existing learned image compression (LIC) models" is not well supported. The paper only evaluated PSIC on one backbone (AE-Hyperprior, 2018). How about more recent LIC models such as HiFiC[1], ELIC[2] and MLIC[3]? If the experiments on these more recent models are not possibly all provided, please clarify what cost it will take to equip the general LIC model with the proposed PSIC method, to support that "the proposed PSIC scheme is plug-and-play." [1] High-fidelity generative image compression[C]. Advances in neural information processing systems, 2020, 33: 11913-11924. [2] ELIC: Efficient learned image compression with unevenly grouped space-channel contextual adaptive coding[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 5718-5727. [3] MLIC: Multi-reference entropy model for learned image compression[C]//Proceedings of the 31st ACM International Conference on Multimedia. 2023: 7618-7627. Methods And Evaluation Criteria: The method is compared against BAvAFT, but the baseline method lacks proper citation. 
A clear description of BAvAFT should be provided. Theoretical Claims: The Dempster-Shafer Theory used to model pair uncertainty is presented as a black-box implementation with minimal discussion. More discussion would be helpful for understanding it. Experimental Designs Or Analyses: 1. The ablation study in the experiment part is useful but would be better extended to analyze other proposed modules or strategies, such as the proposed adaptive multi-objective optimization. 2. In the perceptual quality comparison, instead of comparing to AE-Hyperprior (2018), more recent general LIC models that are optimized for perceptual quality should be included as baselines. Additionally, more quality metrics should be evaluated, such as those that are more aligned with the human vision system: SSIM, LPIPS, NIQE, CLIPIQA. 3. The paper evaluates the method on four downstream tasks: image-text retrieval, image classification, facial attribute analysis, and image captioning. These are reasonable benchmarks, but an additional comparison with the recent works of Yu et al. 2023, 2024 (which are reviewed in the related work) is necessary, since these works are also designed for misleading downstream machine analytic tasks. Supplementary Material: The authors did not provide supplementary material. Including more qualitative visual comparisons or necessary additional results would improve the paper’s rigor. Relation To Broader Scientific Literature: Previous related works either focus on misleading downstream machine analytic tasks or design pre-processing methods for the input data. The paper focuses on privacy shielding against exploitation from vision-language pretrained models, and it addresses this problem from the aspect of the compression phase, which is interesting and novel. Essential References Not Discussed: BAvAFT mentioned in the experiment section is not cited, and it is unclear if it is a fair baseline. Other Strengths And Weaknesses: Strengths: 1.
The proposed Privacy-Shielded Image Compression (PSIC) framework introduces a novel approach to integrating privacy protection directly into the image compression process, which is an underexplored area in learned image compression. 2. The idea of conditional latent trigger generation (CLTG) to enable different decoding versions from a single bitstream is innovative and provides flexibility in balancing privacy and perceptual quality. 3. The method is designed to be plug-and-play, potentially making it applicable to a broad range of learned image compression (LIC) models. Weaknesses: My identified weaknesses have been thoroughly mentioned in other parts of the review, so no additional weaknesses will be mentioned here. Other Comments Or Suggestions: In addition to the one row of images in Fig. 5, please show more visual comparison results to demonstrate the effectiveness of the proposed method (both full and encrypted modes) on perceptual quality. Questions For Authors: 1. Why does the experimental backbone use AE-Hyperprior (2018)? Have you considered evaluating PSIC on newer LIC models? 2. Why is there no direct comparison with Yu et al. (2023, 2024), given that their work also targets misleading downstream machine analysis tasks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: First of all, we greatly appreciate your thorough review and helpful suggestions. Below, we address your concerns one by one, and we hope our responses fully clarify each point. If any misunderstanding remains, we sincerely welcome further clarification or suggestions.

Q.1 Incorporate more quality metrics.

Ans.: Thanks for your suggestion. We have further evaluated the perceptual quality using additional metrics, including SSIM, LPIPS, NIQE, and CLIPIQA. The results corresponding to Fig. 4 of our manuscript can be found at the following link: https://pic1.imgdb.cn/item/67eb75140ba3d5a1d7e8f36d.png It should be noted that the NIQE curves appear somewhat irregular and produce unusual results compared to the other four metrics. Given that NIQE is a no-reference IQA method with known instability in certain scenarios, it may not accurately reflect the actual perceptual quality in this case.

Q.2: Perform the PSIC scheme on other LIC models.

Ans.: We appreciate your insightful suggestion. We have implemented PSIC in two cutting-edge LIC models (ELIC and MLIC++). To ensure a fair comparison, all training settings strictly follow those described in our original manuscript. The corresponding results can be found in the table below.

||Meth.|PSNR|i2t|t2i|ASR(i2t)|ASR(t2i)|
|-|-|-|-|-|-|-|
|ELIC(bpp=0.19)|Baseline|27.24|0.32|0.22|-|-|
||Ours($x_e$)|26.64|0.30|0.19|-|-|
||Ours($x_f$)|26.05|0.12|0.07|0.74|0.79|
|MLIC++(bpp=0.18)|Baseline|27.26|0.35|0.22|-|-|
||Ours($x_e$)|26.93|0.32|0.21|-|-|
||Ours($x_f$)|26.44|0.13|0.09|0.69|0.73|

These results demonstrate the effectiveness of the PSIC method on other LIC models, as it achieves perceptual quality comparable to the baseline model while offering remarkable encryption efficiency in terms of ASR. Due to time constraints, we will include more comprehensive results on additional LIC models across multiple BPP points in the supplementary material of our revised paper. We appreciate your understanding.
Q.3 Lacking citation of BAvAFT and comparison with recent works (Yu et al. 2023, 2024).

Ans.: We apologize for the confusion caused by the missing citation for BAvAFT. In fact, BAvAFT originates from the papers you mentioned (Yu et al., 2023, 2024; the 2024 version is an extension of the 2023 version, and we employed the latest version in our manuscript). We will properly cite both works in the revised version to avoid confusion and to give appropriate credit.

Q.4. Ablation study on the CLTG module.

Ans.: We provide an ablation study on CLTG. We remove all CLTG modules and train only the compression backbone using the UAEO-based loss function $\lambda_1 L_2(x, \widehat{x}) + \lambda_2 L_{UC}(\widehat{x}) + r$, where $\widehat{x}$ is the reconstructed image and $r$ is the bitrate. We performed the comparison at a BPP level of 0.20, and the results are provided below.

|Bpp|Meth.|PSNR|i2t|ASR(i2t)|t2i|ASR(t2i)|
|-|-|-|-|-|-|-|
|0.20|Ours($x_f$)|26.07|0.32|-|0.20|-|
||Ours($x_e$)|25.61|0.13|0.69|0.07|0.78|
||*w/o* CLTG|24.48|0.11|0.73|0.08|0.75|

As shown, removing CLTG leads to a PSNR drop of 1.59 dB at the same ASR level.

Q.5. Ablation study on the multi-stage training strategy.

Ans.: We further conduct an additional ablation study on the proposed multi-stage training strategy. In particular, we omit Stage 2 and instead train the model using the settings from Stage 1 until convergence (an additional 100 epochs). We present the comparison results regarding encryption performance (encrypted version) and perceptual quality (full version) below.
|Bpp|Meth.|ASR(i2t)|ASR(t2i)|PSNR($x_f$/$x_e$)|
|-|-|-|-|-|
|0.13|Ours|0.66|0.74|24.72/24.57|
||*w/o* stg2|0.05|0.06|24.70/24.68|
|0.20|Ours|0.69|0.78|26.07/25.61|
||*w/o* stg2|0.03|0.05|26.02/26.02|
|0.30|Ours|0.74|0.83|27.37/25.67|
||*w/o* stg2|0.04|0.06|27.28/27.27|
|0.46|Ours|0.82|0.88|28.78/27.34|
||*w/o* stg2|0.02|0.04|28.84/28.84|

The effectiveness of the multi-stage training strategy is readily apparent, as it improves encryption performance by about 72% in terms of the average ASR.

Q.6: Provide more visual comparisons

Ans.: Thanks for your kind suggestions. We have provided four extra groups of visual comparisons in addition to Fig. 5, which can be observed at the following link. https://pic1.imgdb.cn/item/67eb9c9c0ba3d5a1d7e9030c.jpg

Q.7 Provide a supplemental material.

Ans.: We appreciate this constructive suggestion and will incorporate the following in the supplementary material: a) Extended ablation studies on the CLTG module and the multi-stage training strategy across multiple BPP points; b) Comprehensive evaluations demonstrating the implementation of PSIC on other baseline LIC models, along with multiple perceptual quality metrics; c) Additional visual examples generated by the PSIC method, the compression baseline, and the employed BAvAFT method; d) A detailed introduction to the Dempster-Shafer Theory, along with the derivation process of the employed evidence extractor.

---

Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I have several concerns as follows: 1. Could you give more analysis or discussion regarding the performance comparison on different quality metrics? And your claim "NIQE is a no-reference IQA method with known instability in certain scenarios" needs more clarification, since as far as I know, many works evaluate their methods on NIQE for perceptual quality measurement. 2. The output images encrypted by the proposed method often retain some artifacts in the top and bottom areas.
Could you give some analysis or discussion on this artifact? Given that the poison images from BAvAFT show a smaller artifact area (just the top left), could these visual results be viewed as a drawback of the proposed method regarding visual performance?

---

Reply to Comment 1.1.1: Comment: Q1.1 Could you give more analysis or discussion regarding the performance comparison on different quality metrics?

Ans.: Thanks for your suggestions. First, the performance curves of PSNR, SSIM, LPIPS, and CLIPIQA exhibit a consistent trend, indicating that our *full* version closely matches the compression baseline in terms of perceptual quality. In particular, for PSNR, SSIM, and LPIPS, our *full* version’s performance is almost identical to that of the compression baseline. Moreover, it demonstrates a notable improvement over the *clean* version of BAvAFT, with average gains of 0.224, 0.009, 0.024, and 0.025 for PSNR, SSIM, LPIPS, and CLIPIQA, respectively. These results highlight the effectiveness of our PSIC method in providing a perceptually satisfactory compression pipeline.

Q1.2 And your claim "NIQE is a no-reference IQA method with known instability in certain scenarios" needs more clarification, since as far as I know, many works evaluate their methods on NIQE for perceptual quality measurement.

Ans.: Thank you for pointing this out, and we apologize for our previous wording. NIQE is indeed a widely adopted and milestone perceptual QA metric with proven effectiveness in various applications, *e.g.*, image compression and restoration. In particular, NIQE evaluates perceptual quality by measuring the distance (*e.g.*, Mahalanobis distance) between the feature distribution of a test image and that of natural images, based on natural scene statistics (NSS).
However, in our experiments, the NIQE curves show counter-intuitive trends: our encrypted version shows significantly better scores than the compression baseline, while the poisoned version of BAvAFT also outperforms its clean counterpart. These results are not aligned with those of other metrics (*e.g.*, LPIPS, CLIP-IQA) and do not reflect actual visual perception. Through in-depth analysis, we attribute these irregularities to two main factors: 1) Region Selection Bias: NIQE selects sharp regions in the test images and uses their features in the assessment process. However, in our PSIC and BAvAFT methods, the adversarial patterns often appear at the image borders or in less textured regions, which may be overlooked by NIQE’s sampling process. 2) Non-standard Distortion Types: The distortion patterns introduced by our PSIC method and the LIC-oriented backdoor attack method differ from those commonly found in standard IQA datasets (*e.g.*, Gaussian noise, quantization noise, or contrast degradation). Since NIQE’s model is not trained on these unfamiliar distortions, it may not accurately evaluate them, thereby producing unreliable scores in such cases. In summary, while NIQE remains a valuable metric, we advise caution when interpreting its results under adversarial or task-specific distortions. In our case, other perceptual metrics (*e.g.*, LPIPS, CLIP-IQA) offer more consistent and reliable assessments.

Q.2 The output images encrypted by the proposed method often retain some artifacts in the top and bottom areas. Could you give some analysis or discussion on this artifact? Given that the poison images from BAvAFT show a smaller artifact area (just the top left), could these visual results be viewed as a drawback of the proposed method regarding visual performance?

Ans.: Thanks for your suggestions. Regarding the concern about increased artifacts in our encrypted version, we acknowledge that our encrypted images may exhibit more artifacts near the top and bottom regions.
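For readers unfamiliar with the distance mentioned above: NIQE's core score is a Mahalanobis-style distance between two multivariate Gaussian fits (test-image features vs. pristine natural-image statistics), computed with the average of the two covariances. A minimal NumPy sketch of that distance, separate from any full NIQE implementation, is:

```python
import numpy as np

def niqe_distance(mu1, cov1, mu2, cov2):
    # Distance between two multivariate Gaussian fits N(mu1, cov1) and
    # N(mu2, cov2), using the pooled (averaged) covariance as in NIQE.
    d = np.asarray(mu1, dtype=float) - np.asarray(mu2, dtype=float)
    cov = (np.asarray(cov1, dtype=float) + np.asarray(cov2, dtype=float)) / 2.0
    return float(np.sqrt(d @ np.linalg.pinv(cov) @ d))
```

The region-selection bias discussed above enters earlier in the pipeline, when deciding which image patches contribute features to `mu1`/`cov1`; this sketch only covers the final distance computation.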
This is primarily due to the inherent trade-off between perceptual quality and encryption efficiency (ASR). Raising encryption efficiency inevitably comes at the expense of perceptual quality, as these two objectives are inherently conflicting. Thus, we believe that the presence of such artifacts alone should not be the sole criterion for evaluating visual performance; rather, a fair assessment requires comparisons under matched conditions and controlled variables. To provide a fair comparison, we trained a version of PSIC that achieves the same ASR as BAvAFT. A set of visual comparisons between our encrypted version and BAvAFT’s poison version can be found in the following link. https://pic1.imgdb.cn/item/67ed50a80ba3d5a1d7eb28bf.jpg Visual comparisons show that under matched encryption efficiency, both methods demonstrate comparable perceptual quality. While the artifact regions in PSIC may appear slightly more spatially distributed, they remain visually unobtrusive and acceptable in practical scenarios. Furthermore, we would like to emphasize a key advantage of our proposed PSIC: it enables both full and encrypted versions to be decoded from a single bitstream, without requiring any additional encoding steps. In contrast, BAvAFT needs an extra encoding process and two SEPARATE bitstreams—one each for the clean and poisoned versions. This design makes PSIC significantly more efficient and user-friendly, especially when balancing encryption needs and perceptual quality.
Summary: The paper presents a novel approach for privacy protection in image compression, termed Privacy-Shielded Image Compression (PSIC), aimed at defending against exploitation by Vision-Language Pretrained (VLP) models. The method leverages a flexible compression scheme that creates bitstreams with multiple decoding options. By default, the bitstream preserves perceptual quality while concealing semantic content from VLP models. The method can be adjusted to allow reconstruction of images with full semantic information when required. The system incorporates a Conditional Latent Trigger Generation (CLTG) module to produce bias information for guiding the decoding process and an Uncertainty-Aware Encryption-Oriented (UAEO) optimization function to maximize encryption performance while maintaining perceptual quality. The paper claims that PSIC can mislead VLP models and prevent them from exploiting compressed images for downstream tasks such as image-text retrieval and classification, while preserving image quality. Claims And Evidence: Yes, clear and convincing. Methods And Evaluation Criteria: Yes. Theoretical Claims: Correct. Experimental Designs Or Analyses: The experiments cover a range of downstream tasks (e.g., image classification, image-text retrieval, and facial attribute analysis) and provide solid evidence that the PSIC method outperforms existing approaches. The use of attack success rate (ASR) to measure the model's effectiveness in misleading VLP models and its ability to retain perceptual quality is appropriate. The ablation studies further validate the contributions of specific modules, such as the UAEO function. Supplementary Material: Yes. Relation To Broader Scientific Literature: This paper addresses the growing privacy concerns regarding VLP models and their ability to exploit publicly available visual data. 
Given the increasing reliance on machine learning models that use large-scale datasets for training, the proposed method provides a timely and important contribution to privacy-preserving data compression. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: The approach is innovative, particularly in the integration of image compression with privacy-preserving mechanisms. The ability to generate multiple versions of an image from the same bitstream—one that protects privacy by preventing interpretation by VLP models, and one that retains full semantic information for legitimate use—is a novel idea. The use of Conditional Latent Trigger Generation and Uncertainty-Aware Encryption-Oriented optimization adds unique elements to the field of privacy-enhancing image compression. Other Comments Or Suggestions: N/A. Questions For Authors: 1. What specific steps would be needed to extend the PSIC method to other data types, such as audio or video, and how do you foresee the challenges related to temporal or sequential data? 2. How does the UAEO optimization function contribute to the robustness of the PSIC method against more sophisticated attacks, such as model inversion attacks? Could further defense mechanisms be incorporated to strengthen the security guarantees? 3. Could you elaborate on the trade-offs between encryption efficiency and compression performance? In particular, how does the PSIC method compare with other state-of-the-art image compression methods, such as JPEG or WebP, in terms of file size and compression time? 4. Can you provide a deeper discussion on the potential limitations of the CLTG module, particularly in cases where the image content varies significantly from the training data? How might the system adapt to these situations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: First of all, we deeply appreciate your kind suggestions and positive feedback on our work, which have greatly encouraged us. We hope our responses below adequately address your concerns. Q.1: Discuss the challenges of extending to other data types. Ans.: Thanks for your insightful comments. Based on our experience, we believe the proposed PSIC scheme can be readily extended to different data types, especially video. In particular, end-to-end video compression follows a frame-by-frame pipeline, with a framework similar to image compression. Hence, the CLTG module, UAEO function, and associated optimization strategy can be directly integrated. Meanwhile, certain adjustments are indeed necessary to handle domain-specific structures (e.g., temporal dependencies). Specifically, in the video compression process, the bitstream for a given frame must be reconstructed and used as a contextual prior for compressing the subsequent frame, thereby removing temporal redundancy. Taking both compression and encryption needs into consideration, we recommend decoding the bitstream into the full version, as it retains more similarity to the next frame. Q.2: Discuss the UAEO’s contribution against more sophisticated attacks. Ans.: Thanks for your suggestions. The UAEO potentially enhances the robustness of the PSIC method against sophisticated attacks, e.g., model inversion, by explicitly leveraging the uncertainty within VLP models and incorporating this uncertainty into corresponding constraints or loss functions. Specifically, UAEO employs Dempster-Shafer Theory to identify image-text pairs with high uncertainty (low confidence), guiding the embedding of targeted yet nearly invisible adversarial patterns during the image compression process. This approach potentially complicates an attacker's ability to reconstruct sensitive semantic information, thereby offering stronger privacy protection. 
Additionally, as UAEO primarily focuses on mechanisms related to representation-level information, further complementary defense strategies could potentially be incorporated to strengthen the overall security guarantees of PSIC. Potential enhancements include adopting differential privacy techniques (e.g., “Gaussian differential privacy”, JRSS2022) to introduce controlled data obfuscation, employing multi-modal encryption methods for broader protection against cross-modal inference threats (e.g., “BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP”, CVPR2024), and regularly conducting security audits combined with adversarial retraining, which may help continuously maintain or further enhance robustness over time.

Q.3: Discuss the trade-offs between encryption efficiency and compression performance.

Ans.: Thanks for your comments. To illustrate the trade-offs between encryption and compression performance, we present how these metrics evolve during the second training stage, where the encoder is frozen (i.e., the resulting bitstreams remain unchanged). Specifically, we obtained checkpoints (bpp=0.13) at different epochs, and evaluated their encryption efficiency and compression performance. All test settings are identical to those described in our manuscript.

|Epoch|10|20|30|40|
|-|-|-|-|-|
|PSNR|24.82|24.73|24.65|24.63|
|ASR(i2t)|0.11|0.14|0.25|0.30|
|ASR(t2i)|0.15|0.18|0.26|0.32|

As you insightfully noted, there are indeed trade-offs between these two inherently conflicting objectives: increasing encryption efficiency comes at the expense of compression performance. Nevertheless, with the help of our CLTG module, PSIC achieves a favorable balance—maintaining compression performance comparable to standard codecs while providing significantly improved encryption strength.

Q.4: Compare PSIC with JPEG or WebP.

Ans.: Thanks for your suggestions. The comparison results on the Kodak testing set are shown below.
For a fair comparison of file size (indicated by bpp), we adjusted the quantization parameters of JPEG and WebP to achieve the same PSNR (25.85 dB) as our PSIC. Our PSIC runs on an NVIDIA 3090 GPU, while JPEG and WebP are run on an Intel Xeon Silver 4310 CPU.

|Method|BPP|Avg Encoding / Avg Decoding (ms)|
|-|-|-|
|PSIC (full version)|0.13|246/30|
|JPEG|0.28|150/2|
|WebP|0.11|50/4|

Q.5 Discuss the CLTG module’s generalization capacity.

Ans.: Thanks for your insightful suggestions. First, we would like to point out that the generalization performance of the CLTG module is CLOSELY TIED to the target DOWNSTREAM model. To ensure strong generalization, we adopt the CLIP model as the target, as it has been trained on a large-scale dataset (4 billion text-image pairs). As a result, the proposed PSIC method exhibits promising generalization ability across a variety of tasks. In our experiments, the test sets used for image classification, facial attribute analysis, and image captioning were entirely UNSEEN during training, yet our method achieved strong performance in both compression and encryption metrics.
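The file sizes in the table above are reported as bits per pixel (bpp), i.e. the bitstream length in bits divided by the image's pixel count. A one-line sketch of that conversion:

```python
def bits_per_pixel(stream_bytes, width, height):
    # bpp = bitstream size in bits / number of pixels in the image.
    return stream_bytes * 8 / (width * height)
```

For instance, a 768x512 Kodak image compressed to roughly 6.4 KB corresponds to about 0.13 bpp under this definition.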
Summary: This paper proposes Privacy-Shielded Image Compression (PSIC), a learned image compression framework that prevents Vision-Language Pretrained models from extracting semantic information while preserving perceptual quality. PSIC enables a single bitstream to be decoded into an encrypted version or a full version conditioned on user input. The method introduces a Conditional Latent Trigger Generation (CLTG) module and an Uncertainty-Aware Encryption-Oriented (UAEO) optimization function to enhance flexibility and attack effectiveness. Experimental results show that PSIC effectively disrupts VLP-based tasks, such as image-text retrieval and classification, while maintaining comparable compression quality to baseline models under the full version. Claims And Evidence: The effectiveness of the proposed methods requires more ablation studies, such as the Conditional Latent Trigger Generation Module and the Adaptive Multi-Objective Optimization Strategy. Methods And Evaluation Criteria: Yes, I believe that introducing uncertainty to constrain image-text matching is a reasonable approach and can more effectively attack VLP (Vision-Language Pretraining) models. Theoretical Claims: No theoretical proof. Experimental Designs Or Analyses: 1. I recommend the authors to add experiments on more effective baselines like ELIC[1], TIC[2]. 2. The authors should add ablations on the proposed Conditional Latent Trigger Generation Module and Adaptive Multi-Objective Optimization Strategy. In this paper, the authors only provide ablation results on proposed UAEO optimization. [1] He D, Yang Z, Peng W, et al. Elic: Efficient learned image compression with unevenly grouped space-channel contextual adaptive coding[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 5718-5727. [2] M. Lu, P. Guo, H. Shi, C. Cao and Z. Ma, "Transformer-based Image Compression," 2022 Data Compression Conference (DCC), Snowbird, UT, USA, 2022, pp. 469-469. 
Supplementary Material: No supplementary materials. Relation To Broader Scientific Literature: The author considers incorporating backdoor attacks into neural image compression for VLP models, which has good practical applicability for user privacy protection on social media. Essential References Not Discussed: No. Other Strengths And Weaknesses: Please refer to above comments. Other Comments Or Suggestions: Typos: 1. Line 099, "pravicy" → privacy. 2. Line 244, "paramterized" → parameterized 3. Line 246, "of for the image-text pair" → of the image-text pair 4. Line 310, unmatched number of parentheses in the equation S 1-1. Questions For Authors: If the VLP model is not CLIP, is the proposed method still applicable? Ethical Review Concerns: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: First of all, we would like to express our sincere gratitude for your responsible and constructive comments, which have been very helpful in further improving the quality of our manuscript. Below, we summarize your concerns into five key points and address them one by one. We hope our responses satisfactorily address all of your concerns. If there is any misunderstanding on our part, we would greatly appreciate any further clarification or additional suggestions.

Q.1: Ablation study on the CLTG module.

Ans.: We provide an ablation study on CLTG. We remove all CLTG modules and train only the compression backbone using the UAEO-based loss function $λ_1 L_2(x, \widehat{x}) + λ_2 L_{UC}(\widehat{x}) + r$, where $\widehat{x}$ is the reconstructed image and $r$ is the bitrate. For a fair comparison, we adjusted the values of $\lambda_1$ and $\lambda_2$ so that the ablated version achieves the same ASR level as the full version. Due to the time constraint, we only provide comparison results at the 0.20 BPP point, shown below, and we will complete the ablation study in our revised paper.

|Bpp|Meth.|PSNR|i2t|ASR(i2t)|t2i|ASR(t2i)|
|-|-|-|-|-|-|-|
|0.20|Ours($x_f$)|26.07|0.32|-|0.20|-|
||Ours($x_e$)|25.61|0.13|0.69|0.07|0.78|
||*w/o* CLTG|24.48|0.11|0.73|0.08|0.75|

As shown, the CLTG module’s effectiveness is evident: it significantly boosts the perceptual quality (1.59 dB PSNR) at the same ASR level.

Q.2: Ablation study on the multi-stage training strategy.

Ans.: We follow the nice suggestion to look into the effect of our multi-stage training strategy. In particular, we omit Stage 2 and instead train the model using the settings from Stage 1 until convergence (an additional 100 epochs). We present the comparison results regarding encryption performance (encrypted version) and perceptual quality (full version) below.
|Bpp|Meth.|ASR(i2t)|ASR(t2i)|PSNR($x_f$/$x_e$)|
|-|-|-|-|-|
|0.13|Ours|0.66|0.74|24.72/24.57|
||*w/o* stg2|0.05|0.06|24.70/24.68|
|0.20|Ours|0.69|0.78|26.07/25.61|
||*w/o* stg2|0.03|0.05|26.02/26.02|
|0.30|Ours|0.74|0.83|27.37/25.67|
||*w/o* stg2|0.04|0.06|27.28/27.27|
|0.46|Ours|0.82|0.88|28.78/27.34|
||*w/o* stg2|0.02|0.04|28.84/28.84|

As we stated in our manuscript, Stage 2 emphasizes enhancing the capacity for divergent representation. Without this stage, encryption performance would decrease by about 72% on average in terms of ASR.

Q.3 More effective compression baselines.

Ans.: Thanks for your constructive comments. We have implemented the PSIC scheme in two cutting-edge LIC models (ELIC and MLIC++). Due to time constraints, we are currently providing comparison results at a single BPP point (see below). All training configurations were kept strictly consistent with those in our manuscript to ensure fair comparisons. We will include comprehensive results on additional LIC baselines (including TIC) across multiple BPP points in the supplementary material of our revised paper. We appreciate your understanding.

||Meth.|PSNR|i2t|t2i|ASR(i2t)|ASR(t2i)|
|-|-|-|-|-|-|-|
|ELIC(bpp=0.19)|Baseline|27.24|0.32|0.22|-|-|
||Ours($x_f$)|26.64|0.30|0.19|-|-|
||Ours($x_e$)|26.05|0.12|0.07|0.74|0.79|
|MLIC++(bpp=0.18)|Baseline|27.26|0.35|0.22|-|-|
||Ours($x_f$)|26.93|0.32|0.21|-|-|
||Ours($x_e$)|26.44|0.13|0.09|0.69|0.73|

As shown, the proposed PSIC scheme can be readily integrated into different LIC models, demonstrating strong effectiveness in both encryption and compression performance.

Q.4 Applicability to other VLP models.

Ans.: Following your insightful suggestion, we have adopted another milestone VLP model, ALIGN (“Scaling up visual and vision-language representation learning with noisy text supervision”, ICML2021), as the target.
Specifically, we restructured the UAEO optimization function based on ALIGN, and trained the entire framework at a single BPP point (bpp = 0.13). The corresponding results are presented below, demonstrating the strong applicability of the proposed PSIC method to other VLP models.

|Meth.|PSNR|i2t|ASR(i2t)|t2i|ASR(t2i)|
|-|-|-|-|-|-|
|Comp. Baseline|24.83|0.26|-|0.17|-|
|$x_f$(CLIP)|24.72|0.24|-|0.17|-|
|$x_e$(CLIP)|24.57|0.11|0.66|0.07|0.74|
|$x_f$(ALIGN)|24.68|0.23|-|0.17|-|
|$x_e$(ALIGN)|24.32|0.17|0.43|0.14|0.47|

Q.5 Proofreading for typos.

Ans.: Thank you for your thorough review. We will definitely incorporate your suggestions into our revised manuscript, and we will carefully proofread the manuscript to eliminate any typos or grammatical issues.
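As a reading aid for the ASR columns in these tables: one plausible operationalization (an assumption on our part, not necessarily the authors' exact protocol) counts, among samples the VLP model retrieves correctly from the full reconstruction, the fraction it gets wrong on the encrypted one:

```python
def attack_success_rate(full_correct, encrypted_correct):
    # full_correct / encrypted_correct: per-sample booleans for whether the
    # VLP model's retrieval is correct on the full and encrypted
    # reconstructions, respectively.
    flipped = sum(1 for f, e in zip(full_correct, encrypted_correct) if f and not e)
    base = sum(full_correct)
    return flipped / base if base else 0.0
```

Under this reading, a higher ASR means the encrypted decoding flips more of the VLP model's previously correct predictions.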
POQD: Performance-Oriented Query Decomposer for Multi-vector retrieval
Accept (poster)
Summary: The paper introduces POQD, a framework for optimizing multi-vector retrieval (MVR) for retrieval-augmented generation (RAG) systems. The key idea is to improve retrieval performance by decomposing a query into sub-queries. POQD uses an LLM in two roles: one acts as a Query Decomposer that splits the input query into candidate sub-queries, and the other functions as a Prompt Optimizer that iteratively refines the prompt guiding the decomposition. It proposes an alternating training algorithm that alternates between optimizing the prompt and training the downstream retrieval model. The paper supports its contributions with experiments across image and text-based QA tasks.

### Update after rebuttal

The rebuttal helped clarify some of my doubts about the experiments, and I am raising my score to 3 in light of this. The results in the current manuscript are presented in a confusing manner and would greatly benefit from a clearer presentation.

Claims And Evidence: The central claim of the paper is that better query decomposition leads to significant improvements in multi-vector retrieval performance. It is compared against multiple query decomposition techniques and the simple token-based query decomposition (ColBERT), but the proposed model uses additional training data, and it is not clear whether the baselines have access to it; there are other questions around the experiments which need further clarification (see Questions below).

Methods And Evaluation Criteria: See "Experimental Designs Or Analyses" comments.

Theoretical Claims: Partially.

Experimental Designs Or Analyses: Experiments are performed on QA tasks, which is a sound choice for the evaluation of the contributed techniques. A better description of the baselines would help in gauging the quality of the baselines used.

Supplementary Material: No.
Relation To Broader Scientific Literature: Query understanding is an important step in performing information retrieval; better query decomposition, which is the focus of this paper, is one way to understand a query better and can help the subsequent retrieval pipeline.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:

### Strengths

- Paper is well motivated
- Results show significant improvements on ImageQA tasks (although I'm not convinced of the quality of the baselines)

### Weaknesses

- Limited contributions - the paper's main algorithm is to do an alternate minimization of prompt optimization and retriever training, both of which are by themselves well-known techniques; moreover, the theoretical reasoning is very basic, making very strong assumptions which do not seem fair in this setting
- The paper writing can be significantly improved; it's easy to read up to the motivation section, but the presentation can be improved in the method and experiment sections

Other Comments Or Suggestions:

- Algorithm 1 is hard to understand; many of its details are deferred to a later section, which makes a linear read of the paper confusing
- Appendix C link in section 6 is not correct
- Figure 5 differs in formatting from the rest of the figures

Questions For Authors: 1. What are the exact baseline methods in Tables 1 & 2 (the baseline description cites multiple works)? Also, are these numbers directly taken from their respective papers? Are these baselines also trained on the respective datasets or used off the shelf? 2. Why is Colbert much worse than the dense retrieval baseline for ImageQA accuracy? Is it because the underlying encoder is not trained on image data? In that case, the underlying encoder should be a capable image retrieval model. 3. Also, on TextQA, how is it possible that the retrieval accuracy for Colbert is quite high but the final end-to-end accuracy is not? 4.
How many iterations does it take on an average to finish the step 1 (prompt optimization for a given retrieval model)? 5. How does query decomposition compare with simple query rewriting? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your comments. Our responses are below:

+ About the additional training data: Indeed, in our experiments, we did not include additional training data to train POQD throughout the paper. During the training process, we train POQD using the same set of training samples as the baseline methods.

+ About the baseline methods: As explained in Section 5.1, S-QD aims to train a sequence-to-sequence model to generate decomposed sub-queries. Such a training process has been used in many prior works, such as (Zhou et al., 2022; Zhu et al., 2023; Guo et al., 2022; Wu et al., 2024). We thus reuse the code from these papers to train this query decomposition model. For U-QD, we directly use the released unsupervised model for performing query decomposition. ICL-QD performs query decomposition by prompting LLMs; we thus leverage the prompts from prior works to decompose queries through in-context learning. For ICLF-QD, we follow (Qi et al.) to add retrieval scores as feedback in the prompts for in-context learning. The experimental results of these baseline methods are not taken from prior papers. As explained in Appendix C, S-QD, U-QD, ICL-QD and ICLF-QD haven't been evaluated for decomposing queries for retrieval in the context of RAG. We thus implemented these methods in our experiments and obtained these numbers ourselves.

+ About the theoretical analysis: See our response to reviewer EhPc (About the theoretical analysis).

+ About the contribution of the paper: See our response to reviewer EhPc (About the contribution of the paper).

+ About the image retrieval performance of ColBert: Yes, as mentioned in lines 319-321, in the image QA experiments, we leverage the pre-trained CLIP model as the underlying image retrieval model for all methods, including ColBert. The reason why ColBert performs worse is that the original version of ColBert has not been applied to image retrieval yet.
To mitigate this, we further included the results of ColPali [1] below, which is a variant of ColBert adapted for image retrieval. The results indicate that our method still outperforms ColPali even though it is adapted for image retrieval.

|ManyModalQA|Top-1 retrieval acc.|QA accuracy|
|-|-|-|
|Colbert|16.30|29.80|
|ColPali|21.05|32.29|
|POQD|**28.67**|**37.92**|

|MultiModalQA|Top-1 retrieval acc.|QA accuracy|
|-|-|-|
|Colbert|16.53|42.61|
|ColPali|36.96|49.13|
|POQD|**48.56**|**61.74**|

+ About ColBert text QA results: See our response to reviewer Nopx (Regarding the retrieval results in Table 1).

+ About the number of iterations for step 1: We configure the number of iterations as 5 by default. If we can find a prompt $p^{new}$ that leads to a training loss at least $\alpha$ smaller than the current one within 5 iterations, then we terminate this loop early. Otherwise, we terminate the entire process of updating prompts in Algorithm 2.

+ About Algorithm 1: Some details of this algorithm, particularly the while loop's convergence condition, are deferred to Section 4.3. The loop terminates if the loss with an updated prompt is at least $\alpha$ lower than the initial loss (best\_L in Algorithm 2, equivalent to Algorithm 1's initial loss) within 5 iterations. In other words, convergence occurs when any updated prompt reduces the loss by at least $\alpha$ compared to the initial training loss. We will clarify this later.

+ About the comparison against the query rewriting strategy: We further compare POQD against the strategy of performing query rewriting (we follow [2]) on WebQA (text). The following results again show the better performance of POQD:

||Top 2 retrieval acc.|QA acc.|
|-|-|-|
|query rewrite|28.42|52.16|
|POQD|**53.96**|**61.87**|

[1] Faysse et al. "ColPali: Efficient document retrieval with vision language models." ICLR 2024.
[2] Ma et al. "Query Rewriting for Retrieval-Augmented Large Language Models." arXiv.
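The convergence rule described in this rebuttal can be sketched as a small loop. In this sketch, `propose` (a call to the LLM-based prompt optimizer) and `evaluate` (the training loss under a candidate prompt) are hypothetical stand-ins for components not shown here, and the default `alpha` value is illustrative:

```python
def update_prompt(propose, evaluate, best_loss, alpha=0.05, max_iters=5):
    # Try up to max_iters candidate prompts; accept the first one whose
    # training loss undercuts the current best by at least alpha.
    # Returning None signals that prompt updating should terminate.
    for _ in range(max_iters):
        p_new = propose()
        loss = evaluate(p_new)
        if loss <= best_loss - alpha:
            return p_new, loss
    return None, best_loss
```

The outer alternating algorithm would then interleave this step with retriever training, stopping prompt updates once no candidate improves the loss by the margin.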
Summary: The paper proposes an approach to decompose a query into sub-queries, where the decomposition is optimized to obtain better performance on the end task. The decomposition is performed by meta-prompting an LLM. It is expected that the LLM is able to iteratively generate better prompts when the performance measure is provided as feedback. The proposed approach is tested on several datasets, showing generally better performance measures than the baselines.

Claims And Evidence: The main idea of the paper is to iteratively improve the prompt to the LLM to generate sub-queries. To this end, an iterative process is designed. Under some conditions, the paper shows that the decomposition improves. This idea sounds very interesting. The experiments provide evidence to show the utility of the process.

Methods And Evaluation Criteria: The main idea is attractive. The use of a meta-prompt to generate prompts for query decomposition is innovative. The idea sounds intuitive. However, the paper does not provide sufficient information on what prompts the process may produce. It is unclear whether the iterative process will converge. At the end of the iterative process, what type of prompt for query decomposition is generated? Does it correspond to some specific form of prompt? Or some pattern for the prompt? If some examples were provided, and better, some analysis of what the iterative process will produce, one could better understand the approach. The experiments show some advantages of the proposed method over the existing baselines. However, the comparison is mainly quantitative. It is unclear why the proposed method can produce better decompositions. Some analysis of the different behaviors of the different approaches would help. The proposed approach also includes a step of filtering sub-queries with irrelevant tokens. How is this done? How do you determine that a token is irrelevant? How important is this filtering in the whole method?
Would part of the gain for the method be generated by this filtering? If this is the case, then the superior performance of your method is not produced solely by the optimized decomposition process. Would a similar filtering process be applicable to other baseline methods? This should be discussed in the paper. Overall, the evaluation demonstrates the superiority of the proposed method, but there is not sufficient qualitative analysis to allow for better understanding of the reasons. Theoretical Claims: The paper contains some theoretical results, showing that under some conditions, the iterative optimization process can improve. As recognized by the authors, the conditions will not be satisfied in practice, making the utility of the theoretical results less strong. Despite this, the results are informative. Experimental Designs Or Analyses: The experiments are performed on several datasets. The general design of the experiments is correct. There is a lack of discussion about the reasons that better performance is obtained with the proposed method, and its differences with the existing baselines, especially on the sub-queries that can be generated. Supplementary Material: The experimental details in the appendix are reviewed. They are useful. Some details about the filtering step are still missing. It would be good to provide examples of the prompts that are generated at the end for query decomposition. Relation To Broader Scientific Literature: The key contribution of the paper is to rely on an LLM to decompose queries into sub-queries. The performance-aware optimization is believed to produce a better decomposition than the existing methods. This idea sounds interesting. However, there is a lack of explanation of the process, so one cannot be fully convinced that the algorithm can indeed produce better decompositions. The main related methods in the literature are cited. Essential References Not Discussed: The main references are cited. They are correctly described. 
However, the comparison with them is superficial. Other Strengths And Weaknesses: Some statements need further support.
- "Notably, the similarity score between the token “Kong” and the photo of Lee Kuan Yew is exceptionally high,": It is difficult to understand why this is produced. Can you provide a plausible reason for it? This may help understand the nature of the problem.
- The theoretical parts of the paper are more difficult to understand. This is partly due to the fact that some concepts are used without explanation. For example, μ-strongly convex and L-smooth are not explained.
- Sentence-BERT is used to encode sub-queries and the corpus: "We thus employ the Sentence-Bert model (Reimers, 2019) for encoding sub-queries and corpus for other baseline methods as well as POQD.". How is this done? Do you consider each sub-query as a sentence and ask Sentence-BERT to encode it? How about for the documents? Do you encode each sentence with it?
- In Fig. 6, why is there such a large retrieval time for WebQA (image)?
Other Comments Or Suggestions: General suggestion: provide more details about the decompositions that can be obtained, more analysis of the experimental results, and more qualitative comparisons with the baselines. Questions For Authors: see the questions above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your comments. Our responses are below:

+ About qualitative examples: Thanks for pointing out this issue. Indeed, in Figure 1, we leverage one example to show the differences between ColBERT, ICL-QD and POQD. We further expand it by reporting the decomposed sub-queries (both before and after the filtering step) generated by other baseline methods as follows (the retrieved figures are shown in this [link](https://anonymous.4open.science/r/Example-9717)).

| | before filtering | after filtering |
|-|-|-|
|S-QD| ['What is Victoria, Hong Kong known for having in abundance at its waterfront', 'Is a type of building'] |["What Victoria, Hong Kong", "type buildings"] |
|U-QD | ['where is Victoria' ,'what is in front of the buliding'] |['Victoria' ,'bulidings'] |
|ICL-QD| ["historical significance of Victoria Hong Kong", "buildings", "type"] | ["Victoria Hong Kong", "buildings", "type"] |
|ICLF-QD| [ 'What is the historical significance of colonial buildings in Victoria, Hong Kong ', 'What are the different architectural styles found in Victoria, Hong Kong'] |["buildings Victoria, Hong Kong"] |

Before filtering, the sub-queries from baselines contain many irrelevant tokens, as analyzed below:
1. S-QD: Trained on StrategyQA with human-annotated sub-queries. However, when applied to other datasets, distribution shifts introduce irrelevant tokens.
2. U-QD: Its candidate sub-queries are collected from datasets (e.g., SQuAD), leading to irrelevant tokens in generated sub-queries.
3. ICL-QD/ICLF-QD: Their LLM-generated sub-queries may contain hallucinated irrelevant tokens.

Unlike our method, baselines lack iterative refinement using downstream feedback, causing them to either miss key tokens (e.g., U-QD drops "Hong Kong") or retain unimportant ones (e.g., S-QD keeps "type" and "what"). This leads to incorrect image retrieval and answers. We will add this analysis in the revision. 
+ About the prompts produced by POQD: We ran Algorithm 1 for four steps; the prompts and generated prefixes at each step are included in this [link](https://anonymous.4open.science/r/Training-Loss/opt_iter). The LLM optimizer indeed produces variations of query decomposition instructions, searching for variants that minimize the retrieval system’s training loss.

+ About the filtering step: As noted in lines 198-201, we only filter irrelevant tokens to ensure query decomposition correctness. For LLM-based methods (ICL-QD, ICLF-QD), retrieval and QA accuracy remain nearly identical with or without POQD's filtering, as shown in our ManyModalQA (image) ablation studies below.

|Top-1 retrieval accuracy | w/o filtering | w/ filtering|
|-|-|-|
|S-QD| 28.15| 27.05|
|U-QD| 26.86|26.54|
|ICL-QD| 27.76| 27.51|
|ICLF-QD| 27.89| 28.37|

|QA score | w/o filtering | w/ filtering|
|-|-|-|
|S-QD| 35.82|35.42 |
|U-QD| 33.73|33.46 |
|ICL-QD| 34.70|36.37 |
|ICLF-QD| 34.77| 35.56|

We also removed the filtering step from our method and retested on ManyModalQA (image). The results below show that both retrieval and QA accuracy are reduced by 2%-5% but remain higher than those of the baseline methods.

||Top-1 retrieval accuracy|QA accuracy|
|-|-|-|
|POQD (w/o filtering) |28.15 | 36.12|
|POQD| 28.67 | 37.92 |

+ About the theoretical analysis: See our response to reviewer EhPc (About the theoretical analysis).

+ About the example in Section 2.1: As explained in Appendix B, image retrieval in image QA involves segmenting images into patches and encoding each of them. Due to the maxsim operation, the similarity between a sub-query and an image depends on the most similar patch to this sub-query. In Section 2.1’s example, CLIP considers Patch A [black-filled](https://anonymous.4open.science/r/Training-Loss/kong.png) most similar to "kong," while the ground-truth image’s Patch B [buildings](https://anonymous.4open.science/r/Training-Loss/kong2.png) is less similar. 
Since "kong" refers to a gorilla-like monster (see [Wiki](https://en.wikipedia.org/wiki/King_Kong)), Patch A yields unrealistically high similarity, ranking Lee Kuan Yew’s image above the ground truth. We will clarify this in the revision.

+ About query and document encodings: Yes, we encode sub-queries using pre-trained models like Sentence-BERT. For documents, we encode each sentence similarly for all methods except ColBERT to ensure fair comparison (lines 311-321). ColBERT inherently tokenizes queries and documents, so we use its default setup. But we also test it on WebQA by processing documents like other methods. The top-2 retrieval accuracy (48.92) and QA score (59.71) under this setup are worse than ColBERT's default (61.15 and 60.79 resp.).

+ About the retrieval time in Figure 6: The retrieval time on WebQA is much smaller than the generator time. You may mean the longer retrieval time on StrategyQA, which, as noted in lines 307-311, is due to multi-hop QA requiring repeated reasoning and retrieval per question. We will add this explanation to Section 5.2.

---
Rebuttal Comment 1.1: Comment: Thanks for answering the questions. The question about filtering still remains. The filtering removes some irrelevant tokens. How is a token's relevance (or irrelevance) determined? About the prompts produced by POQD, unfortunately, I can't open the link to see the examples. So, I still don't understand what type of prompts will be generated. For qualitative analysis of the results, do you have some potential reasons that can explain why your method performs better than the baselines?

---
Reply to Comment 1.1.1: Comment: Thanks for your responses! First, sorry for the incorrect link for showing the prompts of POQD. 
The prompts used during the four optimization steps are included in the following links: [step 1](https://anonymous.4open.science/r/opt_iter/opt_iter_1.png), [step 2](https://anonymous.4open.science/r/opt_iter/opt_iter_2.png), [step 3](https://anonymous.4open.science/r/opt_iter/opt_iter_3.png) and [step 4](https://anonymous.4open.science/r/opt_iter/opt_iter_4.png). Second, regarding the irrelevant tokens, **we regard those tokens appearing in the sub-queries but not from the original queries as irrelevant**. For instance, given the original query is "Victoria Hong Kong has many what type of buildings?" in Figure 1, before the filtering step, the sub-queries produced by U-QD are 'where is Victoria' and 'what is in front of the buliding'. In those two sub-queries, the tokens "where", "is", "in", "front", "the" are not from the original query, and thus removed in the filtering step, which thus results in 'Victoria' and 'what of bulidings' (sorry for the typos in the rebuttal) as final sub-queries after the filtering step. This aims to remove the potentially negative effects of those irrelevant tokens appearing in the sub-queries. **In addition, as shown in "About the filtering step" above, no matter whether the filtering step is used or not, POQD always outperforms baseline methods**. Third, regarding the potential reasons why the baseline methods fail, in addition to the above explanations, we may include the following more intuitive explanations in the revision. **For ICL-QD and U-QD, both of them do not receive any feedback for refining their decomposed sub-queries**. The former relies on the pre-trained LLMs while the latter employs one unsupervised training algorithm to train its query decomposition model. Additionally, **for ICLF-QD and S-QD, although they can receive external feedback for optimizing the query decomposition process, their objectives are not for optimizing the downstream RAG scores**. 
Specifically, ICLF-QD evaluates sub-query quality by leveraging another LLM, while S-QD evaluates the quality of the sub-queries based on whether they are aligned with the human-annotated sub-queries. However, neither the LLM feedback nor these human-annotated sub-queries can necessarily guarantee better performance of downstream retrieval-based systems. In contrast, our method refines the query decomposition process by optimizing the RAG performance directly, which can thus produce better performance than baseline methods.
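The token-filtering rule spelled out in this reply (a sub-query token is irrelevant if it does not occur in the original query) can be sketched as follows; the case-insensitive whitespace tokenization is an illustrative assumption, not necessarily the paper's exact normalization:

```python
def filter_subquery(subquery, original_query):
    """Drop sub-query tokens that do not appear in the original query.
    Tokenization here is plain whitespace splitting with case-insensitive
    matching (an assumption made for illustration)."""
    allowed = {tok.lower() for tok in original_query.split()}
    kept = [tok for tok in subquery.split() if tok.lower() in allowed]
    return " ".join(kept)
```

For the query "Victoria Hong Kong has many what type of buildings", the sub-query "where is Victoria" reduces to "Victoria", mirroring the U-QD behavior described in the reply.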
Summary: The paper presents Performance-Oriented Query Decomposer (POQD), a framework for optimizing query decomposition in retrieval-augmented generation (RAG) tasks, particularly in multi-vector retrieval (MVR). POQD leverages an LLM-driven iterative optimization strategy to generate sub-queries that enhance downstream question-answering (QA) performance. Extensive experiments on RAG-based QA tasks, covering both image QA and text QA have been reported to demonstrate the effectiveness of the proposed framework. ##update after rebuttal The authors have addressed most of my concerns satisfactorily, I am therefore updating my recommendation to a weak accept. Claims And Evidence: 1. The second contribution of the paper, which pertains to the proposed training algorithm, lacks significant novelty. First, the authors adopt the optimization approach introduced in LLMs as Optimizers for query decomposition, rather than proposing a fundamentally new algorithm. This reliance on existing techniques diminishes the originality of their contribution. Second, the end-to-end training framework is essentially an alternative optimization approach, which is a straightforward and intuitive strategy rather than an innovative methodological advancement. Third, the theoretical analysis presented in the paper is based on critical assumptions that are questionable. Notably, the assumption that the training loss function is strongly convex is highly unrealistic, particularly in the context of complex neural networks, where loss landscapes are typically non-convex and may contain multiple local minima and saddle points. Such an assumption weakens the theoretical foundation of the proposed approach. 2. Furthermore, the third claimed contribution, which concerns the empirical performance of the proposed POQD method, also presents notable issues. While the authors argue that POQD outperforms existing methods, a closer examination of the reported results reveals inconsistencies. 
Specifically, POQD does not consistently achieve the highest retrieval accuracy across all benchmark comparisons, raising questions about the robustness and generalizability of the approach. The lack of consistent superiority over baselines suggests that the empirical results may not fully substantiate the claimed advantages of POQD. Methods And Evaluation Criteria: The experimental evaluation covers both image-based question answering (image QA) and text-based question answering (text QA) scenarios. The evaluation was conducted using four benchmark datasets: WebQA, MultiModalQA, ManyModalQA, and StrategyQA. Among these, WebQA, MultiModalQA, and ManyModalQA include questions that require retrieval from multiple modalities. However, the paper presents image QA results for all three multimodal datasets while reporting text QA results exclusively for WebQA, without providing a clear rationale for this selective reporting. This omission raises concerns regarding the completeness and consistency of the experimental setup, as it remains unclear why text QA was not evaluated across all relevant datasets. Theoretical Claims: I have carefully reviewed the proof. Based on my analysis, given those rigorous assumptions, the logical progression and mathematical reasoning appear to be technically sound. Experimental Designs Or Analyses: 1. The experimental section of this paper lacks critical implementation details, making it difficult to fully understand and reproduce the proposed approach. Specifically, the implementation of U-QD, ICL-QD, and ICLF-QD is not clearly described. It remains unclear which large language model (LLM) is utilized for both the prompt optimizer and the query decomposer. Additionally, key hyperparameters such as the value of alpha and the number of retrieved items used in the retrieval-augmented generation (RAG) system are not explicitly stated. 2. 
Moreover, the analysis section lacks an ablation study on the hyperparameter alpha, which is essential for understanding its impact on the overall performance of the proposed method. 3. Furthermore, the experiments do not include an evaluation of the framework’s robustness and generalization ability across different settings. Specifically, there is no empirical analysis demonstrating the performance variations when using different LLMs, retrieval models, or generator models. Supplementary Material: The supplementary material appears to include code; but I have not examined it in detail. Relation To Broader Scientific Literature: The idea of optimizing query decomposition with respect to the final downstream performance might be interesting to broader scientific literature. Essential References Not Discussed: None Other Strengths And Weaknesses: The strengths and weaknesses have been listed above. Other Comments Or Suggestions: 1. In Assumption 4.3, located on line 274 of page 5, the symbol F(\theta;p) may have been incorrectly represented and should potentially be L(\theta;p). 2. In the Time Analysis section on page 7, the sentence starting with "the generator model is not fine-tuned for this multi-hop QA dataset The results," in line 337 appears to be missing a period. Questions For Authors: 1. The proposed training algorithm builds upon LLM-based optimization for query decomposition. Could you clarify what specific methodological innovations distinguish your approach from prior work? 2. Could you provide additional insights into the conditions under which POQD performs best and where it struggles? Additionally, have you tested POQD with different LLMs, retrieval models, or generator models to test its robustness? Demonstrating consistent improvements across diverse settings would strengthen the empirical claims. 3. The paper provides image-based QA results on all three multimodal datasets but reports text-based QA results only for WebQA. 
What was the rationale behind this selective reporting? Would including text QA results for MultiModalQA and ManyModalQA provide additional insights into POQD’s effectiveness across modalities? 4. The impact of the hyperparameter alpha on overall performance is not analyzed in the paper. 5. The paper lacks details regarding the implementation of U-QD, ICL-QD, and ICLF-QD. Could you provide more information on the specific LLMs used for query decomposition and prompt optimizer, as well as key hyperparameters such as the value of alpha and the number of retrieved items in RAG? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your comments. Our responses are below:

+ About the contribution of the paper: While we don't propose a novel LLM-based optimization method, our key contribution is recognizing the need to optimize query decomposition for multi-vector retrieval, a critical factor in improving retrieval-based systems. To address this, we adapt an LLM-based optimization strategy for query decomposition and jointly train it with the generator model. We believe this work fits the scope of ICML's application track.

+ About the training algorithm: While our alternating training algorithm may seem straightforward, it could incur higher training overhead compared to optimizing only the RAG generators, a concern for practical applications. However, our theoretical analysis (lines 281-297) shows that with proper hyper-parameters, the added overhead is minimal without compromising optimality, as empirically confirmed in Figure 4. Thus, our algorithm effectively balances efficiency and effectiveness, as highlighted in Contribution 2.

+ About the theoretical analysis: While our theoretical analysis assumes strong convexity, it also holds under the weaker Polyak-Łojasiewicz star (PL*) condition [2]. Prior work [1] suggests that pre-trained, over-parameterized LLMs fine-tuned with GaLore, a variant of LoRA, likely satisfy the PL* condition. Since the model $\Theta$ in the paper is a pre-trained LLM fine-tuned with GaLore, the analysis in [1] applies. Assuming the loss function $L(\Theta;p)$ satisfies the $\mu$-PL* condition (instead of $\mu$-strong convexity), a variant of Lemma A.1 yields: $L(\Theta_k; p) - L(\Theta^*; p) \leq (1-\mu/L)^k (L(\Theta_0; p) - L(\Theta^*; p))$. Similarly, a modified Theorem 4.4 gives: $L(\Theta^*(p^{old}); p^{old}) - L(\Theta^*(p^{new}); p^{new}) \geq \alpha - (1-\mu/L)^{\tau} M$. These adjustments preserve our conclusion that Algorithm 2 balances efficiency and effectiveness with proper hyper-parameters. We will include this refined analysis. 
+ About the inconsistent retrieval results: See our response to reviewer Nopx (Regarding the retrieval results in Table 1)

+ About the text QA results: Indeed, we report both image QA and text QA results for WebQA in Tables 1 and 2. The text QA results on MultiModalQA and ManyModalQA below show that our method consistently outperforms baselines:

|Top-2 retrieval acc.|MultiModalQA|ManyModalQA|
|-|-|-|
|Dense retrieval|66.44|49.25|
|Colbert|79.89| 87.07|
|S-QD|56.17|68.07|
|U-QD|45.21|67.89|
|ICL-QD|71.43|85.14|
|ICLF-QD|69.76|66.49|
|POQD|**80.58**|**92.35**|

|QA acc.|MultiModalQA|ManyModalQA|
|-|-|-|
|without RAG|40.36|32.28|
|Dense retrieval|59.36|41.25|
|Colbert|61.73|77.66|
|S-QD|54.92|62.62|
|U-QD|49.24|60.95|
|ICL-QD|61.86|76.69|
|ICLF-QD|63.52|60.07|
|POQD|**68.10**|**81.27**|

+ About implementation details:
1. U-QD: We reuse the code from (Perez et al., 2020).
2. ICL-QD & ICLF-QD: Both follow straightforward principles: ICL-QD uses manual prompts for in-context learning, while ICLF-QD adds relevance scores to retrieved documents (see Section 5.1). These prompts will be included in the revision.
3. We used GPT-4 as the prompt optimizer and query decomposer (details in revision).
4. Parameters: Default $\alpha=0.02$; retrieved items set to 1 (image QA) and 2 (text QA).

+ Ablation studies on WebQA (text): We first varied the value of alpha and tracked how the training loss varies across the training process, which is visualized [here](https://anonymous.4open.science/r/Training-Loss/alpha_ablation_study.jpg). If alpha is too large (say alpha=0.05), POQD struggles to find a suitable $p^{new}$ in Algorithm 1, thus causing the underfitting issue. In contrast, if alpha is too small (say alpha=0.01), POQD converges much more slowly than our default with alpha=0.02. Hence, alpha=0.02 can balance the convergence speed and the final performance. 
Ablation on the generator model (from Llama to Qwen2.5):

|||
|-|-|
|Dense retrieval|57.19|
|Colbert|57.55|
|S-QD|51.44|
|U-QD|50.72|
|ICL-QD|57.91|
|ICLF-QD|56.47|
|POQD| **59.35**|

Ablation on the query decomposer (from Llama to GPT-4):

|||
|-|-|
|ICL-QD|55.40|
|ICLF-QD|48.92|
|POQD|**55.40**|

Ablation on the retrieval model (from Sentence-BERT to RoBERTa):

||Top-2 retrieval acc.|QA acc.|
|-|-|-|
|Dense retrieval|22.29|58.63|
|Colbert|43.53|60.43|
|S-QD|22.30|58.99|
|U-QD|20.86|57.91|
|ICL-QD|39.57|59.71|
|ICLF-QD|34.89|60.07|
|POQD|**43.88**|**61.51**|

Ablation on the number of retrieved items:

|\# of items|1|5|
|-|-|-|
|Dense retrieval|56.31|62.41|
|Colbert|62.41|69.21|
|S-QD|51.60|54.23|
|U-QD|50.49|55.48|
|ICL-QD|61.58|66.85|
|ICLF-QD|57.56|60.75|
|POQD| **62.97**|**69.63**|

These results show the superiority of POQD under various conditions.

[1] Liu, X. H. et al. "On the Optimization Landscape of Low Rank Adaptation Methods for Large Language Models." ICLR 2023.
[2] Liu, C. et al. "Loss Landscapes and Optimization in Over-Parameterized Non-Linear Systems and Neural Networks." Appl. Comput. Harmon. Anal.
Summary: The paper tackles an important problem of jointly optimizing the query decomposition and retrieval model for a downstream generation task. The query decomposition and embedding model are trained alternately. Given an embedding model, the query decomposition is performed using an LLM with the optimization space restricted to prompts. To navigate this space, they use existing ideas from Yang 2024 to have the LLM generate better and better prompts. Given a prompt, the model is trained for some iterations using standard optimization (e.g., gradient descent). The authors find that this achieves superior RAG performance on downstream QA tasks -- specifically, the performance is starkly improved on MultiModalQA data. Claims And Evidence: A few claims that I was not convinced about are 1. The query decomposition trained the way proposed in the paper achieves superior retrieval results -- this does not seem to be true generally. Specifically, the retrieval accuracy in Table 1 (which measures if the correct document is retrieved) shows that POQD does not improve retrieval across the board. It has phenomenal improvements in MultiModalQA. But it can be quite suboptimal when it comes to text QA when compared to ColBERT. It is interesting that despite this, POQD has consistent improvement over all baselines in the downstream task. I believe that the optimization considers whether the LLM generating the answer with the retrieved documents can generate the correct answer from the correct document or not -- and that seems to tip the scales in favor of POQD. Am I understanding this correctly? 2. The purpose of the theoretical analysis section is unclear to me. On the face of it, it seems like a statement is being made about convergence of the POQD training. However, one of the most complex pieces of training -- prompt generation -- is abstracted out in the proof by simply assuming that p_new has lower loss than p_old by an amount of $\alpha$ (line 674). 
Once you assume something like this, along with strong convexity, it is not surprising that the result shows convergence. However, loss(p_new) being less than loss(p_old) is the trickiest part imo. I understand that it is hard to even begin analysing this. But Section 4.4 needs to highlight the issue that POQD can potentially have unstable training, where you never get a p_new that has lower loss, because the prompt-generating LLM may not satisfactorily navigate the prompt space. Methods And Evaluation Criteria: I am not well versed with this area of research. The method and evaluation seem reasonable to me based on a broader understanding of the field. Theoretical Claims: I did not check the correctness. But the result seems okay. Some discussion needs to be added though (see the claims section). Experimental Designs Or Analyses: I am not well versed with this area of research. The method and evaluation seem reasonable to me based on a broader understanding of the field. Supplementary Material: No. Relation To Broader Scientific Literature: I am not well versed with this area of research. Essential References Not Discussed: I am not well versed with this area of research. Other Strengths And Weaknesses: [strength] 1. The paper is well written with good examples. [weakness] 1. The retrieval results in Table 1 seem contradictory. Other Comments Or Suggestions: 1. Line 160: n sub-queries (do you mean k sub-queries?) 2. Can you provide more empirical details on training. A few things that might help: -- how well do prompt scores align with the loss? -- can you show us the plots of the training loss? I suspect we can see some unexpected spikes at the prompt update step. Questions For Authors: Please see my concerns above. Code Of Conduct: Affirmed. Overall Recommendation: 3
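The alternating scheme the summary above describes (an LLM proposes a decomposition prompt, then the model is trained for some iterations by gradient descent) can be sketched with a toy scalar model; the helpers `propose` and `grad` are hypothetical stand-ins, not the paper's code:

```python
def alternate_train(theta, prompt, propose, grad, rounds=3, tau=5, lr=0.1):
    """Alternate between (1) a prompt-space step proposed by an LLM and
    (2) tau standard gradient steps on the model parameter theta."""
    for _ in range(rounds):
        prompt = propose(prompt)      # LLM proposes a new decomposition prompt
        for _ in range(tau):          # inner optimization under this prompt
            theta = theta - lr * grad(theta, prompt)
        # In the actual algorithm, acceptance of the new prompt is gated on
        # sufficient training-loss reduction; omitted here for brevity.
    return theta, prompt
```

With `grad(theta, prompt) = 2 * theta` (minimizing theta squared), `theta` shrinks by a factor of 0.8 per step, illustrating the inner standard-optimization phase between prompt updates.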
Rebuttal 1: Rebuttal: We thank you for your comments. You can find our responses below:

+ Regarding the retrieval results in Table 1: We admit that these results look confusing. Indeed, the discrepancy between Table 1 and Table 2 can arise from the fact that we report Top-20 and Top-100 retrieval accuracy while we only retrieve the most relevant image for image QA and the two most relevant documents for text QA. Hence, if we report Top-1 and Top-2 retrieval accuracy instead (as shown below), we can see that our method always performs better than the state-of-the-art:

|Top-1 retrieval accuracy | ManyModalQA (image) |
|---|------------------------|
| Dense retrieval | 27.38 |
| Colbert |16.30|
|S-QD|28.15|
|U-QD|26.86|
|ICL-QD|27.76|
|ICLF-QD|27.89|
|POQD|**28.67**|

|Top-2 retrieval accuracy | WebQA (text) |
|---|------------------------|
| Dense retrieval |52.96|
| Colbert |52.16|
|S-QD|48.56|
|U-QD|46.04|
|ICL-QD|41.37|
|ICLF-QD|51.80|
|POQD|**53.24**|

+ Regarding the theoretical analysis: We agree that proving the convergence of POQD is trivial. However, the main message that we want to deliver in Theorem 4.4 (and the following explanations between lines 290 and 297) is to analyze under what conditions $p^{new}$ would lead to a better training loss at convergence than that of $p^{old}$. This is crucial in demonstrating that the loss $L(\theta;p)$ is indeed optimized with respect to $p$. Otherwise, it is likely that with the prompt $p^*$ finally derived by Algorithm 2, the converged training loss $L(\theta^*(p^*); p^*)$ is even worse than that with the initial random prompt $L(\theta^*(p^{init}); p^{init})$. This would thus invalidate the optimality of $p^*$ derived by Algorithm 2. 
To guarantee that this solution is optimal, we need to make sure that updating $p^{old}$ to $p^{new}$ can lead to a sufficiently large training loss reduction (by $\alpha$) within 5 iterations in Algorithm 1 (which will be clarified in the description of Algorithm 2 in the revision). Otherwise, we terminate updating the prompt.

+ Regarding the possibly unstable training issue: We agree that without identifying a $p^{new}$ that can reduce the training loss by $\alpha$, the training process can be unstable. If we cannot find such a $p^{new}$ within 5 iterations in Algorithm 1 (which will be further clarified near lines 254-257 in the revision), we break the while loop in Algorithm 1 and stop updating $p$. The empirical results in Section 5 indeed demonstrate the performance advantage of our method with this strategy. We would love to elaborate more on this point in the revision.

+ Regarding the relationship between the prompt scores and loss: As we point out in line 180 in the right column, the training loss $L(\Theta; p)$ is viewed as the score.

+ Regarding the plots of training loss: We included the plot of training loss in this [link](https://anonymous.4open.science/r/Training-Loss/Training_loss.jpg), which shows no spike during the entire training process of Algorithm 2 (including the prompt update process). As mentioned above, we only update $p^{old}$ to $p^{new}$ if the training loss is reduced by at least $\alpha$ within 5 steps (as explained in the response to reviewer PnGe). Otherwise, we terminate the training process. This can thus guarantee the smooth reduction of the training loss.

---
Rebuttal Comment 1.1: Comment: Thanks for the response. I do not have any additional questions. I will maintain my current evaluation for this paper.
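Several of the rebuttal responses above appeal to the maxsim operation of multi-vector retrieval: each sub-query vector is scored by its best match among the document's token (or image-patch) vectors, and these scores are summed. A minimal sketch over plain lists of vectors, with no particular encoder assumed:

```python
def dot(u, v):
    # Inner product of two equal-length vectors.
    return sum(a * b for a, b in zip(u, v))

def maxsim_score(subquery_vecs, doc_vecs):
    """MaxSim: each sub-query vector contributes the similarity of its
    single best-matching document vector (a patch, in the image-QA case)."""
    return sum(max(dot(q, d) for d in doc_vecs) for q in subquery_vecs)
```

Because each sub-query is matched only to its single best patch, one spuriously similar patch can dominate the score, which is exactly the "Kong" failure mode discussed in the responses above.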
DMOSpeech: Direct Metric Optimization via Distilled Diffusion Model in Zero-Shot Speech Synthesis
Accept (poster)
Summary: This paper proposes DMOSpeech, a speech synthesis method. It utilizes Connectionist Temporal Classification (CTC) loss and Speaker Verification (SV) loss to realize the direct optimization of diffusion-based models. It was evaluated using subjective and objective tests. It outperforms the previous methods in most metrics. Claims And Evidence: The authors claim that the proposed method provides a direct pathway to realize the end-to-end optimization of the diffusion-based synthesis model. Performance with it is better than without it. The subjective and objective evaluation supports this claim. Methods And Evaluation Criteria: The evaluation methods are sound. The authors conducted many subjective tests using human annotators, and the results were informative. Theoretical Claims: The proposed method's theoretical claim is that it has direct gradient pathways to all the model components. I believe this is sound enough. Experimental Designs Or Analyses: The authors perform subjective and objective tests. They are fine, but it is not clear how the results of these two are correlated. If their results are consistent, the costly subjective tests would be meaningful only for ablation studies. Supplementary Material: The details of CTC loss and SV loss are in the Supplementary Material, which I reviewed. Relation To Broader Scientific Literature: The novel point is the use of CTC loss and SV loss to realize direct optimization of a diffusion-based speech synthesis model. This point is somewhat limited to the domain of speech synthesis. Essential References Not Discussed: Most of the references are OK. However, Table 3 does not report the result of the objective test of the previous study StyleTTS-ZS. Therefore, it is not clear how much improvement the proposed method achieves. ## update after rebuttal ## The comparison result with StyleTTS-ZS will be added. Then, there will be no problem at this point. 
Other Strengths And Weaknesses: It seems the control parameters of the model (such as \lambda) and the learning parameters are difficult to choose. Ablation studies in this regard are needed. ## update after rebuttal ## Thank you for the rebuttal comments. Now I believe there is no problem. Other Comments Or Suggestions: 1) The description in Section 3.2 should be improved. Most of the contents are not new, so it is unclear which parts are novel. 2) The MOS values of various methods in Table 1 are almost the same as the ground truth. I am not sure whether the improvement obtained by the proposed method is meaningful enough. ## update after rebuttal ## About 2), I agree with the authors' comments in the rebuttal. They should emphasize that improving MOS is not the only contribution of the manuscript. Overall, I raised my score from WR to WA. Questions For Authors: What are the differences between the proposed method and the previous method StyleTTS-ZS in the objective test in Table 3? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review of our paper. We appreciate your feedback and address your concerns below: **Correlation Between Subjective and Objective Metrics** We have indeed analyzed the correlation between subjective and objective metrics in our paper. As shown in Figure 3 and Figure 5 (appendix), the Pearson correlation between speaker embedding similarity (SIM) and human-rated voice similarity is 0.55, while the correlation with style similarity is 0.50. Similarly, word error rate (WER) correlates with naturalness and sound quality at -0.16 for both metrics. All correlations are statistically significant ($p << 0.01$), demonstrating that our optimized objective metrics strongly align with human perception even at the individual utterance level. **Applicability Beyond Speech Synthesis** While our implementation focuses on speech synthesis, the core framework of enabling direct metric optimization through distillation can be applied to other generative domains. For example:
- In music generation, differentiable models like instrument detection models or melody extraction models could optimize text-to-music alignment or ensure generated music matches specified instruments or melodic (MIDI input) constraints.
- In image generation, differentiable models could maximize CLIP scores between the prompt text and generated image, or verify the presence of all text-described objects using image segmentation models.
- In video generation, similar principles could ensure temporal consistency.

The key innovation is creating a direct gradient pathway that enables end-to-end optimization with any differentiable metric within the diffusion model framework, which has broad applications beyond speech synthesis. **Comparison with StyleTTS-ZS** We have conducted objective evaluations of StyleTTS-ZS that will be included in the revised manuscript. DMOSpeech significantly outperforms StyleTTS-ZS in speaker similarity (0.69 vs.
0.56) with comparable real-time factor (0.07 vs. 0.04). While StyleTTS-ZS achieves lower WER (1.17 vs. our 1.94), our model delivers better overall performance as confirmed by subjective evaluations in Table 1. Additionally, our training pipeline is more straightforward without requiring aligner training, making it easier to scale across languages and larger datasets. **Parameter Selection Process** We have detailed our parameter selection approach in the paper. The process is intuitive rather than difficult: we observe gradient norms for each loss term and balance them accordingly. As described in Section 3.4: >" We set $\lambda_{\text{adv}} = 10^{-3}$ to ensure the gradient norm of adversarial loss is comparable to that of DMD loss. During early training stage, we observed that the gradient norms of SV loss and CTC loss were significantly higher than DMD loss, likely because $G_\theta$ was still learning to generate intelligible speech from single step. To address this, we set $\lambda_{\text{CTC}} = 0$ for the first 5,000 iterations and $\lambda_{\text{SV}} = 0$ for the first 10,000 iterations. This allows $G_\theta$ to stabilize under the influence of DMD loss before integrating these additional losses. After that, both $\lambda_{\text{CTC}}$ and $\lambda_{\text{SV}}$ are set to 1." This approach follows established practices in the literature (Yin et al., 2024) and doesn't require extensive hyperparameter tuning. **Regarding Other Suggestions** 1. >The description in Section 3.2 should be improved. Most of the contents are not new, so it is unclear which parts are novel. Section 3.2 provides necessary background on Distribution Matching Distillation, establishing context for our improvements. We acknowledge this is primarily background material and will clarify which aspects represent our specific contributions in the revised manuscript. 2. >The MOS values of various methods in Table 1 are almost the same as the ground truth.
I am not sure whether the improvement obtained by the proposed method is meaningful enough. While MOS values are similar to ground truth, our key contribution is achieving comprehensive improvements across multiple metrics simultaneously. Previous models like StyleTTS-ZS achieve high naturalness but lower similarity, while NaturalSpeech 3 achieves high similarity but lower naturalness. DMOSpeech uniquely excels in both dimensions while maintaining significantly faster inference speed (13.7x faster than the teacher model). This balanced performance across all metrics represents a meaningful advancement in the field. We appreciate your constructive feedback and will incorporate these clarifications in our revised manuscript.
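The staged loss schedule quoted in this rebuttal ($\lambda_{\text{adv}} = 10^{-3}$ throughout, $\lambda_{\text{CTC}} = 0$ for the first 5,000 iterations, $\lambda_{\text{SV}} = 0$ for the first 10,000, then both set to 1) can be sketched as follows. This is a minimal illustration with hypothetical helper names, not the released training code.

```python
# Staged loss-weight schedule as described in the rebuttal.
# lambda_adv is fixed; CTC and SV losses are switched on after warmup.

LAMBDA_ADV = 1e-3
CTC_WARMUP, SV_WARMUP = 5_000, 10_000


def loss_weights(step: int) -> dict:
    """Return the per-loss weights at a given training step."""
    return {
        "adv": LAMBDA_ADV,
        "ctc": 0.0 if step < CTC_WARMUP else 1.0,
        "sv": 0.0 if step < SV_WARMUP else 1.0,
    }


def total_loss(step: int, l_dmd: float, l_adv: float,
               l_ctc: float, l_sv: float) -> float:
    """Weighted sum of the DMD, adversarial, CTC, and SV loss terms."""
    w = loss_weights(step)
    return l_dmd + w["adv"] * l_adv + w["ctc"] * l_ctc + w["sv"] * l_sv
```

In this sketch the DMD loss always contributes, the CTC term only enters once the one-step generator produces intelligible speech, and the SV term enters last, matching the stabilization rationale quoted above.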
Summary: This paper introduces DMOSpeech, a distilled diffusion-based text-to-speech (TTS) model that achieves faster inference and superior performance compared to its teacher model. It has two advantages: (1) reducing sampling steps from 128 to 4 via distribution matching distillation, and (2) providing direct gradient pathways from noise input to speech output. This allows direct optimization of speaker similarity and word error rate through speaker verification (SV) and Connectionist Temporal Classification (CTC) losses. The comprehensive experiments demonstrate significant improvements across all metrics, outperforming the teacher model and other recent baselines in subjective and objective evaluations. Claims And Evidence: Overall, the claims are clearly and well supported. 1. The biggest problem of this paper is: This paper combines diffusion distillation and metric optimization (by GAN loss). Since traditional diffusion/flow matching cannot generate intelligible speech at high noise levels, this paper bypasses it by diffusion distillation to skip this stage and apply the GAN optimization. Firstly, both diffusion distillation and direct metric optimization in TTS are well studied. Secondly, FlashSpeech[2], which is also a consistency model (bypassing the same challenge of unintelligible speech under high noise levels) with direct metric optimization, is trained without distillation and is much easier. So, it lacks novelty for the scope of ICML. 2. This paper should discuss some closely related papers in more depth, such as DIFFUSION-GAN[1], FlashSpeech[2]. [1] DIFFUSION-GAN: TRAINING GANS WITH DIFFUSION [2] FlashSpeech: Efficient Zero-Shot Speech Synthesis Methods And Evaluation Criteria: Overall, the methods are written clearly. The evaluation criteria are sufficient. Theoretical Claims: Yes. Experimental Designs Or Analyses: Some questions: 1. For experiments comparing with end-to-end systems, I recommend comparing with more baselines: F5TTS[1], MASKGCT[2]. 2.
It should be compared with FlashSpeech[3], which is also a strong baseline of an efficient zero-shot TTS system with few iterative steps and direct metric optimization. [1] F5-TTS: Diffusion Transformer with ConvNeXt V2, faster trained and inference. [2] Maskgct: Zero-shot text-to-speech with masked generative codec transformer [3] FlashSpeech: Efficient Zero-Shot Speech Synthesis Supplementary Material: Yes. Relation To Broader Scientific Literature: No. Essential References Not Discussed: No. Other Strengths And Weaknesses: The strengths and weaknesses are discussed in previous sections. Other Comments Or Suggestions: ## Update after Rebuttal Thanks for the authors' replies. Regarding the main concern about Direct Metric Optimization vs GAN, I agree that FlashSpeech does not involve direct metric optimization in adversarial training. I misunderstood the concept of adversarial training and direct metric optimization. My concerns still exist: 1. The method to distill the teacher model for fast inference via the adversarial training is not novel: works such as DMD 2 [1] and FlashSpeech have studied it. 2. The direct metric optimization is not novel either. The early work, such as StyleTTS 2 [2], also uses pretrained WavLM (a proxy for speaker similarity metric) as the direct optimization objective and achieved good results (see Table 5 w/o SLM adversarial training in the ablation study of StyleTTS 2). The comparisons with F5-TTS and MaskGCT are good. Finally, I agree with the authors' claim: enabling direct optimization of perceptually relevant metrics through a differentiable pathway created by one-step generation through diffusion distillation, especially in the area of TTS. I will update the score to 3, taking all the proposed methods together and considering the good results.
[1] Improved Distribution Matching Distillation for Fast Image Synthesis [2] StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models Questions For Authors: The questions are asked in previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's feedback, but we believe there is a fundamental misunderstanding about our paper's contribution. Our work presents several key innovations that have not been explored in prior research, including FlashSpeech: **Clarification on Direct Metric Optimization vs. Adversarial Training** The reviewer conflates direct metric optimization with adversarial (GAN) training, which are fundamentally different approaches: FlashSpeech does not implement direct metric optimization. After careful examination of the FlashSpeech (Ye et al., 2024) paper, we confirm it uses adversarial consistency training but makes no mention of direct metric optimization of perceptual metrics such as speaker similarity or word error rate. Their adversarial training is solely for improving general speech quality. Our direct metric optimization is novel. DMOSpeech enables true end-to-end optimization of specific perceptual metrics (SV loss for speaker similarity, CTC loss for word error rate) through differentiable pathways - not simply adversarial training. This is a significant advancement for TTS systems. **State of Direct Metric Optimization in TTS** The reviewer suggests that "both diffusion distillation and direct metric optimization in TTS are well studied." This is incorrect. As we explicitly state in our paper: >While optimizing perceptual metrics has shown promise in speech enhancement through approaches like MetricGAN for PESQ and STOI, and recent attempts have explored RLHF for improving naturalness, implementing these approaches in modern TTS systems has remained challenging. Previous attempts (e.g., YourTTS) reported minimal improvements from speaker similarity optimization due to their inability to propagate gradients through all model components. The field has struggled with this problem due to architectural limitations such as non-differentiable components or computationally prohibitive backpropagation through iterative sampling steps.
**Relationship to DIFFUSION-GAN** Our work bears limited similarity to DIFFUSION-GAN. Our focus is not on GAN-based techniques but on enabling direct optimization of perceptually relevant metrics through a differentiable pathway created by one-step generation through distillation. This is a fundamentally different approach with different goals. **Novel Contributions** Our paper makes several novel contributions: 1. We present the first distilled TTS model that consistently outperforms its teacher model (not merely matching it), while reducing inference time by over 13×. 2. We introduce a framework enabling true end-to-end optimization of differentiable metrics in TTS, demonstrating substantial improvements in speaker similarity and word error rate. 3. We provide comprehensive analyses establishing correlations between objective metrics and human perceptions, revealing new insights into sampling speed and diversity trade-offs. These contributions represent significant advancements in the field of speech synthesis that are well-aligned with ICML's focus on machine learning innovations. **Regarding F5TTS and MASKGCT Comparisons** We thank the reviewer for their suggestions for more baseline comparisons. 
Per the reviewer's suggestion, we have trained a new DMOSpeech model using F5-TTS as the teacher on the Emilia dataset and conducted comprehensive objective evaluations comparing our new model against both F5-TTS and MaskGCT models on the SeedTTS-Eval benchmark dataset:

| Model | SIM (en) $\uparrow$ | WER (en) $\downarrow$ | SIM (zh) $\uparrow$ | CER (zh) $\downarrow$ | RTF $\downarrow$ |
|-----------|-----------|-----------|-----------|-----------|-----------|
| MaskGCT | **0.717** | 2.62 | 0.752 | 2.27 | 1.21 |
| F5-TTS (teacher, N=32) | 0.647 | 1.83 | 0.741 | 1.56 | 0.32 |
| DMOSpeech (N=4) | 0.687 | **1.78** | **0.757** | **1.43** | **0.06** |

Our model has achieved similar or better performance than both MaskGCT and F5-TTS on SeedTTS-eval in both Chinese and English test sets for both intelligibility and similarity. Moreover, our model is significantly faster than both MaskGCT and F5-TTS. We will include the complete evaluation results in the appendix of our revised manuscript. This additional analysis provides a more comprehensive understanding of how our approach compares to current state-of-the-art methods. **Regarding FlashSpeech Comparison** We appreciate the suggestion to compare with FlashSpeech. However, despite our best efforts, we have been unable to conduct direct experimental comparisons due to the unavailability of publicly accessible pre-trained checkpoints for this model, as documented in https://github.com/zhenye234/FlashSpeech/issues/3. In our revised manuscript, we will include a thorough discussion of FlashSpeech, addressing its approach and how it relates to our work. While we cannot provide direct experimental comparisons, this discussion will help contextualize our contributions among other efficient zero-shot TTS systems.
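The WER and CER figures above are edit-distance-based intelligibility metrics computed against ASR transcripts. As a self-contained illustration (not the exact evaluation script used in the paper), word-level WER can be computed with a standard Levenshtein dynamic program:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance (sub/ins/del) divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i ref words and first j hyp words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

CER is the same computation at the character level, which is why it is the metric reported for the Chinese test set.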
Summary: This paper presents DMOSpeech, a distilled diffusion-based speech synthesis model that achieves true end-to-end optimization of perceptual metrics, specifically through CTC loss for intelligibility and SV loss for voice similarity. The authors integrate these loss functions into a distilled student model trained via DMD2, enabling efficient inference without sacrificing synthesis quality. ## update after rebuttal **I keep my initial assessment as the rebuttal addressed my concerns.** Claims And Evidence: - The claims are very clear that their newly introduced plausible loss functions work in practice. I cannot believe these loss functions have not yet been applied to this diffusion distillation so far, and I support this paper's acceptance. - The paper backs its claims with comprehensive experiments. In particular, excelling its teacher with 4-step synthesis using such auxiliary loss functions is impressive, as is surpassing strong baselines (e.g., NaturalSpeech 3). However, in all tables, the authors should write the number of inference steps. For example, StyleTTS-ZS is 1-step generation, and the performance seems to be different from the original paper. Can you explain this? What makes StyleTTS-ZS and DMOSpeech perform differently? - The teacher model is already achieving real-time generation (RTF < 1). Why should we bother with an even faster algorithm? Say the teacher is a 10B model and clearly not real-time. Maybe a use case of this algorithm is to distill that teacher into a few-step model that fits real-time constraints. Can the authors give a good intuition why the model sizes do not scale to 10B-100B? Is there any insight into the model behavior once we really scale up to this region? - Is there any ablation that applies CTC and SV losses individually? Will there be any discrepancy between the expected outcome and the real outcome? - Will the code be released? Methods And Evaluation Criteria: Yes. They all make sense to me.
Theoretical Claims: There is no theory in this paper. One question: when the loss function is fully optimized, will both the CTC loss and SV loss not hurt the optimality? It is true in the image domain that using CLIP regularization with a strong weight hurts the performance. How about CTC and SV? Also, will CLAP regularization work? Experimental Designs Or Analyses: I have not carefully checked the validity of all details. Typically, in these GAN-based experiments, there are many hidden (or appendixed) materials that a reviewer can easily miss. Supplementary Material: No, I haven't read the supplementary materials. Relation To Broader Scientific Literature: Very relevant to the broader community. Essential References Not Discussed: Key references are discussed. Other Strengths And Weaknesses: One minor issue is about the contribution of this paper. It is all about the empirical study, and I was wondering if there is any good theoretical catch in this paper. Well, I understand it is very hard to do so, but is there any interesting analysis? Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough evaluation and constructive feedback. Below, we address each point raised: **StyleTTS-ZS Comparison** The performance of StyleTTS-ZS in our evaluation aligns with what was reported in their original paper. The fundamental architectural difference is that StyleTTS-ZS employs a specialized decomposition approach, modeling specific prosody components (F0, energy, duration) separately, while DMOSpeech adopts a more holistic end-to-end generation framework. Our approach offers several advantages: 1. Greater generalizability across diverse speech conditions (such as non-speech vocalizations and multiple speakers in the same utterance). 2. Support for end-to-end optimization with perceptual metrics 3. Robustness to challenging audio conditions (e.g., background noise, processing artifacts, as we presented on our demo page) StyleTTS-ZS achieves high efficiency through its decomposition strategy (enabling one-step generation, since generating prosodic features only is easier than generating the whole speech), but this same design introduces limitations when faced with complex or noisy audio conditions. DMOSpeech maintains comparable efficiency while providing superior generalizability and performance. **Value of Super-Efficient Generation** While the teacher model achieves real-time generation (RTF < 1) on a high-end GPU (V100), there are compelling reasons to pursue even greater efficiency: - **Device Compatibility**: Enabling deployment on resource-constrained environments (CPUs, mobile devices, edge computing) - **Service Scalability**: A 13.7× reduction in inference time translates to substantially higher throughput for cloud services supporting millions of users - **Energy Efficiency**: Reduced computation requirements lead to lower power consumption and carbon footprint For industrial applications, these efficiency gains are critical for accessibility, scalability, and sustainability. 
Regarding model scaling, while extremely large models (10B-100B parameters) are indeed being explored in production environments (e.g., by ByteDance's Seed-TTS and Amazon's Base-TTS), analyzing scaling behaviors was outside our paper's scope. Our distillation technique remains relevant regardless of teacher model size, as the efficiency benefits become even more pronounced with larger models. **Ablation of Individual Losses** As shown in Table 4, we conducted comprehensive ablation studies applying CTC and SV losses individually: - **CTC Loss Only**: Achieved superior word error rate (1.79 vs. 1.94) but significantly lower speaker similarity - **SV Loss Only**: Produced slightly higher speaker similarity (0.70 vs. 0.69) but substantially worse WER (6.62 vs. 1.94) These results demonstrate that combining both losses achieves the optimal balance between intelligibility and speaker similarity, which aligns with human preference as shown in our subjective evaluations. **Loss Optimization and Potential Trade-offs** The reviewer raises an important point about potential conflicts between optimization objectives. In theory, overly aggressive optimization of auxiliary losses (CTC, SV) could indeed harm performance by causing distribution mismatches with the training data, especially when the loss is optimized below that achieved in ground truth data. To mitigate this risk, we carefully balanced the gradient contributions from each loss component, ensuring the auxiliary loss gradients are comparable in magnitude to the primary DMD loss. This calibrated approach prevents any single objective from dominating and maintains distributional alignment throughout training. Regarding CLAP regularization, while it's an intriguing direction, we focused specifically on speech synthesis rather than general audio generation in this work. We appreciate the suggestion and will mention this as a potential future direction in our revised manuscript. 
**Theoretical Contributions** While our paper emphasizes empirical results, we provide some theoretical contributions, especially our **Analysis of Mode Shrinkage**. Our detailed examination of distributional changes during distillation (**Figure 2** and **Appendix A**) offers novel insights into how distillation affects output diversity in conditional generation tasks. Our analysis shows that although distillation causes a loss in diversity for a fixed prompt and text input, this reduction in diversity is not necessarily negative and can even be beneficial to the performance of conditional generation, and the mode coverage is not compromised when the model processes different prompt and text inputs. These insights contribute to the theoretical understanding of both diffusion model distillation and perceptual metric optimization in generative models. We thank the reviewer again for their thoughtful comments and positive assessment. We hope our responses address their concerns satisfactorily.
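The gradient-magnitude balancing mentioned earlier in this rebuttal (auxiliary loss gradients kept comparable to the primary DMD loss gradient) can be sketched as follows. `balance_weight` is an illustrative helper, not from the paper, which reports choosing fixed weights by observing gradient norms rather than computing a ratio online.

```python
import math


def grad_norm(g):
    """Euclidean norm of a flattened gradient vector."""
    return math.sqrt(sum(x * x for x in g))


def balance_weight(primary_grad, aux_grad, eps=1e-12):
    """Scale factor that makes the auxiliary gradient's norm match
    the primary gradient's norm (illustrative, not the paper's code)."""
    return grad_norm(primary_grad) / (grad_norm(aux_grad) + eps)
```

For example, if the auxiliary gradient is an order of magnitude larger than the primary one, the returned weight is correspondingly below 1, preventing the auxiliary objective from dominating training.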
Summary: Diffusion models have shown strong potential in speech synthesis tasks such as text-to-speech (TTS) and voice cloning. However, their iterative denoising process is computationally expensive, and previous distillation methods have led to quality degradation. Existing TTS approaches also suffer from non-differentiable components or iterative sampling, preventing true end-to-end optimization with perceptual metrics. To address these issues, the authors propose DMOSpeech, a distilled diffusion-based TTS model that achieves both faster inference and superior performance compared to its teacher model. A key innovation of DMOSpeech is its ability to enable direct gradient pathways to all model components, allowing for the first successful end-to-end optimization of differentiable perceptual metrics in TTS. The model incorporates Connectionist Temporal Classification (CTC) loss and Speaker Verification (SV) loss, aligning speech synthesis quality with human auditory preferences. Extensive experiments, including human evaluations, demonstrate significant improvements in naturalness, intelligibility, and speaker similarity, while also reducing inference time by orders of magnitude. This work introduces a new framework for optimizing speech synthesis directly with perceptual metrics, setting a new standard for high-quality and efficient TTS models. ## update after rebuttal I have read the rebuttal from the authors and the comments from other reviewers; I think this is a good paper and I vote for Accept. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Methods are clear and the evaluation criteria are suitable. Theoretical Claims: Theoretical claims are not significant or novel, but not wrong for this specific application. Experimental Designs Or Analyses: Experiment design is clear and the analysis supports the claim.
Supplementary Material: The sample webpage and supplementary material provided clear evidence of the method's performance and the experiment's details. Relation To Broader Scientific Literature: No significant contribution to the broader scientific literature. Essential References Not Discussed: No further reference missing in discussion. Other Strengths And Weaknesses: ## Strength - The paper introduces a novel approach to optimizing perceptual metrics in TTS by enabling direct gradient pathways, which has not been successfully achieved in previous models. - By reducing sampling steps from 128 to 4 while maintaining or improving quality, DMOSpeech addresses a major efficiency bottleneck in diffusion-based TTS. - The paper is well-structured, with clear explanations of the model architecture and loss functions. The inclusion of human evaluation results strengthens the validity of claims. Other Comments Or Suggestions: no further comments Questions For Authors: no further questions Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their positive recommendation for our paper. We appreciate the recognition of our model's ability to enable direct gradient pathways for end-to-end optimization and the acknowledgment of our efficiency improvements. While we agree with many points raised in the review, we would like to address two specific concerns: **On Theoretical Claims and Scientific Contribution**: We respectfully disagree with the assessment that our theoretical claims are "not significant, novel" and that there is "no significant contribution to the broader scientific literature." Our work makes several important theoretical and scientific contributions: - **Novel Distribution Matching Framework**: We present the first successful application of distribution matching distillation in speech synthesis that achieves superior quality to the teacher model. This counters the prevailing view in the field that distillation necessarily leads to quality degradation. - **Mode Shrinkage Insight**: Our analysis of mode shrinkage during distillation reveals a fundamental insight about conditional generation tasks: in strongly conditional generation, diversity reduction can be beneficial when it emphasizes high-probability regions without compromising output variation across different prompts and text inputs. - **Unified Optimization Framework**: By enabling direct metric optimization in a diffusion framework, we bridge the gap between two previously separate research areas: perceptual metric optimization and diffusion models, establishing a foundation for future research on optimizing generative models with human preferences. These contributions extend beyond speech synthesis and offer valuable insights for any conditional generative modeling task, especially those requiring both quality and efficiency. 
We believe our work advances both the theoretical understanding of generative model distillation and provides a practical framework applicable to numerous domains requiring conditional generation with perceptual quality constraints. Thank you again for your overall positive assessment. We hope this clarification addresses your concerns regarding the broader impact and theoretical novelty of our work.
On Temperature Scaling and Conformal Prediction of Deep Classifiers
Accept (poster)
Summary: This paper focuses on a popular calibration technique known as temperature scaling (TS) and investigates its effect on major conformal prediction (CP) methods (LAC, APS, RAPS). They show that TS improves class-conditional coverage of adaptive CP but increases prediction set sizes; the effect on LAC is negligible. They uncover a trade-off between prediction set size and conditional coverage when modifying the temperature in TS, with some theoretical analysis. Claims And Evidence: * Empirical * The paper claims to have an extensive empirical study on DNN classifiers to demonstrate how TS affects CP. Numerical experiments are conducted on three CP methods over a range of classification datasets. Several findings are summarized from the experiments, and the appendix contains more numerical details regarding hyperparameter settings and further comparisons. The experiments are pretty thorough and well documented. * Theoretical analysis * Compared to the empirical part, the theoretical side is a bit straightforward but is compatible with the main finding, in part. It does not concern the conditional coverage, but mainly focuses on the prediction set size. Regarding the non-monotone structure, Proposition 4.3 does have some theory regarding T=1, but this is more indirect and does not pin down the true kink point in the non-monotone relationship. Methods And Evaluation Criteria: * The evaluation metrics of conditional coverage (impossible in general) and prediction set size are very well recognized in the conformal prediction literature. * The numerical experiments are conducted on a range of data sets. Theoretical Claims: * The theoretical claims seem intuitive and correct, but I only had a limited review of the proofs. On the other hand, the result is not strong and is more auxiliary, supporting the empirical findings. Experimental Designs Or Analyses: The experimental design and analyses appear to be sound. The authors use relevant datasets, metrics, and comprehensive comparisons.
The theoretical analysis provides insights into the empirical observations. Limitations and future directions are appropriately discussed. Supplementary Material: Limited review of numerical experiment settings in appendix. Relation To Broader Scientific Literature: Conformal prediction provides guarantees for high-risk decision making and warrants further development. This paper's findings help to refine calibration techniques for CP. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful review. We appreciate your recognition of our extensive empirical study and clear experimental design. Below, we address your comments on the theoretical analysis point by point. *** > Theory for conditional coverage. *** Our work develops a comprehensive mathematical framework to explain the non-intuitive effects of TS on prediction set sizes in CP. We rigorously formulate conditions for changes in prediction set size and clarify the underlying theoretical reasons for these effects. While extending this analysis to conditional coverage is valuable, it would require an entirely new theoretical framework with different assumptions and tools, making it beyond the scope of this study. Nonetheless, establishing theory for the impact of TS on conditional coverage is indeed a promising direction for future research. *** > Regarding Proposition 4.3. *** Let us clarify the scope of our theoretical analysis and particularly the contribution of Proposition 4.3. This proposition focuses on how TS affects prediction set size of APS. It provides a condition, dependent on the value of $ T $, for a decrease or an increase in the prediction set size after applying TS, denoted by $L_T$, compared to the size without applying TS (equivalently, TS with $T=1$), denoted by $L$. It is important to note that this result covers all positive values $ T > 0 $. We would like to clarify that this is not the primary finding of our work. Theorem 4.4, along with its implications, specifically addresses this "kink point" of the non-monotone dependency. Theorem 4.4 shows that applying TS with temperature $T$ affects the prediction set size of a sample $z$ based on $ \Delta z = z_{(1)} - z_{(2)}$ and a bound $ b(T) $. For example, for $ T > 1 $, if $ \Delta z > b_{T>1}(T) $, the prediction set size increases ($ L_T \geq L $). Figure 3 reveals a non-monotonic curve for $ b(T) $ at $T>1$, and since lower (resp. 
higher) $ b(T) $ implies that more (resp. fewer) samples satisfy the condition, this explains the non-monotonic pattern of the mean prediction set size (AvgSize): it increases for $ 1 < T < \tilde{T}_c $ and decreases for $ T > \tilde{T}_c $, where $\tilde{T}_c$ is the temperature above 1 at which $ b(T) $ is minimal. This critical temperature $ \tilde{T}_c $ decreases with $ C $, aligning with the empirical results. --- Rebuttal Comment 1.1: Comment: Thanks for the helpful response. I am maintaining my score based on my understanding.
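As an illustrative aside to the size mechanism discussed in this thread, the short sketch below (our reconstruction, not the authors' code; the logits and threshold are made up) computes the APS prediction set size of a single temperature-scaled sample. Holding the threshold $\hat{q}$ fixed isolates the softmax-flattening effect of $T$; in the paper, $\hat{q}_T$ is recalibrated at every temperature, which is what produces the non-monotone dependence governed by $b(T)$.

```python
import numpy as np

def softmax_T(z, T):
    """Temperature-scaled softmax of a logit vector z."""
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

def aps_set_size(z, qhat, T=1.0):
    """APS prediction set size: add classes in decreasing probability
    order until their cumulative mass first reaches qhat."""
    p = np.sort(softmax_T(z, T))[::-1]
    return int(np.searchsorted(np.cumsum(p), qhat) + 1)

# A confident sample: large gap Delta z = z_(1) - z_(2) (made-up logits).
z = np.array([8.0, 2.0, 1.0, 0.5, 0.0])
for T in (0.5, 1.0, 2.0, 5.0):
    print(f"T={T}: set size {aps_set_size(z, qhat=0.99, T=T)}")
```

At this fixed threshold the set grows from 1 class to all 5 as $T$ increases; the non-monotonic AvgSize curve analyzed in Theorem 4.4 appears only once the threshold is re-estimated at each temperature.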
Summary: In this work, the authors studied the effect of the widely used temperature scaling calibration on the performance of conformal prediction techniques for deep neural network classifiers. A wide range of experiments are conducted and a theoretical framework is proposed to explain the effects. Claims And Evidence: The authors have provided a theoretical analysis to show how temperature values influence the properties of prediction sets. With the theoretical results, researchers can understand why temperature scaling can affect conformal prediction. Empirically, the experiments are strong and extensive. Methods And Evaluation Criteria: The experimental settings are strong, covering various datasets, neural network backbones, etc. The descriptions of the validations are clear. Theoretical Claims: Because this work falls beyond my research comfort zone, a serious assessment of the proposed theories in detail is a challenging task for me. Therefore, this initial evaluation may be conservative. I am eager to actively follow the discussions with the authors, other reviewers, and ACs. Experimental Designs Or Analyses: No concerns are raised. Supplementary Material: Appendix B (Additional experiments) is reviewed and no concern is raised. Relation To Broader Scientific Literature: The broad literature, including recent achievements, is duly included in the paper. Essential References Not Discussed: Seems not applicable. Other Strengths And Weaknesses: Additional strength: Basically, the paper is well written and organized. The appendices are extensive. Other Comments Or Suggestions: Some suggestions: 1. It would increase the readability if the font size and line width in the figures were increased. 2. Providing clearer insights/explanations linking the proposed theories to their practical value would further enhance the practicality of the work and be attractive to practitioners in the ML community. 
Additional questions: Can the authors provide some insights on the applicability of the findings in other tasks not limited to image classification? Questions For Authors: No further comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable suggestions. We appreciate your recognition of our comprehensive experiments and clear presentation. Below, we provide a point-by-point response to your comments. *** > Improving readability of the figures. *** Following this comment, in the revision we will make every effort to improve the readability of the figures. The extra page allowed in the final version will provide additional space to further enhance readability. *** > Practicality of the proposed theory. *** The primary aim of our theoretical study is to provide mathematical reasoning for the surprising empirical behavior that we discovered. We believe that addressing the "why" question behind non-intuitive results is of high scientific importance. That being said, our theory also provides practical implications. For example, if a practitioner cares only about AvgSize and wants to use APS/RAPS, the $0<T<1$ branch of our theory suggests that they pick a small $T$ (which is aligned with the empirical observations). Conversely, practitioners can leverage the $T>1$ branch of our theory, where monotonicity in CP properties breaks. Specifically, it justifies defining a finite range for tuning $T$ to balance prediction set sizes and class-conditional coverage, as discussed in Section 5. We refer the reviewer to Appendix C, where we demonstrate how a small calibration dataset can approximate the curves for AvgSize and TopCovGap versus $T$ (shown in Figure 1). Based on these approximate trends, users can select an optimal $\hat{T}$ that aligns with their needs. Additionally, in Appendix D, we highlight the advantages of this approach over existing methods. *** > Applicability to other domains. *** Our work deals with temperature scaling and conformal prediction applied to multiclass classification. We used image classification datasets (ImageNet, CIFAR-100, CIFAR-10) as they are the benchmark datasets in the related literature. 
Nevertheless, we expect our findings to be beneficial to classification tasks in domains other than images. In particular, the practical relevance of our paper lies in empowering users to effectively apply CP in multiclass classification, based on their specific needs, by following our proposed guidelines. The applications extend to domains other than images where CP has been used, such as medical diagnosis [1], [2] and NLP [3], [4]. [1] Lu, Charles, et al. (2021). "Fair conformal predictors for applications in medical imaging." Proceedings of the AAAI Conference on Artificial Intelligence. [2] Vazquez, J., and Facelli, J. C. (2022). "Conformal Prediction in Clinical Medical Sciences." Journal of Healthcare Informatics Research. [3] Kumar, Bhawesh, et al. (2023). "Conformal prediction with large language models for multi-choice question answering." ICML 2023. [4] Campos, Margarida, et al. (2024). "Conformal prediction for natural language processing: A survey."
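The selection procedure referenced above (approximate the AvgSize-versus-$T$ curve on a small calibration set, then pick $\hat{T}$) might look roughly like the following sketch. This is our hedged reconstruction, not the authors' implementation: all function names are ours, the APS randomization term and the finite-sample conformal quantile correction are omitted, and a TopCovGap criterion could be plugged in analogously.

```python
import numpy as np

def softmax_T(Z, T):
    """Row-wise temperature-scaled softmax of a logit matrix Z."""
    E = np.exp((Z - Z.max(axis=1, keepdims=True)) / T)
    return E / E.sum(axis=1, keepdims=True)

def aps_scores(P, y):
    """APS conformity score: cumulative sorted probability mass down to
    (and including) the true label. Randomization is omitted."""
    order = np.argsort(-P, axis=1)
    ranks = np.argsort(order, axis=1)          # position of each class in the sorted order
    csum = np.cumsum(np.take_along_axis(P, order, axis=1), axis=1)
    idx = np.arange(len(y))
    return csum[idx, ranks[idx, y]]

def avg_set_size(P, qhat):
    """Mean APS set size at threshold qhat."""
    csum = np.cumsum(np.sort(P, axis=1)[:, ::-1], axis=1)
    k = np.minimum((csum < qhat).sum(axis=1) + 1, P.shape[1])
    return k.mean()

def pick_temperature(Z_cal, y_cal, T_grid, alpha=0.1):
    """Approximate AvgSize(T) on a split of the calibration data and
    return the temperature that minimizes it."""
    n = len(y_cal) // 2
    sizes = []
    for T in T_grid:
        s = aps_scores(softmax_T(Z_cal[:n], T), y_cal[:n])
        qhat = np.quantile(s, 1 - alpha)       # finite-sample correction omitted
        sizes.append(avg_set_size(softmax_T(Z_cal[n:], T), qhat))
    return T_grid[int(np.argmin(sizes))], sizes
```

Here `pick_temperature` splits the calibration logits in half, estimates the APS threshold on one half for each candidate $T$, evaluates the mean set size on the other half, and returns the minimizer.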
Summary: Calibration and Conformal Prediction (CP) are two popular approaches to solving the overconfidence problem in modern DNN classifiers. In this paper, the authors studied the effects of temperature scaling (TS), which is effective for calibration, on the efficiency of CP. The authors first designed extensive experiments to show how TS affects the sizes of the prediction sets and class-conditional coverage. Then, the authors conducted theoretical analyses for the empirical observations and finally provided practical guidelines for choosing the temperature parameter. ## update after rebuttal The authors have addressed my concerns, so I would support acceptance. Claims And Evidence: The main claim made in this work is that TS has little effect on LAC but strongly affects the prediction set's size and conditional coverage of APS and RAPS. Besides, the effects of the temperature parameter are non-monotonic. These claims were empirically observed and then analyzed theoretically. Methods And Evaluation Criteria: The evaluation criteria used in the empirical studies, including marginal coverage gap, average set size, and top-5% class coverage gap, are meaningful. Theoretical Claims: The overall theoretical claims seem correct, but some details should be corrected or clarified more clearly. - The proof for the Lemma for Theorem 4.1 (A.1) does not seem correct; more specifically, Lines 575 to 578 are not reasonable. In fact, this lemma can be proved more simply by investigating the monotonicity of the function $\exp\left(\frac{z_i-z_j}{t}\right)$ with respect to $t$. - According to the proof for Proposition 4.3 in Appendix A, it seems that, in Line 322, ‘M∈[L]' should be changed to ‘M∈[L_T]' if 0<T<1, and in Line 325, ‘M∈[L_T]' should be changed to ‘M∈[L]' if T>1. Besides, summation signs are missing in Lines 322 and 325, comparing Proposition 4.3 and A.3, which are indeed the same proposition. 
- Could the authors give more explanation for $\pi^q\approx\pi^q_T$ in Line 353? - In the proof of Theorem 4.4, should $\frac{T-1}{T+1}\ln\frac{4(C-1)^2}{T}$ be changed to $\frac{T}{T+1}\ln\frac{4(C-1)^2}{T}$? And, consequently, the results in the theorem. I suggest that the authors carefully check all the details of the theoretical parts. Experimental Designs Or Analyses: The experiments in Section 3 are sufficient to support the authors’ empirical analysis of the effect of TS on CP. The experiments in Section 4 and the appendix are also enough to verify the authors' claims. Supplementary Material: I reviewed all supplementary materials in the Appendix, including proofs, additional experiments, and more analysis. Relation To Broader Scientific Literature: The prior related studies only use initial TS calibration before applying CP methods [1, 2, 3, 4]. None of them explored the effect of TS on CP, which is analyzed in detail in this paper. [1] Angelopoulos, A. N., Bates, S., Jordan, M., and Malik, J. Uncertainty sets for image classifiers using conformal prediction. In International Conference on Learning Representations, 2020. [2] Lu, C., Yu, Y., Karimireddy, S. P., Jordan, M., and Raskar, R. Federated conformal predictors for distributed uncertainty quantification. In International Conference on Machine Learning, pp. 22942–22964. PMLR, 2023. [3] Gibbs, I., Cherian, J. J., and Candès, E. J. Conformal prediction with conditional guarantees. arXiv preprint arXiv:2305.12616, 2023. [4] Lu, C., Yu, Y., Karimireddy, S. P., Jordan, M., and Raskar, R. Federated conformal predictors for distributed uncertainty quantification. In International Conference on Machine Learning, pp. 22942–22964. PMLR, 2023. Essential References Not Discussed: The related references are properly cited and discussed. Other Strengths And Weaknesses: Strengths: - The effect of TS on the performance of CP is discussed for the first time. - The empirical studies are extensive. 
- Theoretical analyses are provided. Weakness: - Some details of the theoretical analyses should be refined. Other Comments Or Suggestions: In Line 1087, $d$ in the LHS of the inequality should be $g$. Questions For Authors: My questions are provided in the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful for your constructive and thorough review of our theoretical work. We appreciate your acknowledgment of our novel analysis, supported by extensive empirical studies and theoretical investigation. We have carefully addressed your comments and suggestions. Below, we provide a point-by-point response to your comments. *** > Regarding the proof for the Lemma for Theorem 4.1 (A.1) *** Thank you for highlighting this issue. You are correct — there was an error, and a simple proof will be constructed in the revision based on the monotonic decrease of $\exp(c/t)$ with respect to $t$ when $c>0$. Indeed, $\exp(z_{i}/\tilde{T}) \cdot \exp(z_{j}/T) \geq \exp(z_{i}/T) \cdot \exp(z_{j}/\tilde{T}) \iff \exp((z_{i}-z_j)/\tilde{T}) \geq \exp((z_{i}-z_j)/T)$, and since $z_{i}-z_j \geq 0$ we have that this inequality holds as $\tilde{T} \leq T$. *** > Typo in Proposition 4.3 *** Thank you for pointing out this typo. Indeed, in the branch $0<T<1$, $M\in\left[L\right]$ should be changed to $M\in\left[L_T\right]$, and in the branch $T>1$, $M\in\left[L_T\right]$ should be changed to $M\in\left[L\right]$. Similarly, in the proof, in lines 685 and 688, $L$ should be replaced with $L_T$ (3 instances). As for the sum symbol, we intended to save space by using $\sum_{i}^{M} \hat{\pi}_i - \hat{\pi}^T_i$, which we read as $\sum_i^{M} \left( \hat{\pi}_i - \hat{\pi}^T_i \right)$, rather than $\sum_i^M \hat{\pi}_i - \sum_i^M \hat{\pi}_i^T$. In the revision, we will ensure consistent phrasing in the main body of the paper and the appendix. *** > Explanation for $\pi^q \approx \pi^q_T$ in Line 353. *** The paragraph in lines 332-357 analyzes the structure of the "quantile sample" of APS. Figure 2 shows a strong correlation between the samples' scores and $\Delta z = z_{(1)} - z_{(2)}$ (the difference between the largest and second largest entries of the logits vector), both with and without temperature scaling. 
Since the quantile sample corresponds to a high score ($1 - \alpha$ quantile of all scores), it consistently has a large $\Delta z$. As evidenced in Figure 2, the $1 - \alpha = 0.9$ quantile sample had $\Delta z^q \approx 11$. According to the relation $\pi_i \propto \exp{z_i}$, this implies $\pi^q_{(1)} \gg \pi^q_{(2)}$, specifically, after some approximations we get $\frac{\pi^q_{(1)}}{\pi^q_{(2)}} \propto 10^4$. Similarly, when temperature scaling calibration was applied, the $1 - \alpha = 0.9$ quantile sample again resulted in $\Delta z_T^q \approx 11$, leading to $\pi_{T,(1)}^{q} \gg \pi_{T,(2)}^{q}$. Thus, due to the overwhelmingly dominant entry in both quantile samples, we have that $\pi_{(1)}^{q}$ and $\pi_{T,(1)}^{q}$ are nearly 1, and the associated sorted softmax vectors obey $\pi^q \approx \pi^q_T$. Following this comment, to further illustrate the strong similarity between sorted $\pi^{q}$ and $\pi^{q}_T$, we will add in the revised version concrete examples for these vectors for several dataset-model pairs. For example, the first 5 elements in $\pi^{q}$ and $\pi^{q}_T$ (after sorting) for ImageNet-ViT (with $T = T^*$ optimal for calibration): $\pi^q[:5] = [9.9697e-01, 4.9439e-04, 2.1028e-04, 2.0065e-04, 1.6687e-04]$ $ \pi^{q}_T[:5] = [9.9755e-01, 7.6261e-04, 8.7435e-05, 7.8683e-05, 7.6373e-05]$ *** > Regarding the bound in Theorem 4.4. *** Thank you for pointing out this typo. Indeed, there is a factor of $\frac{T}{T+1}$ in the $T>1$ branch of Theorem 4.4 and not $\frac{T-1}{T+1}$. The same correction applies to lines 787 and 828 in the proof. We emphasize that the proof remains valid except for the final line of each branch (787, 828), where $T-1$ was mistakenly written instead of $T$ in the numerator of the bound. This will be fixed. After this minor correction, the bounds are still aligned with the empirical trends. 
The complete substitution of $A$ from line 756 into the inequality in line 782, detailed in the following link: https://postimg.cc/hXXHxKNq, confirms the correctness of $\max \left( \frac{T}{T-1}\ln(4T),\frac{T}{T+1}\ln(4T(C-1)^2) \right)$ for the branch $T>1$. Following this comment, we will include this substitution explicitly in the revised version.
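The dominance argument earlier in this rebuttal ($\pi_i \propto \exp(z_i)$, so a logit gap $\Delta z \approx 11$ makes the top softmax entry overwhelming) can be checked with a one-line computation; the logits below are hypothetical, chosen only to realize that gap.

```python
import math

# With pi_i proportional to exp(z_i), the top-two softmax ratio is exp(Delta z).
delta_z = 11.0                       # gap reported for the 0.9-quantile sample
print(math.exp(delta_z))             # ~6.0e4, i.e. on the order of 10^4

# Hypothetical logits realizing this gap: the sorted softmax is dominated
# by its first entry, so pi^q and pi^q_T are both close to a one-hot vector.
z = [11.0, 0.0, -0.5, -1.0]
e = [math.exp(v) for v in z]
pi = [v / sum(e) for v in e]
print(pi[0])                         # ~0.99997
```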
Summary: The paper aims to study the interplay between conformal prediction (CP) and temperature scaling (TS) calibration. They study the effect of TS on conformal prediction using extensive empirical evaluation with three different CP methods. They present a theoretical analysis to explain the effect of TS on the APS and RAPS conformal prediction methods. Claims And Evidence: The paper presents theory to support its claims. I am unsure about the assumption that $\hat{q}$ and $\hat{q}_T$ correspond to the same sample based on Figure 2. How generalizable is this claim? There can always be adversarial sequences that violate this, which puts the assumption in question. If not justifiable theoretically, then at the very least more extensive empirical analysis demonstrating that this holds for the datasets included in the paper and beyond is important. Methods And Evaluation Criteria: While the paper presents extensive empirical evaluation and the datasets make sense, I am concerned about the metrics included. The justification behind evaluating the class-coverage gap on the top-5% classes is not provided; moreover, 5% seems an arbitrary choice. Why would you not consider evaluation on the average class coverage gap as defined in Ding et al., 2023? At the very least, both metrics could be included. Also, an ablation on the 5% choice seems important to make general claims about class-conditional coverage. My other concern is regarding the absence of standard error reporting in the experiments. Theoretical Claims: I went over proofs A.1-A.4. Experimental Designs Or Analyses: I checked the soundness of the experiments in the main paper. Supplementary Material: I went over Sections A and B in the supplementary material. Relation To Broader Scientific Literature: The paper aims to study the interplay between conformal prediction and temperature scaling calibration – methods that have often been studied individually in the literature. TS is usually applied initially in conformal prediction methods. 
They report their findings on the effect of TS on conformal prediction sets in terms of set size and class-conditional coverage. Essential References Not Discussed: While the paper claims this interplay has not been investigated yet, the paper misses an important reference [1] that studies this very impact of confidence calibration on conformal prediction. At first glance, the analysis and empirical observations of [1] are close to this work. The paper also does not cite [2], who study the connection between calibration and prediction sets, although in the binary classification setting. These are just a few examples and not an exhaustive list! The authors should acknowledge these works and include a discussion of the relationship with these works at the very least. The authors are also encouraged to do a thorough review of the existing literature to contextualize their work better. [1] Xi, H., Huang, J., Liu, K., Feng, L., and Wei, H. (2024). Does confidence calibration improve conformal prediction? [2] Gupta, C., Podkopaev, A., and Ramdas, A. (2020). Distribution-free binary classification: prediction sets, confidence intervals and calibration. NeurIPS. Other Strengths And Weaknesses: The writing of the paper, especially the technical writing and notation in theorems, can be improved for greater clarity. I am also unsure about the practical utility of the findings presented – temperature scaling is not a core component of conformal methods and can be done away with. While the authors present some guidelines, they do not seem convincing in the current context. Additionally, the authors mention the runtime of their procedure (pg 8). I have two comments here – (i) please discuss the runtime of your procedure in the paper, (ii) I believe offline training of DNNs should not be compared here given the post-hoc nature of the methods. Other Comments Or Suggestions: The definitions of the reported metrics (pg 4) should be included in the main paper for improved readability. 
Questions For Authors: No specific questions, please refer to individual comments above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thorough and insightful review. We are pleased that you recognized the extensive empirical evaluation in our experimental design. Below, we carefully respond to all of your comments. > Regarding the technical assumption The core of this technical assumption is the strong similarity between the softmax vectors of the "quantile samples" associated with $\hat{q}$ and $\hat{q}_T$, as discussed in the paragraph in lines 332-357. Due to the character limit, we refer you to the response "Explanation for $\pi^q \approx \pi^q_T$ in Line $353$" to Reviewer 8jYt, where we explain and illustrate the proximity between $\pi^q$ and $\pi^q_T$. To highlight the strong similarity, in the revised version we will include concrete examples of these vectors for various dataset-model pairs. Given that $\pi^q \approx \pi^q_T$, we believe that it is reasonable to make the theoretical derivation tractable by the technical assumption that $\hat{q}$ and $\hat{q}_T$ correspond to the same sample. We do not claim that it holds in every possible setting. Nevertheless, the fact that our theory provides insights that are aligned with the empirical behavior across many models and datasets serves as a justification for this technical assumption. > Metrics for class-conditional coverage Evaluating conditional coverage across groups using the worst-case coverage metric is both informative and relevant, as demonstrated by previous works, e.g., (Gibbs et al., 2023). Our TopCovGap metric is averaged over the worst 5% due to the high variance observed when considering only the single worst-case coverage, as explained in Appendix B.4. Following your comment, we will also report the AvgCovGap metric that has been used in (Ding et al., 2023) (denoted CovGap there). For example, we present in https://postimg.cc/1fCkf8CH this metric for the three dataset-model pairs shown in Figure 1. 
Notably, this metric exhibits similar behavior to TopCovGap, displaying a comparable trend of achieving a minimum at temperatures $T>1$. > Missing references We thank you for bringing these related papers to our attention. Both papers will be cited and discussed in the revised version. Paper [1] (arXiv preprint) is a concurrent work (our paper was uploaded to arXiv at the same time). The TS results in [1] are only a small subset of our results. They do not consider conditional coverage and use a limited range of $T$, which masks the non-monotonic effect on the prediction set size of APS and RAPS. Moreover, they also do not compare APS and RAPS to LAC. On the other hand, our paper provides a complete picture of the effect of TS with a wide range of temperatures on both the prediction set size and the class-conditional coverage of APS, RAPS, and LAC. This complete picture teaches practitioners that tuning the temperature for APS and RAPS introduces a trade-off effect (overlooked in [1]), and we provide a practical way to control it. Paper [2] presents important theoretical results on prediction sets and calibration, which are indeed related to our work. Note, though, that [2] is limited to binary classification and does not explain why calibration (e.g., TS calibration) affects CP methods differently and in a non-monotonic way, as empirically shown and analyzed in our paper. > Practical utility of the findings Our work investigates the effect of TS on CP as a function of the temperature $T$. Through our analysis, we identify a trade-off between two essential properties of adaptive CP methods: mean prediction set size and proximity to conditional coverage. Our guidelines, introduced in Section 5, enable practitioners to navigate this trade-off effectively, which is a novel contribution to the CP literature. The guidelines are further explored in Appendices C and D. 
Appendix C demonstrates how, with a limited amount of calibration data, users can approximate the curves in Figure 1. Based on these trends, they can select an appropriate $\hat{T}$ that aligns best with their objectives. For instance, a practitioner working with CIFAR100-ResNet50 who prioritizes prediction set size can achieve over a 50% reduction in the AvgSize of APS using our guidelines. Appendix D further illustrates the practical relevance of our findings. Specifically, we show that applying TS at the temperature corresponding to the minimum of the *approximated* TopCovGap curve, followed by RAPS, outperforms Mondrian CP (Vovk, 2012) in both the TopCovGap and AvgSize metrics. > Regarding the runtime In the revised version we will further discuss the runtime of the proposed guidelines. We would like to emphasize that this procedure is done offline during the calibration phase and its runtime is within a range of minutes. > Improving readability We will include the definitions of the reported metrics (which currently appear in the appendix) in the main body of the revised version, leveraging the extra page allowed in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for your response. **Technical assumption:** I saw the response and I believe you continue to point to 332-357 and Figure 2. I do not believe it is fair to make a broad assumption based on one dataset and model. While I understand you will add more examples in the revision, in the current form, the justification is not satisfactory. "We do not claim that it holds in every possible setting." -- what are the settings where you claim this holds? This discussion is entirely missing. It is a strong assumption, and if it is believed to hold for some specific datasets and models, it should be proved. If it is expected to hold only for the datasets and models you show the trend for, that should also be mentioned clearly. This affects the generalizability of the findings and seems important to me. 
**Metrics for class-conditional coverage:** I believe Gibbs et al. used worst-case coverage in only one place and mention that a practitioner may choose to prioritize different conditional targets. Most of the experiments report miscoverage over all groups. **runtime:** I believe the discussion of runtime is important since you compare the method with standard conformal prediction, which has no such overhead. My comment was with respect to this line in the paper -- "negligible runtime compared to the offline training of DNNs" -- it is not fair to state this since post-hoc conformal methods assume access to a pretrained model. I am still concerned about the reported metrics and the lack of error bars, among other things, and I would like to keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for your comment. We are glad for the opportunity to further dive into your concerns and resolve them. ### Technical assumption We emphasize that we did not claim that $\hat{q}$ and $\hat{q}_T$ are associated with exactly the same sample in any specific setting. Rather, we motivate this technical assumption, which makes the theoretical derivation tractable, by showing that the associated softmax vectors $\pi^q$ and $\pi^q_T$ are aligned. Figure 2 illustrates this for a specific dataset-model pair, and this behavior holds across **all other dataset-model pairs** as well. We previously provided concrete examples of softmax vectors for ImageNet-ViT, and in this comment, we include additional dataset-model pairs along with different temperature values for further illustration. As mentioned, we will include these additional demonstrations for all other dataset-model pairs we experimented with in the revised version. Moreover, please note that in the CP literature, assumptions are often necessary when developing rigorous theories beyond marginal coverage. 
For example, as mentioned in Section 2.2, APS (Romano et al., 2020) and LAC (Sadinle et al., 2019) establish theory under the assumption that the classifier outputs the exact posterior distribution — a stronger assumption than ours, and without empirical support. Finally, as previously noted, our theory aligns well with empirical observations across models and datasets, which further provides justification for the technical assumption. #### CIFAR10-ResNet34: $\pi^q = [9.9997e-01, 8.2661e-06, 7.3281e-06, 6.7146e-06, 2.0070e-06]$ $\pi^q_{T = T^*} = [9.9997e-01, 1.4801e-05, 8.0310e-06, 2.0299e-06, 1.6542e-06]$ $\pi^q_{T = 0.5} = [9.9998e-01, 1.1870e-05, 4.1955e-06, 2.0069e-06, 1.9213e-06]$ $\pi^q_{T = 2} = [9.9997e-01, 2.0271e-05, 2.0588e-06, 1.7960e-06, 1.6457e-06]$ #### CIFAR100-DenseNet121: $\pi^q = [9.9993e-01, 5.6212e-05, 5.1748e-06, 3.4063e-06, 1.2153e-06]$ $\pi^q_{T = T^*} = [9.9992e-01, 4.9569e-05, 8.1525e-06, 4.1283e-06, 1.8294e-06]$ $\pi^q_{T = 0.5} = [9.9997e-01, 1.2354e-06, 6.5692e-07, 4.6121e-07, 4.4120e-07]$ $\pi^q_{T = 2} = [9.9996e-01, 7.2431e-06, 6.0478e-06, 4.3499e-06, 3.7969e-06]$ #### ImageNet-ResNet152: $\pi^q = [9.9991e-01, 5.8738e-05, 1.5782e-05, 8.8207e-06, 4.0592e-06]$ $\pi^q_{T = T^*} = [9.9992e-01, 3.1237e-05, 9.8604e-06, 9.4194e-06, 3.6732e-06]$ $\pi^q_{T = 0.5} = [9.9993e-01, 3.4423e-05, 2.5229e-05, 1.9685e-05, 1.9673e-05]$ $\pi^q_{T = 2} = [9.9989e-01, 2.9378e-04, 6.1594e-06, 1.5067e-06, 6.1018e-07]$ ### Metrics for class-conditional coverage Please note that all the other reviewers supported our TopCovGap metric. We recognize the value of reporting AvgCovGap (averaged across all groups, as used in (Gibbs et al., 2023)) alongside TopCovGap. As shown in our previous comment in the following **link**: https://postimg.cc/1fCkf8CH, **the observed trends are very similar**. As we stated in the previous response, in the revised version, we will report AvgCovGap across all dataset-model pairs. 
### Runtime Thank you for the clarification. Following your comment, we will not include in the revised version the statement: "its runtime is negligible compared to the offline training of DNNs", which relates to the comparison between the runtime of the proposed guidelines and the model training. That said, we still wish to highlight the efficiency of our approach. For example, on our most demanding dataset-model pair, ImageNet-ViT, the entire procedure took under 6 minutes, and is done **offline** during the calibration phase. ### Error bars Thank you for raising this point. In the following **link**: https://postimg.cc/bSdJqgtg, we present the main metrics (Figure 1), including the AvgCovGap metric with error bars ($\pm$ standard deviation). The variability is minor compared to the trends in the mean metrics; thus, **our interpretations of these figures are unaffected**. In the revised version, we will include error bars in all presented tables and figures. We hope our revisions and clarifications satisfy your concerns and contribute to a more positive evaluation. Best, The Authors
Potemkin Understanding in Large Language Models
Accept (poster)
Summary: The authors discussed the phenomenon of $\textbf{potemkins}$ in large language models, referring to cases where a model’s misunderstanding of a concept does not align with the way humans would have misconceived it. The authors proposed a framework to formally define what a potemkin is and designed a benchmark to quantify potemkins, with 3 different tasks that test whether LLMs can understand and switch between a given concept and its instances. The authors then evaluate the performance of 3 different LLMs (GPT-4o, Llama-3.3-70B-Instruct-Turbo, and Claude-3.5-sonnet) on three different domains (literary techniques, game theory, and psychological biases). The authors find that the phenomenon of potemkins exists across all the models and tasks they analyzed. The authors also discussed the problem of self-coherence to understand whether the ubiquity comes from consistent/inconsistent internal concept understanding of LLMs. Claims And Evidence: The authors support their claims with detailed experiments. Methods And Evaluation Criteria: The benchmark dataset simplifies the problem to test concept explanation/related tasks, and evaluates the performance through calculating accuracy. Theoretical Claims: There are no "theoretical claims". This work is about evaluating/benchmarking LLMs. The authors provided a formalized framework using some mathematical notations, which are not very rigorous. I discussed some problems in the following sections. Experimental Designs Or Analyses: I carefully read the authors' experimental design in Section 4. Supplementary Material: Yes, Sections C, D, E, F and H. Relation To Broader Scientific Literature: The authors discuss the problem of potemkins in LLMs, which can be interesting to the broader community, especially in fields that are interested in precise-concept applications. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: The problem is interesting. 
The experimental design is detailed, and the presentation of results is clear. The discussion of related work is from multiple perspectives, and the authors highlighted their contribution. Weakness: See Questions For Authors. Other Comments Or Suggestions: Line 363, notations for $\mathcal{C}_h$ and $\mathcal{C}_l$ are inconsistent. It would be more informative if the authors could add more explanation/thoughts on the results. For now, it mostly just reflects the accuracy numbers. Questions For Authors: 1. The definitions of $\mathcal{C}_h$, $C_h$ (and $\mathcal{C}_l$, $C_l$) are confusing to me. Are they sets of all possible strings to demonstrate people's understanding of the concept $C$? The authors provide verbal descriptions, but it would be helpful to provide some visualization (maybe a Venn graph?) and use the example of Haiku. These are important definitions, so they need to be made clearer to readers. 2. In Line 153, why "keystone elements are valid tests for LLMs iff $\mathcal{C}_h = \mathcal{C}_l$"? Given the definition authors give about $\mathcal{C}_h$ and $\mathcal{C}_l$ ($\textit{collection of sets, where each set represents a distinct and coherent category of how a human might understand a concept}$), this seems like a very strong requirement. 3. In Table 2, the authors provided aggregated results across domains and models. In Table 1, we can see that the performance of different model-domain combinations can be significantly different (e.g., LLama-Game theory 0.10 vs Claude-3.5-Game theory 0.50), and thus aggregation can lead to misrepresentation/misinterpretation of results; standard error will not make sense here either. I suggest the authors provide un-aggregated results in Table 2. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive review of our paper. We're glad that you appreciated our paper and findings. We respond to your comments and describe new results; to summarize the main changes, we've added: - An expansion of the coherence analysis to include 9 models - Unaggregated results for table 2 - A rewritten and simplified framework **We hope our comments have addressed your concerns. If not, please let us know if you have any more questions we can address in the follow-up.** > _The definitions of \mathcal C_h, C_h, \mathcal C_l, C_l, are confusing to me. Are they sets of all possible strings to demonstrate people's understanding of the concept ?_ Thanks for this feedback. It motivated us to **significantly** rewrite and simplify our framework, ensuring clarity while maintaining the core insights. [First, we've included a new figure to provide a visual clarification of our framework (see attached)](https://imgur.com/a/9XHtZWz). Due to length constraints (and because OpenReview currently restricts submission updates), we can't include the full revised section here. However, here is an outline of the new structure: - C*: set of correct interpretations of a concept. (Equivalently, a binary function over strings indicating correct or incorrect concept usage.) - C_h: set of possible human interpretations (with C^* \in C_h) - C_l: set of all possible model interpretations - To test if a human's interpretation of a concept H is correct, it's intractable to compare all elements of H to C*. - Because humans have structured ways to interpret concepts, though, we show there exist keystone questions - sets of questions that serve as proofs of understanding if answered correctly. By definition, these cannot be aligned with any human way to misinterpret a concept. - But unless the set of LLM misunderstandings match human misunderstandings, there's no guarantee keystones work for LLMs: LLMs might interpret concepts in arbitrary ways. 
- A potemkin is an instance where an LLM answers a keystone question correctly but doesn't understand the concept. We hope this makes our theoretical framing clearer. > _The authors provide verbal descriptions, but it would be helpful to provide some visualization (maybe a Venn graph?) and use the example of Haiku._ We appreciate your suggestion. [We've added a new visualization of these concepts using a Venn diagram at the attached link](https://imgur.com/a/9XHtZWz). > _In Line 153, why "keystone elements are valid tests for LLMs iff C_h=C_l"? Given the definition the authors give of C_h and C_l ("collection of sets, where each set represents a distinct and coherent category of how a human might understand a concept"), this seems like a very strong requirement._ We recognize our original language was unclear and as noted above have clarified the framework. We've also revised the manuscript to clarify the phrase “valid test”. - By "valid," we specifically mean that answering keystone questions correctly implies true conceptual understanding in the LLM. - We agree that it is a strong requirement for models to follow, which is part of the point of the paper. We will add a discussion of why if models misunderstand in ways that are distinct from how humans misunderstand, keystones that work for people will not work for LLMs. > _In Table 2, the authors provided aggregated results across domains and models. In Table 1, we can see that the performance of different model-domain combinations can be significantly different (e.g., LLama-Game theory 0.10 vs Claude-3.5-Game theory 0.50), and thus aggregation can lead to misrepresentation/misinterpretation of results; standard error will not make sense here either. I suggest the authors provide un-aggregated results in Table 2._ Thanks for this suggestion. [We've provided a new table with unaggregated results here](https://imgur.com/a/4tHzAGn). This new table is also expanded to include 9 models. 
We agree that the detailed breakdown makes it easier to explore specific model-domain combinations.
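To make the quantity under discussion concrete, here is a minimal sketch (with invented results, not the paper's data or code) of how a potemkin rate could be computed: it is the failure rate on concept-use tasks, conditioned on the keystone (definition) question being answered correctly.

```python
# Minimal sketch of a "potemkin rate": among items where the model answered
# the keystone (definition) question correctly, the fraction of follow-up
# use tasks it gets wrong. The records below are invented for illustration.
records = [
    {"concept": "haiku",             "keystone_correct": True,  "use_correct": False},
    {"concept": "haiku",             "keystone_correct": True,  "use_correct": True},
    {"concept": "sunk cost fallacy", "keystone_correct": True,  "use_correct": False},
    {"concept": "pareto optimality", "keystone_correct": False, "use_correct": False},
]

def potemkin_rate(records):
    # A potemkin case is keystone-correct but use-incorrect, so condition
    # on keystone success before measuring the use-task failure rate.
    keystone_ok = [r for r in records if r["keystone_correct"]]
    if not keystone_ok:
        return 0.0
    return sum(not r["use_correct"] for r in keystone_ok) / len(keystone_ok)

print(round(potemkin_rate(records), 2))  # 0.67: 2 of 3 keystone-correct items fail in use
```

Conditioning on keystone success is what separates this metric from plain use-task accuracy: the last record is excluded because the model never "passed" the keystone for that concept.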
Summary: This paper introduces the idea of Potemkin understandings, which is defined as differences in how humans and large language models understand concepts. The main contribution of this paper is the design of a benchmark that tests the discrepancy in the model's ability to claim a definition of a concept and its ability to use the concept. The paper tests three popular models (GPT-4o, Llama-3.3, Claude 3.5) using the proposed benchmark, and shows a large drop in performance across all models. Additionally, the paper tests self-coherence of models by prompting the model to generate examples of a concept, and then checking whether the model classifies its generation as an instance of the concept. The paper shows that the three models exhibit low self-coherence. Finally, the paper identifies a bias towards questions that do not require concept usage in the MMLU benchmark. Claims And Evidence: Overclaiming is the central issue in this work. While the experimental results support the claim that language models exhibit a discrepancy between the ability to explain a concept and the ability to use it, the scope of the experiments is quite limited. Only three models are evaluated across three domains, restricting the generalizability of the findings. Besides, * The claim that "humans who can clearly define a concept necessarily possess a deep understanding and can effectively apply it" does not logically follow from the observed differences between human and model comprehension. A well-known counterexample comes from mathematics, where individuals can memorize definitions without being able to apply them correctly. * The presence of Potemkin understanding does not inherently invalidate the effectiveness of benchmarks. The specific analysis of MMLU’s distributional properties does not substantiate the broader claims made by the authors. 
* Measuring the presence of Potemkin understanding does not resolve the debate between the two competing perspectives outlined in the related work. Even if models pass tests designed to assess Potemkin comprehension, one could still validly argue that they function as stochastic parrots, merely mimicking human reasoning rather than genuinely understanding or reasoning about concepts. Methods And Evaluation Criteria: The benchmark design is generally well-structured and conceptually sound. However, the experimental scope is too limited to draw strong conclusions. The evaluation lacks breadth, as it only considers a narrow set of models and testing conditions. To strengthen the study, the authors are encouraged to expand their evaluation to include a wider range of LLMs and alternative evaluation methods. Currently, the experiments rely solely on prompting, but it remains unclear whether this involves zero-shot, few-shot, or chain-of-thought prompting. Theoretical Claims: There are no theoretical claims. Section 2 contains lots of unnecessary math. Experimental Designs Or Analyses: See Methods And Evaluation Criteria. Supplementary Material: I roughly checked the code. No immediate issues came to my attention. Relation To Broader Scientific Literature: Besides hallucination, I think this work contributes to the discussion of "competence and performance" in LLMs. Overall, this work introduces very novel concepts to the field. Essential References Not Discussed: N/A Other Strengths And Weaknesses: See previous discussions. Other Comments Or Suggestions: N/A Questions For Authors: Indeed, the findings "reveal a significant disparity between the ability of models to explain and use concepts." But it's kind of already a consensus that generation is harder than understanding. How is this work contributing to future model development, or is it just introducing unnecessary terminology? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful review. We respond to your comments and describe new results; to summarize the main changes, we've added: - An expansion of the coherence analysis to include 9 models - A visualization of our mathematical framework - A rewritten and simplified framework - Analysis of question complexity Your comments were very constructive and have improved the paper. **We hope our comments have addressed your concerns. If not, please let us know if you have any more questions we can address in the follow-up.** > _While the experimental results support the claim that language models exhibit a discrepancy... the scope of the experiments is quite limited._ We like the way you've summarized our experimental results: "they support the claim that language models exhibit a discrepancy between the ability to explain a concept and the ability to use it." We'll update the language in our paper to reflect the way you wrote it. We also agree that considering more models will make our results more robust. [We've expanded our coherence evaluation to include 9 models -- see linked table](https://imgur.com/a/4tHzAGn). We'll use the time between the rebuttal and camera-ready deadline to add more models. Our choice of domains spanned 32 subdomains that were intended to span three distinct forms of understanding: linguistic, formal, and behavioral. Potemkins were ubiquitous across all of these varied contexts, suggesting their presence in other areas as well. > _The claim that "humans who can clearly define a concept necessarily possess a deep understanding and can effectively apply it" does not logically follow._ We fully agree that our original claim is too broad. In fact, this claim is not central to our work. What is central is that we test models on “keystones” - the set of questions that indicate understanding when people do well on them. We'll modify the language in our revision. 
For the applications we choose (unlike math, as you point out) definitional accuracy is used to indicate understanding (e.g. it's what is used in exams to test understanding). For example, clearly defining sunk cost fallacy encompasses most of the understanding of that concept, as use cases are simple applications of it. > _The presence of Potemkin understanding does not inherently invalidate the effectiveness of benchmarks._ We agree that the presence of potemkins doesn't invalidate benchmarks broadly. Benchmarks remain valid for assessing performance within the distribution they directly measure on held-out examples. Our point was subtler: Potemkins undermine the inference that high benchmark scores must reflect generalizable conceptual understanding. We'll clarify this point in the revision. > _Even if models pass tests designed to assess Potemkin comprehension, one could still validly argue that they function as stochastic parrots, merely mimicking human reasoning rather than genuinely understanding or reasoning about concepts._ We agree that our experiments don't answer whether mimicking human reasoning would reflect genuine understanding, nor do they intend to. Rather, our goal is pragmatic: since developing models that mimic human reasoning is one of the goals of LLM research, it's important to measure this ability. We'll adjust our language to make this clear. > _Section 2 contains lots of unnecessary math._ We've completely rewritten Section 2, simplifying and clarifying the mathematical presentation. See our response to reviewer voEv for more details. [We've also included a new framework visualization here](https://imgur.com/a/9XHtZWz). > _It's kind of already a consensus that generation is harder than understanding._ Our tasks testing "use" of concepts aren't exclusively about generation (e.g., classification tasks rely on recognition rather than generation). 
In fact, we show that models perform similarly poorly on classification and constrained generation tasks. Prompted by your question, we've also gone back and re-examined our findings. While task complexity contributes in part to the Potemkin gap, we find that it can't fully explain Potemkin failures. For example, we find models correctly apply challenging definitions but fail simpler applications (e.g., correctly defining complex concepts like "Pareto Optimality" yet incorrectly classifying simpler instances of concepts like rhymes). This non-monotonic pattern of errors indicates conceptual gaps rather than just difficulty-driven mistakes. > _How is this work contributing to future model development?_ Good question. Once potemkins are discovered using the methods in our paper, we can build methods to train against them. Moreover the automatic evaluation tasks can be directly optimized during model fine-tuning. One strategy is to explicitly use the automated coherence experiment to penalize the inconsistencies that lead to potemkins. We feel this is the most important contribution of our benchmark and have added a discussion to the paper. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal, it's a nice one, I admit :) As my concerns are addressed, I plan to raise the score to 3 (slightly positive) based on my excitement. However, I also won't consider it a huge loss to ICML if we miss this work. I will leave it to the AC to decide on this work. --- Reply to Comment 1.1.1: Comment: We’re happy to hear we addressed your concerns. Your feedback was incredibly constructive. Thank you for raising your score!
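The automated coherence check discussed in this exchange can be sketched in a few lines: prompt the model to generate an instance of a concept, then ask the same model to classify its own generation, counting disagreement as self-incoherence. The `ask_model` stub below is a stand-in with canned answers, not the paper's evaluation code or a real LLM API.

```python
# Sketch of an automated self-coherence check: generate an instance of a
# concept, then ask the same model whether its own generation is an instance.
# `ask_model` is a stand-in stub, not a real LLM API; the stub deliberately
# rejects its own example to mimic a self-incoherent model.
def ask_model(prompt):
    if prompt.startswith("Generate"):
        return "An old pond / a frog jumps in / the sound of water"
    if prompt.startswith("Is the following"):
        return "No"
    return ""

def self_coherent(concept):
    example = ask_model(f"Generate an example of the concept: {concept}.")
    verdict = ask_model(f"Is the following an instance of {concept}? {example}")
    return verdict.strip().lower().startswith("yes")

print(self_coherent("haiku"))  # False: the stub model rejects its own generation
```

Because both calls go to the same model, the check needs no human labels, which is what makes this part of the evaluation fully automatable.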
Summary: This paper investigates Potemkin understanding in language models. To assess model behavior, the authors design evaluation datasets across diverse domains, including literary techniques, game theory, and psychological biases. The findings indicate that while language models are good at explaining concepts, their accuracy in correctly applying these concepts remains low. Claims And Evidence: The claims made in the paper are supported by experiments and analysis. Methods And Evaluation Criteria: The paper introduces an evaluation dataset to assess Potemkin understanding in language models. However, I find that some cases for concept usage are overly complex for this setting. In Figure 2, while the first two problems seem reasonable, the third one is tricky. The model correctly generates prime numbers, suggesting it understands the concept. The failure occurs at the second step of reasoning, which does not accurately reflect a lack of understanding of "prime numbers." Do these cases still count as potemkins? Theoretical Claims: The paper presents a theoretical framework for Potemkin understanding in language models and humans. However, I found some parts challenging to follow: - The notations for the two C_l in Line 115 are hard to distinguish. - In Line 117, the phrase "misunderstand a concept" should perhaps be "understand a concept"? Experimental Designs Or Analyses: The experiments across different models and domains are well-performed. The self-coherence analysis also makes sense for evaluating the models' understanding. Supplementary Material: No, I did not. Relation To Broader Scientific Literature: This paper introduces a specific form of hallucination, termed "Potemkin," and creates a dataset to evaluate such behaviors in language models using human annotations. This contribution seems to be a useful resource for further research on model behavior and understanding. Essential References Not Discussed: NA Other Strengths And Weaknesses: **Strengths:** 1. 
The paper proposes to evaluate a specific type of LLM hallucination, potemkin understanding, by providing human-annotated datasets, which offers a fresh angle for assessing model capabilities. 2. The experiments show the disconnect between models' ability to define concepts versus actually applying them correctly. **Weaknesses:** 1. The evaluation methodology relies heavily on human expert annotation, which limits the scalability and practical applicability of the benchmarks. 2. The theoretical framework, while attempting to formalize the concept of potemkin understanding, adds complexity rather than clarifying the formulation. 3. The study doesn't sufficiently control for question complexity in the "use the concept" evaluations. Poor performance on these questions could result from general reasoning difficulties rather than concept-specific misunderstandings. Other Comments Or Suggestions: Line 363: The notations are inconsistent with those in Section 2 Questions For Authors: 1. Could you elaborate on your claim "Keystone elements are valid tests for LLMs if and only if C_l = C_h" (Line 150)? Does this imply that your proposed benchmarks might not accurately reflect human understanding when tested on humans? Can you use specific examples to clarify this? 2. Have you explored automating the evaluation process, perhaps using LLM-as-a-judge approaches? How well do automatic evaluations perform on your benchmark? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your careful and insightful review of our paper. We respond to your comments and describe new results below; to summarize the main changes, we've added: - A rewritten and simplified framework - A visualization of keystone questions and potemkins - Analysis of the role of question complexity - [An expansion of the coherence analysis to include 9 models](https://imgur.com/a/4tHzAGn) > _The theoretical framework, while attempting to formalize the concept of potemkin understanding, adds complexity rather than clarifying the formulation._ Thanks for this feedback. It motivated us to **significantly** rewrite and simplify our framework. [First, we've included a new figure to provide a visual clarification of our framework](https://imgur.com/a/9XHtZWz). While we don't have space to include the full revision here, an outline is below: - C*: set of correct interpretations of a concept. (Equivalently, a binary function over strings indicating correct or incorrect concept usage.) - C_h: set of possible human interpretations (with C^* \in C_h) - C_l: set of all possible model interpretations - To test if a human's interpretation of a concept H is correct, it's intractable to compare all elements of H to C*. - Because humans have structured ways to interpret concepts, though, we show there exist keystone questions - sets of questions that serve as proofs of understanding if answered correctly. - But unless the set of LLM misunderstandings match human misunderstandings, there's no guarantee keystones work for LLMs. - A potemkin is an instance where an LLM answers a keystone question correctly but doesn't understand the concept. > _The study doesn't sufficiently control for question complexity in the "use the concept" evaluations._ This is an important point, and the current draft does not address it. Prompted by your question, we've gone back and re-examined our findings. 
While task complexity contributes in part to the Potemkin gap, we find that it cannot fully explain potemkin failures. For example, our benchmark contains cases where models correctly apply challenging definitions but fail simpler application tasks (e.g., correctly defining complex concepts like "Pareto Optimality" yet incorrectly classifying simpler instances of more intuitive concepts like rhymes). This non-monotonic pattern of errors indicates conceptual gaps rather than just difficulty-driven mistakes. > _However, I find that some cases for concept usage are overly complex for this setting. In Figure 2, while the first two problems seem reasonable, the third one is tricky..._ This is an important observation. A reason why potemkins are common is that conceptual understanding of one concept often depends on other concepts. For example, even if a model successfully lists prime numbers, it must also accurately recognize digits (such as '1') to demonstrate full understanding. However, we agree with your point that a more direct example in the figure might better demonstrate Potemkin failures, and have updated the figure with a new example. > _The evaluation methodology relies heavily on human expert annotation, which limits the scalability and practical applicability of the benchmarks._ We would like to highlight that while our methodology indeed uses human expert annotations in parts, much of it is automated, including: 1. Coherence evaluations (fully automated) 2. The classification task, which introduces 320 new questions as part of our benchmark. We'll highlight these points in the revision. > _Could you elaborate on your claim "Keystone elements are valid tests for LLMs if and only if C_l = C_h" (Line 150)? Does this imply that your proposed benchmarks might not accurately reflect human understanding when tested on humans?_ Thank you for pointing this out. We agree that this was unclear in the original text. 
What we meant (and hope the above clarifies) is that (i) keystone questions are constructed to precisely reflect human conceptual understanding and (ii) if LLMs do not misunderstand in the same way, models can do well on keystones while failing to understand a concept. As such, the benchmarks we use in the paper are explicitly designed so that human success on the benchmark aligns with genuine understanding. > _Have you explored automating the evaluation process, perhaps using LLM-as-a-judge approaches?_ As noted above, parts of our evaluation are already automated, making them scalable. But we appreciate your suggestion: LLM-as-a-judge methods are an interesting direction, and they're straightforward enough that we will implement them between the rebuttal period and camera-ready deadline. Because we already have expensive human labels, that will also help us evaluate how good LLM-as-a-judge methods are. > _Typo suggestions_ Thank you for finding typos. We've corrected these in the revision, and our simplified framework also reduces complexity. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses and for conducting additional experiments! My concerns regarding the automatic evaluation and the overcomplicated framework have been addressed. However, I remain somewhat unconvinced by the explanation of task difficulty/complexity of the two tasks. You mentioned that “a reason why potemkins are common is that conceptual understanding of one concept often depends on other concepts.” Since prior work has shown that LLMs tend to struggle with compositional reasoning, it is expected that their performance would drop on tasks requiring multiple steps. In contrast, explaining a concept (even a difficult one) resembles a memorization-based, single-step task. That being said, regarding the third example in Figure 2, a fairer comparison with similar task compositionality might be: "List all prime numbers between 0 and 20." 
To sum up, I find the introduction of Potemkin Understanding to be an interesting contribution to the community, and I appreciate the authors' efforts in the rebuttal. I have accordingly raised my score to a 2. --- Reply to Comment 1.1.1: Comment: Thank you for your incredibly constructive feedback. We're glad our rebuttal addressed your evaluation and framework concerns. We'll continue incorporating your suggestions about task compositionality into the final revision and will also update Figure 2 -- we completely agree it'll be more compelling with a more direct comparison. Thank you for engaging with our paper and for raising your score!
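The comparison task proposed in this exchange has a mechanically checkable reference answer; a small sketch (illustrative only, not from the paper) of how an automatic grader could produce it:

```python
# Reference answer for the reviewer's proposed task "list all prime numbers
# between 0 and 20", against which a grader could compare model output.
def is_prime(n):
    # Trial division up to sqrt(n); 0 and 1 are not prime.
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [n for n in range(21) if is_prime(n)]
print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19]
```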
Summary: This paper introduces and systematically investigates a novel failure mode in large language models (LLMs), termed Potemkin Understanding. This phenomenon refers to the model's ability to correctly interpret a concept while demonstrating inconsistent or incorrect behavior in practical applications, akin to creating an "illusion" of comprehension. The researchers constructed specialized benchmark tests to compare the model's ability to explain concepts (definition tasks) with its ability to apply them (classification, constrained generation, and editing tasks), thereby quantifying this phenomenon. Experiments were conducted across three distinct domains (literary techniques, game theory, and cognitive biases), involving 32 concepts and 5,986 data points. The results reveal that while LLMs achieve an accuracy of 97.7% in explaining concepts, their accuracy drops to 67.9% when applying these concepts. Furthermore, the study highlights that existing mainstream benchmarks, such as MMLU, fail to effectively assess the impact of Potemkin Understanding. Claims And Evidence: Yes Methods And Evaluation Criteria: make sense Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: The zip file and Appendix Relation To Broader Scientific Literature: Cognitive Psychology, Interpretability of Large Language Models, maybe Knowledge editing of LLMs Essential References Not Discussed: No Other Strengths And Weaknesses: Strength: (1) Novelty of the problem (Potemkin Understanding); (2) Cross-domain systematic experimentation; (3) Rigorous methodology with a focus on self-consistency evaluation, incorporating experiments on self-coherence. Weakness: (1) The paper primarily reveals the existence of the Potemkin phenomenon through experiments but does not delve deeply into its root causes, such as the model's training data, architectural characteristics, or optimization objectives. 
(2) The domains covered are incomplete; similar issues may arise in areas like mathematical reasoning and code generation, which could be included to enrich the dataset. (3) Although the gap between definition and application tasks is evident, the authors do not thoroughly explore the inherent differences in difficulty and complexity between these two types of tasks. This may lead readers to question whether the lower accuracy in application tasks is solely due to their higher difficulty rather than a genuine lack of conceptual understanding. Other Comments Or Suggestions: Suggestions: (1) Incorporate evaluations of open-source models (e.g., LLaMA, Mistral) on this benchmark; (2) Based on benchmark statistical data, attempt to propose solutions to mitigate this issue and reduce potemkin failures, to help subsequent researchers build on this work. Questions For Authors: See Suggestions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive review. We’re glad you found our work novel, rigorous, and empirically interesting. We respond to your comments and describe new results; to summarize the main changes, we've added: - An expansion of the coherence analysis to include 9 models - Analysis of difficulty - Proposed solutions for mitigating potemkins **We hope our comments have addressed your concerns. If not, please let us know if you have any more questions we can address in the follow-up.** > _Suggestions: (1) Incorporate evaluations of open-source models (e.g., LLaMA, Mistral) on this benchmark_ This is a great suggestion. We've expanded our evaluation to include the following nine models on the coherence task, many of them open-source: LLaMA-3.3, GPT-4o, Claude-3.5, GPT-4.5, o1-mini, o3-mini, Mistral-7B, DeepSeek-v3, and DeepSeek-r1. [See this table for the full results](https://imgur.com/a/4tHzAGn). We'll use the time between the rebuttal and camera-ready deadline to expand our analysis in Table 1 to the same set of models. We also note that our initial analysis already included LLaMA-3.3, [with results available here](https://imgur.com/a/8fKo8BS). > _Suggestions: (2) Attempt to propose solutions or suggestions to mitigate this issue, reduce Potemkin failures, to assist subsequent researchers follow this work better._ As you suggest, once potemkins are discovered (e.g. using the methods in our paper) we can build methods to train against them. The automatic evaluation tasks in our benchmark -- e.g. concept use classification accuracy and self-coherence scores -- can be directly optimized during model fine-tuning. For example, a potential mitigation strategy is to explicitly use the automated coherence experiment to penalize the inconsistencies that can lead to potemkins. We feel this is the most important contribution of our benchmark and have added a section discussing it in our paper. 
> _The paper primarily reveals the existence of the Potemkin phenomenon through experiments but does not delve deeply into its root causes, such as the model's training data, architectural characteristics, or optimization objectives._ We agree that our primary focus was documenting and quantifying potemkin understanding. As you point out, understanding the root causes would be very valuable. We speculate that this phenomenon arises due to a few reasons: - **Reinforcement Learning with Human Feedback (RLHF)** prioritizes fluent and plausible-sounding explanations over accurate and coherent ones. - **No coherence training objectives:** Models aren't explicitly trained to provide coherent responses, just accurate next-token predictions. Our automatic method for evaluating coherence provides a scalable and new way to train models and reduce the scale of potemkins. - **Limitations in Model Architecture:** The architectural specifications vary by each model. However, models like transformers may inherently favor shallow associations due to limited inductive biases for structured reasoning or causality. We've expanded our discussion of these points in the manuscript. Future work should explore training interventions to directly address these causes. > _The domains covered are incomplete; similar issues may arise in areas like mathematical reasoning and code generation, which could be included to enrich the dataset._ Our choice of domains—literary techniques, game theory, and psychological biases—was intended to span three distinct forms of understanding: linguistic, formal, and behavioral. Though incomplete, this set was intended to cover a wide range of types of understanding. Moreover, each domain includes multiple subdomains (32 in total). We found that potemkins were ubiquitous across all of these varied contexts, strongly suggesting their presence in other areas as well. 
Your suggestion to include mathematical reasoning and code generation is excellent because, like game theory, many of these problems can be evaluated automatically. We'll explicitly note this in our revision. > _Although the gap between definition and application tasks is evident, the authors do not thoroughly explore the inherent differences in difficulty and complexity between these two types of tasks._ This is an important point. We've gone back and re-examined our findings. While task complexity contributes in part to the Potemkin gap, we find that it cannot fully explain Potemkin failures. For example, our benchmark contains cases where models correctly apply challenging definitions but fail simpler application tasks (e.g., correctly defining complex concepts like "Pareto Optimality" yet incorrectly classifying simpler instances of more intuitive concepts like rhymes). This non-monotonic pattern of errors—where easier tasks are sometimes failed, despite succeeding at harder conceptual definitions—indicates conceptual gaps rather than purely difficulty-driven mistakes. We'll clarify this point explicitly in our revised text.
Efficient Curvature-Aware Hypergradient Approximation for Bilevel Optimization
Accept (poster)
Summary: The paper focuses on bilevel optimization and incorporates curvature information into the approximation of hypergradients. The authors propose a Newton-based framework (NBO) that solves the lower-level problem by computing Hessian-inverse-vector products. They establish convergence rate guarantees in both deterministic and stochastic scenarios, demonstrating improved computational complexity over popular gradient-based methods. Numerical experiments validate the effectiveness of the proposed method. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I did not fully check the correctness of the proofs, but the theoretical claims seem reasonable. Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: This paper achieves enhanced iteration complexity, in terms of the condition number, for bilevel algorithms with strongly convex lower-level problems in the deterministic setting. Essential References Not Discussed: The key contribution is in quadratic programming-based bilevel methods; however, the literature on this topic is incomplete, e.g., [R1]-[R3]. Most notably, [R3] employs a constant batch size in the stochastic setting, in contrast to the increasing batch size required by the proposed method. [R1] Non-Convex Bilevel Games with Critical Point Selection Maps. Michael Arbel, et al. NeurIPS 2022. [R2] A Generalized Alternating Method for Bilevel Optimization under the Polyak-Łojasiewicz Condition. Quan Xiao, et al. NeurIPS 2023. [R3] Single-Timescale Multi-Sequence Stochastic Approximation Without Fixed Point Smoothness: Theories and Applications. Yue Huang, et al. Furthermore, since their method leverages second-order information, it is essential to compare it with fully first-order bilevel methods to demonstrate that the use of second-order terms does not impede numerical performance. 
However, the paper lacks references to the relevant literature in this area. [R4] On Penalty-based Bilevel Gradient Descent Method. Han Shen, et. al. ICML 2023. Other Strengths And Weaknesses: Strengths: 1. The paper presents a novel method for hypergradient approximation that efficiently incorporates curvature information with rigorous convergence rate guarantees and improves rate in deterministic setting. 2. Numerical experiments showcase the effectiveness of the proposed method, showing improved performance over existing gradient-based methods. Weaknesses: 1. The theoretical improvements are established only in the deterministic setting. Moreover, the analysis for the stochastic setting requires an increasing batch size inversely proportional to $\epsilon$, while other bilevel methods use a constant batch size (see [R3], [R5]–[R7]). This requirement is impractical for large-scale machine learning applications. Also the lower-level strongly convexity is somewhat restrictive. 2. Although the paper primarily builds upon a fully single-loop second order method, it would be beneficial to include a more comprehensive convergence rate comparison with other bilevel approaches, such as fully first-order methods in discussion and Table 1. [R1] Non-Convex Bilevel Games with Critical Point Selection Maps. Michael Arbel, et. al. NeurIPS 2022. [R2] A Generalized Alternating Method for Bilevel Optimization under the Polyak-Łojasiewicz Condition. Quan Xiao, et. al. NeurIPS 2023. [R3] Single-Timescale Multi-Sequence Stochastic Approximation Without Fixed Point Smoothness: Theories and Applications. Yue Huang, et. al. [R4] On Penalty-based Bilevel Gradient Descent Method. Han Shen, et. al. ICML 2023. [R5] A framework for bilevel optimization that enables stochastic and global variance reduction algorithms. Mathieu Dagréou, et. al. NeurIPS 2022. [R6] Closing the Gap: Tighter Analysis of Alternating Stochastic Gradient Methods for Bilevel Problems. Tianyi Chen, et. al. 
NeurIPS 2021. [R7] A Fully Single Loop Algorithm for Bilevel Optimization without Hessian Inverse. Junyi Li, et. al. AAAI 2022. Other Comments Or Suggestions: No Questions For Authors: See weakness Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We will address each point in detail. The figures and the tables mentioned below are in this anonymous link: https://drive.google.com/file/d/1lCY1UF3isNnoujM8AGCPIdJlRqHctj-b/view?usp=sharing ## Essential References Not Discussed: * **Comparison with fully first-order methods (numerical evaluation):** **We clarify that we have compared our method with the fully first-order method F2SA [1] in Experiments (Section 4.2 and 4.3).** Our method consistently outperforms F2SA, even when using small batch sizes (64 or 256). It is worth noting that, based on the results from [2,3], F2SA is considered a SOTA fully first-order bilevel algorithm. Furthermore, we conducted **additional experiments on meta-learning** using the Omniglot and miniImageNet (approximately 3GB) datasets, with 4-layer convolutional neural networks (CNN4). We compared our NBO with the SOTA algorithms PZOBO [4] and qNBO [5]. The results, presented in Fig. 1 and 2 of the [anonymous link](https://drive.google.com/file/d/1lCY1UF3isNnoujM8AGCPIdJlRqHctj-b/view?usp=sharing), show that NBO consistently outperforms the other methods on both datasets, highlighting the effectiveness of our framework. Note that PZOBO is a widely used Hessian-free bilevel algorithm, while qNBO is a recently proposed curvature-aware algorithm that employs the quasi-Newton method to solve the lower-level problem. * **More references:** First, among the references mentioned by the reviewer, [R1, R2, R4] focus on nonconvex lower-level problems, whereas our work focuses on strongly convex lower-level problems. Second, the reference [R3] does not propose a new bilevel algorithm but rather provides an improved analysis for SOBA. Since SOBA is a special case of our NBO framework (as noted in Remark 2.1 (i) of our work), their analysis also applies to our NSBO-SGD when $T=0$. We appreciate the reviewer’s suggestions and will include these references in our work. 
Additionally, we have added a summary table for the stochastic setting that incorporates the references mentioned by the reviewer, as shown in Table 1 in the [anonymous link](https://drive.google.com/file/d/1lCY1UF3isNnoujM8AGCPIdJlRqHctj-b/view?usp=sharing) (the second page). ## Other Strengths And Weaknesses: * **In the stochastic setting, we also establish a theoretical improvement:** As stated after Theorem 3.7, our NSBO-SGD improves upon the SOTA result of AmIGO by a factor of $\log \kappa$ in the stochastic setting, where AmIGO also employs a large batch size. * **Concern about batch size:** The large batch size we choose helps achieve better sample complexity. If the batch size is set to $O(1)$, we can obtain a sample complexity of $O(\kappa^{16} \epsilon^{-2})$, which is worse than $O(\kappa^9 \epsilon^{-2})$ in Theorem 3.7. It is also worth noting that SOBA is a special case of our NBO framework. * **Extension to non-strongly convex problems:** The NBO framework is primarily designed for bilevel optimization problems with a strongly convex structure. For non-strongly convex problems, existing methods often involve reformulating the original problem to incorporate a strongly convex structure. Once this structure is established, our NBO framework can be applied. For instance, we compared the BAMM method [7] and BAMM+NBO (i.e., using NBO to compute $d_x^k$ in BAMM). The result, presented in Fig. 3 of the [anonymous link](https://drive.google.com/file/d/1lCY1UF3isNnoujM8AGCPIdJlRqHctj-b/view?usp=sharing), shows that BAMM+NBO significantly outperforms BAMM, highlighting the effectiveness of the NBO framework. * **Comparison with fully first-order methods (convergence rate and complexity):** Please see Table 1 in the [anonymous link](https://drive.google.com/file/d/1lCY1UF3isNnoujM8AGCPIdJlRqHctj-b/view?usp=sharing) for the stochastic setting. 
For the deterministic setting, we will add a new row to Table 1 in our paper, presented in Table 2 of the [anonymous link](https://drive.google.com/file/d/1lCY1UF3isNnoujM8AGCPIdJlRqHctj-b/view?usp=sharing) (the second page). [1] Kwon et al., A fully first-order method for stochastic bilevel optimization, ICML 2023. [2] Chen et al., Near-optimal nonconvex-strongly-convex bilevel optimization with fully first-order oracles. arXiv preprint 2023. [3] Chen et al., On finding small hyper-gradients in bilevel optimization: Hardness results and improved analysis. COLT 2024. [4] Sow et al., On the convergence theory for hessian-free bilevel algorithms, NeurIPS 2022. [5] Fang et al., qNBO: quasi-Newton Meets Bilevel Optimization, ICLR 2025. [6] Arbel et al., Amortized implicit differentiation for stochastic bilevel optimization, ICLR 2022. [7] Liu et al., Averaged Method of Multipliers for BiLevel Optimization without Lower-Level Strong Convexity, ICML 2023.
Summary: This paper considers the bilevel problem $\min_x \Phi(x) = f(x, y^*(x))$ where $y^*(x) = \arg\min_y g(x, y)$, with the inner function $g$ strongly convex w.r.t. the inner variable $y$. The paper proposes a new AID-based method where the inner variable $y$ and the linear system variable $u$ are updated by an approximate Newton step. These approximate Newton directions are computed by applying (S)GD to the quadratic functions associated with the two linear systems. Then, the paper shows that in the deterministic setting, the proposed method improves the gradient complexity by the factor $(\kappa\log(\kappa))^{-1}$ in comparison with classical AID-based methods. The computational complexity of the stochastic variant is also provided. Numerical experiments on a synthetic problem, hyperparameter optimization and data cleaning are provided. Claims And Evidence: * Convergence rates in the deterministic and stochastic settings are provided and sound. * NBO-GD achieves a convergence rate in $\mathcal{O}\left(\frac{\kappa^3}{K}\right)$ leading to a gradient complexity of $\mathcal{O}\left(\frac{\kappa^3}{\epsilon}\right)$ and a HVP complexity in $\mathcal{O}\left(\frac{\kappa^4}{\epsilon}\right)$ (which can be reduced to $\mathcal{O}\left(\frac{\kappa^{3.5}}{\epsilon}\right)$ by using CG instead of GD). * NBO-SGD achieves a convergence rate $\mathcal{O}\left(\frac{\kappa^3}{\epsilon}\right)$ (by assuming batch sizes in $\Theta(K)$) leading to a gradient complexity in $\mathcal{O}(\kappa^5/\epsilon)$ for $F$, $\mathcal{O}(\kappa^9/\epsilon)$ for $G$, and HVP complexity in $\mathcal{O}(\kappa^7/\epsilon)$. Methods And Evaluation Criteria: The method is numerically evaluated on a synthetic problem, hyperparameter optimization and data cleaning. This setting is classical for the numerical evaluation of bilevel optimization methods. 
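The hypergradient that AID-based methods approximate is $\nabla \Phi(x) = \nabla_1 f - \nabla_{12}^2 g\,[\nabla_{22}^2 g]^{-1} \nabla_2 f$, evaluated at $(x, y^*(x))$. A minimal numpy sanity-check sketch of this formula on a quadratic toy problem with a closed-form lower-level solution (all problem data and names here are made up for illustration; this is not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
dx, dy = 3, 4

# Toy bilevel problem:
#   g(x, y) = 1/2 y^T A y - y^T B x   (strongly convex in y, so y*(x) = A^{-1} B x)
#   f(x, y) = 1/2 ||y - c||^2         (no direct dependence on x, so grad_1 f = 0)
M = rng.standard_normal((dy, dy))
A = M @ M.T + dy * np.eye(dy)          # symmetric positive definite Hessian of g in y
B = rng.standard_normal((dy, dx))
c = rng.standard_normal(dy)
x = rng.standard_normal(dx)

y_star = np.linalg.solve(A, B @ x)     # exact lower-level solution

# Implicit-function-theorem hypergradient:
#   grad Phi(x) = grad_1 f - grad_{12} g [grad_{22} g]^{-1} grad_2 f
# Here grad_{12} g = -B^T, grad_{22} g = A, grad_2 f = y* - c.
hivp = np.linalg.solve(A, y_star - c)  # the Hessian-inverse-vector product
hypergrad = B.T @ hivp                 # = 0 - (-B^T) A^{-1} (y* - c)

# Independent finite-difference check of grad Phi
def Phi(xv):
    y = np.linalg.solve(A, B @ xv)
    return 0.5 * np.sum((y - c) ** 2)

h = 1e-6
fd = np.array([(Phi(x + h * e) - Phi(x - h * e)) / (2 * h) for e in np.eye(dx)])
assert np.allclose(hypergrad, fd, atol=1e-5)
```

The single linear solve producing `hivp` is exactly the Hessian-inverse-vector product whose approximation (by GD/SGD or CG inner steps) the complexity bounds above account for.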
Theoretical Claims: I checked the proof for the deterministic setting and the results are sound apart from some inconsequential typos (see **Other Comments Or Suggestions** section for details). Experimental Designs Or Analyses: Experimental results show the practical interest of the proposed method. Supplementary Material: I reviewed the related work section and the proof in the deterministic setting. Relation To Broader Scientific Literature: The paper proposes an adaptation of the single-loop algorithmic framework introduced in [1], where the directions for solving the inner linear system are approximated using Newton steps. It is worth noting that [2] and [3] explore the use of quasi-Newton steps to approximate the inner solution. Furthermore, [3] utilizes the quasi-Newton directions employed in solving the inner problem to enhance the resolution of the linear system. However, unlike this paper, [2] and [3] do not provide non-asymptotic results. [1] Dagréou, M., Ablin, P., Vaiter, S., and Moreau, T. *A framework for bilevel optimization that enables stochastic and global variance reduction algorithms*. NeurIPS 2022. [2] Pedregosa, F. *Hyperparameter optimization with approximate gradient*. ICML 2016. [3] Ramzi, Z., Mannel, F., Bai, S., Starck, J.-L., Ciuciu, P., and Moreau, T. *Shine: Sharing the inverse estimate from the forward pass for bi-level optimization and implicit models*. ICLR, 2022. Essential References Not Discussed: To my knowledge, no essential reference is missing. Other Strengths And Weaknesses: ### Strengths * The paper is well-written. * The method is novel to my knowledge. * Potential improvements with variance reduction and momentum are discussed. ### Weaknesses * The stochastic result assumes large batch sizes which scale linearly with the number of iterations. This setting does not match practice. * The gain in the stochastic setting is limited, despite this setting being important in practice. 
Other Comments Or Suggestions: * **Line 39**: when the citation is part of the sentence, it should not be in parentheses (i.e. it should be `\citet` instead of `\citep`). * **Line 131**: *"(2) Observe that v∗(x, y) and u∗(x, y) share the same Hessian inverse."* It seems that this sentence is not supposed to be here. * **Line 150**: *"we write $u = u^k - w$"*, $u^k$ is used before being defined. * **Line 303**: The equation goes beyond the margin. * **Algorithms 3 and 5**: I guess by reading the proof that it is $w^{-1,k} = u^k$ instead of $w^{-1,k} = 0$ in the initialization of the algorithms. * **Theorems 3.2 and 3.7**: By reading the text, it is not clear what BOX 1 and BOX 2 refer to. These things should be introduced before the theorems. * It would be nice to have a summary of complexity results for the stochastic setting, as done in Table 1 for the deterministic setting. * **Box 2**: The same batches $B_0$ and $B_0'$ are used in all the iterations. Why? * **Line 1049**: *"$\nabla_2f$"* -> *"$\nabla_2g$"*. * **Line 1142-1143**: I think there is no $\frac12$ as a factor of $\lVert y^k-y^*(x^k)\rVert^2$ anymore if the inequality comes from $(a+b)^2\leq 2a^2 + 2b^2$. * **Section E.2 and first line of section E.2**: It should be Theorem 3.7, not Theorem 3.2. * **Equation (52)**: Isn't the equality an inequality actually? Questions For Authors: * As it is common to warm start the subsolvers in bilevel optimization [1, 2], I wonder why Algorithms 3 and 5 do not set $v^{-1,k} = y^k$ instead of $v^{-1,k} = 0$. [1] Ji, K., Yang, J., and Liang, Y. *Bilevel optimization: Convergence analysis and enhanced design*. ICML 2021. [2] Arbel, M. and Mairal, J. *Amortized implicit differentiation for stochastic bilevel optimization*. ICLR 2022. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and valuable feedback. We will address each point in detail below. The figures mentioned below are in this anonymous link: https://drive.google.com/file/d/1v6ftNYExUb_ClkgoS7b9wU2Q3nNY1IsP/view?usp=sharing ## Other Strengths And Weaknesses: * **Concern about batch size:** The large batch size we choose helps achieve better sample complexity. If the batch size is set to $O(1)$, we can obtain a sample complexity of $O(\kappa^{16} \epsilon^{-2})$, which is worse than $O(\kappa^9 \epsilon^{-2})$ in Theorem 3.7. Since SOBA is a special case of our NBO framework (as noted in Remark 2.1 (i) of our work), their analysis also applies to our NSBO-SGD when $T=0$. Moreover, a large batch size related to $ \epsilon $ or $ K $ is commonly used in the stochastic bilevel optimization literature (see, e.g., Ji et al., 2021; Arbel & Mairal, 2022). * **Our improvements in the stochastic setting and in practice:** In theory, as stated after Theorem 3.7, our NSBO-SGD improves upon the SOTA result of AmIGO by a factor of $ \log \kappa $ in the stochastic setting, where AmIGO also employs a large batch size. In practice, our algorithms show significant advantages in experiments, even when using small batch sizes (64 or 256). Additionally, we add experiments on meta-learning using the Omniglot and miniImageNet (about 3GB) datasets, with CNN4 networks. We compared our NBO with the SOTA algorithms PZOBO in [1] and qNBO in [2]. The results, presented in Fig. 1 and 2 of the [anonymous link](https://drive.google.com/file/d/1v6ftNYExUb_ClkgoS7b9wU2Q3nNY1IsP/view?usp=sharing), demonstrate that NBO consistently outperforms the other methods on both datasets. [1] Sow et al., On the convergence theory for hessian-free bilevel algorithms, NeurIPS 2022. [2] Fang et al., qNBO: quasi-Newton Meets Bilevel Optimization, ICLR 2025. ## Other Comments Or Suggestions: Thank you for your careful reading. 
We will address each point as follows: * **Line 39:** We will revise it. * **Line 131:** This sentence represents the second aspect of our motivation. Although we have not fully utilized this point, we still believe it is important. * **Line 150:** Here we use warm-start for $u$ and will make revisions to clarify this. * **Line 303:** We will revise it. * **Algorithm 3 and 5:** The initialization point $w^{-1,k}=0$ is correct. With this initialization, we obtain $w^{0,k}=\gamma_k d_u^k$ (where $d_u^k$ is defined in Line 150), ensuring that the second equality in (44) holds. * **Theorem 3.2 and 3.7:** BOX 1 and 2 represent the initialization strategies of Algorithm 2 and 4, respectively. We will add an introduction before these theorems. * **Summary:** Depending on the available space in the final manuscript, we will consider adding a summary table for the stochastic setting. * **Box 2:** Thank you for pointing out the typo. $B_0$ and $B_0^{'}$ should be replaced by $B_{0,n}$ and $B^{'}_{0,q}$ respectively, while maintaining the same batch size. * **Line 1049:** We will revise it. * **Line 1142-1143:** The constant $\frac{1}{2}$ arises from $2 \times (\frac{1}{2})^2$. * **Section E.2 and the first line of section E.2:** We will revise them. * **Equation (52):** We will revise it. ## Questions For Authors: * **Concern about warm-start:** The reason we use warm-start for $u$ but not for $v$ is that $v$ directly affects $||y^{k+1}-y^{k}||$, while $u$ does not. Indeed, in the $k$-th iteration of NBO, $v^k$ serves as an inexact approximation of the Newton direction $v^*(x^k,y^k):=[\nabla_{22}^2g(x^k,y^k)]^{-1} \nabla_2 g(x^k,y^k)$, whereas $y^{k}$ approximates $y^*(x^{k})$. We update $y^{k+1}= y^k - v^k$ using a single inexact Newton step. 
If we further compute $v^{k+1} = v^{k} - \gamma_k d_v^{k} $ (**using one-step gradient descent with warm-start**), according to the Lyapunov function argument, we estimate $||v^{k+1} - v^*(x^{k+1},y^{k+1})|| \leq (1-\gamma_{k} \mu)||v^{k} - v^*(x^{k},y^{k})|| + ||v^*(x^{k+1},y^{k+1})-v^*(x^{k},y^{k})||. (1) $ Similar to $y^*(x)$ and $u^*(x)$, $v^*(x,y)$ is Lipschitz continuous and we can get $||v^*(x^{k+1},y^{k+1})-v^*(x^{k},y^{k})||\leq \frac{2L_{g,1}}{\mu}(||x^{k+1} - x^{k}||+||y^{k+1} - y^{k}||). (2)$ Note that $||y^{k+1} - y^{k}||\leq ||v^k-v^*(x^{k},y^{k})||+||v^*(x^{k},y^{k})||\leq ||v^k-v^*(x^{k},y^{k})||+\frac{L_{g,1}}{\mu}||y^{k}-y^*(x^k)||. (3)$ Substituting (2) and (3) into (1), we obtain $||v^{k+1} - v^*(x^{k+1},y^{k+1})|| \leq (1-\gamma_{k} \mu+\frac{2L_{g,1}}{\mu})||v^{k} - v^*(x^{k},y^{k})|| + others. $ Since $1-\gamma_{k} \mu+\frac{2L_{g,1}}{\mu}>1$ when $\gamma_{k}\leq 1/L_{g,1}$, the Lyapunov function argument fails to hold, indicating that the above warm-start strategy for $v$ is problematic. If multi-step gradient descent is performed for $v$, we choose to initialize at 0 because, in this setting, when $T=0$, NBO reduces to the single-loop algorithm framework proposed by (Dagréou et al., 2022) .
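The inexact Newton update defended in this response can be sketched on a fixed quadratic lower-level problem (a minimal illustration with made-up data, not the authors' implementation): $v^k$ is obtained by $T$ gradient steps on the quadratic associated with the linear system $\nabla^2_{22} g \, v = \nabla_2 g$, starting from $v = 0$ as in Algorithms 3 and 5, and $y$ is then updated by the single inexact Newton step $y^{k+1} = y^k - v^k$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
M = rng.standard_normal((d, d))
H = M @ M.T + d * np.eye(d)            # Hessian of a quadratic lower-level g
b_lin = rng.standard_normal(d)
y_star = np.linalg.solve(H, b_lin)     # minimizer of g(y) = 1/2 y^T H y - b^T y

def inexact_newton_step(y, T, gamma):
    """One NBO-style update: approximate v = H^{-1} grad g(y) by T GD steps
    on the quadratic 1/2 v^T H v - v^T grad g(y), starting from v = 0."""
    g_grad = H @ y - b_lin
    v = np.zeros(d)
    for _ in range(T):
        v = v - gamma * (H @ v - g_grad)
    return y - v

gamma = 1.0 / np.linalg.eigvalsh(H).max()
y = rng.standard_normal(d)
errs = [np.linalg.norm(y - y_star)]
for _ in range(20):
    y = inexact_newton_step(y, T=10, gamma=gamma)
    errs.append(np.linalg.norm(y - y_star))

assert errs[-1] < 1e-3 * errs[0]       # the error contracts across outer steps
```

With $T = 0$ the inner loop returns $v = 0$ and $y$ is unchanged, consistent with the remark that the framework reduces to the single-loop scheme of (Dagréou et al., 2022) in that regime.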
Summary: This paper proposes a novel method for bilevel optimization, focusing on improving hypergradient estimation by incorporating curvature information. The key contributions include: (1) New Algorithmic Framework: An enhanced algorithm using an inexact Newton method, with improved computational complexity and convergence guarantees in both deterministic and stochastic settings. (2) Empirical Results: Numerical experiments demonstrate significant performance benefits compared to existing gradient-based methods. ## update after rebuttal Through the rebuttal, I believe this work presents a general and efficient gradient-based framework for bilevel optimization, which can be extended to the non-strongly convex setting using techniques such as the BAMM method. The motivation and approach are highly interesting and are supported by strong theoretical guarantees. Therefore, I recommend acceptance. Claims And Evidence: Yes, the paper provides both theoretical and empirical evidence to support the claims made. Methods And Evaluation Criteria: Yes. In the theoretical part, the authors use $\| \nabla \Phi(x) \|$ as the stationarity measure, which is reasonable for bilevel problems with a strongly convex lower-level problem. In the experimental part, the benchmarks used by the authors are also reasonable; various types of bilevel algorithms are included. Theoretical Claims: I have checked the proofs of Theorems 3.2 and 3.7 and did not find any errors. Experimental Designs Or Analyses: I have checked the experimental designs and analyses, including the models, datasets, and step size selection. Specifically, the step sizes are tuned via grid search, which is fair. Supplementary Material: The supplementary material contains the code for the experiments. I have checked the code for the proposed algorithm NSBO, including “NSBO.py” and “NSBO_deter.py” in the “solvers” folder. 
Relation To Broader Scientific Literature: The hypergradient method is a commonly employed approach for solving bilevel optimization problems in the literature. However, most previous works utilized first-order methods to estimate the hypergradient. The key contribution of this paper lies in its use of curvature information, which is theoretically proven to be more efficient by the authors. This advancement represents an improvement over existing gradient-based methods. Essential References Not Discussed: I did not find any missing references. As far as I know, in terms of complexity, this paper compares its results with the state-of-the-art works. Other Strengths And Weaknesses: Strengths: (1) The authors propose a novel method for estimating the hypergradient, which marks a significant departure from classical gradient-based methods. This method leverages the unique structure of the hypergradient, where a Hessian is shared when using an inexact Newton method. This idea is interesting and I think it is a breakthrough. (2) Particularly, the authors provide theoretical results that the proposed algorithm achieves lower computational complexity compared to existing methods, thereby highlighting the advantages of using curvature information. Weaknesses: The authors only consider the lower-level strongly convex case. (Actually, I know that the non-strongly convex case is very challenging.) Other Comments Or Suggestions: There is a citation typo in Table 1. The reference for "No-loop AID" should be (Ji et al., 2022). Questions For Authors: Please discuss the possibility of extending the algorithm proposed in this paper to bilevel optimization problems where the lower-level problem is non-strongly convex. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions. We will address each point in detail below. The referenced figures are compiled in a single-page PDF (containing only figures) available at the anonymous link: https://drive.google.com/file/d/15xqtvUMRk7Ah7Gi5hnvBQWyuZ6W15zX8/view?usp=sharing ## Other Strengths And Weaknesses & Questions For Authors: * **Extension to lower-level non-strongly convex problems:** The NBO framework is primarily designed for bilevel optimization problems with a strongly convex structure. For non-strongly convex problems, existing methods typically involve reformulating the original problem to incorporate a strongly convex structure. Once this structure is established, our NBO framework can be applied. For instance, in BAMM method [1], when $g$ is merely convex, an aggregation function $\phi_{\mu} = \mu f + (1 - \mu) g$ is defined, which is strongly convex when $f$ is strongly convex, then an approximated hypergradient $d_x^k$ can be computed by replacing $g$ with $\phi_{\mu}$. We compared the BAMM method and BAMM+NBO (i.e., using NBO to compute $d_x^k$ in BAMM) on the toy example (13) in [1]. The result, presented in Fig. 1 of the [anonymous link](https://drive.google.com/file/d/15xqtvUMRk7Ah7Gi5hnvBQWyuZ6W15zX8/view?usp=sharing), shows that BAMM+NBO significantly outperforms BAMM, highlighting the effectiveness of the NBO framework. Due to space constraints, we primarily focused on strongly convex lower-level problems in this work, which limited our ability to fully showcase the versatility of NBO. We appreciate your valuable suggestions and will include additional discussion in the revised manuscript to further demonstrate the versatility of NBO. ## Other Comments Or Suggestions: Thank you for pointing this out. We will revise this citation typo. [1] Liu et al., Averaged Method of Multipliers for BiLevel Optimization without Lower-Level Strong Convexity, ICML 2023.
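The reformulation described in this response can be checked numerically on quadratic stand-ins (an illustrative sketch; the matrices and the value of $\mu$ are made up): if $f$ has Hessian $H_f \succ 0$ and $g$ has a rank-deficient Hessian $H_g \succeq 0$, the aggregation $\phi_\mu = \mu f + (1-\mu) g$ has Hessian $\mu H_f + (1-\mu) H_g \succ 0$, i.e. a strongly convex structure to which a framework like NBO can then be applied:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 6
# f strongly convex (Hessian Hf > 0); g merely convex (Hessian Hg >= 0, singular here)
Mf = rng.standard_normal((d, d))
Hf = Mf @ Mf.T + np.eye(d)
P = rng.standard_normal((d, d - 2))
Hg = P @ P.T                            # rank-deficient: convex but not strongly convex

mu = 0.3
H_agg = mu * Hf + (1 - mu) * Hg         # Hessian of phi_mu = mu*f + (1-mu)*g

assert abs(np.linalg.eigvalsh(Hg).min()) < 1e-8   # g alone is not strongly convex
assert np.linalg.eigvalsh(H_agg).min() > 0.1      # the aggregation is strongly convex
```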
Summary: This paper introduces a Newton-based approach to efficiently compute hypergradients in bilevel optimization. Instead of directly inverting the Hessian, which is costly, the method approximates Hessian-inverse-vector products (HVPs) to improve computational efficiency. The proposed Newton-based bilevel optimizer (NBO) works for both deterministic and stochastic settings and provides theoretical convergence guarantees. The main idea is to use curvature information from the inner problem to improve hypergradient estimation while keeping computation manageable. Compared to existing first-order methods, the approach reduces complexity and converges faster. Experiments on meta-learning and hyperparameter optimization show that NBO requires fewer iterations to reach comparable or better solutions. Claims And Evidence: The paper’s claims are mostly well supported by empirical evidence and theory. Methods And Evaluation Criteria: Consider Bregman-based curvature-aware methods: - Would Bregman-based methods achieve similar improvements? - Could a hybrid approach combining Newton and Bregman techniques be better? See, e.g., - Enhanced Bilevel Optimization via Bregman Distance by Feihu Huang, Junyi Li, Shangqian Gao, Heng Huang - Online Nonconvex Bilevel Optimization with Bregman Divergences by Jason Bohne, David Rosenberg, Gary Kazantsev, Pawel Polak Theoretical Claims: I did not review the proofs. Experimental Designs Or Analyses: I suggest comparing against other curvature-aware algorithms. Supplementary Material: No. Relation To Broader Scientific Literature: Lack of comparison against second-order methods. Essential References Not Discussed: It does not benchmark against full second-order methods (e.g., ones that directly compute Hessians rather than approximations). It does not compare against Bregman-based methods. 
Other Strengths And Weaknesses: The paper is well-structured with strong theoretical backing, but it could be improved by discussing the scalability of the method to high-dimensional problems and how well it performs empirically when the strong-convexity assumption is violated. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions. We will address each point in detail below. The referenced figures are compiled in a single-page PDF (containing only figures) available at the anonymous link: https://drive.google.com/file/d/1ZEzn2mKcwrPzlBeziFpC1mKGWDeyq6-C/view?usp=sharing ## Methods And Evaluation Criteria: * **Combine with Bregman-based methods:** * The improvements of our NBO framework and Bregman-based methods stem from two different perspectives: the NBO framework improves hypergradient approximation (related to updates of the lower-level variable $y$), while Bregman-based methods use mirror descent to update the upper-level variable $x$. * Thank you for your constructive suggestion. The idea of combining our NBO framework with Bregman-based methods is both interesting and promising. To explore this, we conducted a numerical experiment where we used the NBO framework to approximate the hypergradient, followed by updating $x$ using the mirror descent method. We then compared SBiO-BreD in [1] and SBiO-BreD+NSBO (i.e., using our proposed framework to compute $w_t$ in SBiO-BreD) in the context of data hyper-cleaning. The result, presented in Fig. 1 of the [anonymous link](https://drive.google.com/file/d/1ZEzn2mKcwrPzlBeziFpC1mKGWDeyq6-C/view?usp=sharing), demonstrates that SBiO-BreD+NSBO significantly outperforms SBiO-BreD, highlighting the effectiveness of the NBO framework. ## Experimental Designs Or Analyses & Relation To Broader Scientific Literature: * **Comparison against other curvature-aware algorithms:** We would like to clarify that we have compared our method with SHINE in Experiments (Section 4.2 and 4.3). SHINE is a popular curvature-aware algorithm that employs the quasi-Newton method to solve the lower-level problem. ## Essential References Not Discussed: * **Comparison against exact Hessian methods:** Directly computing the Hessian and its inverse is both time-consuming and memory-consuming. 
In our setting, using the Hessian-vector product is a more practical choice, as it can be efficiently computed and stored with modern automatic differentiation frameworks. To demonstrate this, we conducted a numerical experiment comparing NBO with the exact Hessian inverse (implemented using jax.hessian and jnp.linalg.inv) and NBO-GD for hyperparameter optimization on synthetic data. The result in Fig. 2 of the [anonymous link](https://drive.google.com/file/d/1ZEzn2mKcwrPzlBeziFpC1mKGWDeyq6-C/view?usp=sharing) shows that NBO with the exact Hessian inverse is significantly slower. * About comparison against Bregman-based methods, please see the response to "Methods And Evaluation Criteria". ## Other Strengths And Weaknesses: * **Scalability to large dimensional problems:** Thanks for the suggestion. As shown in Fig. 3 of the [anonymous link](https://drive.google.com/file/d/1ZEzn2mKcwrPzlBeziFpC1mKGWDeyq6-C/view?usp=sharing), we evaluated the scalability of NBO by testing hyperparameter optimization on synthetic data while progressively increasing the problem dimension. * **Extension to non-strongly convex problems:** The NBO framework is primarily designed for bilevel optimization problems with a strongly convex structure. For non-strongly convex problems, existing methods typically involve reformulating the original problem to incorporate a strongly convex structure. Once this structure is established, our NBO framework can be applied. For instance, we compared the BAMM method [2] and BAMM+NBO (i.e., using NBO to compute $d_x^k$ in BAMM). The result, presented in Fig. 4 of the [anonymous link](https://drive.google.com/file/d/1ZEzn2mKcwrPzlBeziFpC1mKGWDeyq6-C/view?usp=sharing), shows that BAMM+NBO significantly outperforms BAMM, highlighting the effectiveness of the NBO framework. Due to space constraints, we primarily focused on strongly convex lower-level problems in this work, which limited our ability to fully showcase the versatility of NBO. 
We appreciate your valuable suggestions and will include additional discussion in the revised manuscript to further demonstrate the versatility of NBO. [1] Huang et al., Enhanced bilevel optimization via bregman distance, NeurIPS 2022. [2] Liu et al., Averaged Method of Multipliers for BiLevel Optimization without Lower-Level Strong Convexity, ICML 2023.
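The point that Hessian-vector products suffice, without ever forming or inverting the Hessian, can be illustrated with a generic matrix-free conjugate-gradient solve (a standard textbook sketch, not the authors' code; the dense matrix `H` below merely stands in for a Hessian accessed only through products):

```python
import numpy as np

def cg_solve(hvp, b, tol=1e-10, max_iter=200):
    """Solve H v = b using only Hessian-vector products hvp(v) = H @ v,
    so the Hessian is never formed or inverted explicitly."""
    v = np.zeros_like(b)
    r = b - hvp(v)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        v += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return v

rng = np.random.default_rng(2)
d = 50
M = rng.standard_normal((d, d))
H = M @ M.T + d * np.eye(d)            # stand-in for a strongly convex Hessian
b = rng.standard_normal(d)

v = cg_solve(lambda u: H @ u, b)       # Hessian-inverse-vector product via CG
assert np.allclose(H @ v, b, atol=1e-6)
```

In an autodiff framework the product `H @ u` would instead come from a Hessian-vector-product primitive, which is exactly why the per-iteration cost stays far below an explicit `jax.hessian` plus `jnp.linalg.inv`.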
Tackling View-Dependent Semantics in 3D Language Gaussian Splatting
Accept (poster)
Summary: This paper proposes a novel method, named LaGa, which tackles the challenge of view-dependent semantics in language-driven open-vocabulary 3D scene understanding. LaGa decomposes the 3D scene into distinct objects and then builds view-aggregated semantic representations by clustering semantic descriptors and reweighting them based on multi-view information. Extensive experiments demonstrate that this approach effectively captures the key nuances of view-dependent semantics, leading to a more comprehensive understanding of 3D scenes. Claims And Evidence: yes Methods And Evaluation Criteria: yes, it does make sense. Theoretical Claims: Yes, I have reviewed the theoretical explanations provided in the paper for the following components: 3D Scene Decomposition, View-Aggregated Semantic Representation, and Weighted Descriptor Relevance Aggregation. I did not find any errors in these sections. Experimental Designs Or Analyses: Extensive experiments demonstrate that LaGa effectively captures key information from view-dependent semantics, enabling a more comprehensive understanding of 3D scenes. Supplementary Material: Yes. Video and supplementary pdf. Relation To Broader Scientific Literature: It is worth noting that the claimed innovation in 3D Scene Decomposition shows notable similarities with the previous work OpenGaussian [1]. I recommend that the authors explicitly discuss these similarities and clarify how their approach differs from and improves upon the prior work. [1] OpenGaussian: Towards Point-Level 3D Gaussian-based Open Vocabulary Understanding Essential References Not Discussed: No. Other Strengths And Weaknesses: ### Strengths This paper is written clearly, with well-articulated motivation and clearly presented figures and charts. The experiments demonstrate that the proposed method can achieve precise open-vocabulary 3D Gaussian localization. 
### Weaknesses The claimed innovation in 3D Scene Decomposition shows notable similarities with the previous work OpenGaussian [1]. I recommend that the authors explicitly discuss these similarities and clarify how their approach differs from and improves upon the prior work. [1] OpenGaussian: Towards Point-Level 3D Gaussian-based Open Vocabulary Understanding Other Comments Or Suggestions: In addition to the quantitative comparisons for 3D open-vocabulary localization, I would also like to see 2D query results, as well as qualitative results on the Mip-NeRF 360 dataset, for example, for “Room” and “Garden”. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We sincerely thank you for your constructive comments. We hope our response can help address your concern.

## Weaknesses

> W1: The claimed innovation in 3D Scene Decomposition shows notable similarities with the previous work on OpenGaussian. I recommend that the authors explicitly discuss these similarities and clarify how their approach differs from and improves upon the prior work.

Thanks for the insightful suggestion. We discuss the relationship between OpenGaussian and LaGa below:

**Similarity:** Both OpenGaussian and LaGa first decompose a 3D scene into objects before associating the scene with semantic information.

**Key Differences in Motivation:** The two methods are built upon fundamentally different motivations. OpenGaussian addresses the inconsistency between 3D point-level features and 2D pixel-level CLIP features, attributing it to (1) inaccurate 2D–3D associations introduced by alpha-blending during differentiable rendering, and (2) limited expressiveness from 3D feature compression for efficient rendering. In contrast, LaGa focuses on tackling the semantic discrepancy among multi-view observations of the same 3D object, which naturally arises in open-vocabulary 3D understanding. This shift in perspective leads to substantially different methodologies.

**Key Differences in Methodology:** To mitigate 2D–3D feature inconsistency, OpenGaussian decomposes the 3D scene so that it can directly assign a 2D CLIP feature to each Gaussian of an object. It employs a rule-based representative view selection strategy for this assignment. While straightforward, this strategy neglects the rich information embedded in multi-view semantics and leads to suboptimal performance. In contrast, LaGa extracts semantic descriptors across views using adaptive clustering and then adopts weighted descriptor relevance aggregation to suppress noisy semantics and enhance robustness. This design enables LaGa to construct a more comprehensive and robust 3D semantic representation. Notably, LaGa does not involve any 3D vision-language feature training, and thus is also unaffected by the 2D–3D inconsistency problem that OpenGaussian aims to address. The design of LaGa enables it to tackle complex 3D objects with various multi-view semantics, evidenced by its significant +18.7% mIoU improvement over OpenGaussian.

**On Scene Decomposition:** We acknowledge that our scene decomposition strategy is not the core innovation of LaGa; it is inspired by prior contrastive-learning-based approaches. However, compared to OpenGaussian's two-stage codebook-based decomposition pipeline, which requires manual tuning of two codebook sizes and thus incurs higher computational cost, LaGa adopts a more lightweight solution, i.e., using HDBSCAN to automatically determine the number of decomposed objects based on feature density. This results in comparable decomposition accuracy while significantly reducing system complexity. We will incorporate this clarification and discussion in the revised paper for improved clarity and completeness.

> W2: In addition to the quantitative comparisons for 3D open-vocabulary localization, I would also like to see 2D query results, as well as qualitative results on the Mip-NeRF 360 dataset, for example on "Room" and "Garden".

Thank you for the valuable suggestion. We provide several 2D segmentation masks in Figure 8, which demonstrate the 2D projections of LaGa's 3D query results. Since LaGa does not involve any 2D CLIP feature learning, it is not intended for direct querying on the 2D image plane. Nevertheless, we believe that our 3D-centric paradigm can serve as a more flexible alternative to 2D methods, as the 3D query result can be rendered from arbitrary viewpoints without requiring repeated queries for each individual image.
In addition, we have conducted qualitative experiments on the MIP-360 dataset, including scenes such as Room and Garden, and observed that LaGa generalizes well to these scenes. The corresponding visualizations will be included in the revised paper.
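The HDBSCAN-based decomposition described earlier in this rebuttal infers the number of objects from feature density rather than taking it as a hyperparameter. As a toy illustration of that idea, here is a simplified epsilon-graph clustering sketch in numpy; it is a stand-in for the actual HDBSCAN library, not the authors' implementation, and all names and data are hypothetical.

```python
import numpy as np

def epsilon_cluster(features, eps=0.3):
    """Group features whose pairwise distance is below eps.

    A simplified density-based stand-in for HDBSCAN: the number of
    clusters is inferred from the data, not set in advance.
    """
    n = len(features)
    parent = list(range(n))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Link every pair of features closer than eps.
    dists = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    for i in range(n):
        for j in range(i + 1, n):
            if dists[i, j] < eps:
                parent[find(i)] = find(j)

    roots = [find(i) for i in range(n)]
    _, labels = np.unique(roots, return_inverse=True)
    return labels

# Two tight groups of per-Gaussian affinity features: two clusters emerge
# without specifying the cluster count anywhere.
rng = np.random.default_rng(0)
a = rng.normal([1.0, 0.0], 0.01, size=(5, 2))
b = rng.normal([0.0, 1.0], 0.01, size=(5, 2))
labels = epsilon_cluster(np.vstack([a, b]), eps=0.3)
print(len(set(labels.tolist())))  # number of inferred objects
```

The rebuttal's epsilon=0.3 plays the same role here: features within epsilon of each other merge into one object, so a larger epsilon merges more objects together.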
Summary: The paper addresses the challenge of view-dependent semantics in 3D Gaussian Splatting for language-driven open-vocabulary scene understanding. The authors propose LaGa (Language Gaussians), a method that decomposes 3D scenes into objects and constructs view-aggregated semantic representations by clustering and reweighting semantic descriptors based on multi-view semantics. Specifically, they weigh the descriptor relevance with the directional consistency and internal compactness. The paper claims significant improvements over state-of-the-art methods, particularly on the LERF-OVS dataset, with a +18.7% mIoU improvement.

## Update after rebuttal

I thank the authors for their clarification. I have no further concerns, but I remain neutral regarding this paper's novelty and significance. Therefore, I have decided to maintain my score as weak accept.

Claims And Evidence: The claims made in the paper are generally supported by clear and convincing evidence. The authors provide extensive experiments to validate their motivation, including semantic similarity distribution analysis and semantic retrieval integrity analysis. The quantitative results on the LERF-OVS, 3D-OVS, and ScanNet datasets show improvements over existing methods, supporting the claim that LaGa achieves more comprehensive 3D scene understanding.

Methods And Evaluation Criteria: The proposed methods make sense for the problem at hand. LaGa's approach to decomposing 3D scenes into objects and constructing view-aggregated semantic representations is a logical way to address view-dependent semantics. The evaluation criteria are appropriate, with the authors using standard benchmarks (LERF-OVS, 3D-OVS, and ScanNet) to evaluate their method. The use of mIoU as a metric is standard in semantic segmentation tasks and provides a clear measure of performance.

Theoretical Claims: The paper does not present any theoretical proof, so there are no theoretical claims to evaluate.
Experimental Designs Or Analyses: The experimental designs and analyses are sound and well-executed. The authors conduct a thorough evaluation of their method on multiple datasets and provide both quantitative and qualitative results.

Supplementary Material: I reviewed the additional qualitative results in the supplementary material.

Relation To Broader Scientific Literature: This paper aims to resolve the inconsistency of the 2D language embeddings of CLIP, which is an important task, as currently we only have 2D multimodal foundation models and can only obtain 3D features by distilling features from 2D models. Thus the 3D inconsistency of 2D models is an important problem to tackle.

Essential References Not Discussed: The references seem sufficient.

Other Strengths And Weaknesses:

Strengths:
1. The paper is well-motivated, aiming to address the inconsistency of the 2D CLIP embedding.
2. The experimental results are strong, with improvements over state-of-the-art methods on multiple datasets.

Weaknesses:
1. The proposed method does not support multi-scale segmentation like LangSplat, which could hinder its practicality in open-vocabulary segmentation.
2. Although the proposed method shows strong results, it is more engineering-driven and does not provide enough knowledge advancement. The idea of addressing the feature inconsistency has been proposed in previous work such as OpenGaussian.

Other Comments Or Suggestions: No other comments.

Questions For Authors:
1. Can the method perform multi-scale segmentation?
2. What are the runtime statistics of this method (e.g. how long and how much memory does it take)?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank you for your effort in reviewing our paper. We hope the following response will address your concerns.

## Weaknesses:

> W1: ... does not support multi-scale segmentation like LangSplat ...

Thank you for the comment. LaGa supports multi-scale segmentation, though this is not sufficiently emphasized in the main paper. Some details are provided in Appendix B under the implementation section. Specifically, during the scene decomposition, LaGa maintains three affinity features for each 3D Gaussian, which are supervised using SAM-extracted segmentation masks at the subpart, part, and whole levels, respectively. During inference, the final result is produced by averaging the responses from these three levels, enabling LaGa to capture semantic concepts across varying granularities.

This multi-scale capability is further reflected in the visualization results. For example, in the Ramen scene shown in Fig. 6, OpenGaussian merges the 'kamaboko' with the surrounding noodles due to its lack of multi-scale segmentation ability, whereas LaGa successfully distinguishes it. Similarly, in Fig. 7, LaGa accurately segments the pirate hat from the rubber duck. Moreover, we verify that when queried with 'rubber duck', LaGa retrieves both the duck and its pirate hat, demonstrating its effectiveness in capturing hierarchical part-whole semantics. We will incorporate the visual results of 'rubber duck' into the revised paper, along with more specifically designed visualizations to better illustrate the multi-scale segmentation ability of LaGa.

> W2: ... engineering-driven and does not provide enough knowledge advancement. The idea of addressing the feature inconsistency has been proposed in OpenGaussian.

Thank you for the thoughtful comment. While our method includes practical design choices, we believe it also contributes meaningful insights to open-vocabulary 3D scene understanding. In particular:
1. Identification and analysis of view-dependent semantics: We are the first to systematically identify and quantitatively analyze the phenomenon of view-dependent semantics, as illustrated in Figures 3, 4, and 12. This issue, long overlooked by prior work including OpenGaussian, plays a critical role in open-vocabulary 3D understanding and poses unique challenges for multi-view semantic aggregation.
2. An effective approach for semantic aggregation across views: We propose an effective approach, LaGa, which is able to robustly suppress noisy or inconsistent semantics while preserving informative signals, without requiring manual rules or task-specific heuristics. We believe the simplicity, generalizability, and effectiveness of our approach can serve as a good foundation for future research in this area.

**Comparison with OpenGaussian:** While both LaGa and OpenGaussian aim to address feature inconsistency, their motivations and methodologies differ fundamentally. OpenGaussian focuses on the **inconsistency between 3D point-level features and 2D pixel-level CLIP features**, attributing the problem to (1) inaccurate 2D–3D feature association caused by the alpha-blending in differentiable rendering, and (2) limited expressiveness due to 3D feature compression for rendering efficiency. To address this, OpenGaussian decomposes the 3D scene, so that it can adopt a hand-crafted strategy to directly assign a 2D CLIP feature from a specific view to all 3D Gaussians within an object. However, this hard assignment discards rich multi-view semantic cues, resulting in substantial information loss and ultimately sub-optimal performance.

In contrast, LaGa recognizes feature inconsistency as the **semantic discrepancy among multi-view observations of the same 3D object**, which arises naturally in 3D scene understanding. To address this, LaGa proposes to aggregate view-specific 2D semantics into comprehensive and robust 3D semantic representations through adaptive semantic descriptor extraction and descriptor reweighting. Note that LaGa does not involve any 3D vision-language feature training, and thus is also unaffected by the 2D–3D inconsistency problem that OpenGaussian aims to address. The revelation of the critical view-dependent semantics issue and the corresponding improvements help LaGa achieve a significant performance gain: LaGa outperforms OpenGaussian by +18.7% mIoU under the same experimental setting, validating the effectiveness of our approach.

## Questions:

> Q1: Multi-scale segmentation?

Yes. Please refer to our response to W1.

> Q2: Runtime statistics of this method (e.g. how long and how much memory does it take)?

LaGa is highly efficient. We evaluate its runtime performance on the LERF-OVS dataset using a single NVIDIA RTX 3090 GPU. The inference time per query ranges from approximately 80 ms to 200 ms, with peak GPU memory usage between 5 GiB and 13 GiB. Note that compared with 2D methods like LangSplat, which takes about 250 ms per query on the rendered feature map, LaGa directly delivers 3D point-wise understanding.
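The descriptor reweighting described in this rebuttal (aggregating multi-view semantic descriptors while down-weighting noisy ones) can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not LaGa's actual aggregation rule: the function name, the toy 2D vectors, and the max-style aggregation are all invented for illustration.

```python
import numpy as np

def query_relevance(descriptors, weights, text_embedding):
    """Toy relevance of a text query to an object's multi-view descriptors.

    Each descriptor's cosine similarity to the query is softened by a
    per-descriptor weight, so noisy descriptors contribute less.
    """
    d = descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)
    q = text_embedding / np.linalg.norm(text_embedding)
    sims = d @ q                          # cosine similarity per descriptor
    return float(np.max(weights * sims))  # weighted relevance

# A descriptor consistent with the query dominates; a down-weighted noisy
# descriptor (the "knife"-like view of a book spine) contributes little.
book_view = np.array([1.0, 0.0])
knife_view = np.array([0.0, 1.0])
rel = query_relevance(np.stack([book_view, knife_view]),
                      weights=np.array([1.0, 0.2]),
                      text_embedding=np.array([1.0, 0.0]))
print(rel)  # 1.0
```

Querying the same object with a "knife"-like embedding would yield only 0.2 here, because that descriptor's weight suppresses its contribution.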
Summary: The paper proposes to perform open-vocabulary semantic segmentation of 3DGS scenes that respects view-dependent and view-independent semantics. Using SAM masks, per-Gaussian contrastive features are learned to obtain 3D object clusters. The method is evaluated on the LERF-OVS and ScanNet datasets, showcasing competitive quality.

Claims And Evidence: The paper claims that prior methods for 3D semantic understanding suffer from view-dependency of 3D semantics, which limits their quality and robustness. The claim is supported by experiments on precision and recall rates for the 2D-3D semantic features.

Methods And Evaluation Criteria: The problem is well-motivated with experimental analyses. The core part of the method is learning contrastive features, which is similar to SAGA and GARField, among many others. The biggest drawback of this method is the steps that build on top of it. Overall, the clustering and filtering approaches performed seem crude and not that impactful based on quantitative evaluations. They also seem to require careful tuning of hyperparameters to work well. It would be great if the authors could comment on this.

The HDBSCAN performed on top of \hat{f}_M seems to dictate the number of segmented objects in the scene and affect semantic view-aggregation downstream. But how is the number of clusters N_p obtained in the first place? A wrong value could under- or over-segment the scene, leading to misaligned semantics.

Sensitivity to SAM masks: How do you handle SAM masks that aren't consistent in mask boundaries for a particular object across multiple frames? This is a very common artifact in the LERF dataset for \textit{figurines}, \textit{ramen}, and \textit{teatime}.

What is the reason for performing k-means clustering in Cross-View Descriptor Extraction? An alternative is to perform farthest point sampling, which would represent the most "diverse" viewpoints.
With k-means there is a possibility of averaging out far-away features, which might be undesirable.

The quantitative evaluations are performed by rendering the 2D binary map. It's not clear if this is completely sufficient to show 3D segmentation quality unless 1) evaluation is performed directly in 3D or 2) 360-degree 2D evaluation is performed. Only Table 3 and Figure 6 actually support the claim of 3D segmentation quality.

Table 4:
1) Why is -DW better than +DW for fixed k=5,10,20 on figurines and teatime?
2) For the Waldo kitchen scene with adaptive clustering, -DW performs better than +DW^c and +DW^d. But the combined +DW is better than -DW. How is this possible?
3) For the Teatime scene, average pooling is better than fixed k=5,20 but worse than k=10. What is the reasoning for this? Is avg pooling for k > 20?
4) The adaptive scheme is not a clear winner in any one scene.

Theoretical Claims: N/A

Experimental Designs Or Analyses: N/A

Supplementary Material: Yes, appendix and video visualization.

Relation To Broader Scientific Literature: The contrastive approach followed in the paper is in line with previous works like SAGA and GARField, among others, that perform 3DGS segmentation using 2D contrastive losses.

Essential References Not Discussed: Table 1: SAGA and GARField are strong baselines to compare against, which are missing in the evaluations.

Other Strengths And Weaknesses: The method follows a well-established approach to segment 3D Gaussians using contrastive losses. By performing clustering and further view-weighted associations on top of the contrastive features, they are robust to viewpoints and retain semantics for all Gaussians belonging to a semantic entity. However, this would be sensitive to hyperparameters and would require extensive tuning per scene/dataset.

Other Comments Or Suggestions:
- Eq. 3 should be performed per mask: M^I_{i} and not M^{I}.
- Eq. 8 should be C, not C', for the dimension of d_{i}.
- Eq. 10: exp <d \dot \phi^{i}_{canon}>: \dot -> ,

Questions For Authors:
- For evaluation, how are the binarized Gaussians associated with the 2D ground-truth masks? Based on mask IoU?
- What is the effect of Cross-View Descriptor Extraction individually as an ablation?
- For internal compactness, couldn't larger clusters have an L2 norm much greater than 1?
- L632-633: "... follow LangSplat to train a three-layer model...". Do you train a complete LangSplat model (learning CLIP features on Gaussians)? Why is that necessary? It seems quite overkill for the purpose of getting multi-view consistent 2D segmentations.

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1:

Rebuttal: Thanks for the careful evaluation.

## Weaknesses

> W1: Core part similar to SAGA, GARField ... seem crude and not impactful...

This is a misunderstanding about our method's core. Rather than decomposition, its core lies in the view-aggregated representation, which we believe is not crude. It consists of two novel modules:
1. Cross-view descriptor extraction adaptively captures multi-view semantics for objects.
2. Weighted descriptor relevance aggregation refines the representation by assigning weights to descriptors. Unlike hard filtering with information loss, all descriptors are preserved.

LaGa improves 18.7% mIoU over SOTA, and the ablation shows +10.6% over the baseline. This is clearly impactful.

> W2: ... require careful tuning ...

The view-aggregated representation needs no sensitive tuning. Its K_max is fixed to 20 across all scenes/datasets. It is stable across K_max from 5 to 30:

|K_max|5|10|15|20|30|
|-|-|-|-|-|-|
|mIoU|63.4|63.4|64.1|64.0|63.2|

> W3: HDBSCAN ... how to obtain number of clusters N_p ...

HDBSCAN automatically infers N_p. Its epsilon is fixed to 0.3 without per-scene tuning. As shown below, LaGa has stable performance across a wide range (0-0.3). Too large epsilons lead to object merging.

|Epsilon|0|0.1|0.2|0.3|0.4|
|-|-|-|-|-|-|
|mIoU|62.6|63.0|62.1|64.0|60.6|

> W4: Sensitive to SAM masks

Boundary regions mis-segmented in one view often appear as interior in others. Through multi-view aggregation, these inconsistencies can be eliminated in 3D, similar to prior work (SemanticNeRF, SA3D).

> W5: ... reason for k-means ... why not FPS

We adopt k-means to reduce noise by aggregating features locally in the semantic space. In contrast, FPS may preserve more outliers. For the concern about averaging distant features, we add a denoising step to discard features far from their centroids (brings a minimal +0.2% mIoU). FPS performs worse:

|FPS|Ours|Ours+Denoise|
|-|-|-|
|52.4|64.0|64.2|

> W6: ... evaluations in 2D ...

No standard 3D benchmark exists for open-vocabulary 3D-GS. We think a sparse set of well-chosen 2D annotated views is a proxy, as neighboring views have redundant information. We checked the LERF-OVS annotations and found them sufficient in multi-view coverage. This is why most works use it as the main evaluation.

> W7: Ablation

> W7-1: ... -DW vs +DW on F. and T.?
> W7-3: T., Avg pooling ...

The performance fluctuations in F. and T. (Table 4) stem from the sensitivity of fixed k-means clustering. A manually set K may be too small to capture semantic diversity or too large, causing outliers. Both are harmful for hard cases. In Figurines, querying "Pikachu" targets a plastic bag with a Pikachu print. Even K=5 splits the "Pikachu bag" semantics from "bag". With +DW, they are down-weighted as outliers. In Teatime, CLIP sometimes misclassifies a plate as "apple". Avg. pooling or small K merge such descriptors, while K=10 offers better separation. K=20 introduces additional noise. Avg. pooling performs well in Teatime because its objects have more consistent multi-view semantics than other scenes. Avg. pooling is independent of K, simply averaging all multi-view features per object. Although corner cases are rare, they affect performance (Pikachu: 97% to 31%; Apple: 87% to 21%) and bring instability. Our adaptive strategy successfully handles them and enables consistent gains with +DW.

> W7-2: Waldo kitchen...

Thanks for finding this. It is an error caused by swapped DW^c and DW^d results between the F. and W. columns. The correct results are:

| |F.|W.|
|-|-|-|
|-DW|59.7|62.8|
|+DW^c|61.6|59.2|
|+DW^d|60.8|63.9|
|+DW|64.1|65.6|

> W7-4: ... adaptive scheme not a clear winner

This is a misunderstanding. The adaptive scheme is designed to deliver consistently strong and robust performance across diverse scenes without per-scene manual tuning.

> W8 (reference): SAGA, GARField ...

They are designed for class-agnostic segmentation, rather than our task. To our knowledge, no prior work evaluates them on LERF-OVS. For comparison, we adapt the released SAGA code for 3D-OVS to deliver the following results:

|F.|T.|R.|W.|Mean|
|-|-|-|-|-|
|36.2|19.3|53.1|14.4|30.7|

## Questions

> Q1: Binarized Gaussians with 2D GT masks.

During evaluation, for each query, binarized Gaussians are projected to 2D via 3D-GS rendering. The resulting 2D masks are compared with GT without any association step.

> Q2: Cross-View Descriptor Extraction as ablation.

Disabling it is infeasible, as 3D objects require at least one descriptor. Instead, we evaluate it by comparing the adaptive scheme with alternatives (Sec. 6.4). '-DW' (Table 4) shows the results of this module working alone.

> Q3: Larger clusters with L2 norm > 1.

This won't happen, since all features are L2-normalized before clustering (L. 192).

> Q4: Train LangSplat?

No. We do not train LangSplat (nor CLIP features for Gaussians) in LaGa. L632-633 indicates we train three affinity features for each 3D Gaussian to maintain the multi-level SAM segmentation. We will clarify.
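The evaluation protocol described in Q1 (project binarized Gaussians to a 2D mask via rendering, then compare against the ground-truth mask) reduces to a per-query IoU on binary masks. A minimal sketch, with invented toy masks standing in for the rendered and annotated ones:

```python
import numpy as np

def mask_iou(pred, gt):
    """IoU between a rendered binary query mask and a ground-truth mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

pred = np.zeros((4, 4)); pred[:2, :2] = 1   # toy rendered 2D projection
gt = np.zeros((4, 4)); gt[:2, :3] = 1       # toy annotated ground truth
print(mask_iou(pred, gt))                   # 4 overlap pixels / 6 union pixels
```

mIoU for a scene would then average this quantity over all text queries and annotated views.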
Summary: This paper proposes LaGa, which explores open-vocabulary 3D Gaussian understanding by decomposing the 3D scene into objects and then establishing view-dependent semantic connections. The proposed approach does not rely on aligning 3D Gaussian semantic features with 2D semantic priors and is therefore simple but effective. Extensive experiments demonstrate the effectiveness of LaGa.

Claims And Evidence: Yes. The superiority of the experimental results is demonstrated through numerical metric comparisons and visual comparisons. Ablation studies and the appendix also demonstrate the effectiveness of the proposed method.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: No proofs or theoretical claims.

Experimental Designs Or Analyses: Yes. The authors compare their method with the latest approaches on widely used benchmarks. While there is a gap compared to the latest metrics on the 3D-OVS dataset, LaGa achieves the best results on the LERF-OVS and ScanNet datasets. The authors also conducted a thorough analysis of the experimental results.

Supplementary Material: Yes, I carefully checked every part of the supplementary material.

Relation To Broader Scientific Literature: Compared to baseline methods, this paper's contribution lies in addressing view-dependent semantics and establishing cross-view semantic connections to explicitly capture view-dependent 3D semantics.

Essential References Not Discussed: None.

Other Strengths And Weaknesses:

Weaknesses:
1. It would be better to display some failure cases and provide corresponding analysis.
2. I did not find a reasonable understanding of the weighted descriptor relevance aggregation module. The ablation study of average pooling and max pooling does demonstrate the necessity of this module, but there doesn't seem to be a clear explanation of why this design works. Some further clarification is needed.
Other Comments Or Suggestions: Typos: first paragraph of Section 4: "3D objects Section 5.3" -> "3D objects as illustrated in Section 5.3"

Questions For Authors: How do you handle the issue of varying segmentation granularity of SAM across different views? If there are inconsistent segmentations across different views, how does this issue affect the final result? Further discussion on this problem would be beneficial.

######--------------

Update: The rebuttal has addressed my concerns. I will maintain my score of weak accept.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely thank you for the instructive comments. We hope our response can help clear your concerns.

## Weaknesses

> W1: ... failure cases and analysis.

Thanks for the suggestion. We summarize key failure cases and provide representative examples:
1. Bag-of-Words Effect in CLIP: When prompted with phrases like 'pirate hat on the rubber duck' or 'cookies in the plate,' CLIP often activates on individual nouns rather than the intended composite concept. This also affects LaGa. Future work may use large language models for better compositional grounding.
2. Lack of Context in 2D Semantics: LaGa extracts 2D object-level semantics by feeding segmented regions into CLIP. While this reduces distraction from unrelated objects, it also removes necessary context. For instance, in Teatime, hooves are unrecognizable without the full sheep, but with the full object, only 'sheep' is detected, since part-level cues are missing.

These cases reflect the challenges of real-world open-vocabulary perception and the gap between model and human understanding. We will include this discussion and illustrative examples in the paper.

> W2: ... understanding of the weighted descriptor relevance aggregation ... why it works needs clarification ...

We hope the following clarification can help address your concern. Since LaGa does not manually control the segmentation (by SAM) or the subsequent semantic feature extraction (by CLIP), the extracted 2D features may contain incorrect or noisy semantics. For example, the spine of a book may look like a 'knife' from certain views. To mitigate this issue, we adaptively weight each semantic descriptor based on two criteria:
1. Directional Consistency: This metric measures the cosine similarity between each individual descriptor and the global semantics of the object. Descriptors inconsistent with the global semantics receive lower weights. For instance, in the case of the book, descriptors representing a 'passport' align closely with the global semantics 'book,' resulting in higher weights, whereas descriptors resembling a 'knife' are suppressed due to semantic inconsistency.
2. Internal Compactness: In addition to clearly incorrect semantics, some descriptors result from ambiguous or noisy segmentation (e.g., SAM oversegmentation). They may lack clear meaning yet distort the global feature and compromise directional consistency. To address this, we introduce Internal Compactness, defined as the L2 norm of a descriptor. If the features in the cluster are semantically consistent and have similar directions, their average will have a relatively large L2 norm (close to 1). In contrast, if the features are inconsistent with diverse directions, their vector average will cancel out, resulting in a lower norm. Thus, the norm serves as a confidence measure for semantic reliability.

Together, these criteria help LaGa emphasize descriptors that are both globally consistent and semantically coherent. We will clarify this design further in the paper.

> W3: typos.

Thanks. We will fix them.

## Questions

> Q1: (1) ... varying segmentation granularity across views? (2) ... inconsistent segmentations affect the final result? Further discussion

Thank you for the insightful suggestion.

(1) To address varying segmentation granularity, we adopt a multi-level modeling strategy (Appendix B). For each 3D Gaussian, we learn three affinity features corresponding to SAM's subpart, part, and whole-level masks, and construct parallel view-aggregated representations. Predictions from all levels are averaged during inference.

(2) When segmentation inconsistency occurs within a level, LaGa resolves it in a statistical learning manner: for a 3D region observed from multiple views with inconsistent segmentation, the dominant label (i.e., the granularity supported by the majority of views) is reinforced through training, enabling convergence toward a stable 3D prediction. During decomposition, at each level, all masks for the same 3D object are grouped together regardless of granularity. In the cross-view descriptor extraction, masks of different granularities are assigned to separate semantic descriptors if they exhibit semantic discrepancies. This avoids semantic conflicts during aggregation.

This design also enables **multi-granularity semantic retrieval**: when queried with part-level prompts, the whole level may exhibit a high response (e.g., 0.5) if it contains the semantic descriptor of that part. At the part level, the queried part will yield a high response (e.g., 0.6), while the remaining regions produce lower responses (e.g., 0). By averaging predictions, the model produces a high response (0.5 + 0.6) for the queried part, a moderately high response (0.5 + 0) for the whole object, and low responses (0 + 0) for unrelated objects. This hierarchical behavior aligns with human cognition and reflects real-world compositional semantics. We will add the above discussion to our paper to aid understanding.
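The internal-compactness criterion described in the W2 response, the L2 norm of the averaged unit-normalized cluster features, can be checked numerically. A small sketch with invented toy feature vectors (not the paper's actual CLIP features): consistent directions keep the norm near 1, while conflicting directions cancel out.

```python
import numpy as np

def internal_compactness(features):
    """L2 norm of the mean of unit-normalized features.

    Near 1 when the clustered view features point the same way
    (semantically consistent); near 0 when their directions cancel.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    return float(np.linalg.norm(f.mean(axis=0)))

# Views agreeing on one semantic direction vs. views pulling apart.
consistent = np.array([[1.0, 0.0], [0.99, 0.14], [0.99, -0.14]])
noisy = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
print(internal_compactness(consistent))  # close to 1
print(internal_compactness(noisy))       # close to 0
```

This matches the rebuttal's argument that the norm acts as a confidence measure, and also answers the reviewer's Q3: since the inputs are unit vectors, their mean can never have a norm greater than 1.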
Summary: The paper introduces LaGa, a novel method designed to improve open-vocabulary 3D scene understanding by tackling view-dependent semantics in 3D Gaussian Splatting. LaGa works by breaking down the scene into 3D objects using multi-view 2D masks. It then aggregates the semantics of these objects through a combination of adaptive clustering and weighted relevance scoring. Experimental results highlight substantial advancements in 3D perception, with LaGa achieving an impressive +18.7% increase in mIoU (mean Intersection over Union) compared to earlier methods when tested on complex datasets.

Claims And Evidence: Yes. It claims: the view-dependency of 3D semantics causes object understanding to vary with perspective, and simply projecting 2D semantics onto 3D Gaussians results in inaccurate and incomplete scene understanding due to this unaddressed variation. Evidence: Figure 3 and Figure 2.

Methods And Evaluation Criteria: The evaluation uses the LERF-OVS, 3D-OVS, and ScanNet datasets (covering complex 360° scenes, forward-facing views, and real-world scans), paired with mIoU and recall rate metrics to assess segmentation accuracy and object retrieval completeness. These choices are fitting because they test LaGa across diverse scenarios and directly measure its success in overcoming view-dependency, with supporting analyses and ablation studies reinforcing the method's design and effectiveness for the problem at hand.

Theoretical Claims: The paper lacks formal proofs for its theoretical claims (view-dependent semantics and the efficacy of scene decomposition with semantic aggregation), relying instead on logical reasoning and empirical evidence. I assessed their correctness through coherence, data support (e.g., semantic similarity and retrieval analyses showing view-dependency, and an 18.7% mIoU gain validating the method), and consistency, finding no issues.
While not mathematically rigorous, the claims are well-supported by qualitative examples, quantitative results, and ablation studies, making them credible without formal proofs.

Experimental Designs Or Analyses: I assessed the experimental designs and analyses validating the LaGa method for addressing view-dependent semantics in 3D Gaussian Splatting, focusing on datasets, metrics, ablation studies, and additional analyses. The datasets (LERF-OVS with complex 360° scenes, 3D-OVS with forward-facing, diverse objects, and ScanNet with real-world scans) are diverse and relevant, testing LaGa across varied scenarios with clear preprocessing steps (e.g., SAM and CLIP usage). Metrics like mIoU and recall rate appropriately measure segmentation accuracy and object retrieval, while ablation studies confirm the value of adaptive clustering and weighting, with strong quantitative support (e.g., 10.6% mIoU drop without clustering).

The designs are mostly sound, with established benchmarks ensuring comparability and analyses like semantic similarity and retrieval integrity (e.g., 50% retrieval failure) reinforcing the problem's scope. However, potential dataset bias, limited qualitative analysis, an untested recall threshold (0.75), and narrow ablation scope (e.g., omitting decomposition details) are issues that could weaken generalizability and transparency if not addressed. Overall, the experiments are robust but could benefit from broader testing and deeper analysis.
Supplementary Material: No Relation To Broader Scientific Literature: The paper’s introduction of LaGa for open-vocabulary 3D scene understanding via 3D Gaussian Splatting significantly advances prior work like NeRF (Mildenhall et al., 2020) and CLIP (Radford et al., 2021) by explicitly tackling view-dependent semantics—a critical challenge rooted in multi-view stereo where object meaning shifts with perspective; LaGa innovates with 3D scene decomposition and view-aggregated representation, drawing from contrastive learning (NeRF-SOS, Fan et al., 2023), mask-based methods (Garfield, Kim et al., 2024), and clustering, offering a direct 3D solution validated on benchmarks like LERF-OVS and ScanNet, surpassing 2D-reliant approaches like LangSplat (Qin et al., 2024). Essential References Not Discussed: No. Other Strengths And Weaknesses: Figure 2 effectively illustrates the motivation behind the study, presenting a clear and intuitive example that guides readers seamlessly through the authors' reasoning. The subsequent figures and experimental results strongly demonstrate the detrimental impact of this motivation on performance. Other Comments Or Suggestions: No. Questions For Authors: No for now. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your time and effort dedicated to evaluating our work. We greatly appreciate your recognition of our motivation, methodological design, and the clarity of our writing. We find the remaining concerns mainly lie in the 'Experimental Designs Or Analyses' section. We hope our responses below will address them. > (Experimental Designs Or Analyses) However, potential dataset bias, limited qualitative analysis, an untested recall threshold (0.75), and narrow ablation scope (e.g., omitting decomposition details) are issues that could weaken generalizability and transparency if not addressed. **Potential dataset bias:** Thank you for the comment. As you mentioned, our evaluation involves diverse benchmarks: LERF-OVS (complex 360° scenes), 3D-OVS (forward-facing views with diverse objects), and ScanNet (real-world scans). We believe this coverage is sufficient to demonstrate the effectiveness of our method. We are not entirely sure which specific bias is being referred to and would be happy to further discuss this concern. **Limited qualitative analysis:** Please kindly refer to our appendix for more visualization results, including more visual comparisons, multi-view visualizations, and results on the 3D-OVS dataset. To further strengthen our qualitative analysis, we will incorporate additional visualization results on multi-granularity segmentation cases and representative failure cases into the paper. **Untested recall threshold (0.75):** The threshold of 0.75 was chosen based on analysis of the precision–recall trade-off under different cosine similarity values. Here, precision refers to the proportion of retrieved 3D Gaussians that actually belong to the corresponding 3D object. Low precision indicates that unrelated Gaussians are being retrieved for a given 2D mask. 
As shown in the table below, lowering the similarity threshold (e.g., to 0.7) increases the proportion of samples with high recall (>0.9), but at the cost of significantly reduced precision. For example, at 0.7, the average precision of high-recall samples drops to just 13.4%. From these observations, we find that thresholds in the [0.75, 0.8] range strike a better balance. We conservatively select 0.75, where 50.2% of samples exhibit low recall, clearly demonstrating the challenge of view-dependent semantics. A threshold of 0.8 also yields valid results, with fewer high-recall samples but higher precision. |Threshold |0.5|0.6|0.7|0.75|0.8|0.9| |-|-|-|-|-|-|-| |Low / (Low+High) (%)|2.6|13.1|33.7|50.2|65.4|93.8| |Average Precision of High (%)|1.7|3.9|13.4|24.1|38.6|48.7| Note that the average precision does not reach 100%, as CLIP features operate at the semantic level, not the instance level. Therefore, 3D Gaussians with similar semantics but belonging to different objects may also be retrieved, even with a high threshold. **Narrow ablation scope:** Thanks for the suggestion. To further demonstrate the effectiveness of our design choices, we ablate two core components of the decomposition process: 1. An ablation study on the **data resampling strategy** used for training the affinity features. 2. A **hyperparameter analysis** of the HDBSCAN decomposition algorithm. Effect of Resampling Strategy: | | F. | T. | R. | W. | Mean | |--------------|------|------|------|------|-------| | w/o Resampling | 54.8 | 70.5 | 40.8 | 57.5 | 55.9 | | w/ Resampling | 64.1 | 70.9 | 55.6 | 65.6 | 64.0 | As shown above, removing the resampling strategy leads to consistent performance drops across all scenes, highlighting its importance for stable learning of affinity features. 
Effect of HDBSCAN Epsilon: | $\epsilon$ | 0 | 0.1 | 0.2 | 0.3 | 0.4 | |--------------|-----|-----|-----|-----|-----| | mIoU (%) | 62.6 | 63.0 | 62.1 | **64.0** | 60.6 | We empirically set epsilon = 0.3 for all experiments. As the table shows, LaGa achieves strong performance within a broad and reasonable range ($\epsilon \in [0, 0.3]$). Performance degradation only occurs when $\epsilon$ becomes too large, potentially causing unintended merging of semantically distinct objects. This analysis shows that LaGa is not sensitive to this hyperparameter.
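The precision–recall analysis behind the 0.75 threshold can be illustrated with a small sketch. All names, similarity values, and membership labels below are hypothetical; the actual analysis runs over CLIP features of 3D Gaussians, but the recall and precision definitions are the ones stated in the rebuttal above.

```python
import numpy as np

def mask_retrieval_stats(sims, is_member, threshold):
    """Retrieve the 3D Gaussians whose cosine similarity to a 2D mask's
    feature reaches `threshold`, then score the retrieval:
      recall    = fraction of the object's Gaussians that are retrieved
      precision = fraction of retrieved Gaussians that belong to the object
    """
    retrieved = sims >= threshold
    tp = np.sum(retrieved & is_member)
    recall = tp / max(np.sum(is_member), 1)
    precision = tp / max(np.sum(retrieved), 1)
    return recall, precision

# Toy example: six Gaussians, three of which truly belong to the object.
sims = np.array([0.90, 0.85, 0.60, 0.95, 0.78, 0.50])
is_member = np.array([True, True, False, True, False, False])
recall, precision = mask_retrieval_stats(sims, is_member, threshold=0.75)
```

Lowering the threshold can only raise recall, but, as the table above shows, it also tends to pull in unrelated Gaussians and lower precision.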
ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference
Accept (spotlight poster)
Summary: This paper presents ShadowKV, a system for long-context LLM inference that optimizes memory usage and throughput with negligible impact on output quality. ShadowKV consists of two key techniques: 1. In GPU memory, it only stores the SVD decomposition of pre-ROPE key caches to reduce memory usage per request. The value cache will be stored in CPU memory and be loaded back to the GPU during the inference. 2. During the inference, it selects a subset of the tokens' KV cache to run the attention, which further increases the batch size and the inference throughput. The evaluation results show that ShadowKV achieves similar quality to the SOTA KV cache eviction baselines and can improve the inference throughput compared to full KV cache computation. ## update after rebuttal Thanks to the authors for answering my questions. The rebuttal has addressed my concerns and I will keep my score of "accept". Claims And Evidence: The first key claim in this paper is that the Pre-ROPE key tensor is much sparser than other intermediate cacheable results during LLM inference. The author provides clear and easy-to-understand evidence in the form of figures to support this claim. The second key claim in the paper is that the attention scores of the adjacent tokens are usually very similar (with a small number of outliers). The authors also used real experimental results to support the claim. However, the authors did not clarify which dataset they used to produce the experimental result, whether the dataset is real or synthetic, and whether it matches the target use case of this paper. This limits the credibility of the claims. Methods And Evaluation Criteria: The evaluation section consists of 3 main parts: 1. evaluating the impact on the LLM generation quality. 2. evaluating the improvement of the runtime efficiency. 3. evaluating the impact of different configurations in the system. In all three parts, the baselines and the setup are clear and easy to understand. 
The selected datasets also fit the target use case of this paper. Overall, the evaluation of the paper is solid. Theoretical Claims: The theoretical analysis and algorithm design (Section 3.1 and Section 4) make sense. Experimental Designs Or Analyses: The experimental design is clear and comprehensive. Supplementary Material: The paper has a long list of supplementary materials, which provide additional technical details and experimental results. The technical detail sounds reasonable, including how ShadowKV manages the KV cache of the newly generated tokens, how it handles outlier tokens, and the detailed latency breakdown of each step in ShadowKV. The additional experimental results cover the quality on a new dataset called InfiniteBench, making the paper more solid. Relation To Broader Scientific Literature: This paper provides a new way of compressing and storing the KV cache. The SVD technique will be useful not only for reducing the memory usage of the KV cache, but also for reducing the cost in KV cache storage use cases. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths - Well-designed system with clear technical details, covering most aspects during the inference. - Solid experimental results with SOTA baselines and a wide range of datasets. - Good baseline selection in performance evaluation (used FlashAttention) Other Comments Or Suggestions: I really enjoyed reading this paper and liked the idea of SVD decomposition of the pre-ROPE key cache. Questions For Authors: It would be great if you could give some details about how the token selection works with state-of-the-art attention frameworks like FlashAttention. Does it require modifying the kernel? Code Of Conduct: Affirmed. Overall Recommendation: 4
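The low-rank pre-RoPE key observation this review highlights can be made concrete with a minimal numpy sketch. The sequence length, head dimension, and rank below are hypothetical, and the synthetic key matrix is built to be low-rank by construction, standing in for the structure the paper observes in real pre-RoPE keys:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-RoPE key cache for one head: (seq_len, head_dim),
# constructed as a product of thin matrices so it is low-rank by design.
seq_len, head_dim, true_rank = 1024, 128, 8
K = rng.normal(size=(seq_len, true_rank)) @ rng.normal(size=(true_rank, head_dim))

# Truncated SVD: keep only A = U_r * s_r and B = V_r^T instead of K itself.
r = 16
U, s, Vt = np.linalg.svd(K, full_matrices=False)
A = U[:, :r] * s[:r]   # (seq_len, r), kept in GPU memory
B = Vt[:r]             # (r, head_dim), kept in GPU memory

# At decode time the needed keys are reconstructed as A @ B
# (RoPE would then be applied to the reconstructed keys).
K_rec = A @ B

rel_err = np.linalg.norm(K - K_rec) / np.linalg.norm(K)
memory_ratio = (A.size + B.size) / K.size   # fraction of full-K storage
```

Because the toy matrix's true rank is below r, the reconstruction is exact up to floating-point error; real key caches are only approximately low-rank, so the rank choice trades memory against a small reconstruction error.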
Rebuttal 1: Rebuttal: Thank you for the supportive comments and for recognizing the novelty of our method and the thorough evaluations. We hope our detailed clarifications below address the remaining concerns. --- ### **Q1: Missing clarification on which dataset was used for observations** We appreciate the reviewer's careful reading. The observations regarding the low-rank structure of pre-RoPE keys, the high similarity of post-RoPE keys, and the KV cache hit rate plots (Figures 2a, 2b, 5b, and 5c) are based on sequences drawn from PG-19 [1], a real-world long-context dataset. We chose PG-19 to ensure that our structural analyses reflect natural language characteristics rather than synthetic artifacts. For Figure 5a, the results are based on sequences sampled from RULER-NIAH, consistent with our downstream evaluation tasks. --- ### **Q2: How does the token selection interact with state-of-the-art attention frameworks like FlashAttention? Does it require kernel modifications?** Thank you for the insightful question. In our implementation, token selection is applied prior to the attention computation. At each decoding step, we identify the most relevant KV tokens based on chunk-level attention scores. For efficiency, the chunk-level attention score computation (GEMM + Softmax) is fused into a single kernel. The selected KV entries are then gathered into a compact buffer, which is passed directly to the FlashAttention interface. Since we only modify the input tensors to attention and not the attention logic itself, no modifications to the FlashAttention kernel are required. This makes ShadowKV easy to integrate into modern inference stacks that already support FlashAttention. [1] Compressive Transformers for Long-Range Sequence Modelling
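A minimal numpy sketch of the selection path described in Q2 above. Using per-chunk mean keys as landmarks, the shapes, and the toy inputs are all illustrative assumptions, not the paper's fused kernels:

```python
import numpy as np

def sparse_attention_step(q, K, V, chunk_size, n_chunks_keep):
    """One decode step: score chunk-level landmarks, keep the top chunks,
    gather their KV entries into a compact buffer, and run ordinary
    softmax attention on that buffer only."""
    seq_len, d = K.shape
    n_chunks = seq_len // chunk_size
    # Chunk-level landmarks: mean key per chunk.
    landmarks = K[:n_chunks * chunk_size].reshape(n_chunks, chunk_size, d).mean(axis=1)
    top_chunks = np.argsort(landmarks @ q)[-n_chunks_keep:]
    idx = np.concatenate([np.arange(c * chunk_size, (c + 1) * chunk_size)
                          for c in top_chunks])
    K_sel, V_sel = K[idx], V[idx]   # compact buffer handed to attention
    w = np.exp(K_sel @ q / np.sqrt(d))
    w /= w.sum()
    return w @ V_sel

# Toy decode step over 64 cached tokens, keeping 2 of 8 chunks.
rng = np.random.default_rng(1)
d = 16
q = rng.normal(size=d)
K = rng.normal(size=(64, d))
V = rng.normal(size=(64, d))
out = sparse_attention_step(q, K, V, chunk_size=8, n_chunks_keep=2)
```

Per the rebuttal, only the gathering differs in the real system (selected keys are reconstructed from the low-rank factors and values fetched from the CPU), and the compact buffer is then passed to FlashAttention unchanged.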
Summary: This paper presents several interesting findings, including the observation that pre-ROPE keys exhibit low-rank properties, and that post-ROPE keys show high similarity with neighboring tokens. Based on these insights, the authors propose two main techniques: Low-Rank Keys and Offloaded Values for Storage, and Accurate KV Selection for Fast Decoding. The designs are well-reasoned and supported by comprehensive experiments. The proposed methods demonstrate strong compression performance, achieving impressive results across multiple models and long-text benchmarks. ## update after rebuttal The rebuttal partially addresses my concerns, so I will maintain my rating of 3. Claims And Evidence: The paper mentions that the reconstruction of the low-rank keys and the movement of values from the CPU to the GPU are synchronized. As a result, the authors omit this part of the process in their calculation of the Theoretical Equivalent Bandwidth. However, it would be important to experimentally consider the time overhead of this step. Specifically, the reconstruction involves an additional matrix multiplication, followed by ROPE, which has O(L) complexity; this might have a significant impact on performance that should be accounted for. Methods And Evaluation Criteria: The authors propose two main techniques: Low-Rank Keys and Offloaded Values for Storage, which effectively reduce the KV cache footprint for long-text sequences, and Accurate KV Selection for Fast Decoding, which improves accuracy. The proposed methods are well-designed and supported by comprehensive experiments. The authors use relevant benchmarks, such as RULER and Longbench, which are highly suitable for the problem at hand, effectively demonstrating the superiority of their approach across multiple models and long-text datasets. Theoretical Claims: The theoretical proofs provided in Section 4.2 are generally sound. 
However, I have some concerns about one of the underlying assumptions: that the time required for low-rank key states reconstruction can be disregarded. As I mentioned in the "Claims and Evidence" section, I find this assumption somewhat questionable. Given that the reconstruction involves an additional matrix multiplication, and ROPE has an O(L) complexity, the potential time cost should be considered more thoroughly. The provided theoretical proofs in Appendix A.2 were reviewed and appear correct and carefully derived, and no immediate issues were identified. Experimental Designs Or Analyses: I reviewed the experimental evidence supporting the two key findings, and I find them reasonable. The selection of RULER and Longbench, which are both long-text datasets, is appropriate for the experimental requirements. However, I believe that the theoretical claims discussed earlier would benefit from some experimental validation to strengthen the argument. Supplementary Material: I have reviewed Appendix A to gain a deeper understanding of the superiority of the proposed method, particularly through the extended experimental results that support the main claims of the paper. Additionally, I reviewed Appendix B, which provided valuable insights into the specific implementation details and further elaboration on the experimental setup. Relation To Broader Scientific Literature: This paper’s findings, such as pre-Rope keys having the lowest rank and post-Rope keys showing high similarity between adjacent tokens, offer valuable insights for KV cache compression. These results extend prior work on transformer optimizations by highlighting key structural patterns, which can lead to more efficient compression techniques and improved memory and computational performance in large models. 
Essential References Not Discussed: Previous works like Eigen-Attention [1], HeadsKV [2], and MatryoshkaKV [3] have also explored the low-rank properties of the KV cache and utilized corresponding methods, such as SVD/PCA, to reduce the KV cache footprint. [1] Eigen Attention: Attention in Low-Rank Space for KV Cache Compression [2] Effectively Compress KV Heads for LLM [3] MatryoshkaKV: Adaptive KV Compression via Trainable Orthogonal Projection Other Strengths And Weaknesses: **Strengths** The paper presents several interesting findings, such as that the pre-ROPE key states have a low-rank property compared to other parameters, and that most of the post-ROPE key states exhibit high similarity with neighboring tokens. These contributions offer valuable perspectives in the field. Additionally, the proposed methodology is well thought out and highly reasonable. The detailed results demonstrate strong performance, confirming the effectiveness of the approach. **Weaknesses** - The amount of work done in this paper is impressive, particularly the discovery mentioned above. However, the core innovation of the work is not entirely clear. To my knowledge, both the use of SVD decomposition on the KV cache to store low-rank matrices and the application of landmarks for estimating attention scores for selection have been explored in prior work. What novel adaptations or insights does this paper offer beyond simply combining these existing techniques? Other Comments Or Suggestions: The overall writing of the paper is well-structured and adheres to proper conventions, with no immediate issues identified. Questions For Authors: I find your findings very intriguing! However, I have one curiosity regarding the outlier tokens. Do these outlier tokens correspond to any specific patterns in the original text? Is there a way to identify these tokens as outliers based on the original text itself? Additionally, I noticed that you only presented results from a subset of the layers. 
Have you analyzed whether the number of outlier tokens correlates with the number of layers and heads in the transformer model? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful and thorough review. We truly appreciate your recognition of our findings and experiments. We have thoroughly addressed each of your questions and hope our responses will lead you to consider raising your score. --- ### **Q1: Concern about unaccounted latency from key reconstruction and value fetching in the Theoretical Equivalent Bandwidth analysis.** As detailed in Appendix A.6, we provide a full component-wise decoding latency breakdown (in milliseconds) across various batch sizes and context lengths on an A100 GPU, which includes the time cost of both key reconstruction (Recompute K) and value cache fetching (Fetch V). Our system is designed to leverage CUDA multi-streams to overlap these two operations. |Context|GEMM+Softmax|Max|TopK|Recompute K (Overlapped)|Fetch V|Attention|FFN|QKV| |-|-|-|-|-|-|-|-|-| |48$\times$64K|0.56|0.07|0.14|1.25|1.84|0.23|0.33|0.05| |24$\times$128K|0.58|0.07|0.15|1.36|1.66|0.21|0.29|0.05| |12$\times$256K|0.65|0.07|0.16|1.49|1.75|0.19|0.25|0.05| |6$\times$512K|0.71|0.07|0.17|1.51|1.69|0.18|0.23|0.05| As shown in the table, Fetch V and Recompute K exhibit comparable latency. Since they are executed on separate CUDA streams, their durations can be effectively overlapped—meaning only Fetch V contributes to the critical path. Furthermore, our measured end-to-end throughput gains validate that these design choices translate into practical system-level improvements. --- ### **Q2: Some references not discussed.** As stated in Section 3.1 (Line 188), prior low-rank approaches primarily focus on **data-independent, offline weight decomposition**, i.e., performing low-rank factorization on the **model weight matrices** using calibration data or during training. These methods either **require training or achieve limited compression**. The cited works (EigenAttention, HeadsKV, and MatryoshkaKV) belong to this category. 
Specifically: - EigenAttention generates lower-rank weight matrices, yielding at most 40% KV cache reduction. - HeadsKV converts MHA into GQA via low-rank decomposition of weights, which requires training and is only applicable to MHA-based models. - MatryoshkaKV trains orthogonal projection matrices, achieving around 60% average KV cache compression. In contrast, ShadowKV **identifies the dynamic, sequence-dependent low-rank structure in pre-RoPE keys**, and performs **online SVD directly on the KV cache** during inference. This approach is training-free, adaptive to each sequence, and achieves significantly higher compression (over 6$\times$, 15.6% of the original memory). Thank you for giving us the chance to elaborate on this point. We will add these mentioned references to the future version. --- ### **Q3: Core innovation of the work.** As discussed in Q2, unlike existing low-rank methods that compress weights offline, ShadowKV performs online activation-level SVD on the KV cache in a sequence-adaptive manner, and achieves substantially better compression while preserving accuracy. Prior methods rely on static weight decomposition and often require fine-tuning or model modification. Moreover, our approach is tightly integrated with a system-level design. We carefully co-design algorithm and system by enabling low-rank key cache reconstruction and CPU-to-GPU value fetching to overlap through CUDA multi-streams. Additionally, we utilize an accurate sparse KV selection mechanism that reduces attention computation and memory movement. This holistic integration ensures end-to-end throughput gains. --- ### **Q4: Outlier tokens — are there interpretable patterns? And does outlier count correlate with layers/heads?** Thank you for the insightful question. We clarify that we selected a subset of layers rather than showing all layers in Figure 5c for visualization purposes to avoid clutter. 
We performed additional analysis to better understand the nature of outlier tokens and their distribution. We found that certain tokens, such as attention sinks [1], are consistently identified as outliers, with their similarity scores even being negative, highlighting their distinctiveness in the distribution. Across heads, the sets of outliers tend to differ. We did not observe a clear trend or correlation between outlier frequency and specific layers or head indices. We absolutely agree that this is a rich direction for future exploration, and we plan to conduct deeper investigations into the interpretability and structure of outliers in future work. [1] Efficient Streaming Language Models with Attention Sinks
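The overlap argument in Q1 of the rebuttal above can be captured with a one-line latency model: with Recompute K and Fetch V on separate CUDA streams, only the longer of the two sits on the critical path. The sketch below uses the 48$\times$64K row of the rebuttal's table; the additive serial/overlapped split is a simplification of real GPU scheduling:

```python
# Per-step decode latencies (ms) from the 48 x 64K row of the table above.
gemm_softmax, max_op, topk = 0.56, 0.07, 0.14
recompute_k, fetch_v = 1.25, 1.84   # placed on separate CUDA streams
attention, ffn, qkv = 0.23, 0.33, 0.05

common = gemm_softmax + max_op + topk + attention + ffn + qkv

serial = common + recompute_k + fetch_v          # if the two ran back-to-back
overlapped = common + max(recompute_k, fetch_v)  # streams overlap: only Fetch V remains

saving_ms = serial - overlapped                  # Recompute K is fully hidden
```

Under this model the 1.25 ms of key reconstruction is hidden entirely behind the 1.84 ms value fetch, which is why only Fetch V contributes to the critical path.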
Summary: This paper presents a novel method of KV cache management that takes advantage of partial offloading to the CPU and matrix decomposition to obtain an impressive KV cache reduction without affecting accuracy, while significantly reducing latency. The paper finds two important properties of LLMs that it takes advantage of: (i) the Key cache in KV caches, before positional embeddings are applied (these rotate each token/vector in K by an amount determined by its position so that the model can learn token positions), has very low rank and thus high compressibility. (ii) A majority of K tokens, after positional embeddings have been applied, show high cosine similarity to adjacent tokens, with few outliers. The paper takes advantage of property (i) by offloading the V (value) cache to CPU memory and compressing the K (key) cache in GPU memory using Singular Value Decomposition (SVD); during inference, while V is being fetched from the CPU, K is simultaneously decompressed and has positional embeddings applied to it. Furthermore, it takes advantage of (ii) to perform a ‘chunk level approximation’, where a small number of K tokens (1.56%) is retained to approximate most of the other tokens, while the outliers are kept in GPU memory. This leads to further compression. The results show a 3x improvement in throughput while supporting 6x larger inputs or batches; this even outperforms, in throughput, a theoretical infinite KV cache in GPU memory, due to minimized total data movement. Claims And Evidence: The paper makes modest claims and backs them up through empirical evidence. Methods And Evaluation Criteria: The evaluation criteria are explained reasonably well. Theoretical Claims: The paper does not have a theoretical claim. 
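Property (ii) from the summary can be sketched in a few lines of numpy: measure the cosine similarity of each key token to its predecessor and flag the rare dissimilar tokens as outliers. The slowly drifting toy keys, the injected outlier at position 50, and the 0.5 threshold are all hypothetical, standing in for real post-RoPE keys:

```python
import numpy as np

def adjacent_cosine_outliers(K, threshold=0.5):
    """Cosine similarity between each key token and its predecessor;
    tokens whose similarity falls below `threshold` are flagged."""
    Kn = K / np.linalg.norm(K, axis=1, keepdims=True)
    sims = np.sum(Kn[1:] * Kn[:-1], axis=1)       # (seq_len - 1,)
    outliers = np.where(sims < threshold)[0] + 1  # indices of flagged tokens
    return sims, outliers

# Toy keys: a slowly drifting sequence with one injected outlier token.
rng = np.random.default_rng(0)
base = rng.normal(size=64)
K = base + np.cumsum(rng.normal(scale=0.01, size=(128, 64)), axis=0)
K[50] = rng.normal(size=64)   # the outlier
sims, outliers = adjacent_cosine_outliers(K)
```

The two adjacent pairs touching the injected token score near zero and get flagged, while the smooth pairs stay near 1; this kind of structure is what lets a small set of retained tokens plus a handful of outliers approximate the rest.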
Experimental Designs Or Analyses: The experiments are designed and executed well, and the results are analyzed in sufficient depth. Supplementary Material: NA Relation To Broader Scientific Literature: The paper does a good job in comparing the proposed idea against existing works. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: • Paper discovers two separate important properties of LLMs and exploits them each, to obtain very impressive compression (6x reduction in GPU memory) and throughput increases (3x higher). • Benchmarks are comprehensive, with 3 different benchmarks used, along with 3 different LLM models. Behaviors of different parameters are also shown in the paper. Comparisons to several other state of the art KV methods are also done. Weaknesses: • In Figure 8, the x-axis needs clarification. The paper reports 6x larger batch sizes, yet the figures here show the x-axis to have ‘sparse budgets’ down to 0.2% while maintaining performance, which is much more than a 6x compression. It looks like the sparse budget here refers to something other than KV cache GPU memory usage; this should be clarified in the caption. • Some undefined terminology: the KV cache hit rate is not defined in the text or in the figures; the ‘D’ in D(H1, H2) in the observation section is not defined, along with a few other mathematical notations in that section. • The authors do not mention how GPU memory usage increases during inference, as the K cache is reconstructed and the V cache is prefetched. It seems like the memory usage should spike up during inference in this case. • It would be good if the paper briefly addressed the limitations of this approach; currently it does not. Other Comments Or Suggestions: This paper proposes a method of KV cache compression and latency improvement, obtaining very impressive results with two complementary approaches. It introduces a method of obtaining a very high compression ratio on K caches and offloading V to the CPU. 
During inference, fetching V from CPU memory and reconstructing K are overlapped. A few points should be addressed for clarity. In Figure 8, it looks like the x-axis shows KV compression, as it is labelled ‘Sparse KV Cache Budget’ and shows accuracy retained down to a 0.20% KV budget. This implies up to 500x compression; however, the introduction states there is a 6x compression. It seems that ‘Sparse KV Cache Budget’ refers to something other than total KV cache memory usage and should be clarified. The authors test the method on 3 models, but this raises the question of whether this method can generalize to other models and to positional embeddings other than RoPE. This should be discussed in the paper. Another possible limitation that should be addressed is how the CPU side affects the performance. The authors implicitly assume that fetching V from CPU memory and reconstructing K in GPU memory take around the same time, but what if the CPU is slow, or the CPU-to-GPU bandwidth is low? This should also be addressed. A final possible limitation of this paper is how GPU memory usage changes during inference. Outside of inference, the authors report 6x compression of GPU memory, but if the KV cache is reconstructed during inference, then it seems that the GPU memory will spike back up to roughly the baseline KV cache size. This needs to be addressed in the paper. Questions For Authors: Can you please address the concerns in the weaknesses part above? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed review and valuable feedback. We appreciate the reviewer's recognition of the novelty and effectiveness of our method. We address the raised concerns below and will incorporate clarifications into the revised version. We hope the reviewer will consider raising their score in light of our response. --- ### **Q1: Clarification on Figure 8 and the meaning of "Sparse KV Cache Budget".** The x-axis in Figure 8 refers to the percentage of KV pairs selected at each decoding step for attention computation—i.e., the active sparse KV budget per step, rather than the overall GPU memory usage for KV cache storage. In contrast, the 6$\times$ memory compression refers to the total reduction in GPU memory footprint for the KV cache, achieved through a combination of low-rank key compression and value offloading. The extremely small sparse budget (e.g., 1.5%) is made possible by our accurate TopK selection mechanism. We will clarify this distinction explicitly in the figure caption in the future version. --- ### **Q2: Undefined terminology such as KV cache hit rate and $D(H_1, H_2)$.** Thank you for the valuable suggestion. We will add clarifications below in the future version. - The KV cache hit rate refers to the proportion of selected KV chunks at decoding step $t$ that are also selected at step $t+1$, due to temporal locality in generation. This metric reflects the effectiveness of caching selected sparse KV pairs across decoding steps to reduce redundant key cache reconstruction and value cache fetching. - As for $D(H_1, H_2)$, it is defined in Section 3.1 (Line 204) as $D(H_1, H_2) = \langle H_1, H_2 \rangle / r$, where $H_1$ and $H_2$ are rank-$r$ projection matrices and $\langle \cdot, \cdot \rangle$ denotes the Frobenius inner product. We agree the definition could be made more prominent, and we will revise the presentation to improve clarity and visibility. 
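The two definitions in Q2 above can be stated in a few lines of code; the chunk ids and subspaces below are hypothetical toy inputs:

```python
import numpy as np

def kv_cache_hit_rate(selected_t, selected_t1):
    """Fraction of KV chunks selected at decoding step t that are selected
    again at step t+1 (the temporal-locality metric described above)."""
    s_t, s_t1 = set(selected_t), set(selected_t1)
    return len(s_t & s_t1) / len(s_t)

def subspace_similarity(V1, V2):
    """D(H1, H2) = <H1, H2>_F / r, where H_i = V_i V_i^T is the rank-r
    projection onto the subspace spanned by V_i's orthonormal columns."""
    r = V1.shape[1]
    H1, H2 = V1 @ V1.T, V2 @ V2.T
    return np.trace(H1 @ H2) / r

# Toy usage (hypothetical chunk ids and subspaces):
hit = kv_cache_hit_rate([3, 7, 9, 12], [7, 9, 12, 15])  # 3 of 4 chunks reused
I = np.eye(6)
same = subspace_similarity(I[:, :2], I[:, :2])       # identical subspaces -> 1.0
disjoint = subspace_similarity(I[:, :2], I[:, 2:4])  # orthogonal subspaces -> 0.0
```

D ranges from 0 (orthogonal subspaces) to 1 (identical subspaces), so it directly measures how well two low-rank projections agree.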
--- ### **Q3: GPU memory usage during inference — will it spike due to K reconstruction and V fetching?** In ShadowKV, GPU memory usage remains bounded and does not spike during inference. This is due to two key factors: - At each decoding step, only a sparse subset of KV pairs (e.g., 1.5%) is selected for reconstruction and fetching, not the full sequence. - These operations are performed layer-by-layer, and the buffers are reused across layers, so peak memory does not accumulate across the entire model. For example, in Table 3 (e.g., 122K context, batch size of 24), we reconstruct 1.56% of the full KV cache during each decoding step, resulting in <200MB peak additional memory usage—negligible compared to the full KV size. We will add a memory profiling figure to better illustrate this in the future version. --- ### **Q4: The paper currently does not mention limitations.** Thank you for this suggestion. We plan to add a limitations paragraph, covering the following points: 1. **Dependency on RoPE positional encoding.** Our low-rank key compression relies on the pre-RoPE key representation being compressible. While RoPE is used in most state-of-the-art LLMs, the method may require adaptation for models using absolute or learned positional embeddings. We are exploring this generalization as future work. 2. **Linear-time sparse KV selection.** Our current KV selection requires an $O(N)$ complexity. We view the design of $O(\log N)$-time sparse search methods as a promising direction. 3. **SVD-based compression on low-precision hardware.** ShadowKV relies on SVD for key cache compression, which is highly efficient on modern GPUs. However, on edge devices with only low-precision compute units (e.g., INT8-only cores), efficient SVD implementation may be challenging. Exploring lightweight alternatives to SVD could improve portability to constrained hardware. 
--- ### **Q5: Generalization to non-RoPE models.** We focus on RoPE-based models because RoPE is the dominant positional encoding scheme in state-of-the-art LLMs (e.g., LLaMA, Qwen, Yi, GLM, etc.). We acknowledge that generalization to other positional embeddings is an interesting direction, and we are currently exploring variants such as ALiBi in ongoing work. --- ### **Q6: What if CPU-to-GPU bandwidth is low, or the CPU is slow?** ShadowKV performs well under typical bandwidth conditions and is designed to keep transfer sizes small and overlappable. We clarify that: - Value cache fetching is PCIe bandwidth-bound. The CPU merely serves as a memory pool for the value cache and is not involved in computation. Hence, CPU speed has negligible impact on performance. - In our implementation, we use PCIe Gen4 with 32 GB/s bandwidth, which is standard in modern server-grade inference deployments. In practice, the PCIe bandwidth is typically well-matched to the GPU performance. For instance, H200 GPUs are commonly paired with PCIe Gen5 (128 GB/s). Therefore, it is rare to find a high-performance GPU configured with a low-performance PCIe.
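Q6's bandwidth argument lends itself to a back-of-the-envelope model of the per-step sparse value fetch. The shapes below (batch 24 at a 128K context, a 1.56% budget, 8 KV heads of dimension 128, fp16 over 32 GB/s PCIe Gen4) are illustrative assumptions; the model also ignores the cross-step KV cache hit rate, which lets ShadowKV skip re-fetching already-resident chunks, so measured per-layer Fetch V latencies in the earlier rebuttal's table (e.g., 1.66 ms at 24$\times$128K) come in below this bound:

```python
def sparse_value_fetch_ms(context_len, budget_frac, n_kv_heads, head_dim,
                          batch_size, bytes_per_elem=2, pcie_gb_per_s=32):
    """Upper-bound estimate of the time to move one layer's selected
    value-cache entries from CPU to GPU over PCIe for a decoding step."""
    tokens = int(context_len * budget_frac)
    bytes_moved = batch_size * tokens * n_kv_heads * head_dim * bytes_per_elem
    return bytes_moved / (pcie_gb_per_s * 1e9) * 1e3  # milliseconds

# Illustrative Llama-like shapes: batch 24, 128K context, 1.56% budget,
# 8 KV heads of dim 128, fp16, PCIe Gen4 (32 GB/s).
t_ms = sparse_value_fetch_ms(128_000, 0.0156, 8, 128, 24)  # about 3 ms
```

The model makes the rebuttal's point concrete: what crosses PCIe each step is a small sparse slice of the value cache, not the full cache, so transfer time stays in the low-millisecond range and CPU compute speed never enters the picture.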
Summary: Large language models (LLMs) excel at handling extended contexts but face challenges with key-value (KV) cache scaling, increasing memory usage and reducing inference throughput. Existing methods like KV eviction and sparse attention either degrade accuracy or inadequately optimize memory use and latency. This paper introduces SHADOWKV, leveraging two key insights: (1) pre-RoPE keys are inherently low-rank and highly compressible, and (2) value caches, being less compressible, can be efficiently offloaded to CPU storage. SHADOWKV integrates low-rank key compression, value cache offloading, and precise sparse KV selection to significantly reduce memory usage while preserving high accuracy and low latency. Comprehensive experiments demonstrate SHADOWKV achieves 6× larger batch sizes and throughput gains up to 3× compared to baseline methods across various LLMs and tasks. Claims And Evidence: SHADOWKV claims significant improvements in accuracy and system performance through its combination of low-rank compression and optimized memory management. The evidence supporting accuracy improvements is robust, detailed, and quantitatively convincing. The paper also provides adequate quantitative analysis regarding the potential benefits to system performance, though further detailed experiments on throughput could strengthen these claims. Methods And Evaluation Criteria: The paper evaluates two primary metrics: accuracy and throughput. Accuracy evaluation is extensive, clearly demonstrating the effectiveness of the proposed compression techniques. Throughput evaluations, while less thorough, sufficiently illustrate efficiency improvements. Expanding throughput analysis, particularly across more diverse settings, would further strengthen the evaluation. Theoretical Claims: The theoretical foundation for low-rank key compression is plausible and well-articulated, though I am less confident in fully assessing its theoretical rigor. 
Nevertheless, quantitative system performance analyses and experimental results effectively support these theoretical claims. Experimental Designs Or Analyses: The experimental design, primarily focused on accuracy and system performance metrics, is solid. Accuracy evaluations are especially thorough and convincing. However, throughput experiments could benefit from more comprehensive scenarios or additional benchmarks to better understand the practical limits and generalizability of the proposed approach. Supplementary Material: The supplementary material provides comprehensive details supporting the experimental evaluations, contributing valuable context for understanding the results presented in the main text. Relation To Broader Scientific Literature: The paper situates itself within existing literature on KV cache management strategies, such as compression and offloading techniques, highlighting its contribution of compressing pre-RoPE keys and reconstructing them on demand while offloading values. Essential References Not Discussed: None explicitly identified. Other Strengths And Weaknesses: A notable strength of the paper is its insightful design, particularly in efficiently compressing pre-RoPE key caches and significantly reducing CPU-GPU communication overhead, demonstrating substantial potential for long-context scenarios. However, the evaluation of throughput is somewhat limited. Memory savings do not always translate directly into performance gains; thus, expanding this evaluation with additional context or detailed performance metrics could enhance the credibility of system-related claims. Other Comments Or Suggestions: Overall, this is a well-written and carefully prepared paper. Questions For Authors: 1. In a typical scenario, what is the time breakdown for low-rank key cache reconstruction? 2. How effectively can value cache fetching overlap with computation, and under what conditions? 3. Could you clarify the experimental settings used for generating Table 3? 
Specifically, is this based on standard vLLM? 4. What are the primary limitations that might prevent integrating SHADOWKV into modern LLM serving engines like vLLM and SGLang? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the thoughtful review and your positive assessment of our work. We appreciate your recognition of the novelty of ShadowKV, particularly its system-level insights and the strength of accuracy evaluations. We address your concerns with additional clarifications and experiments as detailed below. --- ### **Q1: Throughput evaluation is somewhat limited. Memory savings do not always translate directly into performance gains.** We appreciate the suggestion. To further support our claim, we include a table below, showing throughput (tokens/s) under varying batch sizes and contexts for both Full KV and ShadowKV with Llama3-8B-1M. |(seq,method)|bsz=1|bsz=2|bsz=3|bsz=4|bsz=5|bsz=6|bsz=8|bsz=12|bsz=16|bsz=24|bsz=32|bsz=48| |-|-|-|-|-|-|-|-|-|-|-|-|-| |60K,Full|58.62|89.19|111.44|126.73|142.62|147.40|**160.62**|OOM|OOM|OOM|OOM|OOM| |122K,Full|45.59|65.12|75.16|**80.77**|OOM|OOM|OOM|OOM|OOM|OOM|OOM|OOM| |244K,Full|32.22|**40.37**|OOM|OOM|OOM|OOM|OOM|OOM|OOM|OOM|OOM|OOM| |488K,Full|**20.15**|OOM|OOM|OOM|OOM|OOM|OOM|OOM|OOM|OOM|OOM|OOM| |60K,ShadowKV|47.06|89.69|126.61|159.41|184.92|205.41|244.20|306.09|346.34|399.65|428.63|**455.14**| |122K,ShadowKV|36.57|65.61|94.28|115.01|132.23|143.77|166.72|196.73|217.24|**239.51**|OOM|OOM| |244K,ShadowKV|27.63|48.39|65.95|78.92|87.83|94.07|104.73|**119.01**|OOM|OOM|OOM|OOM| |488K,ShadowKV|17.27|29.82|41.01|47.13|50.85|**53.46**|OOM|OOM|OOM|OOM|OOM|OOM| - ShadowKV enables significantly larger batch sizes at long context lengths, where Full KV quickly runs out of memory. - At small batch sizes (e.g., bsz = 1), ShadowKV may be slightly slower due to reconstruction and fetching overhead. However, as batch size increases, the benefit of memory savings and increased parallelism translate directly into throughput gains. These additional results help clarify that ShadowKV not only reduces memory usage but also yields consistent throughput improvements. 
We will include it in the final version and appreciate the reviewer's suggestion, which helped us strengthen this aspect of the paper. --- ### **Q2: What is the time breakdown for low-rank key cache reconstruction in a typical scenario?** As detailed in Appendix A.6, we provide a full component-wise decoding latency breakdown (in milliseconds) across various batch sizes and context lengths on an A100 GPU, which includes the time cost of both key reconstruction (Recompute K) and value cache fetching (Fetch V). Our system is designed to leverage CUDA multi-streams to overlap these two operations. |Context|GEMM+Softmax|Max|TopK|Recompute K (Overlapped)|Fetch V|Attention|FFN|QKV| |-|-|-|-|-|-|-|-|-| |48$\times$64K|0.56|0.07|0.14|1.25|1.84|0.23|0.33|0.05| |24$\times$128K|0.58|0.07|0.15|1.36|1.66|0.21|0.29|0.05| |12$\times$256K|0.65|0.07|0.16|1.49|1.75|0.19|0.25|0.05| |6$\times$512K|0.71|0.07|0.17|1.51|1.69|0.18|0.23|0.05| --- ### **Q3: How effectively can value cache fetching overlap with computation, and under what conditions?** As discussed in Appendix A.6, we leverage CUDA multi-streams to overlap CPU-GPU data transfers (Fetch V) with GPU-side compute (Recompute K), ensuring high GPU utilization during decoding. This overlap is particularly effective in long-context settings where attention computation dominates latency. As shown in the table, Fetch V and Recompute K exhibit comparable latency. Since they are executed on separate CUDA streams, their durations can be effectively overlapped—meaning only Fetch V contributes to the critical path. Furthermore, our measured end-to-end throughput gains validate that these design choices translate into practical system-level improvements. --- ### **Q4: Experimental settings for Table 3—are they based on standard vLLM?** Our baseline implementation leverages open-source efficient CUDA kernels, including those from both vLLM and FlashInfer [1]. 
Our inference engine is optimized specifically for large-batch long-context decoding. The table below presents a decoding throughput (tokens/s) benchmark, demonstrating that our baseline achieves comparable or slightly better efficiency than vLLM in full-attention mode. This shows that our baseline is a valid and fair reference point. |Context (Full Attention)|vLLM|Our Baseline| |-|-|-| |1$\times$30K|64.19|69.38| |1$\times$60K|55.30|58.62| |1$\times$120K|43.41|46.60| |1$\times$240K|30.51|32.78| |1$\times$480K|19.04|20.74| --- ### **Q5: Limitations for integrating SHADOWKV into vLLM and SGLang?** ShadowKV employs custom CUDA kernels for low-rank reconstruction, RoPE fusion, and efficient CPU data prefetching with overlap, built on top of CUTLASS [2]. Integrating ShadowKV into vLLM and SGLang would require backend modifications to support these kernels and its memory pool management strategy. While this integration is non-trivial, it is technically feasible, and we are actively working toward supporting popular serving engines in the future. [1] https://github.com/flashinfer-ai/flashinfer [2] https://github.com/NVIDIA/cutlass --- Rebuttal Comment 1.1: Comment: Thank you for the detailed explanation. I do not have any further questions; thank you for the good work again :) --- Reply to Comment 1.1.1: Comment: Thank you for your kind comments and encouraging feedback. We are pleased to hear that our responses were helpful and truly appreciate your support of our work.
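The low-rank key idea at the center of this exchange can be made concrete with a small numerical sketch: factor the pre-RoPE key cache with a truncated SVD and reconstruct only the rows the sparse attention step selects. This is an illustration of the general technique under toy shapes (the rank, dimensions, and synthetic low-rank data are all assumptions), not the paper's CUDA implementation.

```python
import numpy as np

# Minimal sketch of the key-side idea: if the pre-RoPE key cache K is
# (approximately) low-rank, it can be stored as two thin factors and
# reconstructed on demand. Shapes and rank here are illustrative assumptions.
rng = np.random.default_rng(0)
seq_len, head_dim, rank = 4096, 128, 16

# Synthesize an exactly rank-16 key cache for illustration.
K = rng.standard_normal((seq_len, rank)) @ rng.standard_normal((rank, head_dim))

# Offline (prefill): truncated SVD gives the compressed representation.
U, S, Vt = np.linalg.svd(K, full_matrices=False)
A = U[:, :rank] * S[:rank]   # (seq_len, rank) thin factor
B = Vt[:rank]                # (rank, head_dim) thin factor

# Online (decode): rebuild only the token positions the sparse attention
# step actually selects; RoPE would be applied afterwards (not shown).
selected = np.array([0, 7, 1024, 4095])
K_rebuilt = A[selected] @ B

err = np.max(np.abs(K_rebuilt - K[selected]))
compression = K.nbytes / (A.nbytes + B.nbytes)
print(f"max reconstruction error: {err:.2e}, compression: {compression:.1f}x")
```

Because the synthetic cache is exactly rank 16, the reconstruction error is numerically zero; for real key caches the achievable rank/error trade-off is an empirical question the paper's accuracy experiments address.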
LoRA-Gen: Specializing Large Language Model via Online LoRA Generation
Accept (poster)
Summary: The paper proposes LoRA-Gen, a framework for specializing language models (LMs) on edge devices by generating task-specific LoRA parameters using a cloud-side model. The core idea involves leveraging a large cloud-based LM to generate meta tokens from task descriptions, which dynamically assemble LoRA parameters from a pool of experts. These parameters are merged into the edge-side model via reparameterization, reducing input context length and enabling efficient inference. Key claims include: ● Training-free specialization for unseen tasks via single-turn inference on system prompts. ● 2.1x speedup with TinyLLaMA-1.1B and 10.1x compression ratio with Gemma-2B on agent tasks. ● Superior performance over LoRA, LoRA-MoE, and MixLoRA across eight commonsense reasoning benchmarks. Claims And Evidence: The claims are largely supported by extensive experiments on commonsense reasoning (e.g., ARC-c, OpenBookQA) and agent benchmarks (GPT4Tools). Results demonstrate clear improvements in accuracy, latency, and compression. However: ● The training-free generalization claim requires broader validation across diverse tasks. ● Limited analysis on task-specific strengths/weaknesses (e.g., why LoRA-Gen excels in certain tasks but not others). ● The claim of knowledge transfer needs deeper exploration (e.g., how cloud-to-edge knowledge injection works mechanistically). Methods And Evaluation Criteria: The methodology is well-designed, combining LoRA, MoE, and reparameterization to address edge-side efficiency. Evaluation metrics (accuracy, latency, compression ratio) are appropriate. The use of harmonic mean accuracy for multi-task evaluation and task-specific benchmarks (e.g., GPT4Tools) strengthens validity. Theoretical Claims: No formal theoretical proofs are provided. The work is empirically driven, with ablation studies validating design choices (e.g., meta tokens vs. direct LoRA generation, routing strategies). 
Experimental Designs Or Analyses: Experiments are comprehensive, comparing LoRA-Gen against strong baselines (LoRA, LoRA-MoE) across multiple models (TinyLLaMA, Gemma-2B). Key gaps: ● Hyperparameter sensitivity: Limited discussion on the impact of expert count (n=8) or auxiliary loss coefficient (α=0.01). ● Cloud model dependency: No analysis of how varying cloud-side models (e.g., larger/smaller LMs) affects performance. Supplementary Material: not provided Relation To Broader Scientific Literature: The paper is well-situated within the broader literature on parameter-efficient fine-tuning of large language models. It builds upon recent advances in LoRA and MoE techniques and addresses the critical need for efficient and effective model specialization. The authors provide a good overview of related work, including various parameter-efficient fine-tuning methods and context compression techniques. However, the paper could benefit from a more detailed discussion on how LoRA-Gen compares to other non-LoRA methods, such as adapter-based approaches or prompt tuning. Essential References Not Discussed: The work builds on LoRA (Hu et al., 2021), MoE (Jacobs et al., 1991), and context compression (e.g., AutoCompressors). However, it omits critical recent works: ● VeRA (Kopiczko et al., 2024): A parameter-efficient method using shared low-rank layers. ● DoRA (Liu et al., 2024b): Decomposes weight updates into magnitude/direction components. Other Strengths And Weaknesses: Strengths: ● The proposed framework is innovative and addresses a significant challenge in deploying specialized language models on edge devices. ● The results demonstrate clear improvements in both performance and efficiency, making the method highly practical for real-world applications. ● The idea of generating task-specific LoRA parameters using a cloud model is novel and leverages the strengths of both cloud and edge computing. 
Weaknesses: ● The dependency on a cloud-side model for generating LoRA parameters may limit the applicability of the method in scenarios with limited network connectivity or strict privacy requirements. ● The generalization ability of LoRA-Gen to unseen tasks needs further validation on a broader range of tasks and domains. ● The paper could benefit from a more detailed discussion on the limitations of the method and potential areas for improvement. Other Comments Or Suggestions: ● Clarify computational overhead of cloud-side LoRA generation. ● Discuss limitations (e.g., reliance on cloud infrastructure, task scope). ● Fix typos: Page 6, "utilyze" → "utilize"; unify "LoRAMoE"/"LoRA-MoE". Questions For Authors: 1. How does the computational cost of cloud-side LoRA generation compare to edge-side efficiency gains? 2. What task characteristics make LoRA-Gen most effective (e.g., prompt complexity, task diversity)? 3. What are the primary failure modes of LoRA-Gen, and how might they be addressed? Code Of Conduct: Affirmed. Overall Recommendation: 4
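As an aside on the harmonic-mean accuracy the review highlights: unlike the arithmetic mean, it penalizes a model that is weak on any single task, which is why it is a reasonable aggregate for multi-task evaluation. A quick illustration with made-up per-task accuracies:

```python
# Harmonic mean across per-task accuracies penalizes weakness on any single
# task more than the arithmetic mean does. Accuracies are made-up numbers.
from statistics import harmonic_mean

accs = [0.55, 0.71, 0.49, 0.77, 0.62]
print(f"arithmetic: {sum(accs) / len(accs):.3f}, harmonic: {harmonic_mean(accs):.3f}")
```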
Rebuttal 1: Rebuttal: **Q1: The training-free generalization claim requires broader validation across diverse tasks.** **Ans:** As shown in Tables 2 and 3 in the manuscript and Table 10 in the response to reviewer HcLz, we have mainly validated our method in the fields of mathematics, commonsense reasoning, science, daily life and tool usage. In the future, we will further expand to more fields to verify our method. **Q2: How cloud-to-edge knowledge injection works mechanistically.** **Ans:** As shown in Table 12 in the response to reviewer TR5Y, when we do not utilize the online LLM to select experts, the overall performance decreases, further highlighting the importance of the online large model in converting instructions into internal knowledge to guide edge-side models. **Q3: Hyperparameter sensitivity.** **Ans:** Our exploration of different hyperparameter settings, as shown in Table 5 of the manuscript, revealed that adding more experts to the LoRA Expert Pool does not inherently improve results. We speculate that this may be due to the mismatch between the number of experts and the scale of available training data, resulting in poorly tuned expert classifications. The auxiliary loss coefficient is the result of a hyperparameter search. **Q4: No analysis of how varying cloud-side models (larger/smaller LMs) affect performance?** **Ans:** We have conducted an experiment to show that the LLM with stronger reasoning ability brings better results, as shown in Table 13 in the response to reviewer TR5Y. **Q5: The paper could benefit from a more detailed discussion on how LoRA-Gen compares to other non-LoRA methods, such as prompt tuning?** **Ans:** We conducted an experiment comparing LoRA-Gen against prompt tuning, as indicated in Table 15. | | **Prompt-tuning** | **LoRA-Gen** | |-----------------|:-----------------:|:---------------:| | ARC Challenge | 43.2 |44.3 | **Table 15. 
Performance Comparison.** **Q6: Reliance on cloud infrastructure and the limitations of the method and potential areas for improvement.** **Ans:** Our method supports diverse application scenarios, such as tool calls, personalized virtual assistants, offline intelligent systems, IoT device control, and tasks necessitating long system prompts. However, the current paradigm needs to predefine a pair of cloud and edge-side LM. The model-agnostic framework leaves an open question for future work. **Q7: Clarify the computational overhead of cloud-side LoRA generation.** **Ans:** After specifying the task description or system prompt, the cloud model only needs to perform one inference to complete the generation of LoRA parameters. For specific FLOPs, Memory, and Latency in the training and inference stages, please refer to Table 4 in the supplementary material. **Q8: Fix typos: Page 6, "utilyze" → "utilize"; unify "LoRAMoE"/"LoRA-MoE".** **Ans:** Thanks for your valuable advice. We will refine this in our revision paper. **Q9: How does the computational cost of cloud-side LoRA generation compare to edge-side efficiency gains?** **Ans:** In fact, in the deployment and reasoning phase, the cloud model only needs the computational overhead of a single inference to complete the specific task. The main computational overhead comes from the training phase. As shown in Table 15 in response to reviewer TR5Y, the 13B model will bring some gains over the 7B model. **Q10: What task characteristics make LoRA-Gen most effective (e.g., prompt complexity, task diversity)?** **Ans:** Our method is more beneficial in scenarios that typically feature a fixed specialized system prompt and varying user inputs. In such cases, the cloud-side large model performs the generation of customized LoRA weights through a one-time system prompt inference and supplies these weights to the edge-side small model. 
**Q11: What are the primary failure modes of LoRA-Gen, and how might they be addressed?** **Ans:** The major limitation of the current paradigm is that it requires a predefined pair of cloud- and edge-side LMs. If we do not use the same model pair as in the training phase, it will lead to a primary failure mode. A possible solution is to train multiple models in cascade, which we will continue to explore in future work. --- Rebuttal Comment 1.1: Comment: Thank you for your response. It addresses most of my concerns. I'll keep my rating.
Summary: The authors automate the creation of task-specific LoRAs by leveraging a larger-scale LLM finetuned to generate LoRA parameters. The larger-scale LLM is prompted with a system prompt specifying the task. The LoRA parameters are applied to a smaller-scale edge model. The authors find these generated LoRA parameters are effective for seen tasks and for unseen commonsense reasoning tasks. They also find them to be effective for tool usage relative to LoRA alone. # After the Rebuttal Thank you for providing the detailed answers. I am therefore keeping my score. I'd strongly recommend incorporating training data details in a clear place in the main paper, since this is a key piece of understanding the system. Claims And Evidence: * LoRA-Gen is faster than alternatives: Timing experiments with shorter context. Issue: It's not clear to me that context is needed for trained (non-base) models, especially on seen tasks. * LoRA-Gen generalizes better to new tasks: Experiments show some generalization capability to new tasks (possibly due to few-shot examples). * LoRA-Gen is training-free: Indeed, it does not need training for new tasks. However, it'd be interesting to see how a small amount of training would help. * LoRA-Gen allows knowledge transfer from large model to small model: It is unclear how much of the large base LLM gets transferred into the generated LoRA params, vs. how much is learned during the general LoRA generator training phase. Maybe ablate different LoRA generators? Methods And Evaluation Criteria: Proposed methods and evaluation criteria do make sense. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: I checked soundness of designs. Experiment design is sound. Supplementary Material: I reviewed all of the supplementary material. Relation To Broader Scientific Literature: This work builds on the prior ideas of PEFT, Mixture-of-Experts, and context compression. 
It takes the findings that LoRA is a highly effective PEFT methodology and that LLMs can compress prompts, and instead represents prompts as parameters. Essential References Not Discussed: HyperDreamBooth [1] is an interesting paper to cite; it generates LoRA parameters for personalization of generative models. [1] Ruiz, Nataniel, et al. "HyperDreamBooth: Hypernetworks for fast personalization of text-to-image models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Other Strengths And Weaknesses: Strengths: Clarity + writing is high on average (for minor writing weaknesses see below). Weaknesses: The LoRA generator + LoRA expert pool needs training, yet testing on GPT4Tools is said to not need training (red x, second column, Table 3). Then how is it trained? Does it generalize from commonsense reasoning task training? How important is the LoRA generator base LLM? Would a stronger reasoning model be better, since the cost is being amortized anyway? This type of meta-training really requires a 'dataset of datasets'. However, the current setup is more of a proof of concept because the number of seen tasks is quite low. Are there scaling properties? Other Comments Or Suggestions: Overall, an interesting direction! I'd like to see this scaled up and an experiment on the importance of the cloud-side LM. It almost feels like an embedding-type model (maybe encoder-decoder?) would make more sense. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: It's not clear to me that context is needed for trained (non-base) models, especially on seen tasks.** **Ans:** Sincerely sorry for the confusion. We aim to utilize the online large language model and the online LoRA Expert Pool to convert user instructions, system prompts, or contextual content into tailored LoRA parameters, improving the performance of the small model. Therefore, during training, we provide a few-shot instruction to the online LLM to guide the model in acquiring this functionality. **Q2: It'd be interesting to see how a small amount of training would help.** **Ans:** As shown in Table 12, a small amount of training can lead to a small improvement on some tasks. | | **Hellaswag** | **Winogrande** | **PIQA** | |-------------------------|:-------------:|:--------------:|:--------:| | Training-free | 49.1 |67.4 | 76.9 | | Training With Few Cases | 48.8 |67.5 | 77.1 | **Table 12: Performance with Few-Shot Training.** **Q3: Ablate different LoRA generators.** **Ans:** As shown in Table 13, we conduct an ablation study on the average accuracy and harmonic mean of the LLM-based LoRA generator and the general LoRA generator across 8 datasets, using TinyLLaMA-1.1B as the small LLM. The results demonstrate that the LLM-based LoRA generator outperforms the general LoRA generator. Additionally, as presented in Table 2 of the manuscript, the performance of LoRA-Gen surpasses that of several other LoRA-based MoE methods. | | **Average Score** | **Harmonic Mean** | |--------------------------|:-----------------:|:-----------------:| | General LoRA Generator | 54.0 |48.2 | | LLM-based LoRA Generator | 55.1 |49.8 | **Table 13. Ablation of Different LoRA Generators.** **Q4: How is the training-free setting in GPT4Tools trained?** **Ans:** As mentioned in Section 1.4 of the supplementary material, we process the Alpaca dataset through GPT-4, resulting in a filtered and abstracted set of 37,658 training samples. 
Then we use this part of the data for pre-training to obtain the instruction-aware LoRA generator and LoRA expert pool. We directly utilize the checkpoint to perform generalization tests on the GPT4Tools test set, corresponding to the 4th row "without training setting". **Q5: How important is the LoRA generator base LLM?** **Ans:** We conduct an experiment to verify the impact of the LoRA generator. As shown in Table 14, LLaMA3-8B (with the strongest reasoning ability) does bring better results, and increasing the size of the LoRA generator will also improve to a certain extent. | | **ARC Challenge** | **OpenbookQA** | |-------------|:-----------------:|:-----------------:| | LLaMA2-7B | 41.3 |28.6 | | LLaMA2-13B | 42.6 |29.0 | | LLaMA3-8B | 43.2 |31.0 | **Table 14. Ablation of Different LLM-based LoRA Generators.** **Q6: Are there scaling properties?** **Ans:** In this work, we offer an initial verification of the advantages of this paradigm. Moving forward, we will expand the dataset size to explore more effective and resilient training paradigms. --- Rebuttal Comment 1.1: Comment: Hi, I thank the author for their response. I am leaning towards keeping my score, but I have two **important** clarifying questions: **Q1:** You write: ```Sincerely sorry for the confusion. We aim to utilize the online large language model and the online LoRA Expert Pool to convert user instructions, system prompts, or contextual content into tailored LoRA parameters, improving the performance of the small model. Therefore, during training, we will provide a few-shot instruction to the online LLM to guide the model in acquiring this functionality.``` I mean regarding the Figure 2 a.) Vanilla LoRA Paradigm. Do you have insights or ablations regarding performance without a system prompt? **Q2:** You write ```As mentioned in Section 1.4 of the supplementary material, we process the Alpaca dataset through GPT-4, resulting in a filtered and abstracted set of 37,658 training samples. 
Then we use this part of the data for pre-training to obtain the instruction-aware LoRA generator and LoRA expert pool. We directly utilize the checkpoint to perform generalization tests on the GPT4Tools test set, corresponding to the 4th row "without training setting".``` Does this mean you train with Alpaca for tool use? Or with commonsense data (ARC/OPQA/HellaSwag etc.)? Or both? How about for the commonsense eval? In general, could you clarify what training data is used for what experiments? Thank you. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your valuable review time and comments. **Q1:** Our method requires specific content to be provided to the online LoRA generator. As shown in Table 16, we evaluate the performance of vanilla LoRA without a system prompt. Our LoRA-Gen utilizes a general system prompt to produce tailored LoRA weights. | | **Vanilla LoRA** | **LoRA-Gen** | |-------------------------|:-------------:|:--------------:| | ARC-c (Seen Task) | 32.94 |34.73 | **Table 16: Performance comparison.** **Q2:** (a) Commonsense scenario evaluation shown in Table 2: all methods are trained with the training set of seen tasks as well as Alpaca. (b) Tool usage scenario evaluation shown in Table 3: all methods except Lines 1 and 4 are trained with Alpaca and the training set of GPT4Tools. Line 4 is trained only with Alpaca.
Summary: This paper introduces LoRA-Gen, a framework that enhances domain-specific task performance for small edge-side models by leveraging a large cloud-side model to generate LoRA parameters based on task descriptions. Utilizing reparameterization, LoRA-Gen merges these parameters into edge-side models, enabling flexible specialization without specialized training. This approach improves inference efficiency by reducing input context length and facilitates knowledge transfer. Claims And Evidence: Yes. Methods And Evaluation Criteria: The evaluation in this paper is constrained in both scope and settings: 1. The study focuses on classification, question answering, sentence completion, and fill-in-the-blank tasks, labeling them as "Reasoning Tasks." However, it does not include reasoning-specific datasets such as mathematical reasoning, multi-hop reasoning, or commonsense reasoning, which are critical for validating such claims. 2. The paper lacks experiments on training-free unseen tasks, which are necessary to demonstrate the method's generalizability. Expanding the evaluation with broader experiments or more in-depth analysis would strengthen the validity of the proposed approach. Theoretical Claims: I have reviewed the formulas within this paper. Experimental Designs Or Analyses: See "Methods And Evaluation Criteria." Supplementary Material: Checked the entire appendix. Relation To Broader Scientific Literature: The paper combines and builds upon concepts from parameter-efficient adaptation, mixture-of-experts, context compression, and knowledge transfer. It introduces an online generation and reparameterization mechanism designed to balance efficiency and effectiveness, offering a solution to the inherent trade-offs between these two aspects. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper does not provide a detailed discussion on the specific experts included in the LoRA expert pool. 
A more thorough explanation of these experts would help clarify their relevance to the tested datasets and better illustrate the method's generalizability across different tasks. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1: Do not include reasoning-specific datasets such as mathematical reasoning.** **Ans:** Thanks for the great advice; we further evaluated our method on the mathematical reasoning dataset GSM8K. As shown in Table 10, LoRA-Gen brings gains. Due to time constraints, we will collect more types of tasks for evaluation in the revised version. | | **GSM8K (mathematical reasoning)** | **OpenBookQA (multi-hop reasoning)** | **ARC Challenge (commonsense reasoning)** | |-----------------------|:----------------------------------:|:------------------------------------:|:-----------------------------------------:| | Baseline | 62.4 | 31.2 | 43.3 | | LoRA-Gen | 64.2 | 33.4 | 44.3 | **Table 10: Diverse Tasks Performance.** **Q2: The paper lacks experiments on training-free unseen tasks.** **Ans:** In this work, we mainly explored the generalization capabilities of mainstream reasoning scenarios and agent scenarios. We set Hellaswag, Winogrande, and PIQA as generalization test datasets, which do not appear during training, to verify the training-free capabilities of our method on unseen tasks, as shown in Table 2 of the manuscript. At the same time, we further verified the generalization performance of our method by testing the unseen tools in the GPT4Tools dataset, as shown in Table 3 of the manuscript. We will further explore the performance of LoRA-Gen in a wider range of scenarios in our future work. **Q3: The paper does not provide a detailed discussion on the specific experts included in the LoRA expert pool.** **Ans:** We analyzed the weight distribution across different experts when processing various tasks, as shown in Table 11. For instance, expert 5 is frequently activated for agent tasks but rarely for SIQA tasks, demonstrating that the online LoRA Pool implicitly assigns knowledge to different experts. 
| | **Expert 1** | **Expert 2** | **Expert 3** | **Expert 4** | **Expert 5** | **Expert 6** | **Expert 7** | **Expert 8** | |--------------------|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:| | ARC Easy | 0.0981 | 0.5938 | 0.3027 | 0.0149 | 0.4082 | 0.3223 | 0.2051 | 0.0518 | | SIQA | 0.0107 | 0.3945 | 0.4277 | 0.4297 | 0.0378 | 0.2715 | 0.3906 | 0.0361 | | GPT4Tools | 0.2041 | 0.6875 | 0.2070 | 0.0026 | 0.5586 | 0.2617 | 0.0415 | 0.0361 | **Table 11: Expert Weight Distribution.** --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I have increased my score accordingly.
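The routing behavior discussed above can be tied back to the mechanism itself with a small sketch: gate weights over a pool of LoRA experts mix the experts' low-rank factors, and the mixed update is folded into the edge model's weight once, so decoding cost matches the plain model. This is a toy NumPy illustration of the reparameterization idea; the dimensions, expert count, and gate values are made up, and it is not the authors' implementation.

```python
import numpy as np

# Toy sketch of expert mixing + reparameterization: gate weights (e.g.
# produced by the cloud model from a system prompt) mix a pool of LoRA
# experts, and the mixed low-rank update is merged into the frozen edge
# weight offline, so inference cost equals the plain model's.
rng = np.random.default_rng(0)
d_out, d_in, r, n_experts = 64, 64, 4, 8

W = rng.standard_normal((d_out, d_in))                 # frozen edge-model weight
A = rng.standard_normal((n_experts, r, d_in)) * 0.01   # expert down-projections
B = rng.standard_normal((n_experts, d_out, r)) * 0.01  # expert up-projections

# Gate weights for one task (made-up values; gates need not sum to 1).
g = np.array([0.20, 0.69, 0.21, 0.00, 0.56, 0.26, 0.04, 0.04])

# Mix experts, then fold the low-rank update into W once, offline.
delta = sum(g[i] * (B[i] @ A[i]) for i in range(n_experts))
W_task = W + delta

# At inference the merged weight behaves like a single dense layer.
x = rng.standard_normal(d_in)
y_merged = W_task @ x
y_unmerged = W @ x + sum(g[i] * (B[i] @ (A[i] @ x)) for i in range(n_experts))
assert np.allclose(y_merged, y_unmerged)
```

The final assertion checks the key property claimed for reparameterization: the merged weight reproduces the output of explicitly routing through every expert.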
Unifying Specialized Visual Encoders for Video Language Models
Accept (poster)
Summary: The paper presents a VideoLLM that aligns and unifies the outputs of four different visual encoders to improve its video understanding capabilities. Specifically, the authors use a spatial encoder (DINOv2), a temporal encoder (ViViT), an image-language encoder (SigLIP), and a video-language encoder (LanguageBind). They design the inputs to the four encoders to be spatio-temporally aligned and train a pre-fusion projection network followed by a cross-attention network to respectively transform the features from the different encoders into a common space and then combine those features. The authors perform relevant experiments to evaluate their approach on multiple video Q&A datasets and compare against baseline methods. Claims And Evidence: The central claim -- that of combining different visual encoders focusing on different aspects of image, video, and their correlations with text inputs to improve video understanding performance -- is sufficiently backed up by experimental results. However, the additional benefits of computational efficiency can be evidenced further: 1. In Table 1, what are the total running times and FLOPs of the compared methods? How much does the proposed method improve on this aspect? 2. Since the computational improvements reported in Fig. 4a are 3-7 orders of magnitude higher than the FLOPs required for the pre-fusion projectors (Table 2a), can the accuracy numbers be improved further (while keeping the computational increases insignificant) by considering longer sequences as inputs to the pre-fusion projectors? Or would longer sequences also significantly impact the computational overheads of the feature fusion step (Table 2c)? Methods And Evaluation Criteria: The proposed method is technically sound, and the evaluation criteria are appropriate. Theoretical Claims: Not applicable - the paper presents experimental findings to justify the proposed approach. 
Experimental Designs Or Analyses: The experiments, including the ablations, are sound, and the analyses are generally thorough. One observation from the results (Fig. 5 in particular) is that the proposed approach improves significantly over DINOv2 and ViViT across the board, but marginally over SigLIP and LanguageBind in many categories. To complement these results, looking at failure cases due to removing individual encoders (especially DINOv2 and ViViT) can further help understand their specific contributions.

Supplementary Material: I have read all the supplementary material in conjunction with the main paper.

Relation To Broader Scientific Literature: With the development of language models for video understanding, the paper's contribution is relevant and timely, and it establishes new baselines for future video understanding models. It will likely interest the broader scientific communities working on language models, video understanding, and their intersections.

Essential References Not Discussed: While not a domain expert, I did not find any major missing references in my search.

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: I am unclear on the authors' example of an action that is indistinguishable from itself if temporally reversed. Pulling from left to right is temporally distinct from pulling from right to left as (a) the hand location relative to the object is different, and (b) the direction of movement is reversed (Fig. 8, columns 2 and 3). However, pulling from left to right, if temporally reversed, can be indistinguishable from *pushing* from right to left. Was that the intended meaning of "indistinguishable"? Maybe the authors can make this point fully clear with a pair of temporally indistinguishable actions in Fig. 8?

Questions For Authors: Please refer to the comments in previous sections.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer jZRP, we thank you for spending your time and effort on reviewing our work. We appreciate your recognition of our thorough experimental setup and ablations that sufficiently back up our claim, as well as its relevant and timely contribution towards a broader collection of works regarding video understanding models. We will address your two main questions, as well as the two other comments you had for our work.

**Q1: Total running time and FLOPs of compared methods?**

We obtained the running time and FLOPs for the methods in Table 1. In the main paper, we calculated FLOPs by fixing the length of the textual tokens, but due to different generation schemes, this method cannot be applied to some VLMs. To make a fair comparison between VLMs, we use the same video with the prompt "Describe this video." We follow the prompt builder and the video processor provided by the official implementation. We report the time-to-first-token (TTFT) as a metric for running time. Our FLOPs were measured in the same TTFT setup.

| | TTFT (Inference) | TFLOPs |
|---------------|------------------|--------|
| Video-Chat | 135ms| 3.22 |
| LLaMA-Adapter | 124ms| 2.69 |
| Video-LLaMA | 169ms| 11.07 |
| Video-ChatGPT | 321ms| 10.69 |
| SeViLA^ | X (two LMs) | X |
| LLaMA-VID-7B | 217ms | 3.05 |
| LLaMA-VID-13B | 242ms| 3.47 |
| Video-LLaVA | 312ms | 15.93 |
| MERV** | 261ms | 10.5 |

^ Since SeViLA is an MCQ-only VLM, needing to run the LLM once per MCQ option, we do not compute TTFT and FLOPs, as we cannot compare them with other models in a fair manner.
** As we are using a different prompt, the FLOP value differs from the main paper.

Our method, despite having 4 encoders, has far fewer FLOPs than Video-LLaVA (which our work primarily builds on), due to the compressed visual representation used.
While LLaMA-VID achieves fewer FLOPs by restricting a single frame to 2 visual tokens, we achieve much better performance at a similar step time thanks to our efficient implementation.

**Q2: Can accuracy be improved by using longer input sequences to the pre-fusion projector, and how would it affect the computational overhead of feature fusion?**

This is a very interesting question! Below, we tried feeding longer sequences to the pre-fusion projector by scaling up the temporal resolution (as the spatial resolution is fixed at 224x224) to see if our method improves.

| MERV | TFLOPs | MSVD | MSVD-Score | MSRVTT | MSRVTT-Score | TGIF | TGIF-Score | Perception |
|:---------:|:------:|:-----:|:----------:|:------:|:------------:|:-----:|:----------:|:----------:|
| 16 frames | 17.19 | **61.61** | **3.41** | **45.17** | **2.75** | **46.11** | **2.65** | 46.21 |
| 32 frames | 26.23 | 61.28 | 3.40 | 44.75 | 2.74 | 45.42 | 2.62 | **47.77** |

* _As gpt-3.5-turbo-0613 is now deprecated, we use GPT-4o-mini for open-ended evaluation._

We find that our method fails to take advantage of the extra temporal resolution on most datasets except the Perception Test (a temporally challenging dataset). This indicates an upper limit on the capacity of our LLM for processing extra visual information, in which case additional token selection or compression may be needed. Regarding the second part of the question, increasing the input sequence length (i.e., larger $l$) to the pre-fusion projector leads to a linear increase in computational overhead for the feature fusion step, as can be seen from Eq. 2. However, for the subsequent LLM processing, the overhead grows quadratically with sequence length.

**C1: Looking at failure modes when removing DINOv2 and ViViT**

Thank you for the suggestion, we really appreciate it! We had not thought of visualizing the failure cases that arise when removing ViViT from MERV.
We hypothesize that the failure cases are videos that contain a lot of temporal movement, e.g., rapid camera motion. This hypothesis comes from a similar experiment, Figure 10 in the Appendix, where we look at the attention weights of the feature fusion module and see which videos gain the highest weight for each encoder. Since it was often videos with large movements that ViViT preferred, we believe these videos will be affected most when ViViT is removed from MERV. We will add the suggested analyses in the camera-ready version of the paper.

**C2: What is "temporally indistinguishable classes" in the Something-Something V2 dataset?**

Yes, your interpretation is exactly right. In particular, we selected action classes where the temporal reverse of one class approximates another class — as you pointed out, reversing frames of "Pulling from Left to Right" approximates frames of "Pushing from Right to Left". We identified 12 such class pairs from SSv2, which are listed in the appendix. We understand that our original explanation could be clearer, and we'll revise the paper following your helpful suggestion!
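The scaling behavior described in the Q2 answer above (fusion cost growing linearly with the input sequence length, while LLM self-attention grows quadratically in the visual token count) can be illustrated with a toy FLOP model. The accounting below (roughly 2·m·n·d FLOPs per matmul, four encoders, a 1024-dim feature space) is an illustrative assumption, not a measurement from the paper.

```python
def fusion_flops(l: int, n_enc: int, d: int) -> int:
    """Rough FLOPs for cross-attention fusion: l queries attend over
    n_enc encoder summaries, so the cost is linear in l.
    (Toy accounting: ~2*m*n*d FLOPs per (m x d) @ (d x n) matmul.)"""
    return 2 * l * n_enc * d + 2 * l * n_enc * d  # scores + weighted sum


def llm_attn_flops(l: int, d: int) -> int:
    """Rough FLOPs for one self-attention layer over l visual tokens;
    the l x l attention map makes this quadratic in l."""
    return 2 * l * l * d + 2 * l * l * d  # QK^T + attn @ V


# Doubling the sequence length doubles the fusion cost but
# quadruples the per-layer LLM attention cost.
base_fusion = fusion_flops(1024, 4, 1024)
base_llm = llm_attn_flops(1024, 1024)
assert fusion_flops(2048, 4, 1024) == 2 * base_fusion
assert llm_attn_flops(2048, 1024) == 4 * base_llm
```

This matches the rebuttal's point that the pre-fusion projector is cheap to scale, while the downstream LLM dominates as the visual token count grows.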
Summary: The authors propose to use multiple video encoders (instead of a single encoder) for visual feature extraction in the context of video-LLMs. They propose a simple, straightforward technique to ensemble multiple encoders with a clever feature fusion strategy, leading to fewer inference-time FLOPs than any of the individual encoders. Thorough and extensive evaluations establish the usefulness of the proposed method while also uncovering some interesting insights on visual encoders for video tasks.

Claims And Evidence: Yes. Extensive experimentation demonstrates the usefulness of the proposed method.

Methods And Evaluation Criteria: Yes. Clear method with extensive evaluation.

Theoretical Claims: N/A

Experimental Designs Or Analyses: Authors follow standard experimental settings on established benchmarks.

Supplementary Material: Yes, all of it.

Relation To Broader Scientific Literature: Visual encoders are an important component in multi-modal LLMs (especially video), and are relatively underexplored. The authors investigate a novel direction of ensembling multiple visual encoders with negligible increase in runtime costs. This direction can be valuable to the video understanding community, especially for video QnA.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

**Strengths**
1. The paper is well written with clear presentation of methodology and experiments.
2. The authors explore a timely topic (visual encoders for video) that could be valuable to the video understanding community.
3. A simple and straightforward modification to existing baselines with clever design choices to maintain inference compute.
4. Extensive experimentation achieving strong improvements across a range of video QnA benchmarks.

**Weaknesses**
1. Minor improvements for better clarity (see comments / suggestions).

Other Comments Or Suggestions:
1. On L125 (right column): "As t is the same across each v_e" - why is this?
Do the video encoders used have no pooling / compression across the time axis?
2. Eq (2) for feature fusion: can you discuss the motivation for this exact design choice?
3. Table 2 (b) - could these numbers be reported (maybe in the appendix) for the Perception Test and SSv2?
   * Could more tokens help with more temporally complex tasks?
   * This is motivated by ViViT outperforming MERV on several such tasks. Could the lower token count (information bottleneck) of MERV be a reason for that?
4. The SSv2 analysis against ViViT: maybe repeat with the same token count for MERV? It may be unfair for MERV to compete against a more expensive ViViT version that uses more TFLOPs.
5. What is the black outer-most line in Figure 1 (right)? This is unclear.

Questions For Authors:
1. Could the feature aggregation be conditioned on the textual prompt? Is this a direction that was explored at any point?
   * E.g., in the feature fusion cross-attention operation, the queries could be derived from / conditioned on the textual prompt.

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1: Rebuttal: Dear Reviewer 44Cd, thank you for your time and effort in reviewing our work. We are grateful for the insights that you have provided us. Specifically, we appreciate that you have found our work to offer a **novel and clever strategy** with **less inference time and FLOPs**, providing **interesting insights** on visual encoders for the underexplored field of video understanding. We appreciate that you see the main weakness of the paper to be in clarification, which we will address in order below along with your question.

**C1: On L125 (Right), why is $t$ the same across each $v_e$?**

This is because, prior to spatial alignment, we performed temporal alignment. You are right that video encoders may have temporal pooling, which alters the temporal resolution. Therefore, we achieve temporal alignment in the most reliable and straightforward way—by adjusting the number of input frames fed into each visual encoder. E.g., we feed 32 frames to ViViT so that the output $t=16$.

**C2: Can you discuss the motivation for your feature fusion design?**

We wanted a design which could adaptively and efficiently select the most relevant visual information from all of the encoders. The simplest methods, such as sequence and channel concat, allow us to naively combine the information at its full resolution, but are inefficient and limited; e.g., sequence concat ignores our spatio-temporal alignment and simply orders features in sequence. We can make this more efficient by summing the features together with learnable scalar weights for each, i.e., the "Learnable W" row in Table 2(c). This reduces the output sequence length. However, this has much weaker expressiveness than other methods and is still not adaptive. Cross-attention achieves the best of both worlds, being expressive while still efficient. Using learnable queries as input, we adaptively set our encoder feature weights based on the input video and sum the features.
The final output is of shape [BxLxD] instead of [Bx(NL)xD], where N is the number of encoders. Our ablations in Table 2(c) compare all of these feature fusion methods and find ours to be the most performant, along with channel concat. In fact, both methods are similar at heart. Channel concat creates a feature of shape [BxLx(ND)], but is immediately followed by a linear layer that projects into the LLM dimension of D. Our cross-attention does a similar feature consolidation but constrained at the encoder level, i.e., each encoder's feature gets one global weight.

**C3: Can more output tokens from the pre-fusion projector help more temporally complex tasks? Can the lower token count bottleneck MERV on the Perception Test and SSv2?**

| | LanguageBind-LLM | MERV w/ 4 encoders |
|:-:|:-----:|:-------:|
|**Tkns**| Perception / SSv2 | Perception / SSv2 |
|1|42.85/32.97|44.52/33.89|
|4|43.31/33.34|43.16/35.81|
|16|43.18/36.29|46.45/41.47|
|64|44.43/37.36|46.21/**42.01**|
|100|**45.56**/37.72|44.96/40.58|
|144|43.94/**38.82**|45.86/39.40|
|256|43.51/35.25|**47.26**/40.06|

We evaluated LanguageBind-LLM and MERV on PerceptionTest and SSv2 across varying pre-fusion output token counts. The results indicate that while more tokens can help with more temporally complex tasks, performance gains generally plateau beyond 64 tokens, suggesting that 64 tokens offer an optimal trade-off between performance and efficiency. We agree that a limited token count could lead to an information bottleneck depending on the complexity of the task at hand, but in our studies, 64 tokens seems to be enough.

**C4: In Figure 4, it may be unfair for MERV to compare with ViViT-LLM where ViViT-LLM uses more TFLOPs.**

To make the TFLOPs comparable, we perform an additional comparison where we use the pre-fusion projector (pfp) with ViViT-LLM, so that the visual token size is now 64 per frame instead of 196.
|Model|TFLOPs|SSv2|SSv2-Temporal|
|:---:|:----:|:--:|:-----------:|
|MERV|17.19|**42.01**|36.84|
|MERV (full)|17.19|39.76|**40.65**|
|ViViT-LLM|27.12|26.78|39.77|
|ViViT-pfp-LLM|13.00|29.07|35.15|

As expected from Table 2(a), our pre-fusion projector is designed so that it does not hurt performance while using fewer visual tokens. Regardless, our method shows better or matching performance with single-encoder ViViT, with or without the pre-fusion projector.

**C5: What is the black outermost line of Figure 1 (Right)?**

The thin black line is the edge of the graph. Thank you for spotting it. We will fix the graph to be clearer.

**Q1: Can we make the queries text-conditioned? Did we try this?**

Yes, we tried. As of now, current Video-LLaVA training datasets have a vastly different text distribution from the benchmarks. As we are doing zero-shot evaluation, we found that our feature fusion module does not generalize to such out-of-distribution textual prompts when text-conditioned queries are fed to the fusion module. We think this could be an exciting scope for future research to effectively generalize text-conditioning that works on broader examples.

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for the comprehensive response. All clarification concerns have been resolved. The paper would definitely be valuable to the video understanding community, given its novel insights on building a *more performant ensemble* of multiple visual encoders that is *computationally cheaper* for inference than any of the individual encoders, due to their clever fusion strategy. In light of the above, I retain my rating of Strong Accept.
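The cross-attention fusion described in C2 of the rebuttal above (learnable queries score each encoder, one global softmax weight per encoder, weighted sum keeps the output at [L][D] rather than [N·L][D]) can be sketched minimally. This is a toy, single-head, pure-Python sketch of the idea only; the real module presumably uses multi-head attention with learned projections, and all shapes and values below are illustrative assumptions.

```python
import math


def fuse(encoder_feats, query):
    """Fuse per-encoder features of shape [n_enc][L][D] into [L][D].
    Each encoder gets one global softmax weight, scored by a dot
    product between a learnable query and its mean-pooled features."""
    n_enc = len(encoder_feats)
    L, D = len(encoder_feats[0]), len(encoder_feats[0][0])
    # Mean-pool each encoder's [L][D] features into a single [D] summary.
    summaries = [
        [sum(tok[j] for tok in feats) / L for j in range(D)]
        for feats in encoder_feats
    ]
    # One score per encoder, softmax -> one global weight per encoder.
    scores = [sum(q * s for q, s in zip(query, summ)) for summ in summaries]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum over encoders keeps the output at [L][D], not [n_enc*L][D].
    return [
        [sum(weights[e] * encoder_feats[e][i][j] for e in range(n_enc))
         for j in range(D)]
        for i in range(L)
    ]


# Two toy "encoders", L=2 tokens, D=2 dims; fused output keeps shape [2][2].
feats = [[[1.0, 0.0], [0.0, 1.0]], [[3.0, 0.0], [0.0, 3.0]]]
fused = fuse(feats, query=[1.0, 1.0])
assert len(fused) == 2 and len(fused[0]) == 2
```

The key design point survives even in this toy form: the sequence length is independent of the number of encoders, so adding encoders does not grow the token budget handed to the LLM.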
Summary: Traditional VideoLLMs typically rely on a single vision encoder, which restricts the model's ability to leverage the diverse strengths of different visual encoders. To overcome this limitation, this paper proposes a novel framework that integrates multiple specialized vision encoders into a unified video representation model. The proposed approach employs encoders such as SigLIP, DINOv2, ViViT, and LanguageBind—each contributing unique capabilities in spatial, temporal, and multimodal understanding.

Claims And Evidence: Yes

Methods And Evaluation Criteria:

Strengths:
- The proposed method is very well motivated. Reproducibility is high.

Weaknesses:
- Although there is a performance improvement, my primary concern is that the innovation of the paper is rather limited. The paper proposes an empirical method for combining multiple visual encoders in a multimodal model. The feature fusion operation is based on cross-attention and linear projection layers. Since ensemble methods are a fundamental concept in machine learning, the paper does not provide new insights, and considering the increased computational cost, the performance improvement is expected and not surprising.

Theoretical Claims: The paper does not present any theoretical contributions. Instead, it employs an ensemble of multiple vision encoders to address the diverse tasks encountered in MLLMs.

Experimental Designs Or Analyses: The authors conducted extensive experiments on combining visual encoders, covering both individual and joint usage scenarios. This exploration provides practitioners with insights into the effectiveness of different encoder combinations. Additionally, the authors analyzed the impact of various training stages in LLaVA-style video language model (VLM) training and offered observational conclusions on the importance of each stage in the two-stage training process.

Supplementary Material: The authors did not provide any supplementary materials.
Relation To Broader Scientific Literature: The authors enhanced the model's performance by integrating multiple visual encoders, an approach that is closely related to ensemble learning in machine learning.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: N/A

Questions For Authors: Q1: What are the reasons behind the authors' selection of these four encoders? For example, the chosen video model ViViT was released in 2021. However, many more advanced methods, such as MViTv2 and VideoMAE v2, are now available.

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer 1K5Z, we thank you for spending your time and effort on reviewing our work. We appreciate your recognition of the motivation, the novelty of our framework, the extensive experiments, the insights we provide to practitioners, and the high reproducibility of our work. Your concerns seem to be with a) the **innovation** of our paper, particularly concerning our feature fusion and our relationship to ensemble methods, and b) our **reasoning behind our encoder selection** and the choice of picking more recent video encoders. Before we address these in order, we want to point out an error in your review. **The supplementary material was provided on pages 13~22**, which addresses some of your questions.

**W1: Weak innovation about the method, esp. around feature fusion and ensemble methods.**

We agree that ensembling is a fundamental concept in machine learning, and simply applying such methods without understanding how each unique domain affects the method in practice would not provide any new insights. While our method may seem simple at first glance, there were many design decisions which were critical in enabling the strength of our results that are not apparent if one tries to simply scale prior works to multiple encoders; see our detailed response to ukCZ, where we discuss how our method provides new insights into practically building these ensemble methods. In contrast, prior works such as Prismer [1]—though not focused on RGB-based video understanding MLLMs—have failed to make a marked improvement over single-encoder baselines. As for the feature fusion, our method is quite different from existing approaches, as those primarily fall under either channel concatenation or sequence concatenation, as mentioned in the paper text with references (Appendix A.2, Related Works).
However, our method is still better than or on par with these methods, while also providing extra, meaningful analyses into how the model makes these decisions, as shown in Fig. 10. By visualizing the videos that have high attention weight for an encoder (e.g., videos with a lot of text have high attention weight on SigLIP), we gain insight into the data mix and training objective of each encoder.

**Q1: Why these four encoders, and more recent video encoders?**

We chose these four encoders because we felt that they offered broad coverage of both optimization objectives and training data, both of which are important for giving our model a wide breadth of coverage. Within each type of model, we performed experiments using a few different models, such as V-JEPA [2] for video, but ended up choosing ViViT based on final performance. We hypothesize that V-JEPA is difficult to use for frozen adaptation, so we leave this as future work for others. The other models were chosen based on strong performance elsewhere. In our Appendix A.6, we conducted experiments using a newer SoTA video model, Hiera-B+ [3], i.e., MViTv3, in both a four-encoder setting replacing ViViT and a five-encoder setting. For your convenience, we list the results in the table below, with the best result in bold and the second-best in italics. Replacing ViViT with Hiera demonstrates improvements on the fine-grained spatiotemporal reasoning benchmark, PerceptionTest, with an accuracy gain of 1.29%. Similarly, adding Hiera yields an improvement on ActivityNet, achieving a 0.36% increase in accuracy. However, on other benchmarks, the original MERV remains the strongest model. Overall, we observe no significant performance improvement when training with Hiera, which aligns with expectations, since Hiera follows the same paradigm as ViViT, functioning as a temporal expert trained on short videos.
We also hypothesize that Hiera is more sensitive to the temporal stride than ViViT, as ViViT can reasonably deduce motion from uniformly sampled frames. We expect performance to improve if we incorporate encoders trained on different paradigms and data sources or process a much greater number of frames simultaneously, which we leave for future work.

| Method | MSVD -Acc | MSVD -Score | MSRVTT -Acc | MSRVTT -Score | TGIF -Acc | TGIF -Score | ActivityNet -Acc | ActivityNet -Score | Perception -Acc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VideoLLaVA | 67.74 | 3.69 | 56.90 | 3.18 | 47.99 | 3.17 | 47.08 | 3.27 | 44.22 |
| MERV | **70.97** | **3.76** | **59.03** | **3.25** | **51.1** | **3.26** | *50.87* | **3.34** | 46.21 |
| MERV, ViViT replaced with Hiera | *69.68* | *3.74* | 57.64 | 3.22 | *50.38* | *3.24* | 50.24 | **3.34** | **47.50** |
| MERV + Hiera | 69.67 | 3.72 | *58.26* | *3.23* | 50.32 | 3.22 | **51.23** | **3.34** | *46.23* |

[1] Liu et al., Prismer: A Vision-Language Model with Multi-Task Experts, 2023.
[2] Bardes et al., Revisiting Feature Prediction for Learning Visual Representations from Video, arXiv 2024.
[3] Ryali et al., Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles, ICML 2023.
Summary: This paper tackles the problem of video-language understanding by introducing a multi-encoder strategy for constructing a comprehensive representation of videos. The authors claim that existing single-encoder methods can merely obtain a limited amount and type of visual information. Therefore, they propose to leverage multiple encoders from different backbone families (assumed to have different and diverse capabilities) and map their representations into a unified space. Experimental results demonstrate the performance gains given by such a multi-encoder design.

Claims And Evidence: The main claim in the paper is "existing single-encoder methods are capable of obtaining only a limited amount and type of visual information", which is somewhat reasonable, but it is challenging to prove it with any rigorous theoretical analysis.

Methods And Evaluation Criteria: Yes. From the application perspective, the proposed strategy can effectively enhance the capabilities of existing MLLMs.

Theoretical Claims: The paper does not contain any proofs or theoretical claims.

Experimental Designs Or Analyses: The experiments are conducted on public video understanding benchmarks, with well-designed metrics and fair comparisons. The experiment protocols are reasonable.

Supplementary Material: The authors provide an appendix at the end of the main paper. It provides further explanation about the limitations, related work, more details about the model, more experimental results, and visualizations.

Relation To Broader Scientific Literature: The key contribution is to extend existing Video-LLMs from single-encoder to multi-encoder, which I think is already a common strategy for enhancing visual perception capabilities in the entire CV community. This makes the novelty of the proposed scheme limited.
Essential References Not Discussed: No

Other Strengths And Weaknesses: Generally, the paper provides some practical design choices and experiments on how to extend single-encoder models to multi-encoder styles. Since this is a simple strategy whose effectiveness most researchers would already expect, I think the novelty of this paper is limited. It would be better to provide more insights about how to jointly encode more diverse information (using a single encoder or only a few smaller encoders) rather than simply making use of more encoders.

Other Comments Or Suggestions: N/A

Questions For Authors: The authors are encouraged to provide stronger justifications for the novelty of the proposed framework.

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer ukCZ, we thank you for spending your time and effort on reviewing our work. We appreciate your positive comments on our practical design choices that can enhance the capabilities of existing Multimodal LLMs (MLLMs). Your concerns are with a) the **insights** gained from our paper, and b) the **novelty** of our method. We will address them as follows.

We posit that while our final method seems simple, this simplicity is the result of exploring >100 different configurations, not all of which worked well. For example, while prior methods like VideoLLaVA (which our work primarily builds on) used full-resolution visual tokens, we found that combining 2D averaging with just 64 tokens/frame outperformed this base setting and was far more efficient (Table 2(a) and (b)). Note that in Table 2(a), only 2D/3D Avg and 3D Conv outperform this base setting, with 3D Conv doing so at a significant cost in parameters and FLOPs. Similarly, in Table 2(b), 64 frame tokens outperform the default 256 from prior works, with 144 being the only other better setting. Finally, also see our response to Reviewer 1K5Z discussing how even our choice of encoders is not simple. This shows that it is not a matter of simply adding encoders; without caring for these details, the resulting method is easily worse and even less efficient than the original method.

As for novelty, our work is indeed the first to successfully leverage multiple encoders for video understanding MLLMs, which Reviewers 1K5Z and 44Cd agree is novel. We also believe there is merit in the detail and thought behind our paper. For example, we contribute a computationally efficient scheme for adding additional encoders which takes advantage of the distributed setups required today (Fig. 3). We provide a very in-depth analysis of multi-encoder MLLMs, to a level of detail not taken by other works (Sec. 3 and 4).
We illustrate behaviors of these large pretrained models that reflect their training data and inherent biases, which are not commonly understood (Appendix, Fig. 10) (e.g., SigLIP-trained models are biased towards textual data). Additionally, one underrated choice we made was to deliberately focus on a single training mix so that we could control for data and understand these models and our choices. The video domain has many quirks not present in images, so discovering additional settings which work is nontrivial. Finally, we summarize our findings and share insights not covered in prior research:

Insight 1: What works for integrating multiple RGB encoders into a single VideoLLM:
1. Select encoders from distinct backbone families.
2. Align features spatio-temporally before fusion.
3. 2D average pooling (no parameters) is the best pre-fusion projector.
4. The optimal token size per frame is 64 (searched from 1 to 256).
5. Stage 2-only tuning is fast (43% of the time) with similar results; unfreezing the LLM in Stage 1 boosts alignment and benchmark scores.

Insight 2: What doesn't work:
1. Assembling random encoders can hurt performance.
2. No pre-fusion strategies lead to worse results and higher FLOPs.
3. Other projectors (class token, 3D avg, perceiver-style attention, 2D/3D conv) perform worse and cost more.
4. Too small/large token sizes per frame degrade performance.
5. Alternative fusion methods (concat, equal weighting, learnable scalars) are suboptimal considering both performance and efficiency.
6. Training only projectors and fusion in Stage 1, or mixing stages, performs worse than explicit two-stage training.

Insight 3: Computational time doesn't scale linearly with the number of visual encoders used, as one might initially think, thanks to multi-GPU model parallelism. Using more encoders adds very minimal overhead (Fig. 3) since they can run in parallel and are small compared to the LLM.
Insight 4: MERV outperforms prior works by up to 4.62% with minimal added parameters and faster training. It leverages multiple encoders effectively, with each encoder truly contributing, without trading off performance between specializations as single-encoder models do. This offers a free alternative to scaling training data: scaling model understanding by scaling visual encoders instead.

Insight 5: MERV provides better interpretability into the inner workings of visual encoders and the decision-making process of MLLMs. Its cross-attention weights activate on corresponding videos, allowing it to capture the visual strengths of different encoders and intuit both motion and general understanding simultaneously.

Summary: As elaborated above, we are not simply making use of more encoders, nor does MERV just provide a simple strategy that most researchers would already know to be effective. As Reviewers 44Cd and 1K5Z agree, MERV introduces a novel direction and is the result of extensive effort and care in exploring the right design choices. We have provided many insights into how to effectively encode information using a single/few encoder(s).
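The pre-fusion projector named in Insight 1 (parameter-free 2D average pooling down to 64 tokens per frame) is simple enough to sketch directly. The 16x16 input patch grid and the feature dimension below are illustrative assumptions for the sketch, not values taken from the rebuttal.

```python
def avg_pool_2d(frame_tokens, grid, out_grid):
    """Parameter-free pre-fusion projection: average-pool a [grid x grid]
    layout of D-dim patch tokens down to [out_grid x out_grid] tokens
    (e.g., a 16x16 = 256 patch grid -> 8x8 = 64 tokens per frame).
    Assumes grid is divisible by out_grid; a sketch of the idea only."""
    k = grid // out_grid  # pooling window size along each axis
    D = len(frame_tokens[0])
    pooled = []
    for oy in range(out_grid):
        for ox in range(out_grid):
            # Average the k*k patch tokens covered by this output cell.
            cell = [frame_tokens[(oy * k + dy) * grid + (ox * k + dx)]
                    for dy in range(k) for dx in range(k)]
            pooled.append([sum(t[j] for t in cell) / len(cell)
                           for j in range(D)])
    return pooled


# 256 patch tokens (16x16 grid, D=4) -> 64 tokens per frame.
tokens = [[float(i)] * 4 for i in range(256)]
out = avg_pool_2d(tokens, grid=16, out_grid=8)
assert len(out) == 64 and len(out[0]) == 4
```

Since the pooling has no learnable parameters, it adds essentially zero training cost, which is consistent with the rebuttal's point that heavier projectors performed worse while costing more.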
When Every Millisecond Counts: Real-Time Anomaly Detection via the Multimodal Asynchronous Hybrid Network
Accept (spotlight poster)
Summary: This paper presents a multimodal asynchronous hybrid network for real-time anomaly detection in autonomous driving scenarios by combining event-camera streams with RGB-camera data. The method uses an asynchronous Graph Neural Network for event-stream processing and a CNN for spatial feature extraction from RGB images. This approach effectively captures spatiotemporal dynamics in driving environments, enabling fast and accurate anomaly detection.

Claims And Evidence: The paper claims that existing methods focus on improving detection accuracy while neglecting inference speed, but it doesn't fully consider existing real-time anomaly detection methods, resulting in an insufficient literature review. Some sentences are poorly constructed and show obvious translation marks, affecting clarity. Figure 1 lacks explanation, and the method's innovation is questionable, as the GNN and ResNet it uses are common in anomaly detection. The multimodal fusion is just simple sampling and merging, without in-depth modeling of the complex relationships between modalities.

Methods And Evaluation Criteria: The proposed multimodal asynchronous hybrid network is logically sound. By employing lightweight networks and straightforward fusion strategies, it enhances the real-time performance of anomaly detection.

Theoretical Claims: The paper does not explicitly propose theoretical claims requiring validation or proofs. Its primary contribution lies in introducing a novel multimodal network architecture for real-time anomaly detection, validated experimentally for effectiveness and superiority.

Experimental Designs Or Analyses: The experimental design is generally reasonable, with extensive evaluations on multiple benchmark datasets demonstrating the method's advantages in accuracy and response time. However, the selection of baseline methods in Table 1 is problematic: most compared methods are outdated and not real-time approaches.
Including state-of-the-art real-time methods would strengthen the results’ credibility. Supplementary Material: Yes, the supplementary material was reviewed, including additional qualitative evaluations and experiments. Relation To Broader Scientific Literature: This work focuses on real-time anomaly detection and proposes a multimodal asynchronous hybrid network, which offers some innovation in autonomous driving research. However, the paper overlooks existing methods specifically targeting low-latency anomaly detection, weakening the persuasiveness of its contributions and results. Essential References Not Discussed: The paper fails to discuss key works dedicated to real-time anomaly detection, such as algorithms or architectures optimized for low-latency performance. Other Strengths And Weaknesses: Other Strengths: Clear visualizations highlight the method’s advantages, and the experimental design and analysis are thorough. Other Weaknesses: Inadequate literature review, unclear phrasing in sections, limited methodological novelty, and suboptimal baseline selection. Additionally, the claim that "most existing methods focus on accuracy but ignore speed" is questionable, as numerous real-time approaches exist in the field. Other Comments Or Suggestions: See "Other Strengths and Weaknesses" above. Questions For Authors: My main concerns are outlined in "Other Strengths and Weaknesses." Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your inspiring review and actionable suggestions. Below, you will find our detailed responses to your questions. > **Q1**: Real-time Anomaly Detection Methods We did omit some real-time anomaly detection papers, but these methods are not applicable to the traffic domain. For example, AED-MAE (CVPR 2024)[1] is a state-of-the-art anomaly detection method for surveillance videos that focuses on detection speed. However, it incorporates static background information into its approach and operates at the frame level. In autonomous driving, interference from dynamic backgrounds would significantly degrade its performance. Similarly, industrial anomaly detection methods such as EfficientAD (WACV 2024) [2] are also highly efficient but operate at the frame level, making them unsuitable for the complex environments of autonomous driving. MOVAD (ICASSP 2024)[3] is the first method to propose online real-time anomaly detection specifically for autonomous driving. We compare our approach with these three methods, and our method achieves the best balance between accuracy and speed, specifically designed to meet the unique requirements of autonomous driving. A key innovation of our work is its focus on achieving robust performance in dynamic environments. We tested these three models on ROL and DoTA; the results are shown in the following table.
|Method|Type|AUC-Frame(%) ROL|AUC-Frame(%) DoTA|mTTA(s) ROL|mTTA(s) DoTA|mResponse(s) ROL|mResponse(s) DoTA|FPS|
|---|---|---|---|---|---|---|---|---|
|EfficientAD|Frame|0.519|0.549|0.89|0.97|3.65|2.68|557|
|AED-MAE|Frame|0.571|0.652|1.01|1.35|3.36|2.79|1655|
|MOVAD|Frame|0.719|0.821|2.47|2.55|2.61|1.33|158|
|Ours|Object|0.736|0.823|2.80|2.78|2.35|1.21|579|
> **Q2**: Inadequate literature review In fact, our comparison methods already include the SOTA in traffic anomaly detection, such as MAMTCF (ArXiv 2023), AM-NET (IEEE TIV 2023), and TTHF (IEEE TCSVT 2024).
In our revised manuscript, we will expand Table 1 to incorporate additional recent methods, including three state-of-the-art real-time anomaly detection approaches—EfficientAD, AED-MAE and MOVAD. We will clearly articulate the distinctions between existing video-frame-based real-time anomaly detection methods and our object-centric framework, which is specifically designed to address the challenges inherent in autonomous driving scenarios. Additionally, we acknowledge that our original claim that "most existing methods focus on accuracy but ignore speed" may have been too broad. In the revision, we will refine this statement to "most existing object-centric methods focus on accuracy but ignore speed" to more accurately reflect the current state of the art. > **Q3**: Methodological Novelty Our method is the first to utilize both event streams and RGB data for traffic anomaly detection, making it a novel contribution. The key innovation of our approach lies in leveraging the unique characteristics of event streams as critical features for road traffic anomaly detection. Through graph-based modeling in asynchronous GNNs, we perform neighborhood aggregation only on active event nodes, combined with a lookup table acceleration mechanism in Spline convolution, enabling efficient feature extraction. Moreover, the continuity of event streams allows the model to perform anomaly detection between RGB frames (Figure 4), enabling early anomaly detection, which is another key innovation of this work. Finally, by incorporating global and object-level graph modeling along with a dynamic attention mechanism, the model leverages a GRU module to capture the temporal dependencies of these features, focusing on anomalous motion patterns of specific objects, thereby achieving accurate anomaly identification. > **Q4**: Figure 1 Figure 1 illustrates our anomaly detection process for autonomous driving. 
When a leading vehicle exhibits abnormal behavior, the model's anomaly score gradually increases until it surpasses the anomaly score threshold. Therefore, our primary focus is on whether the anomaly score can quickly reach the threshold when detecting an anomalous object. This process involves two key time components: \(T_{\text{inference}}\), which represents the model's processing latency, and \(\Delta T_{\text{detection}}\), the additional time required to recognize and confirm the anomaly—essentially, the time it takes for the anomaly score to reach the threshold. The sum of these components defines the overall response time, which reflects both the inference speed of our model and the latency in anomaly detection (i.e., how quickly an object at risk of becoming anomalous is identified). **References** [1] Self-Distilled Masked Auto-Encoders are Efficient Video Anomaly Detectors, CVPR 2024. [2] EfficientAD: Accurate Visual Anomaly Detection at Millisecond-Level Latencies, WACV 2024. [3] Memory-augmented Online Video Anomaly Detection, ICASSP 2024.
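The response-time decomposition described above (response time as the sum of \(T_{\text{inference}}\) and \(\Delta T_{\text{detection}}\)) can be sketched numerically. The score trajectory, threshold, and frame period below are hypothetical illustration values, not figures from the paper; only the ~579 FPS inference rate is taken from the rebuttal.

```python
# Sketch of the response-time decomposition T_response = T_inference + dT_detection.
# Scores, threshold, and frame period are hypothetical illustration values.

def detection_delay(scores, threshold, frame_period_s):
    """Time from the first frame until the anomaly score crosses the threshold."""
    for i, s in enumerate(scores):
        if s >= threshold:
            return i * frame_period_s
    return None  # threshold never reached

scores = [0.10, 0.18, 0.35, 0.52, 0.71, 0.86]  # per-frame anomaly scores
t_inference = 1 / 579                           # per-frame latency at ~579 FPS
dt_detection = detection_delay(scores, threshold=0.7, frame_period_s=0.05)
t_response = t_inference + dt_detection
print(round(dt_detection, 2), round(t_response, 4))  # 0.2 0.2017
```

Under this decomposition, a faster model shrinks only the first term; the second term also depends on how quickly the scoring head reacts to the anomalous object, which is what mResponse captures.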
Summary: The paper presents a real-time anomaly detection framework for autonomous driving through a novel multimodal asynchronous hybrid network. The method integrates high-temporal resolution event data from event cameras with spatially rich RGB images processed by a CNN, combined with an asynchronous GNN. Temporal dependencies are captured via a GRU enhanced with an attention mechanism, which enables rapid and accurate anomaly detection. Experiments on the ROL and DoTA datasets show that the proposed approach outperforms current methods in both detection accuracy (e.g., improvements in AUC, AP) and responsiveness (nearly millisecond-level inference at approximately 600 FPS). Claims And Evidence: Yes 1. The authors claim substantial improvements in detection accuracy and inference speed. 2. These claims are substantiated by comprehensive experiments, including comparative studies and ablation experiments that report key metrics such as AUC, AP, frame-level AUC, and mean Time-to-Accident (mTTA). 3. The evaluation is limited to the ROL and DoTA datasets, which may not fully represent complex real-world scenarios (e.g., extreme weather, low-light conditions). 4. Additional experiments on diverse conditions would strengthen the evidence regarding the method’s generalization capabilities. Methods And Evaluation Criteria: Yes 1. The integration of event and RGB data is well-justified, effectively capturing both dynamic and static scene information. 2. The choice of an asynchronous GNN, along with GRU-based temporal modeling and attention mechanisms, is well aligned with the goal of reducing inference latency. 3. It would be beneficial to include comparisons with Transformer-based models or other lightweight asynchronous techniques to better position the advantages of the proposed asynchronous GNN. 4. A deeper discussion on the trade-offs between various event processing methods could provide further clarity on the design choices. Theoretical Claims: 1. 
The paper primarily focuses on algorithmic innovation and empirical validation rather than formal theoretical proofs, which is appropriate given its applied focus. 2. The intuitive arguments supporting asynchronous processing and multimodal fusion are well supported by experimental results. 3. Given that the v2e conversion process can introduce errors, an analysis of how such errors might propagate through the model and impact overall performance would add further rigor to the study. Experimental Designs Or Analyses: 1. The experimental setup is thorough, with extensive comparisons to state-of-the-art baselines and detailed ablation studies that isolate the contributions of individual components (e.g., GRU, attention, BBox modules). 2. The analyses convincingly demonstrate the model’s improvements in performance metrics. 3. The paper would benefit from additional experiments testing inference speed across diverse computing platforms. 4. Module-level ablation studies, particularly regarding the impact on detection performance when components like GRU or attention are removed, could help clarify the contributions of each module. 5. A comparative analysis of the computational overhead of the asynchronous GNN versus other lightweight event processing methods is also recommended. Supplementary Material: The supplementary material, including additional ablation studies and qualitative evaluations (e.g., visualizations of anomaly score trajectories and challenging scenario examples), provides valuable insights into the model’s behavior. Relation To Broader Scientific Literature: The contributions are well-situated within the existing literature on anomaly detection, event-based vision, and sensor fusion in autonomous driving. The paper builds upon established methods such as ResNet, YOLOX, and asynchronous GNNs, while addressing the critical issue of latency. 
It extends prior work by demonstrating that multimodal fusion can simultaneously achieve high detection accuracy and rapid response, a balance that is crucial in safety-critical applications. Essential References Not Discussed: While the paper cites many relevant works, it could benefit from discussing additional recent advances in asynchronous processing and multimodal fusion techniques, especially those addressing real-time constraints in other domains. Including references to cutting-edge approaches in temporal modeling or sensor fusion from the latest conferences could provide a more comprehensive context. [1] Dissecting Multimodality in VideoQA Transformer Models by Impairing Modality Fusion [2] InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions Other Strengths And Weaknesses: 1. Limited discussion on the performance under diverse real-world conditions such as adverse weather or low-light scenarios. 2. The paper could include a more detailed theoretical analysis or discussion of potential failure cases and limitations. Other Comments Or Suggestions: More qualitative comparisons, such as side-by-side visualizations of detection outputs in challenging scenarios, would further enhance the clarity of the presentation. Questions For Authors: 1. Can you provide more details on the computational overhead of the asynchronous GNN module compared to synchronous processing methods? 2. Have you conducted sensitivity analyses on the hyperparameters related to the fusion strategy and detection thresholds? How stable is the model’s performance across different settings? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > **Claims 3**: V2E Transformation Please refer to the response to Weakness in Reviewer rGL6 and Q1 in Reviewer WT2W. > **Exp 3**: Inference Speed We evaluated inference speed on different platforms: RTX3090, RTX4090, and A100-80G. The RTX4090 achieved the fastest speed at 603 FPS, followed by the A100-80G at 579 FPS and the RTX3090 at 517 FPS. These modest differences demonstrate that our approach consistently delivers real-time performance across various hardware configurations. > **Exp 4**: Ablation Studies Our ablation studies show that both the GRU and attention modules are critical. The GRU captures long-term dependencies to improve temporal feature aggregation and stability, raising the mTTA from 1.44 to 1.98 seconds. Meanwhile, the attention module focuses on key regions—especially when fusing RGB images with event stream data—ensuring the model prioritizes the most informative cues. Together, these modules enable our full model to achieve an AUC of 0.879, demonstrating their complementary benefits for anomaly detection. |RGB|Event|GRU|Attention|AUC|AP|AUC-Frame|mTTA|mAP| |-|-|-|-|-|-|-|-|-| |√|√||√|0.819|0.498|0.657|1.52|43.77| |√|√|√||0.817|0.508|0.668|1.98|43.59| |√|√|||0.805|0.479|0.648|1.44|41.66| |√|√|√|√|0.879|0.570|0.736|2.80|45.15| > **Exp 5**: Event Processing We compared our asynchronous GNN with two lightweight methods: [AsyNet](https://arxiv.org/abs/2003.09148), which uses sparse convolutions, and [AEGNN](https://arxiv.org/abs/2203.17149), which employs asynchronous GNNs. Our experiments show that our approach achieves better detection performance with far lower computational overhead. Specifically, AsyNet uses 367 MFLOPs per event, AEGNN uses 10.98 MFLOPs, and our method only uses 8.732 MFLOPs per event. This efficiency and accuracy highlight the benefits of our approach for processing event-based data. 
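The asynchronous-update idea behind these per-event MFLOP numbers can be illustrated with a toy sketch: when a new event arrives, only nodes inside its spatiotemporal neighborhood are touched, so per-event work stays bounded regardless of scene size. The graph structure, radius, time window, and feature rule below are simplified placeholders, not the paper's Spline-convolution GNN.

```python
# Toy asynchronous event-graph: each new event updates only its local
# spatiotemporal neighborhood (a simplified placeholder, not the paper's
# actual AEGNN/Spline-convolution implementation).
from math import dist

class AsyncEventGraph:
    def __init__(self, radius=3.0, time_window=0.05):
        self.radius = radius            # spatial neighborhood radius (pixels)
        self.time_window = time_window  # temporal window (seconds)
        self.nodes = []                 # list of (x, y, t, feature)

    def insert(self, x, y, t):
        # Drop nodes that fell out of the temporal window.
        self.nodes = [n for n in self.nodes if t - n[2] <= self.time_window]
        # Neighborhood aggregation only over spatially close, recent nodes.
        neighbors = [n for n in self.nodes if dist((x, y), (n[0], n[1])) <= self.radius]
        feature = 1.0 + sum(n[3] for n in neighbors) / (len(neighbors) + 1)
        self.nodes.append((x, y, t, feature))
        return len(neighbors)  # work done scales with neighborhood size only

g = AsyncEventGraph()
print(g.insert(0, 0, 0.00))    # no neighbors yet -> 0
print(g.insert(1, 1, 0.01))    # one neighbor within radius -> 1
print(g.insert(50, 50, 0.02))  # far away -> 0 neighbors
```

The key property is that `insert` never iterates over the whole scene's history, only over nodes surviving the time window, mirroring why asynchronous processing avoids the cost of dense frame-based operations.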
> **Ref**: Related Work In the revised manuscript, we will discuss recent advances [1,2] in asynchronous processing and multimodal fusion that address real-time constraints. For example, recent video question-answering work uses non-parametric probes like QUAG to decouple and evaluate both intra- and inter-modal interactions. > **Weaknesses 1**: Adverse Weather or Low-light Scenarios We collected severe-weather and low-light scenes from the ROL and DoTA datasets and ran comparative experiments against the two best-performing methods from our main comparison. Benefiting from the advantages of event cameras in extreme lighting scenarios, we surpass the other methods by a large margin in this setting.
|Method|AUC|AP|AUC-Frame|mTTA|mResponse|
|---|---|---|---|---|---|
|STFE|0.631|0.397|0.582|1.26|3.12|
|TTHF|0.652|0.416|0.594|1.31|2.98|
|Ours|0.719|0.442|0.612|1.47|2.35|
> **Weaknesses 2**: Failure Cases [Figure R3-1](https://anonymous.4open.science/r/RTAD-3B3C/R3-1.png) shows that our system's performance relies heavily on the object detector's accuracy. In Frame 25 and Frame 30, blurry images and small objects led to a complete detection failure, which in turn caused the anomaly detection framework to fail. Although an object was detected in Frame 35 when it was very close to the ego vehicle, the attention score was only 0.56 due to the missing previous detections, preventing the GRU from building a robust temporal feature set. These cases highlight that our approach is limited by the reliability of the object detector, and improvements in detection accuracy are essential. > **Suggestions 1**: Visualizations [Figure R3-2](https://anonymous.4open.science/r/RTAD-3B3C/R3-2.png) shows that our target-level attention module assigns varying scores to bounding boxes. For instance, a vehicle cutting in starts with a low score (0.21 in Frame 20) that rises as it approaches (0.45 in Frame 25 and 0.71 in Frame 30).
This indicates that our attention mechanism, combined with event stream features, effectively emphasizes fast-moving objects, leading to higher anomaly scores. > **Q1**: Asynchronous GNN Our asynchronous GNN module requires only 8.732 MFLOPs per event—far lower than synchronous methods (74,559 MFLOPs for Events+YOLOv3, 6,984 for RED, and 27,659 for ASTM-Net). This efficiency is achieved by processing events asynchronously, eliminating the heavy cost of frame-based operations. > **Q2**: Sensitivity Analyses [Figure R3-3](https://anonymous.4open.science/r/RTAD-3B3C/R3-3.png) shows that the model remains robust across different settings. At the standard 0.5 IOU and 0.5 confidence thresholds, the model achieves an AUC of 0.879. As shown in [Figure R3-4](https://anonymous.4open.science/r/RTAD-3B3C/R3-4.png) and [Figure R3-5](https://anonymous.4open.science/r/RTAD-3B3C/R3-5.png), increasing these thresholds sharply reduces AUC by filtering out more detection boxes, while lowering them causes only a gradual decline as false positives are effectively down-weighted. Overall, the AUC ranges from 0.65 to 0.879, confirming the stability of our configuration. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' thorough and detailed responses to my concerns and questions. The additional experiments provided, especially regarding inference speed across multiple platforms, extensive ablation studies, computational overhead comparisons, sensitivity analyses, and performance evaluations under adverse conditions, comprehensively address the points raised in my original review. The inclusion of failure case analyses and qualitative visualizations further clarifies the strengths and limitations of the proposed method. Given the substantial improvements made and the insightful clarifications provided, I am fully satisfied with the authors' rebuttal and now strongly support the acceptance of this submission.
--- Reply to Comment 1.1.1: Comment: We sincerely appreciate your thoughtful response and your recognition of our rebuttal.
Summary: This paper introduces a multimodal asynchronous hybrid network designed for real-time anomaly detection in autonomous driving. The main contribution lies in combining high-temporal-resolution event stream data captured via event cameras, processed asynchronously using a GNN, with spatial information extracted from RGB images using CNNs. This integration leverages the strengths of both modalities (temporal responsiveness from event streams and spatial detail from RGB data), enabling high-accuracy anomaly detection with exceptionally low response times (millisecond-level). Extensive experimentation on benchmark datasets (ROL and DoTA) demonstrates superior performance in accuracy and significantly reduced latency compared to existing methods, validating the practical efficacy of the approach. Claims And Evidence: - The claims regarding the model's capability to achieve high accuracy and minimal response times through multimodal asynchronous data integration are clearly supported by extensive experiments and ablation studies. - The empirical evidence presented, particularly in Table 1 and Table 2, convincingly supports the model's superiority in accuracy (AUC, AP) and response time metrics compared to existing state-of-the-art methods. - The paper employs the v2e method to convert traditional video into event stream data. This conversion may not fully capture the noise characteristics and the specific data format of real event cameras, potentially affecting the model's generalization performance. Methods And Evaluation Criteria: - The proposed integration of RGB images with asynchronous event data via an asynchronous GNN and CNN is novel, effectively utilizing the complementary strengths of both modalities. - The evaluation metrics employed (AUC, AP, AUC-Frame, mTTA, mResponse) are comprehensive and well-suited to reflect both accuracy and real-time performance.
- The paper might benefit from additional insights or a more detailed discussion of why the two-stage design's contribution differs significantly across scenarios, which was evident in the results. Theoretical Claims: - The theoretical claim regarding the efficiency and latency benefits provided by asynchronous event streams and the sharpness-aware features of the proposed network is well-supported by experimental validation. - Since the v2e conversion is theoretically prone to errors, it would be beneficial to analyze how these errors affect the overall results. Experimental Designs Or Analyses: - Experimental designs are thorough, carefully constructed, and address key practical and theoretical considerations relevant to real-time anomaly detection. - Although the method shows an advantage in inference speed, a more detailed analysis of computational overhead (e.g., FLOPs, GPU/CPU runtime) is recommended. Additionally, evaluating its feasibility on low-power devices would provide further insights into its practical deployment. Supplementary Material: I read the whole appendix. The experiments provide more convincing validation of the effectiveness. Relation To Broader Scientific Literature: The paper effectively contributes to broader fields, particularly autonomous driving and safety-critical real-time applications, by providing a novel, easily adaptable model that emphasizes response time. Essential References Not Discussed: While comprehensive, the authors might also benefit from discussing recent advances in multimodal and asynchronous processing models outside the driving anomaly detection domain.
[1] Combining events and frames using recurrent asynchronous multimodal networks for monocular depth prediction [2] AEGNN: Asynchronous Event-based Graph Neural Networks Other Strengths And Weaknesses: The clarity of the paper is excellent, and its originality stems from a meaningful integration of event-based cameras with conventional CNN processing, addressing a practical need for low latency in autonomous systems. Other Comments Or Suggestions: The paper does not discuss in depth the explainability of abnormal events. For example, which features are the most critical? Is it possible to visually analyze how the model determines anomalies? This is crucial to improving credibility and deployment security. Questions For Authors: - Could you elaborate on how errors or artifacts introduced by synthetic event data generation affect anomaly detection accuracy in real-world applications? - Have you explored the interpretability of the model, specifically regarding visual explanations or attention maps, and how anomalies are identified? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We greatly appreciate your insightful feedback and practical recommendations. Detailed responses to your questions are listed as follows. > **Methods And Evaluation Criteria 3**: Two-stage Contribution (1) Our two-stage model first performs object detection to generate bounding boxes, providing precise localization and object-level features for subsequent anomaly detection. In complex traffic scenarios—such as multi-vehicle interactions or crowded pedestrian areas—this initial detection stage enhances object recognition accuracy. (2) For example, when a vehicle changes lanes suddenly, the bounding-box prior helps quickly lock onto the anomalous object, reducing false detections. (3) Moreover, in fast-moving or low-light scenarios where RGB images suffer from motion blur or poor illumination, asynchronous event streams capture fine luminance changes to compensate. This dual-modality support boosts detection robustness, as shown in our experiments. (4) In contrast, while a single-stage model can meet millisecond-level response requirements by sharing features and reducing redundant computation (achieving a higher FPS), it may sacrifice some accuracy. Thus, the two-stage model trades increased latency for improved detection accuracy, making it well-suited to scenarios that demand high reliability, such as highway cruising. > **Experimental 2**: Computational Overhead (1) Our method requires only 8.732 MFLOPs per event and uses 23.5 GB of GPU memory. Under typical conditions in the ROL dataset, with an average event rate of 560k EV/s, this overhead is manageable. In high-speed driving scenarios—where the event rate can rise to 1–10M EV/s—the worst-case computational load is approximately 87.32 TFLOPs. (2) Current autonomous driving chips such as Atlan, Thor, and Orin can fully deploy our algorithm on-board, while chips like Xavier and Parker, although sufficient for normal driving, might face challenges under extreme conditions.
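The worst-case load quoted above follows from a straight per-event multiplication; a quick sanity check using the rebuttal's own figures (8.732 MFLOPs per event, 560k events/s typical, 10M events/s peak):

```python
# Back-of-envelope check of the per-event compute figures in this rebuttal.
MFLOPS_PER_EVENT = 8.732   # MFLOPs per event (from the rebuttal)
TYPICAL_RATE = 560_000     # events per second, typical driving
PEAK_RATE = 10_000_000     # events per second, high-speed worst case

def load_tflops(event_rate):
    """Compute load in TFLOP/s for a given event rate."""
    return MFLOPS_PER_EVENT * 1e6 * event_rate / 1e12

print(round(load_tflops(TYPICAL_RATE), 2))  # 4.89 TFLOP/s (typical)
print(round(load_tflops(PEAK_RATE), 2))     # 87.32 TFLOP/s (worst case)
```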
The specifications of some representative chips are as follows:
|Chip|Architecture|Compute Power|
|-|-|-|
|Parker|2×Denver|1 TOPS|
|Xavier|4×ARM Cortex-A57|30 TOPS|
|Orin|Nvidia custom Carmel|254 TOPS|
|Atlan|12 × Arm Cortex-A78 (ARM64)|1000 TOPS|
|Thor|AE (Hercules) ARM Neoverse v2|2000 TFLOPS @ FP8|
> **References**: Related Work We acknowledge the importance of recent advances in multimodal and asynchronous processing outside the driving anomaly detection domain, and we will cite these two articles in our paper. (1) For instance, [1] proposes a Recurrent Asynchronous Multimodal Network (RAMNet) for monocular depth estimation by combining event cameras and RGB cameras. This approach leverages ConvGRU units to support asynchronous inputs, preserving the high temporal resolution of event data while fusing rich spatial information, which enhances prediction accuracy in dynamic scenes. (2) Similarly, [2] introduces an Asynchronous Event Graph Neural Network (AEGNN) that efficiently processes sparse event data by performing local asynchronous updates within a spatiotemporal graph, significantly reducing computational overhead. > **Q1**: Synthetic Event Data Generation (1) Current traffic anomaly detection datasets do not include a real event modality; our subsequent work will use event cameras to collect such data. (2) V2E addresses this issue by modeling various types of noise. **First**, it introduces temporal noise by utilizing Poisson noise to generate physically consistent noise events. **Second**, it simulates shot noise, which arises from photon statistics under low-light conditions in event cameras. **Third**, it accounts for threshold mismatch, where event trigger thresholds vary across different pixels in real event cameras, by modeling this phenomenon with a Gaussian distribution. For experimental verification of V2E, please refer to the response to Weakness in Reviewer rGL6.
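The threshold-mismatch modeling described above (per-pixel trigger thresholds drawn from a Gaussian) can be illustrated with a toy event generator. This is a sketch of the principle only, not the actual v2e implementation; all pixel values and parameters are made up.

```python
# Toy DVS-style event generation with per-pixel Gaussian threshold mismatch,
# illustrating one of the v2e noise models described above. Not the real v2e code.
import math
import random

random.seed(0)

def make_thresholds(n_pixels, mean=0.3, sigma=0.03):
    """Per-pixel contrast thresholds drawn from a Gaussian (threshold mismatch)."""
    return [max(0.01, random.gauss(mean, sigma)) for _ in range(n_pixels)]

def events_for_pixel(old_intensity, new_intensity, threshold):
    """ON(+1)/OFF(-1) events triggered by a log-intensity change."""
    dlog = math.log(new_intensity) - math.log(old_intensity)
    n = int(abs(dlog) // threshold)
    polarity = 1 if dlog > 0 else -1
    return [polarity] * n

thresholds = make_thresholds(4)
# The same brightness step yields different event counts per pixel,
# because each pixel's trigger threshold differs slightly.
counts = [len(events_for_pixel(100.0, 180.0, th)) for th in thresholds]
print(counts)
```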
> **Q2**: Explainability of Abnormal Events (1) RGB features correspond to appearance, BBox features correspond to position and movement, event stream features correspond to rapid and abnormal object motion, and object-level features are local detailed features. (2) Our model's object-level attention mechanism assigns scores to detected objects, highlighting those more relevant to anomaly detection. As shown in [Figure R2-1](https://anonymous.4open.science/r/RTAD-3B3C/R2-1.png), when a vehicle suddenly cuts in, its attention score increases as it approaches, indicating its growing importance as a potential anomaly. Visualizations of these attention scores across frames illustrate how the model dynamically focuses on critical objects, providing insights into the features it considers vital for identifying anomalies. This enhanced interpretability aids in understanding the model's decision-making process, thereby improving its credibility and deployment security.
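The object-level attention scoring described above can be sketched as a softmax over per-object relevance logits. The feature vectors, weights, and scoring rule here are hypothetical illustrations, not the paper's actual attention module.

```python
# Minimal sketch of object-level attention: each detected object gets a
# relevance logit from its feature vector, and a softmax turns the logits
# into attention scores. Features and weights are hypothetical.
import math

def attention_scores(object_features, weights):
    logits = [sum(w * f for w, f in zip(weights, feats)) for feats in object_features]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Three objects: [speed_cue_from_events, proximity, bbox_growth]
objects = [
    [0.1, 0.2, 0.1],  # distant, slow vehicle
    [0.9, 0.7, 0.8],  # fast object cutting in close to the ego vehicle
    [0.3, 0.1, 0.2],  # pedestrian far from the lane
]
scores = attention_scores(objects, weights=[1.5, 1.0, 1.0])
print([round(s, 2) for s in scores])  # the cutting-in object dominates
```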
Summary: This paper focuses on real-time anomaly detection in autonomous driving, aiming to balance detection accuracy and response time. The core algorithm is a multimodal asynchronous hybrid network that integrates event streams from event cameras with RGB camera image data. Event streams are processed through an asynchronous graph neural network (GNN) to exploit their high temporal resolution, while spatial features are extracted from RGB images with a convolutional neural network (CNN). The combination of the two captures spatiotemporal information about the driving environment, achieving fast and accurate anomaly detection. Claims And Evidence: The claims in the submitted materials are supported by clear and convincing evidence. Methods And Evaluation Criteria: The methods and evaluation criteria proposed in this paper are of great significance for the problem and application of real-time anomaly detection in autonomous driving. Theoretical Claims: This paper does not involve theoretical proof. Experimental Designs Or Analyses: The experimental design is rigorous: multiple advanced methods are selected as baselines for comparative experiments, tested on different datasets to ensure the reliability of the results. The ablation experiments analyze the impact of different modules on performance by gradually removing them, which helps in understanding the role of each part of the model. Supplementary Material: I reviewed the ablation experiments, the two-stage versus single-stage networks, the new datasets, and the qualitative evaluation parts of the supplementary materials. Relation To Broader Scientific Literature: In the field of autonomous driving anomaly detection, existing methods mainly focus on detection accuracy and ignore response time. This paper innovatively incorporates response time as a core performance metric.
The proposed multimodal asynchronous hybrid network improves on traditional methods that rely only on a single modality or on complex neural networks, maintaining accuracy while greatly improving response speed. Essential References Not Discussed: There are no significant missing key references. Other Strengths And Weaknesses: Strength: 1. This paper has strong methodological innovation, with novel multimodal fusion and asynchronous network design. 2. This paper conducts thorough experiments, validating on multiple datasets and metrics, and analyzes the model in depth through ablation experiments. 3. This paper incorporates response time as a core performance indicator, effectively addressing the shortcomings of existing methods in time-sensitive scenarios. The research results have strong practical application value. Weakness: The ROL and DoTA datasets used lack an event modality. Although V2E conversion is used to generate supplements, there are differences between the generated data and real event streams, which may not fully reflect the characteristics of real scenes. Other Comments Or Suggestions: N/A Questions For Authors: The model adopts a multimodal asynchronous hybrid network with a complex structure. As the amount of input data or the complexity of the scene increases, the computational cost of the model may rise significantly. Can the model be extended to adapt to more complex scenarios while ensuring real-time performance? Could Transformer-based architectures further improve detection accuracy? For example, a Transformer with linear attention. What are the limitations of real-world deployment? How does the model scale to different vehicle types or camera configurations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your motivating review and concrete suggestions. Detailed responses to your questions are listed as follows. > **Weakness**: V2E Transformation vs. Real Event Stream (1) Current traffic anomaly detection datasets do not include a real event modality; our subsequent work will use event cameras to collect such data. (2) V2E effectively mimics key DVS sensor traits (Gaussian threshold distribution, temporal noise, leak events, intensity-dependent bandwidth). For example, on N-Caltech 101, using V2E data raised ResNet34 accuracy on real DVS from 81.69% to 83.36%, reaching 87.85% with fine-tuning (vs. 86.74% with real data alone). (3) DSEC[1] is an autonomous driving dataset that contains a real event modality, but since it consists entirely of normal driving, it cannot be used for our anomaly detection. To verify the validity of the data generated by V2E, we extended our evaluation using the DSEC dataset by generating a simulated dataset, **DSEC+V2E**. Training on DSEC for normal samples and testing on ROL and DoTA revealed only minor variations in anomaly detection accuracy, confirming the robustness of our model when incorporating V2E-generated event data. While there are inherent differences between synthesized and real event data, our results clearly demonstrate that the V2E method effectively supplements the event modality and maintains the robustness of the detection model. For more V2E issues, please refer to Q1 in Reviewer WT2W.
|Dataset|DSEC (mAP)|ROL (AUC)|DoTA (AUC)|ROL (AUC-F)|DoTA (AUC-F)|ROL (mTTA)|DoTA (mTTA)|ROL (mR)|DoTA (mR)|
|---|---|---|---|---|---|---|---|---|---|
|DSEC|41.9|0.841|0.857|0.697|0.794|2.24|2.18|1.66|1.79|
|DSEC+V2E|44.2|0.846|0.862|0.712|0.808|2.46|2.37|1.45|1.61|
> **Q1**: Model Scalability and Real-Time Performance (1) The ROL and DoTA datasets already include highly complex scenarios—extreme weather, nighttime conditions, intense lighting, and diverse camera perspectives.
Despite these challenges, our model maintains excellent real-time performance and accuracy. (2) Our multimodal asynchronous hybrid network is designed modularly, allowing us to increase its depth (i.e., number of layers) to handle even more complex data. While deeper networks offer slight improvements in accuracy, they also introduce a small increase in processing delay. (3) Our experiments on the ROL dataset demonstrate the trade-off between accuracy and latency. In summary, our model's modular architecture enables it to adapt effectively to increasingly complex environments with only a minor trade-off between improved accuracy and additional computational delay. (Each layer contains a ResBlock and a look-up-table spline convolution.) |Layers|AUC(%)|AP(%)|AUC-Frame(%)|mTTA(s)|FPS|mResponse(s)| |---|---|---|---|---|---|---| |4|0.879|0.570|0.736|2.80|579|1.17| |5|0.885|0.574|0.739|2.89|312|1.31| |6|0.892|0.577|0.740|2.93|166|1.56| > **Q2**: Transformer-based Architectures Replacing the CNN backbone with a ViT or Swin Transformer enables global feature modeling and boosts detection accuracy by capturing long-range dependencies. However, Transformers add latency; for example, ViT's self-attention has O(N²) complexity. Our experiments confirm that Transformer-based architectures improve accuracy when latency is managed. |Model|AUC (%)|AP (%)|AUC-Frame (%)|mTTA (s)|FPS|mResponse (s)| |---|---|---|---|---|---|---| |Ours(CNN)|0.879|0.570|0.736|2.80|579|1.17| |CNN→Swin|0.881|0.576|0.739|2.85|278|1.44| |CNN→ViT-B|0.886|0.581|0.745|2.87|213|1.51| > **Q3**: Real-world Deployment Our model can be fully deployed on most autonomous driving chips; on a small number of chips, deployment is possible through quantization. (1) Computing resource limitations: under normal conditions the system handles 560k events per second [2], and even at peak loads (up to 10M events/second), the overhead (~87.32 TFLOPs, 42 W) is manageable on chips like Orin.
(2) Although we need two camera devices, transmission delay is negligible: hardware synchronization limits timing errors to ~78 microseconds, while event-camera latency is around 6 ms. For more detailed deployment information, please refer to the response to Experimental 2 in our reply to Reviewer WT2W. > **Q4**: Different Vehicle Types or Camera Configurations Our training data was collected from a diverse range of vehicles, including both sedans and trucks. Because our method is designed around an object-level approach that centers on the target detector rather than the camera or vehicle type, it naturally generalizes across different vehicle configurations. In our experiments we used 20 fps RGB camera data; however, the system can easily benefit from higher-frame-rate cameras in real-world deployments, potentially further enhancing detection performance without compromising the model's scalability. **References** [1] DSEC: a stereo event camera dataset for driving scenarios [2] Prophesee Evaluation Kit - 2 HD
On the Private Estimation of Smooth Transport Maps
Accept (poster)
Summary: This paper leverages Differential Privacy (DP) to address concerns about privacy leakage when estimating the transport map between two distributions derived from user data. The authors propose a DP transport map estimator and establish its statistical guarantees in terms of an upper bound and a minimax lower bound. Claims And Evidence: The authors establish statistical guarantees for the proposed private transport map estimator. However, there is a lack of insight and clarification regarding these bounds, particularly in terms of how they compare to the non-DP version, and how changes in $\epsilon$ affect the convergence rate and the estimation's optimality. The theoretical claim could be strengthened by deeper insights and more detailed explanations. Methods And Evaluation Criteria: The problem addressed in this paper is important. However, the method is not novel; it is a routine development that combines Differential Privacy (DP) with existing transport map estimators. Theoretical Claims: There are no red flags in the theoretical part, but there is a possibility that I may have missed something. Experimental Designs Or Analyses: The performance of the privately estimated map compared to the true map is shown in Figure 2. Although the estimated map visually appears close to the ground truth, there is no quantitative measure to assess the estimation error and confirm its optimality. The figures appear to show results from only a single run of the algorithm, but a single run provides little insight into the algorithm's general performance and variance. What is its average approximation error across multiple simulations? The fact that this is neither displayed nor mentioned is quite concerning. To address privacy concerns, it could be interesting to evaluate the performance of the estimator in cases where the underlying distribution is heavily skewed. 
Supplementary Material: Yes, I briefly went through the proof sketch of the theorem, but there is a possibility that I may have missed something. Relation To Broader Scientific Literature: The main results of this paper could contribute to applications and methodologies involving optimal transport maps using real user data. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: My main concern is with the numerical experiments. Please see the comments under 'Experimental Design and Analysis'. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, We sincerely appreciate the time and effort you have taken to review our paper. Below, we provide detailed responses to each of your comments. - **Comment 1:** *The authors establish statistical guarantees for the proposed private transport map estimator. However, there is a lack of insight and clarification regarding these bounds, [...] and more detailed explanations.* **Response:** We would like to point out that we included elements addressing this question between lines 359 and 362, where we exhibit the regime in which any private algorithm, regardless of the specifics, will necessarily experience performance degradation. We will enhance the discussion after Theorems 4.3 and 5.1 to better explain the bounds in different regimes. For instance, our upper bound recovers the usual optimal rate of estimation in the limit $\varepsilon \to \infty$ (i.e., no privacy). In the regime of high smoothness ($\alpha \to \infty$), we can see that a constant privacy budget does not affect utility. In intermediate regimes, we will ensure to detail this point using the inversion formula. - **Comment 2:** *However, the method is not novel; it is a routine development that combines Differential Privacy (DP) with existing transport map estimators.* **Response:** Our work indeed combines existing transport map estimators with an established Differential Privacy mechanism, but its statistical analysis requires a careful link between these two components, which we believe results in a valuable contribution. The slight mismatch between our upper and lower bounds (which does not exist in non-private transport map estimation) arises as a direct consequence of the additional challenges posed by privacy constraints. A significant portion of the differential privacy literature consists of applying standard mechanisms (e.g., Gaussian, Exponential, etc.) to prescribed quantities, but the challenge lies in quantifying the utility of such procedures. 
- **Comment 3:** *The performance of the privately estimated map compared to the true map is shown in Figure 2. Although the estimated map visually appears close to the ground truth, there is no quantitative measure to assess the estimation error and confirm its optimality & The figures appear to show results from only a single run [...]* **Response:** Our work is primarily theoretical in nature, and the experiments are mainly illustrative of the soundness of our approach. We will add a numerical figure of merit that measures the estimation error (e.g., the $L^2$ error between the two maps as a function of $n$). We can also include the standard deviation when averaging over multiple runs. However, we want to emphasize that the experiments serve to demonstrate that the proposed method is effective, at least in a toy example. The main contribution of the paper is to establish the first statistical bound for the problem at hand. - **Comment 4:** *To address privacy concerns, it could be interesting to evaluate the performance of the estimator in cases where the underlying distribution is heavily skewed.* **Response:** We are unsure of what the reviewer means by "skewed" in this context. If "skewed" refers to cases where some regions have a high probability mass while others are nearly empty, this is indeed an interesting question. However, this scenario is excluded by our assumptions (see Definition 2.1), as it induces instabilities in the estimation procedure, even in the absence of privacy constraints. We believe this is a valuable question for future empirical research, but it is outside the scope of the current paper. We would be happy to discuss any remaining questions during the author-reviewer discussion phase.
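As a back-of-the-envelope companion to the regime discussion in the response to Comment 1 above, the following heuristic shows how a privacy term of the form $(n\varepsilon)^{-2\alpha/(2\alpha+d)}$ can arise from balancing approximation bias against privacy-induced variance at wavelet resolution $J$. This is an illustrative order-of-magnitude sketch under standard wavelet-approximation scalings, not the paper's exact constants or proof.

```latex
\underbrace{2^{-2J\alpha}}_{\text{squared bias}}
\;\asymp\;
\underbrace{\frac{2^{Jd}}{n\varepsilon}}_{\text{privacy variance}}
\;\Longleftrightarrow\;
2^{J(2\alpha+d)} \asymp n\varepsilon
\;\Longrightarrow\;
2^{-2J\alpha} \asymp (n\varepsilon)^{-\frac{2\alpha}{2\alpha+d}}.
```

In this heuristic, the private term vanishes as $\varepsilon \to \infty$ (no privacy) or $\alpha \to \infty$ (high smoothness), consistent with the limiting regimes described in the response.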
Summary: This paper presents a differentially private (DP) estimator for continuous $W_2$ transport maps, obtained from discrete samples from source and target distributions. It builds on top of the framework established by [Hutter & Rigollet 21] for estimating the Brenier potential ($\nabla f$ returns the transport map), via wavelet approximation. Privacy is achieved with a noisy (Laplace) report argmin mechanism, applied to a covering of the admissible family of potentials. They are the first to provide upper and lower bounds for such a DP problem (in terms of $L^2$ error on the transport plan), but optimality of such bounds is not proven. The authors also present a simple practical implementation for a planar scenario that leverages a grid-based discretization. ## Update after rebuttal I appreciate the clarifications from the authors, but remain at a weak accept, especially as some of my main concerns were shared by other reviewers. Claims And Evidence: The paper is primarily theoretical, so the need for empirical evidence is minimal. Detailed proofs are provided for the claims given, which mostly leverage prior analytical tools from [Hutter & Rigollet 21] and established mechanisms from the DP literature. Methods And Evaluation Criteria: Yes, methods are fine. The experiment is small, but that is fine, as the focus is on the analysis and statistical bounds. I could add that it would be nice to see which components of the bounds seem to dominate in which regimes over particular dimensions, sample sizes, etc. Not vital, in my opinion, however. Theoretical Claims: I did not have time to give a detailed check for the theoretical proofs in the appendices, but the arguments in the main text were correct. Experimental Designs Or Analyses: The experiment proposed is minimal, but sufficient. Supplementary Material: I did not review any supplementary material.
Relation To Broader Scientific Literature: The paper is the first to provide some statistical guarantees on DP OT estimation in the continuous setting. The related work does a good job of establishing this and referencing relevant work (or at least, I did not notice any glaring omissions). Essential References Not Discussed: None: see above. Other Strengths And Weaknesses: A main strength of the paper is its clarity. I found the exposition to do a great job presenting and outlining their basic technique and results. If I had to level one clarity complaint, it's that I would have included a few words about the construction of the finite covering family, namely how one maintains convexity of the functions. Also, for reproduction's sake, it would be quite nice to include an implementation of the practical algorithm for the research community. Other Comments Or Suggestions: Explanation of rating: I found the paper to provide an interesting look at a heretofore unexplored problem. I did not give a higher rating than borderline accept, as it was unclear to me if the method is practically useful. For the one example displayed in Figure 2, I was surprised to see the degree of disagreement, given the use of $n=200000$ points. Additionally, I was a bit unsure of the degree of novelty in the theoretical construction, given that it builds so strongly off of a typical DP mechanism and the work of [Hutter and Rigollet 21].
Rebuttal 1: Rebuttal: Dear Reviewer, We sincerely appreciate the time and effort you have taken to review our paper. Below, we provide detailed responses to each of your comments. - **Comment 1:** *I could add that it would be nice to see which components of the bounds seem to dominate in which regimes over particular dimensions, sample sizes, etc. Not vital, in my opinion, however.* **Answer:** We thank you for the suggestion. Since this point was raised in other reviews, we propose to extend the discussion in the relevant paragraph. For instance, our upper bound recovers the usual optimal rate of estimation in the limit $\varepsilon \to \infty$ (i.e., no privacy). In the regime of high smoothness ($\alpha \to \infty$), we can see that a constant privacy budget does not affect utility. In intermediate regimes, we will ensure to detail this point using the inversion formula. - **Comment 2:** *If I had to level one clarity complaint, it's that I would have included a few words about the construction of the finite covering family, namely how one maintains convexity of the functions.* **Answer:** The construction of the covering (detailed in Section E.2) maintains convexity via the algorithm developed in lines 1194-1200. Alternatively, since the set of potentials of interest is closed and convex in a Euclidean space, it is possible to replace the presented procedure with a projection step, which is contractive and would allow us to gain a factor of $2$ in the constant. We emphasize that this construction mainly makes sense from a theoretical point of view in order to obtain the statistical upper bound. In practice, this construction can be challenging, and we recommend checking the convexity of the candidate potentials beforehand. - **Comment 3:** *Also, for reproduction's sake, it would be quite nice to include an implementation of the practical algorithm for the research community.* **Answer:** We thank the reviewer for the suggestion. 
We will include the link to the code on our GitHub in the final version of this article. - **Comment 4:** *Explanation of rating: I found the paper to provide an interesting look at a heretofore unexplored problem. I did not give a higher rating than borderline accept, as it was unclear to me if the method is practically useful. For the one example displayed in Figure 2, I was surprised to see the degree of disagreement, given the use of $n=200000$ points. Additionally, I was a bit unsure of the degree of novelty in the theoretical construction, given that it builds so strongly off of a typical DP mechanism and the work of [Hutter and Rigollet 21].* **Answer:** The degree of disagreement in Figure 2 is an artifact of the evaluation rather than of the estimation. We will update the figure with more precise evaluation and add figures of merit for the disagreement. Since the empirical evaluation was also pointed out by other reviewers, we plan to add to the final version a figure representing the error as a function of $n$, the sample size, as well as error bars. Regarding the degree of novelty in the theoretical construction, we refer to our response to Reviewer tgzq. For instance, the additional error related to privacy has to be precisely controlled in the estimation; this is the purpose of Lemma 2.5, which is crucial to linking smooth map estimation and the differentially private component. When it comes to practical applicability, we believe that our method should be split into two parts: an exploration phase and a selection phase. The exploration phase (the covering construction) can be challenging for practical problems. However, the selection phase (via the empirical semi-dual criterion) is perfectly applicable in practical scenarios where a prescribed set of candidate potentials has been suggested beforehand (e.g., via prior information). We would be happy to discuss any remaining questions during the author-reviewer discussion phase.
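The selection phase mentioned above — picking among prescribed candidate potentials via a noisy empirical score — can be sketched as a generic report-noisy-argmin. This is a minimal illustrative sketch, not the paper's implementation: the function name, the use of a `2 * sensitivity / epsilon` Laplace scale, and the calibration of `sensitivity` are our assumptions for a standard argmin-selection mechanism.

```python
import numpy as np

def noisy_argmin(scores, sensitivity, epsilon, rng=None):
    """Generic report-noisy-argmin: add independent Laplace noise to each
    candidate's score and return the index of the smallest noisy score.

    `sensitivity` is the per-candidate sensitivity of the score to changing
    one data point; scale 2*sensitivity/epsilon is the standard calibration
    for epsilon-DP argmin selection (illustrative, not the paper's constants).
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.laplace(scale=2.0 * sensitivity / epsilon, size=len(scores))
    return int(np.argmin(np.asarray(scores, dtype=float) + noise))
```

In the paper's setting, `scores` would be the empirical semi-dual values of the candidate potentials in the covering, and the selected potential's gradient gives the private transport map.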
Summary: This paper considers the problem of learning the optimal transport map from $P$ to $Q$, given samples from each, under the requirement of (pure) differential privacy. They impose assumptions on smoothness of the optimal map which are standard in the map estimation literature. They obtain an upper bound using a noisy argmin mechanism and a covering argument. Their lower bound employs standard packing arguments. ## Update after Rebuttal I am still borderline. The result is relatively interesting and the domain is new from the perspective of DP, but the privacy tools applied are pretty standard. Use of standard privacy tools is fine - the real question is whether their analysis is novel/significant enough for ICML. I am not sure - the covering and sensitivity analysis seem somewhat routine. If the accompanying lower bound was tight, then it wouldn't be fair to complain about this, but there is still a gap. On the other hand, I do not doubt correctness, and the paper was well-written and nice to read. Claims And Evidence: Yes, this is primarily a theoretical paper, with results and proofs that appear sound. Methods And Evaluation Criteria: They employ the standard evaluation metric for the map estimation task. The technical tools for designing the upper and lower bounds are standard in the literature (but have not yet been applied to this domain) Theoretical Claims: I skimmed through all of the appendix proofs, but did not check them line by line. However, these methods are super natural to apply here so I would be very surprised if there are any major issues. Experimental Designs Or Analyses: The experiments are naturally limited to 2d because of poor computational scaling with dimension. However, I think a statistical contribution is sufficient given that this problem has not been explored before. Plus, when smoothness is low, even the statistical rates without privacy scale poorly. 
(On the other hand, maybe that could enable "efficient" algorithms that scale polynomially in the sample size, since that is already exponential in dimension) Supplementary Material: Yes, I skimmed through all of the proofs. Nothing raised any red flags - the techniques seemed pretty standard, so I do not suspect issues even though I did not verify line by line. Relation To Broader Scientific Literature: They provide good background on the map estimation literature. There is a related problem of privately estimating a distribution/density under Wp. They mention some work in this direction but I note a couple missing references below. Essential References Not Discussed: These works look at the problem of private distribution estimation under Wp and were not included. I think they have tight rates and some efficient algorithms. "Algorithmically Effective Differentially Private Synthetic Data", Yiyun He, Roman Vershynin, Yizhe Zhu, COLT 2023; "Private Measures, Random Walks, and Synthetic Data", March Boedihardjo, Thomas Strohmer, and Roman Vershynin, Probability Theory and Related Fields, 2024. Other Strengths And Weaknesses: I think the novelty/significance of these contributions is a bit borderline. Since this problem has not been studied before, I am definitely open to acceptance. However, the techniques for the upper and lower bounds are all very standard in the literature, and I don't think this problem has gone unstudied due to its difficulty, but rather because it is a bit niche. To calibrate, I would probably accept this paper at a conference like AISTATS but reject for a conference like COLT. I think matching upper and lower bounds, or clear answers to my questions below, would convince me to accept. Other Comments Or Suggestions: I'm sure there is a simple reason why you use the extended domain $\tilde{\Omega}$ over $\Omega$, but I didn't see any discussion. Could you be explicit about that?
It would be nice if the proof of Lemma 2.5 was split into a lemma capturing the needed argument from Hütter & Rigollet and then your own contribution. Questions For Authors: Can you resolve the complexity of this problem in 1D? Here the connection to quantile functions should considerably simplify the problem. What about first doing private density estimation to obtain estimates $\hat{P}$ and $\hat{Q}$ which are accurate under $W_2$, then computing a smooth map for the $W_2(\hat{P},\hat{Q})$ problem (which is private by post-processing)? Analysis isn't obvious to me, but is there anything you can say here? To be fair, I don't think private density estimation has been well-studied under smoothness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, We sincerely appreciate the time and effort you have taken to review our paper. Below, we provide detailed responses to each of your comments. - **Comment 1:** *These works look at the problem of private distribution estimation under Wp and were not included. I think they have tight rates and some efficient algorithms.* **Answer:** We thank you for the suggested references and will include them in the relevant body of literature. After reviewing them, we find that they unfortunately do not seem to exploit smoothness, which means they will probably not lead to better rates for our problem. However, it is possible that their numerical performance is stronger than that of the presented method. - **Comment 2:** *I think the novelty/significance [...] conference like COLT.* **Answer:** Yes, the problem may seem a bit niche, but it still has some practical interest. The estimation of transport maps plays an important role in practical applications. Ensuring the privacy of such transport maps is particularly beneficial in applications involving the infringement of fundamental rights, such as bias analysis in machine learning algorithms. For instance, in [1] and [2], OT maps enable counterfactual analysis of discrimination. For a discussion about matching upper bounds, see our comment about the use of density estimators later in our response. - **Comment 3:** *I'm sure there is a simple reason why you use the extended domain $\tilde{\Omega}$ over $\Omega$, but I didn't see any discussion. Could you be explicit about that?* **Answer:** In a nutshell, this is a consequence of estimating the transport map through smooth Brenier potentials: as the transport map is constructed as the gradient of the potential $T=\nabla f$, $f$ must be defined on a larger domain than $\Omega$ so that the gradient exists everywhere there. Thus, this is not linked to differential privacy but rather to transport map estimation through potentials.
In fact, $\tilde{\Omega}$ could be chosen arbitrarily as long as $\Omega$ is included in its interior and it is a hypercube. The hypercube constraint arises from the support adequation condition described in Appendix A. - **Comment 4:** *It would be nice if the proof of Lemma 2.5 was split into a lemma capturing the needed argument from Hütter & Rigollet and then your own contribution.* **Answer:** In the present proof, we try to clarify what is relevant for privacy and what simply follows the steps of Hütter & Rigollet. Unfortunately, since their result uses more advanced concentration tools than a uniform bound, we did not find a way to split the lemma in a way similar to a triangular inequality. Instead, we had to track the propagation of suboptimality in their proof and make the necessary measurability assumptions. We can add a remark after the result detailing which terms come from Hütter and Rigollet and which terms pertain to privacy. - **Comment 5:** *What about first doing private density estimation to obtain estimates $\hat{P}$ and $\hat{Q}$ that are accurate under $W_2$, then computing a smooth map for the $W_2(\hat{P}, \hat{Q})$... well studied under smoothness.* **Answer:** Indeed, the approach proposed by the reviewer is sound. In fact, private density estimation has even been studied under smoothness (see [5]) for the $L^2$ loss with optimal rates. Furthermore, [4] demonstrated that density estimators can be used as suggested if they are sufficiently precise in $W_2$ distance. However, two problems arise: (i) the estimation must be in $W_2$ distance and not $L^2$, though we are confident that this should not be a problem as the techniques appear to be manageable with privacy, and (ii) it requires extra smoothness assumptions on the source and target distributions (on top of those on the map), limiting the applicability of the method. 
While this method is likely to yield optimal rates (at least in the one-sample setting where $P$ is known), we aimed to study the problem under the purest set of hypotheses. We can add a remark about this direction in the article. - **Comment 6:** *Can you resolve the complexity of this problem in 1D? Here, the connection to quantile functions should considerably simplify the problem.* **Answer:** In 1D, it should be possible to import results from quantiles/CDFs since the transport map is known, but it is unclear whether this will be optimal. We would be happy to discuss any remaining questions during the author-reviewer discussion phase. [1] Transport-based counterfactual models, De Lara et al., JMLR 2024 [2] Fliptest: Fairness testing via optimal transport, Black et al., CFAT 2020 [3] Plugin estimation of smooth optimal transport maps, Manole et al., Annals of Stats., 2024 [4] Discrete Wasserstein barycenters: Optimal transport for discrete data, Anderes et al., Mathematical Methods of Operations Research 2016 [5] Privately Estimating Smooth Distributions on the Hypercube by Projections, Lalanne et al., ICML 2024
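The 1D quantile connection mentioned in the response to Comment 6 above can be made concrete with a small sketch: in one dimension the monotone (optimal) transport map is $T = Q_{\text{target}} \circ F_{\text{source}}$, the target quantile function composed with the source CDF. The sketch below is illustrative only — function names are ours, it works on empirical samples, and privacy noise is omitted.

```python
import numpy as np

def empirical_1d_ot_map(source, target):
    """Monotone 1-D transport map T = Q_target o F_source from empirical
    samples; with equal sample sizes this reduces to matching order
    statistics (i-th smallest source point -> i-th smallest target point)."""
    src_sorted = np.sort(np.asarray(source, dtype=float))
    tgt_sorted = np.sort(np.asarray(target, dtype=float))
    n, m = len(src_sorted), len(tgt_sorted)

    def T(x):
        # rank of x under the empirical source CDF
        ranks = np.searchsorted(src_sorted, np.asarray(x, dtype=float), side="left")
        # generalized inverse CDF (quantile) of the target, integer arithmetic
        idx = np.minimum(ranks * m // n, m - 1)
        return tgt_sorted[idx]

    return T
```

For instance, mapping samples of one distribution onto an affinely shifted copy recovers the affine map on the sample points; a private version would additionally have to release the quantile/CDF estimates through a DP mechanism, which is the open question discussed in the response.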
Summary: The paper studies the private estimation of smooth optimal transport maps using a semi-dual formulation of optimal transport, where the transport map is obtained as the gradient of a Brenier potential. To make the problem tractable, they restrict the function space using standard nonparametric estimation techniques with wavelet bases, following the approach of Hütter & Rigollet, which provides minimax bounds for transport map estimation. The core contribution is the introduction of differential privacy (DP) by injecting Laplace noise into the empirical semi-dual objective function $\hat S(f_i)$, ensuring that the estimation process remains private while preserving statistical guarantees. To implement this, they define a finite set of candidate functions $C_{J, M}$, which forms a discrete covering of the function space $V_J$ (a span of a restricted wavelet basis); the semi-dual objective is then computed for each function $f_i$ in this set, and the classical Laplace mechanism is applied to the values to enforce privacy. The private transport map is then obtained as the gradient of the selected function. The paper rigorously analyzes the trade-off between privacy and statistical accuracy, discusses computational feasibility, and proves upper and lower bounds on the estimation error, demonstrating that privacy constraints degrade the convergence rate but remain reasonable. Unfortunately, the lower and upper bounds do not match. Some numerical experiments provide illustration on toy examples. Claims And Evidence: The claims of the paper are mostly theoretical and supported by formal proof. Methods And Evaluation Criteria: Only toy experiments are presented to illustrate how close the proposed transport maps are to the optimal one. Unfortunately, these synthetic illustrations might not reflect practical performance, especially for today's high-dimensional ML problems. Theoretical Claims: I briefly skimmed the appendix but did not read it in detail.
Experimental Designs Or Analyses: As commented above, the numerical experiments section is rather weak and serves as a toy illustration of the theoretical claims. Supplementary Material: No Relation To Broader Scientific Literature: N.A Essential References Not Discussed: No Other Strengths And Weaknesses: The theoretical analysis is cleanly presented with transparent assumptions. The minimax bound is similar to (Hutter & Rigollet, 2021), $n^{-1} \vee n^{\frac{-2 \alpha}{2 \alpha - 2 + d}}$, with the additional differentially private term $(n \epsilon)^{ \frac{-2\alpha}{2\alpha + d} }$. This additional term is unfortunately not tight; how the privacy mechanism interacts with function approximation is quite mysterious at some points. Other Comments Or Suggestions: The notation $M$ is used several times with different meanings across the text. Questions For Authors: - Assuming the function class $\mathcal{F}$ is parameterized in a finite-dimensional space, and once the problem is formulated using the empirical distributions, the semi-dual problem becomes a finite-dimensional convex optimization problem. Differentially Private (DP) algorithms for convex optimization already exist and could, in principle, be applied directly? Examples in mind (https://arxiv.org/abs/2207.08347 , https://jmlr.org/papers/volume12/chaudhuri11a/chaudhuri11a.pdf , https://proceedings.mlr.press/v23/kifer12.html). - Any plans to include experiments with real datasets or more realistic ML settings where DP OT maps are needed? - The choice of the constant $J$ and the finite function set $C_{J, M}$ remains unclear throughout the paper. It is difficult to keep track of how these quantities are selected, particularly since $J$ appears to depend on the dimension $d$ in an implicit manner, yet its exact scaling is not clearly derived or discussed.
Additionally, the "constant" $R$ is not formally defined (I couldn't find it), making it challenging to understand its role in the function space discretization. Theorem 4.3, which should provide precise details on these constants, remains evasive on their explicit dependence, even in the proof. This lack of clarity significantly affects the practical implementability of the proposed method. ## Update after rebuttal I thank the authors for the clarifications they provided during the discussions. The points on selecting the parameters are a bit more transparent. I maintain my score and do not increase it because the numerical experiments are clearly weak and do not illustrate the initial motivation presented by the authors. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, We sincerely appreciate the time and effort you have taken to review our paper. Below, we provide detailed responses to each of your comments. - **Comment 1:** *Only toy experiments are presented to illustrate how the proposed transport maps are close to the optimal ones. Unfortunately, these synthetic illustrations might not reflect practical performance, especially for modern high-dimensional ML problems. & Any plans to include experiments with real datasets or more realistic ML settings where DP OT maps are needed?* **Response:** The experiments in the paper are primarily intended to illustrate the theoretical soundness of our approach. Specifically, the private estimator we consider has the advantage of yielding tractable statistical bounds. However, it may not be the most efficient choice from a numerical standpoint. This observation extends to minimax estimation of transport maps in general (even without privacy constraints), where entropic approximation performs best in practice but remains theoretically challenging. Exploring this further is an interesting direction for future research. To clarify the practical scope of our work, private maps can be used in fairness applications, for measuring population flux, and in other domains where transport maps may contain sensitive information on individuals. - **Comment 2:** *The theoretical analysis and presentation are clean, with transparent assumptions. The minimax bounds resemble those in (Hutter & Rigollet, 2021) with an additional differentially private term. However, this additional term is not tight, and the interaction between the privacy mechanism and function approximation remains somewhat unclear.* **Response:** Privacy introduces an extra variance term, the magnitude of which depends on the dimension of the function approximation space. This same space also controls the bias of the estimation procedure, with its dependence tuned by smoothness. 
Consequently, the dimension of the functional space serves as a cutoff parameter that balances the bias-variance tradeoff. We will expand the discussion following Theorem 4.3 to clarify the different regimes of the bound and how the private bound differs from the classical case. - **Comment 3:** *Assuming the function class $\mathcal{F}$ is parameterized in a finite-dimensional space, and once the problem is formulated using empirical distributions, the semi-dual problem becomes a finite-dimensional convex optimization problem. Differentially private (DP) algorithms for convex optimization already exist and could, in principle, be applied directly.* **Response:** Indeed, as noted, the problem is convex in $f$ and Lipschitz, and we have explored this direction. However, a key difficulty arises because $ f^* $ (the Fenchel conjugate appearing in the optimization problem) does not have a closed-form expression in terms of the coefficients, nor does its gradient. Additionally, we were unable to establish gradient smoothness or strong convexity, which excludes the suggested references. Furthermore, the absence of strong convexity means that DP-SGD techniques are not in their most favorable cases. Adding regularization could improve this but such biased techniques fall out of the scope of this article. That said, since the objective function is convex and Lipschitz-continuous (see Eq. (40), where we establish a Lipschitz constant scaling as $2^{Jd/2}$), it is almost everywhere differentiable. This suggests that gradient-based optimization might be possible, (although computing $\nabla_{\text{coeffs}} f^*$ remains challenging). Whether such an approach would yield a favorable utility-privacy tradeoff remains an open question. - **Comment 4:** *The choice of the constant $J$ and the finite function set $C_{J,M}$ remains unclear throughout the paper. 
It is difficult to track how these quantities are selected, particularly since $J$ appears to depend implicitly on the dimension $d$ without an explicit derivation. Additionally, the "constant" $R$ is not formally defined (I couldn't find it), making it challenging to understand its role in function space discretization. Theorem 4.3, which should clarify these constants, does not explicitly detail their dependence, even in the proof. This lack of clarity significantly impacts the practical implementability of the proposed method.* **Response:** The introductory paragraph of Section 2 clarifies what is considered a constant and what is not. Specifically, we precisely track all factors involving $J, n, \varepsilon$. The parameter $R$ controls the level of $\alpha$-smoothness, as defined in Equation (4). We will revise the relevant paragraph to make this clearer. For practical applications, as discussed in Section 6, we recommend selecting a set of candidate potentials and using our method as a selection tool. We would be happy to discuss any remaining questions during the author-reviewer discussion phase. --- Rebuttal Comment 1.1: Comment: Thanks for the comments and clarifications. > To clarify the practical scope of our work, private maps can be used in fairness applications, for measuring population flux, and in other domains where transport maps may contain sensitive information on individuals. Yes, it would be nice to showcase experiments for the problems you mention being able to solve. That being said, I agree that the paper is theory-oriented, so I am not requesting those experiments for acceptance. > The introductory paragraph of Section 2 clarifies what is considered a constant and what is not. Still, there are several things to track in the paper, and as of now I would struggle to implement the proposed method based on its presentation. So please provide more guidance on that. 
--- Reply to Comment 1.1.1: Comment: We thank the reviewer for the quick response, allowing us time to provide a detailed answer. Below, we clarify the practical construction and algorithm. ## Choosing $J$ First, we determine $J$ to achieve the best bias-variance tradeoff. Using Equation (24), we set $$ J^* = \min \Big( \Big\lceil \frac{\log_2(n)}{ d - 2 + 2 \alpha} \Big\rceil, \Big\lceil\frac{\log_2(n \epsilon)}{ d + 2 \alpha}\Big\rceil \Big). $$ This choice asymptotically yields an error $$ \lesssim J^* R^2 (\log_2(n ) + \log_2(n \epsilon)) \times \text{Upper-Bound Equation (25)}, $$ where we explicitly track the scaling in $R$ and retain logarithmic terms. ### Constructing the Covering With $J^*$ fixed, we construct the covering. We acknowledge that the current presentation requires backtracking through proofs and that some constants are implicit. We will explicitly detail the construction, either in the main text or in an appendix, depending on length constraints. From Equation (39), controlling the infinite functional norm by the vector's infinite norm requires computing $\sqrt{\text{Vol}(\tilde{\Omega})}$. The article assumes $\text{Vol}(\tilde{\Omega}) = 3^d$, but since $\tilde{\Omega}$ only needs to be a hypercube containing $\Omega$ in its interior, we can instead set $$ \text{Vol}(\tilde{\Omega}) = (1+\gamma)^d $$ for any fixed $\gamma > 0$. ### Enforcing Conditions In Lemma 4.2 and Theorem 4.3, we first construct a $\delta = C / (n \epsilon)$ covering in vectorial infinite norm for the space of wavelet coefficients up to $J^*$, within a radius $C M^2$, where $$ C = 2 \sqrt{\text{Vol}(\tilde{\Omega})}. $$ This covering is straightforward since it requires only axis-wise discretization in the space of wavelets coefficients. The challenging part follows: the conditions on Hessian eigenvalues and the potential’s norm/gradient may not hold. We enforce them via the reasoning in lines 1194-1198. 
While this step is theoretically sound for obtaining the upper bound, we believe it is intractable in practice. ### Grid Approximation To address this, we can approximate using a grid, as in Section 6. This does not affect privacy analysis but may introduce numerical errors that vanish as the discretization step decreases. Candidate potentials are now represented by their grid discretization and live in the span of wavelet discretizations up to level $J^*$. For each candidate in the initial covering, we check whether a nearby potential (within distance $\delta$) satisfies the conditions on the Hessian, gradient, and potential itself, following Definition 2.3 and lines 1194-1198 in a discretized manner. At this stage, existence testing reduces to a convex optimization problem (without privacy because the covering construction is agnostic to data)—minimizing the infinite norm of a vector under linear constraints, since numerical differentiation schemes are linear operators. A numerical solver can efficiently handle this step. The set $C_{J, M}$ is thus constructed using this quasi-projection approach (lines 1194-1198) applied to each candidate potential from the initial covering. ## Conclusion This concludes the "practical" implementation of our estimator. While computationally expensive, it was designed for theoretical purposes. However, we emphasize that the selection rule in Section 3 remains valid for any set of convex potentials with bounded infinite norm—ensuring meaningfulness in optimal transport theory and privacy analysis. The complexity mainly lies in generating candidate potentials (which is not due to privacy but rather to the difficulty of the non-parametric nature of the potential estimation as in [Hutter&Rigollet,2021]), whereas the private selection procedure itself is more practical. Thus, with a realistic potential generator (e.g., leveraging prior knowledge), our private selector should yield strong utility in practice.
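The cutoff rule for $J^*$ described above can be sketched in a few lines. This is a hedged illustration of the rebuttal's formula only; the function name and interface are ours, not from the paper, and it assumes $d - 2 + 2\alpha > 0$ so both denominators are positive.

```python
import math

def optimal_resolution(n, eps, d, alpha):
    """Bias-variance cutoff J* from the rebuttal's formula:
    J* = min(ceil(log2(n)/(d-2+2*alpha)), ceil(log2(n*eps)/(d+2*alpha))).

    n: sample size, eps: privacy parameter epsilon,
    d: ambient dimension, alpha: smoothness level.
    Assumes d - 2 + 2*alpha > 0.
    """
    j_nonprivate = math.ceil(math.log2(n) / (d - 2 + 2 * alpha))
    j_private = math.ceil(math.log2(n * eps) / (d + 2 * alpha))
    return min(j_nonprivate, j_private)

# e.g. n = 1024 samples, eps = 1, dimension d = 4, smoothness alpha = 1:
# the private term binds and gives J* = 2.
print(optimal_resolution(1024, 1.0, 4, 1))
```

For a fixed $n$, shrinking the privacy budget $\varepsilon$ lowers the private term, so the privacy constraint (rather than the sample size) determines the resolution cutoff in the high-privacy regime.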
When do neural networks learn world models?
Accept (poster)
Summary: The paper explores whether neural networks can develop internal world models akin to those humans create to understand the underlying data generation processes. The research presents the first theoretical results in this area, demonstrating that in a multi-task setting, models with low-degree bias can recover latent data-generating variables under mild assumptions. This recovery, however, is sensitive to model architecture. Using Boolean models of task solutions via the Fourier-Walsh transform, the researchers introduce new techniques to analyze invertible Boolean transforms. They show that the implicit bias in deep learning towards low-complexity functions can favor certain solutions over others. Their findings have implications for self-supervised learning, out-of-distribution generalization, and the linear representation hypothesis in large language models. The paper provides a formal framework for world model learning and presents theoretical results, identifying key factors such as multi-task settings and low-complexity bias necessary for the successful learning of world models. The research also illustrates algorithmic implications through tasks like polynomial extrapolation and learning physical laws, showing that architectures informed by their analysis can outperform conventional ones. Claims And Evidence: The claims are well supported by the theorems. They could be better supported if the authors added some experiments. Methods And Evaluation Criteria: No proposed method Theoretical Claims: I haven't checked every detail of the proof due to time constraints, but I believe the logic of the proof is correct. Experimental Designs Or Analyses: I think additional experiments should be added to better support the main claim. Supplementary Material: Yes, I haven't checked every detail of the supplementary materials due to time constraints, but I believe the logic of the proof is correct. 
Relation To Broader Scientific Literature: This is pioneering work in studying the data generation model when learned by neural networks. Most assumptions are realistic and facilitate the proof of the results. Theorems 4.3 and 4.8 are novel to me and are practically important. Therefore, I believe this paper will make a significant impact in this field in the following ways: the assumptions can be reused for other theoretical studies, and the conclusions can inspire future empirical work. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths:** 1. This is pioneering work in studying the data generation model when learned by neural networks. 2. Most assumptions are realistic and facilitate the proof of the results. 3. Theorems 4.3 and 4.8 are novel to me and are of practical importance. **Weaknesses:** 1. The major weakness is that the experiments are not strongly related to the theorems. I would improve the score to "strong accept" if this point is addressed. (Please refer to Questions 1, 2, 3.) 2. "World model" in the title is defined as the data generation model. This does not seem to be a widely accepted term. The authors should support this claim with more existing works or consider changing the title. Other Comments Or Suggestions: No Questions For Authors: 1. A major assumption is that the neural network will learn the function with minimal (realization) degree (e.g., (ii) in Theorem 4.8). The authors support this assumption with literature on "simplicity bias," but I believe that additional support from synthetic experiments is important. 2. Theorems 4.1 and 4.3 state that the data generation model $\Phi$ emerges if we adopt a multi-task learning scheme. Could the authors support this claim with synthetic experiments? 3. Could Theorems 4.8 and 4.9 be supported with synthetic experiments? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your detailed review and your approval of our work. We provide point-by-point responses to your questions in the following: **Synthetic experiments:** Thank you for these questions; we believe that they greatly help to improve our work. Inspired by them, we conducted additional synthetic experiments using Boolean inputs and MLPs to further support our theoretical claims. _Please find our experimental results in this anonymous link: https://anonymous.4open.science/r/rebuttal_figures-8C6A/rebuttal_figures.pdf._ (Backup links: https://ibb.co/mrdwJt9T and https://ibb.co/gNX4FYT if the former link is not available). A brief introduction on how these results answer your questions is as follows: - **Low-degree bias of neural networks (Q1):** _Figure 1_ shows that MLPs indeed have a low-degree bias when solutions with different degrees exist for the same training distribution. - **Multi-task learning and the identification of data-generating variables (Q2 & Q3, Theorem 4.3 & Theorem 4.8):** _Figure 2_ shows that the identification error of data-generating variables significantly decreases as more training tasks are drawn, while single-task learning does not lead to similar identification. - **Identifying data-generating variables benefits OOD generalization (Q3, Theorem 4.9):** _Figure 3_ shows that in a Hamming sampling setting as in Theorem 4.9, MLPs with a representation $\Phi$ that identifies true data-generating variables achieve superior OOD generalization compared to MLPs without such a representation. We will add these experiments to the next version of our paper. 
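As a toy illustration of the "degree" notion underlying these experiments (our own sketch, not the paper's or rebuttal's experiment code): the degree of a Boolean function is the size of the largest parity with a nonzero coefficient in its Fourier-Walsh expansion.

```python
import numpy as np
from itertools import product, combinations

# All inputs of a Boolean function f: {-1,1}^d -> R, here d = 3.
d = 3
xs = np.array(list(product([-1, 1], repeat=d)), dtype=float)
f_vals = xs[:, 0] * xs[:, 1]  # example target: the degree-2 parity x0*x1

def walsh_coeff(f_vals, subset):
    """Fourier-Walsh coefficient of f on the parity chi_S(x) = prod_{i in S} x_i."""
    chi = np.prod(xs[:, list(subset)], axis=1) if subset else np.ones(len(xs))
    return np.mean(f_vals * chi)

# Degree = size of the largest subset S with a nonzero coefficient.
degree = max(
    (len(S) for r in range(d + 1) for S in combinations(range(d), r)
     if abs(walsh_coeff(f_vals, S)) > 1e-9),
    default=0,
)
# For the parity x0*x1 above, degree == 2.
```

In this language, the "low-degree bias" tested in Figure 1 means that when several interpolating solutions exist, the trained MLP's function has most of its Walsh mass on small subsets $S$.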
**Definition of the world model:** We believe that defining the world model as a model that learns data-generating variables, although not formally established as a standard term, is well-motivated by the existing literature: - In the first work that introduces the term "world model" in the machine learning context [1], the authors describe the world model as learning "an abstract representation of both spatial and temporal aspects" of the information flow, which naturally motivates a representation learning formulation. The authors of [1] also experimentally demonstrate this concept by training an RNN-based _generation_ model on top of a learned _latent vector_, which involves latent data generation models. Thus, our definition can be viewed as a natural abstraction of their model. - Many existing works explicitly use very similar definitions as ours: For example, the authors of [2] wrote "An alternative hypothesis is that LLMs, in the course of compressing the data, learn more compact, coherent, and interpretable models of the _generative process_ underlying the training data, i.e., a world model", while [3] defines a "causal world model" as "a causal model of the _data generating process_". - A series of empirical works (e.g., [4-8]) also share a similar spirit: [4-6] treat world models as "understandable models of the process producing the sequences they (language models) are trained on" [4]; empirically, they test if transformers learn world models/world representations by connecting the models' internal representations to environment states that generate the training sequence. [7] adopts a similar approach to the task of generating program traces. [8] describes "linear world models" as models with high-level features linearly decodable from their latent spaces (see also our discussion "Connection to the linear representation hypothesis" in Section 4.2). Given the above discussion, we feel that our current title is consistent with the related literature. 
We will also include more discussion in the next version of our paper. **Please let us know if you still have any questions or concerns. We are happy to engage in further discussion.** --- [1] Ha and Schmidhuber. World models. NeurIPS, 2018. [2] Gurnee and Tegmark. Language models represent space and time. ICLR, 2024. [3] Richens and Everitt. Robust agents learn causal world models. ICLR, 2024. [4] Li et al. Emergent world representations: Exploring a sequence model trained on a synthetic task. ICLR, 2023. [5] Nanda et al. Emergent linear representations in world models of self-supervised sequence models. EMNLP, 2023. [6] Toshniwal et al. Learning chess blindfolded: Evaluating language models on state tracking. AAAI, 2022. [7] Jin and Rinard. Emergent representations of program semantics in language models trained on programs. ICML, 2024. [8] Marks and Tegmark. The geometry of truth: Emergent linear structure in large language model representations of true/false datasets. COLM, 2024. --- Rebuttal Comment 1.1: Comment: Most of my concerns have been addressed and I have raised the score. I enjoy reading the paper. Could you further explain in detail how you identify the data generation variable? For example, (1) why use linear probing instead of a direct feature selection? (2) which layer is suitable for linear probing? --- Reply to Comment 1.1.1: Comment: Thank you for your engagement in the discussion! We are glad that our responses have addressed most of your concerns. In the following, we provide more details on our data-generating variable identification experiment (Figure 2 in the link) to hopefully answer your follow-up questions. We will also add more experimental details and discussion to the next version of our paper. In the experiment, we trained a shared three-layer MLP as the representation $\Phi$ and $n$ separate three-layer MLPs as task-specific decoders $g_i,i\in[n]$ given $n$ tasks, following our theoretical setup. 
For data-generating variable identification, we used the learned representation $\Phi(x)$, i.e., the final-layer output of the representation MLP. For practical neural nets, $\Phi(x)$ is a float vector rather than Boolean, so we did not perform direct feature selection and instead used linear probing, which amounts to examining whether the true latent variables $z$ can be identified up to a linear transform. This is consistent with Theorem 4.8, which shows that $z$ can be identified up to degree-1 transforms in the Boolean domain (corresponding to linear transforms in the real domain). Note that similar linear probing techniques have also been used in prior empirical works on world models (e.g., [1-3]). --- [1] Nanda et al. Emergent linear representations in world models of self-supervised sequence models. EMNLP, 2023. [2] Gurnee and Tegmark. Language models represent space and time. ICLR, 2024. [3] Marks and Tegmark. The geometry of truth: Emergent linear structure in large language model representations of true/false datasets. COLM, 2024.
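A minimal sketch of the linear-probing step described above, on synthetic data with assumed shapes (this is not the authors' experiment code; the perfectly linear $\Phi$ is a stand-in so the probe has an exact solution):

```python
import numpy as np

# Synthetic stand-in: z holds +/-1 data-generating variables; phi plays
# the role of Phi(x), a float representation that is a linear mixture of z.
rng = np.random.default_rng(0)
z = rng.choice([-1.0, 1.0], size=(500, 8))   # true latent variables
phi = z @ rng.normal(size=(8, 32))           # representation Phi(x)

# Linear probe: least-squares map from Phi(x) back to z.
W, *_ = np.linalg.lstsq(phi, z, rcond=None)
ident_error = float(np.mean((phi @ W - z) ** 2))
# Near-zero error means z is identified up to a linear transform,
# matching the degree-1 identifiability of Theorem 4.8.
```

With a learned (imperfect) representation, the same probe is fit on held-out data and the residual error serves as the identification metric plotted in Figure 2.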
Summary: This paper aims to answer the question "when do neural nets learn world models?". This is an ambitious goal that has been studied in many papers. The paper proposes theoretical results for this problem, and then shows the algorithmic implications of those results. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: no Relation To Broader Scientific Literature: I believe this paper will have some impact on a sizable audience in the ML community Essential References Not Discussed: references are adequate Other Strengths And Weaknesses: I believe causality is important in world models, but it is not explored in this paper. I'm not very familiar with the content of this paper, so I'll reevaluate it after reading the comments from the other reviewers. Other Comments Or Suggestions: n/a Questions For Authors: please see the comments above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review. We are happy to engage in further discussion if you have additional questions or concerns in the author-reviewer discussion stage. On the point of causality, we agree that causality is also an important aspect of learning world models. Moreover, we would like to emphasize that a well-known difficulty in causal discovery is the identification of causal latent variables when the observed data is generated from them in a _non-linear_ fashion (see e.g., [1]), for which we show that such identification could be hoped if the model has a low-complexity bias. Therefore, combining our results and the results in causal discovery is interesting future work. --- [1] Schölkopf et al. Toward causal representation learning. Proceedings of the IEEE, 2021.
Summary: This paper provides a theoretical framework to reveal whether neural networks are capable of learning world models that capture the underlying data generation process. The framework connects the latent variable models with the world model learning. The theoretical results provide insights for future research. ## update after rebuttal Thanks to the authors for providing the rebuttal. I've read the author's response and comments from other reviewers. I have no further questions at this time. I will keep my original positive rating. Claims And Evidence: The theoretical framework and results support the paper's claims. Methods And Evaluation Criteria: This paper mainly proposes a theoretical framework and experiments on polynomial extrapolation and learning physical laws. Theoretical Claims: I have checked the main theoretical claims, but not in every detail. Experimental Designs Or Analyses: The experimental results support the claims. Supplementary Material: I have read the appendix, but not all of the details. Relation To Broader Scientific Literature: This paper is among the first to propose a theoretical framework to study the problem of when neural networks learn world models. Essential References Not Discussed: To the best of my knowledge, the references are sufficiently covered. Other Strengths And Weaknesses: [Strengths] 1. The overall paper is solid and well-structured. 2. The theoretical results are comprehensive and insightful. 3. In addition, the experimental results are provided in this paper to further verify the claims of this paper and help readers understand the proposed concepts more easily. Overall, this paper is solid and provides insightful observations/proof for future research in the areas of physical AI, robotics, autonomous driving, etc. Despite some questions listed in the following Questions For Authors section, no major technical concerns are found at this time. Other Comments Or Suggestions: N/A Questions For Authors: 1. 
Could the derived insights be applied to model-based RL tasks? 2. Considering that video generation-based world models play crucial roles in robotics and autonomous driving, I'm wondering whether the derived insights could benefit the design of large video world models by allowing them to capture real-world physics from data. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review and your approval of our work. The following are our responses to your questions. **Applications:** Thank you for your questions on potential algorithmic applications of our results. Yes, we believe that our results may inspire future algorithmic research in the following ways. - The main message of Theorem 4.3 is that the conditional degree of proxy tasks is important for representation learning. This may inspire the development of self-supervised representation learning objectives by maximizing their conditional degree (e.g., pixel-level prediction may not be a good objective since it potentially has a small conditional degree), which could be applied to designing auxiliary/proxy tasks in model-based RL, training video generation models, and language modeling. We are now exploring this idea further in the context of training language models by augmenting next-token prediction with auxiliary training objectives. - The notion of basis compatibility shows that accommodating the inductive biases of the proxy tasks in the network architecture can benefit world model learning, as demonstrated in our proof-of-concept experiments. This may inspire future model design for learning and extrapolating physical laws in a larger context. For example, for video generation models to robustly capture periodicity in real-world physics, we may need to explicitly embed modules with a natural periodicity behavior, rather than merely hoping that models with standard architectures could learn such behaviors from data. - More broadly, an important takeaway of our work is that "while neural networks typically exhibit 'simplicity biases', the underlying meaning of 'simplicity' is not fixed and is _dependent_ on the network architecture". To our knowledge, this is an underexplored aspect in the existing literature on the simplicity bias of neural networks. 
This could also explain why the simplicity bias tends to be sometimes beneficial and sometimes harmful in different tasks. Moving forward, we believe that designing network architectures that could _adaptively_ fit such simplicity biases to the "right" inductive biases of training tasks is a promising direction. We are currently working on a project on this topic. We will also include more discussion in the next version of our paper. **Please let us know if you still have any questions or concerns. We are happy to engage in further discussion.** --- Rebuttal Comment 1.1: Comment: Thanks to the authors for providing the rebuttal. I've read the author's response and comments from other reviewers. I have no further questions at this time. I will keep my original positive rating.
Summary: The authors explore the theoretical and empirical conditions under which neural networks could learn world models. To do so, they abstract the possible tasks that a model would need to learn as a set of Boolean functions which can be approximated using polynomials over binary inputs. The authors claim that this is a suitable abstraction since numerical representations in computers are represented with limited precision anyway, and it allows the use of simple complexity metrics. They then proceed to prove several theorems on how models would behave: will they learn shallow representations or deeper ones? They end by showing how their approach (fitting polynomials) performs against Transformers. ## update after rebuttal I've raised my score to a weak accept. I like the general direction of the paper, and their responses (and my own independent analysis) have cleared some things up, but I feel like it is really hard to read at times. The authors try to be very precise with their approach, which I understand, but it makes following the paper harder than it should be, in my opinion. Claims And Evidence: There are several claims which do not convince me: that their representation abstraction (using polynomials over parity functions) is a reasonable proxy for the kind of representations that world models would operate on; that their tests correctly mimic what a world model would need to process. The first is not at all clear to me because, while it is true that computers operate with binary functions, human cognition does not. Sure, brains operate with electrical signals, but the representations at the cognitive level are more abstract. Indeed, the point of world models is that such abstractions emerge from this more basic substrate. It is not clear to me how such an equivalent process happens in this setting. The second is related to this. 
If the input signals are so low-level and it is not clear whether higher-level abstractions emerge from them, then how do we know that complex functions as defined here mimic what a world model needs to learn? Methods And Evaluation Criteria: See above. The evaluations and methods make sense if we are evaluating the capability of networks to model functions using polynomials over parity functions, but I feel like equating these to world models is a stretch. Theoretical Claims: No. Experimental Designs Or Analyses: I checked them, they are fine. Supplementary Material: No. Relation To Broader Scientific Literature: As expressed above, I am not sure the findings are as relevant as the authors claim to the problem of learning world models. Essential References Not Discussed: Not that I can recall. Other Strengths And Weaknesses: The paper tends to be well written, though it can get bogged down in very strict formality. On the other hand, as I have said, it is not clear to me that this is a good abstraction of what world models must solve, and the experimental evaluation is a bit limited. Other Comments Or Suggestions: I would say that one way forward is to reframe this as a problem of learning Boolean functions and not claim anything about world models. Questions For Authors: It is not clear, when defining the problem setting, how this is any different from the standard ICA, CRL, etc. settings. And the authors' clarification that "we do not assume that $p(z)$ is any structured distribution" is just vague. Can you provide a better definition? In lines 264-268 you say "Indeed, modern pre-training objectives such as next-token prediction and contrastive learning can be interpreted as solving a large number of prediction tasks simultaneously (Radford et al., 2019; Arora et al., 2019)." But is this really true? Is it not stretching the definition of task over its implied meaning? For me, these are just ways to learn a distribution, which may or may not correspond to different tasks. 
What separates different tasks from different instances of a task in this framework? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful review. We provide point-by-point responses to your concerns in the following. **Definition of the world model:** Our abstraction of the world model is motivated by a line of research in the machine learning literature. In brief, the fundamental aspect of our world model definition is _"a model that learns a representation of high-level data-generating variables"_ (Definition 2.2). We emphasize that many works in the existing literature share a similar definition as ours (e.g., [1-4] and many others; see our responses to _Reviewer L9Q8_ for further elaboration on this point) and thus feel that our definition is well-motivated and captures the main spirit of learning world models _in the context of machine learning_. - We did _not_ claim that our formulation precisely captures how the world model works in the brain. As you have mentioned in the review, the main point of our abstraction is to model how high-level representations/abstractions of data-generating variables can be learned from low-level inputs---note that this is also the main motivation of studying world models in the machine learning context [1]. On the other hand, we agree that how cognitive-level representations in the brain emerge might be a much harder question and for now, remains a mystery. - Our world model definition does _not_ require Boolean functions: Note that Definition 2.2 (and the whole Section 2 that presents our formulation) does not depend on Boolean functions at all. Boolean functions are simply used as a _mathematical tool_ to characterize the impact of _low-complexity bias_ on identifying data-generating variables, which is otherwise difficult to analyze in the real domain (note that this tool has also been used by other machine learning works [5, 6]). 
That being said, we believe that this treatment still captures many interesting machine learning settings and applications, e.g., representation learning, out-of-distribution generalization, self-supervised learning, etc. - Your review mentions that "If the input signals are so low-level and it is not clear if higher-level abstractions emerge from them...". We are not sure if we understood this correctly, but our formulation does require that high-level variables _can_ be obtained by inverting the data-generating function, which we believe is a minimal assumption to ensure that high-level variables are learnable from the inputs. **Detailed comparisons with ICA and CRL:** Thank you for your question. ICA assumes that different coordinates of the latent data-generating vector $\mathbf{z}$ are independent random variables, while CRL assumes that different coordinates of $\mathbf{z}$ are governed by a causal graph and aims to identify the causal relations between different coordinates. By contrast, Definition 2.1 does not impose any distributional constraints on the coordinates of $\mathbf{z}$, nor do our results rely on any of these assumptions. We will also add this discussion to the next version of our paper. **Multi-task learning:** The interpretation that pre-training objectives such as next-token prediction could be viewed as multi-task learning has been well-studied by prior work. Notably, the GPT-2 paper "Language models are unsupervised _multitask_ learners" [7] formulates next-token prediction as modeling the conditional distribution $p(output|input, task)$ with natural language as _task conditioning_. Task conditioning can also be implemented at an architectural level, i.e., using task-specific decoders as in our formulation. **Please let us know if you still have any questions or concerns. We are happy to engage in further discussion.** --- [1] Ha and Schmidhuber. World models. NeurIPS, 2018. [2] Li et al. 
Emergent world representations: Exploring a sequence model trained on a synthetic task. ICLR, 2023. [3] Richens and Everitt. Robust agents learn causal world models. ICLR, 2024. [4] Gurnee and Tegmark. Language models represent space and time. ICLR, 2024. [5] Barak et al. Hidden progress in deep learning: SGD learns parities near the computational limit. NeurIPS, 2022. [6] Emmanuel et al. Generalization on the unseen, logic reasoning and degree curriculum. ICML, 2023. [7] Radford et al. Language models are unsupervised multitask learners. 2019.
Differentiable Solver Search for Fast Diffusion Sampling
Accept (poster)
Summary: This paper proposes a novel solver search algorithm for fast sampling of diffusion models, which optimizes both timesteps and solver coefficients. The key idea is to treat the solver design as a learning problem, optimizing solver parameters to minimize the numerical error and improve image quality. Experiments on rectified-flow models (SiT-XL/2, FlowDCN-XL/2) and DDPM (DiT-XL/2) demonstrate that the searched solvers can achieve improved FID with 5-10 steps. The learned time steps and coefficients can generalize to different model architectures and resolutions empirically. ## Update after rebuttal - Most initial concerns have been addressed or clarified during the rebuttal. I'm supportive of acceptance as it's effective and well-supported by empirical evidence. - That said, I would not advocate for a higher rating due to the limited broader impact and significance of the contribution. Claims And Evidence: Yes, most claims are well-supported. Methods And Evaluation Criteria: Yes, the approach and evaluation criteria make sense. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes, the experimental designs make sense. Supplementary Material: Yes, I reviewed Appendix A-I. Relation To Broader Scientific Literature: This paper is most related to [1] where only time steps are learned. This paper extends the search space to both time steps and coefficients and employs a different optimization objective and strategy. [1]: Xue, Shuchen, et al. "Accelerating diffusion sampling with optimized time steps." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024. Essential References Not Discussed: A few prior works ([1,2]) have also explored the idea of accelerating the solver via learning. They should also be cited and discussed in the paper. [1]: Watson, Daniel, et al. "Learning fast samplers for diffusion models by differentiating through sample quality." *International Conference on Learning Representations*. 2021. 
[2]: Dockhorn, Tim, Arash Vahdat, and Karsten Kreis. "Genie: Higher-order denoising diffusion solvers." *Advances in Neural Information Processing Systems* 35 (2022): 30150-30166. Other Strengths And Weaknesses: **Strengths** - The proposed method consistently achieves improved FID score within few-step regime compared to previous methods like DPM-solver++ and UniPC. - The authors provide theoretical justification for the approach, including error bound analysis and theorems supporting. - The learned time steps and coefficients can generalize to different model architectures and resolutions empirically. **Weaknesses** - Clarity: The paper currently contains numerous typographical errors, inconsistencies, and formatting issues, which affect readability and clarity. See details in “Other Comments or Suggestions”. - The proposed approach requires generating tens of thousands of reference trajectories to learn the time steps and coefficients, which could be computationally expensive both in terms of time and space. - The learned solver might become suboptimal for different guidance scales. Other Comments Or Suggestions: Below are examples of specific errors noted during my review. There are more in other parts of the paper. I strongly recommend that the authors thoroughly proofread and revise the manuscript to address these issues comprehensively. - Line 191: “with prerained model” should be “with pretrained model” - Line 229: “of Our solver” → “of our solver” - Line 230: the equation is out of page, $b_i^j$ is used without definition. - Line 260: remove comma from “{1-\sum_{j=0}^{i-1}c_i^j,}_{i=0}^{N-1}” and “{c_i^k, }”. - Line 314: “reconstruction error(in Appendix)” → “reconstruction error[need a space](in Appendix~[need to add reference])” - Line 315: “Euler-250 steps” → “250-step-Euler”? 
- Line 409: “Of Solver Parameters” → “of Solver Parameters” - Line 381: “Comparison with Distillation methods” → “Comparison with distillation methods” - The use of capitalization and periods in section headings and table captions is inconsistent, confusing, and distracting: - Some sections only capitalize the first word, like Section 4 “Optimal search space for a solver”, while other sections capitalize all the words, like Section 2 “Related Works” - Section 4.2: “Focus on Solver coefficients instead of the interpolation function” capitalizes the first characters of “Focus” and “Solver”, which is even more confusing. - Section 5: why is there an additional period? - Table 1: “Comparsion with Distillation methods” why is the first character of “Distillation” capitalized? Also, “Comparsion” → “Comparison” Questions For Authors: - How does the proposed method compare to the prior work [1], which optimizes the time steps? - What is the total computation cost to train the time steps and coefficients? How does it compare to [1]? Could you elaborate on the computational cost and efficiency of the solver search process in more detail, particularly in relation to the performance gains achieved? - Are there any techniques or optimization strategies that could be explored to reduce the computational burden of the search process? - Since the discretization schemes of the reference trajectory (L steps) and the learned trajectory (N steps) are different, how do you compute the MSE loss between these two trajectories? [1]: Xue, Shuchen, et al. "Accelerating diffusion sampling with optimized time steps." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to express our heartfelt gratitude for the valuable feedback you've provided on our manuscript. Your in-depth analysis and suggestions are of great significance to us, and we are committed to using them to enhance the quality of our work. **Q.1 Writing typos and inconsistent presentations** Thank you for pointing out the detailed writing typos and inconsistent presentations. We sincerely apologize for these inadvertent errors. We will meticulously review and revise every detail to enhance the readability of the text. **Q.2 Comparison with DM-Nonuniform[1]** DM-Nonuniform[1] primarily centers on theoretically optimal timesteps, yet it fails to take into account the solver coefficients and model statistics. In contrast, our method conducts a statistical search for both the coefficients and timesteps concurrently. Through theoretical analysis, we have demonstrated that our method has a smaller error bound compared to those that neglect coefficients. This shows the superiority of our approach in more comprehensively handling the relevant factors in this context. We compared the performance with DM-Nonuniform[1] in Tab. 2 and Tab. 3. We copy the results here.
| Methods \ NFEs | 5 | 6 | 7 | 8 | 9 | 10 | |--------------------------------------------------------|---------|---------|---------|---------|---------|---------| | DPM-Solver++ with uniform-$\lambda$-opt[1] | 12.53 | 5.44 | 3.58 | 7.54 | 5.97 | 4.12 | | DPM-Solver++ with uniform-$t$-opt[1] | 12.53 | 5.44 | 3.89 | 3.81 | 3.13 | 2.79 | | DPM-Solver++ with EDM-opt[1] | 12.53 | 5.44 | 3.95 | 3.79 | 3.30 | 3.14 | | UniPC with uniform-$\lambda$-opt[1] | 8.66 | 4.46 | 3.57 | 3.72 | 3.40 | 3.01 | | UniPC with uniform-$t$-opt [1] | 8.66 | 4.46 | 3.74 | 3.29 | 3.01 | 2.74 | | UniPC with EDM-opt [1] | 8.66 | 4.46 | 3.78 | 3.34 | 3.14 | 3.22 | | Searched-Solver | 7.40 | 3.94 | 2.79 | 2.51 | 2.37 | 2.33 | For DiT-XL/2-R512 | Methods \ NFEs | 5 | 6 | 7 | 8 | 9 | 10 | |-------------------------------------------------------|----------|---------|---------|---------|---------|---------| | UniPC with uniform-$\lambda$-opt[1] | 11.40 | 5.95 | 4.82 | 4.68 | 6.93 | 6.01 | | UniPC with uniform-$t$-opt[1] | 11.40 | 5.95 | 4.64 | 4.36 | 4.05 | 3.81 | | Searched-solver(searched on DiT-XL/2-R256) | 10.28 | 6.02 | 4.31 | 3.74 | 3.54 | 3.64 | **Q.3 Reduce the search burden** First, a significant amount of computational resources is wasted on constructing the target trajectory. Since this target trajectory can be reused for each step in the search for solvers, we can cache it to prevent redundant recomputation. Furthermore, we have observed that the solver optimized on base-sized or even small-sized models exhibits a high degree of generalization when applied to XL-sized models. Thus, using a small model as a proxy is a viable and practical choice. This approach not only reduces computational overhead but also provides a more efficient way to achieve good performance across different model scales. 
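The caching idea described in Q.3 can be sketched in a few lines; the names below are our own illustration of the rebuttal's description, not the authors' code:

```python
# Cache the expensive reference (target) trajectory per sample so that every
# step of the solver search reuses it instead of recomputing the rollout.
_trajectory_cache = {}
n_rollouts = 0

def expensive_rollout(sample_id):
    """Placeholder for the costly L-step teacher sampling run."""
    global n_rollouts
    n_rollouts += 1
    return [sample_id, sample_id + 1]  # stands in for an L-step trajectory

def get_reference_trajectory(sample_id):
    if sample_id not in _trajectory_cache:
        _trajectory_cache[sample_id] = expensive_rollout(sample_id)
    return _trajectory_cache[sample_id]

get_reference_trajectory(0)
get_reference_trajectory(0)
assert n_rollouts == 1  # the trajectory is computed once and reused afterwards
```

In practice the cached trajectories could also be written to disk, so repeated search runs (e.g., for different step budgets) would skip the rollout entirely.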
**Q.4 Alignment between two trajectories** Since the learned trajectory of length $N$ has its corresponding timesteps, we select a subset of length $N$ from the reference trajectory based on the timesteps of the learned trajectory. **Q.5 Total burden of searching** Searching one solver step with 50,000 samples using FlowDCN-B/2 requires approximately 30 minutes on 8 × H20 computation cards. [1]: Xue, Shuchen, et al. "Accelerating diffusion sampling with optimized time steps." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. --- Rebuttal Comment 1.1: Comment: Thanks for your response. One follow-up question on the trajectory loss: The reference trajectory may not necessarily contain values at the learned time steps, right? Are you using interpolation to obtain values at the learned timesteps from the reference trajectory? --- Reply to Comment 1.1.1: Comment: Yes, the reference trajectory may not necessarily contain values at the learned time steps. However, the reference trajectory has many more points ($100$ reference steps in the default setting), so for each $x_s$ in the source trajectory, we can directly select the closest points from the reference trajectory based on the nearest timesteps, which is equivalent to nearest-neighbor interpolation.
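The nearest-timestep selection described in the reply above can be sketched with NumPy; function and variable names are our own illustration, not the authors' implementation:

```python
import numpy as np

def align_to_reference(ref_states, ref_ts, learned_ts):
    """For each learned timestep, pick the reference state whose timestep is
    closest, i.e., nearest-neighbor interpolation along the time axis."""
    idx = np.abs(ref_ts[None, :] - learned_ts[:, None]).argmin(axis=1)
    return ref_states[idx]

ref_ts = np.linspace(0.0, 1.0, 100)                   # dense 100-step reference
ref_states = np.stack([ref_ts, ref_ts ** 2], axis=1)  # (100, 2) toy states
learned_ts = np.array([0.0, 0.33, 0.66, 1.0])         # N = 4 learned timesteps
aligned = align_to_reference(ref_states, ref_ts, learned_ts)
assert aligned.shape == (4, 2)
# The MSE loss is then taken between the N learned states and `aligned`.
```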
Summary: The paper aims to accelerate reverse diffusion by introducing a novel differentiable solver search algorithm for better diffusion solvers. The paper demonstrates that a data-driven approach in the post-training scenario can also enable fast sampling. Using a compact search space related to the timesteps and solver coefficients, the proposed method can find the optimal solver parameters for each diffusion model. The experiments show the effectiveness of the proposed method on multiple models compared to current solver-based fast sampling methods. ## update after rebuttal The extended visualization and evaluation show the improvement of the solver search. Although the proposed method cannot be generalized to the multi-resolution scenario, it still offers a good solution for optimal timestep determination. Thus, my recommendation for this paper is weak accept. Claims And Evidence: The paper claims that the error caused by the non-ideal velocity estimation model can be estimated by a function related to the timesteps and coefficients. The claim is verified in the appendix. Methods And Evaluation Criteria: The proposed method is evaluated on text-to-image generation using multiple metrics. However, these metrics are limited; for example, CLIP-score, GenEval, aesthetic score, etc., are not included. Theoretical Claims: I checked the correctness of Theorem 4.4. Experimental Designs Or Analyses: - The experiments only provide a quantitative comparison for DDPM/VP text-to-image models but not rectified flow models. A similar evaluation should also be conducted. - Solver-based methods are also included in the comparison with distillation in Table 1. - The comparison between the proposed method and FlowTurbo is limited; more results should be exhibited, such as on different models and on metrics other than FID and IS.
Supplementary Material: I reviewed Appendices A–H. Relation To Broader Scientific Literature: The proposed method might help reveal the error of each timestep and identify the importance of each diffusion timestep. Essential References Not Discussed: n/a Other Strengths And Weaknesses: - The qualitative comparison is limited. Only Figure 2 provides a few examples. - More qualitative results would be helpful to demonstrate the effectiveness of the proposed method across different prompts and diffusion models. - More comparisons should focus on FlowTurbo, since both are parameterized velocity refiners. Other Comments Or Suggestions: - Use $\times$ instead of “x” in Table 1. - There are replicated parts in the supplementary materials, G and L. Questions For Authors: - What are the CLIP-score, aesthetic score, and GenEval score for PixArt-$\alpha$? It would be helpful if the method could be evaluated on these metrics with large diffusion models, such as SD3. - What is the overhead to derive the optimal coefficients? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback on our manuscript. Your insights are extremely helpful and have provided us with clear directions for improvement.

**Q.1 Qualitative comparison** We plan to expand the qualitative comparison by including more models, such as SD3, PixArt-$\alpha$-R512, and PixArt-$\alpha$-1024. This will provide a more comprehensive evaluation of the performance and quality across a wider range of relevant models, enhancing the depth and validity of our analysis. The anonymous visualization link: https://anonymous.4open.science/r/NeuralSolver-ICML25/README.md

**Q.2 More Comparison with FlowTurbo** We presented the performance comparison in Tab. 4. Additionally, we have summarized the sampling and searching complexity in the table below, in relation to FlowTurbo. It should be noted that the value of $n$ will not exceed 15 steps.

| | Steps | NFE | NFE-CFG | Cache Pred | Order | Search samples | Params |
|--------------|-------|------|---------|------------|-------|------------------|----|
| Adam2 | n | n | 2n | 2 | 2 | / | n |
| Adam4 | n | n | 2n | 4 | 4 | / | n |
| Heun | n | 2n | 4n | 2 | 2 | / | n |
| FlowTurbo | n | $>$n | $>$2n | 2 | 2 | 540,000 (Real) | $2.9\times10^7$ |
| Ours | n | n | 2n | n | n | 50,000 (Generated) | n + 0.5(n$\times$(n-1)) |

**Q.3 PixArt on GenEval** We provide results for the solver (searched on DiT-XL/2-R256) with PixArt-$\alpha$ on the GenEval benchmark.

**Resolution 512 for PixArt-$\alpha$ on GenEval benchmark**

| | steps | cfg | colors | counting | color_attr | two_object | single_object | position | all |
|----------|-------|-----|--------|----------|------------|------------|---------------|----------|---------|
| dpm++ | 5 | 1.5 | 72.07 | 27.19 | 5.75 | 26.26 | 91.25 | 3 | 0.37587 |
| | 8 | 1.5 | 77.66 | 32.19 | 6.75 | 36.36 | 94.06 | 4.5 | 0.41921 |
| unipc | 5 | 1.5 | 73.14 | 25.94 | 6.25 | 26.26 | 90.94 | 3 | 0.37588 |
| | 8 | 1.5 | 78.72 | 32.50 | 6.5 | 40.15 | 93.75 | 5.5 | 0.42875 |
| ours | 5 | 1.5 | 72.87 | 31.56 | 6 | 33.08 | 91.88 | 5 | 0.40065 |
| | 8 | 1.5 | 76.86 | 33.44 | 7 | 40.40 | 94.06 | 5.5 | 0.42878 |

**Resolution 512 for PixArt-$\alpha$ on GenEval benchmark**

| | steps | cfg | colors | counting | color_attr | two_object | single_object | position | all |
|----------|-------|-----|--------|----------|------------|------------|---------------|----------|---------|
| dpm++ | 5 | 2.0 | 76.60 | 30.94 | 6.50 | 33.08 | 91.25 | 4.75 | 0.40519 |
| | 8 | 2.0 | 76.86 | 37.19 | 5.25 | 39.65 | 93.75 | 5.75 | 0.43074 |
| unipc | 5 | 2.0 | 77.66 | 31.87 | 6.50 | 34.85 | 92.19 | 5.25 | 0.41387 |
| | 8 | 2.0 | 79.52 | 36.56 | 6.72 | 40.66 | 95.31 | 6 | 0.44134 |
| ours | 5 | 2.0 | 77.62 | 33.75 | 5.25 | 37.37 | 92.81 | 4.75 | 0.41933 |
| | 8 | 2.0 | 79.52 | 38.44 | 7.25 | 42.68 | 95.00 | 7.50 | 0.45064 |

**Q.4 Total burden of searching** Searching one solver step with 50,000 samples using FlowDCN-B/2 requires approximately 30 minutes on 8 × H20 computation cards. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' reply. The extended visualization and evaluation show the improvement of the solver search. I have a follow-up question regarding the method. Current flow-matching models apply a timestep shift when the sampling resolution changes. Does the method still work for varied resolutions? --- Reply to Comment 1.1.1: Comment: Due to the strong coupling between our coefficients and timesteps, we can no longer apply the timestep shift simultaneously. However, we found that directly transferring the search results still yields satisfactory performance. To pursue the best performance, one would need to conduct a search specifically tailored to the target resolution.
Summary: This paper proposes a differentiable solver search algorithm to find an optimal ODE solver for reverse-diffusion solving of pre-trained diffusion models. The authors use gradient-based optimization to identify solver parameters that lead to improved sample quality with very few function evaluations. The approach is evaluated on both rectified flow models and DDPM/VP frameworks, showing improvements in FID scores on ImageNet benchmarks under 10 sampling steps. Claims And Evidence: The authors claim that their differentiable search method significantly reduces discretization error compared to traditional solvers. This claim is supported by extensive experiments, including comparisons to state-of-the-art methods such as DPM-Solver++ and UniPC, as well as ablation studies examining the impact of search sample size and solver parameterization. The theoretical analysis, detailed in the appendix, provides error bounds that reinforce the empirical findings.​ Methods And Evaluation Criteria: The methodology addresses limitations of t-related Lagrange interpolation in existing fast sampling solvers by reparameterizing solver coefficients and timesteps into a differentiable framework. The evaluation is comprehensive, utilizing FID and other metrics across multiple model architectures and resolutions. The choice of benchmarks, including ImageNet-256 and ImageNet-512, and the inclusion of both rectified flow and DDPM-based models, provide a strong basis for assessing the method’s generality and effectiveness.​ Theoretical Claims: The paper provides theoretical support for its solver-search method by deriving explicit bounds on discretization error. Key results include Theorem 4.4, showing that solver error depends explicitly on solver coefficients and timesteps, and Theorem 4.2, establishing the optimality of expectation-based solver coefficients over traditional Adams-like interpolation. 
There is also Theorem 4.5, which argues analytically that the proposed solver achieves tighter error bounds than conventional multi-step methods. These results justify the approach theoretically; I also checked the proofs of these claims, though not very carefully. Experimental Designs Or Analyses: The experimental evaluation seems to be quite comprehensive, with detailed comparisons to recent solver-based methods. The ablation studies are informative, demonstrating how the performance of the searched solver varies with different numbers of search samples and parameter settings. Supplementary Material: The supplementary material includes extended experimental results, additional metrics (sFID, IS, Precision, Recall), and detailed proofs of the theoretical claims. Relation To Broader Scientific Literature: The authors situate their work within the context of recent advances in fast diffusion sampling and solver-based methods. The paper builds on insights from prior works on DDPM/VP solvers and rectified flow models, providing relevant comparisons to state-of-the-art techniques like DPM-Solver++ and UniPC. This discussion clarifies how the proposed approach advances the current understanding of efficient diffusion sampling.
Rebuttal 1: Rebuttal: Thanks for your valuable feedback on our manuscript. **Q.1 Total burden of searching** Searching one solver step with 50,000 samples using FlowDCN-B/2 requires approximately 30 minutes on 8 × H20 computation cards. **Q.2 More Quality comparison** We plan to expand the quality comparison by including more models, such as SD3, Pixart-$\alpha$-R512, and Pixart-$\alpha$-1024. This will provide a more comprehensive evaluation of the performance and quality across a wider range of relevant models, enhancing the depth and validity of our analysis. The anonymous visualization link: https://anonymous.4open.science/r/NeuralSolver-ICML25/README.md
FedBEns: One-Shot Federated Learning based on Bayesian Ensemble
Accept (poster)
Summary: FedBEns proposes a one-shot federated learning method utilizing a Bayesian ensemble approach. Unlike standard FL methods that simply rely on averaging, FedBEns combines client models using a mixture of Laplace approximations to model multimodal local posteriors. Empirically, FedBEns demonstrates superior performance on benchmark datasets. Claims And Evidence: The claims are generally supported by empirical evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria (datasets like FMNIST, SVHN, and CIFAR10) are appropriate and standard in the federated learning literature. Experiments varying the heterogeneity parameter and client counts effectively illustrate the method's robustness in diverse scenarios. Theoretical Claims: This paper does not provide any new theoretical results. Experimental Designs Or Analyses: Experimental designs are sound, with clear baselines chosen from different categories of one-shot federated learning (FedFisher, RegMean, DENSE, OTFusion, FisherMerge). The analysis across varying levels of data heterogeneity and client numbers is comprehensive and methodologically robust. Supplementary Material: No, I did not review the supplementary material in any great detail. Relation To Broader Scientific Literature: FedBEns extends the literature on Bayesian federated learning methods, particularly enhancing one-shot federated learning. It directly builds upon prior works (FedBE, FedPA, FedPop, FedHB), explicitly modeling multimodal local posteriors through Laplace mixture approximations. However, the approach is quite incremental, essentially applying known Bayesian ensemble and aggregation ideas explicitly to the one-shot federated learning setting. While simple, no other work has explicitly pursued this direction in one-shot FL, to the best of my knowledge.
Essential References Not Discussed: No critical references appear to be missing that would significantly alter the context or understanding of the proposed contributions. Other Strengths And Weaknesses: **Strengths:** - Clear justification for the multimodal Bayesian approach. - Empirical results demonstrating significant accuracy improvements over baselines. - Good evaluation across multiple datasets and varying heterogeneity levels. **Weaknesses:** - Computationally intensive due to the multiple local models required per client, significantly impacting scalability, particularly in large-scale or resource-constrained settings. - Quite limited novelty: the paper's contribution is essentially an explicit application of existing Bayesian ensemble and aggregation ideas to one-shot FL without substantial conceptual innovation. If the empirical results were not as strong, this lack of novelty would likely not warrant a paper. - Lack of detailed analysis of practical constraints, such as communication overhead and latency, limiting the clear demonstration of practical applicability in real-world scenarios. Other Comments Or Suggestions: - The algorithm and experimental setups are described very clearly. - Further exploring and clearly quantifying computational and communication trade-offs in practical deployments would strengthen the paper's applicability and appeal. Questions For Authors: - Could you further clarify the scalability and communication overhead compared to simpler baseline methods? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1) Computational/Communication overhead quantification:** See answer 1) to reviewer wHXT. **2) Computational/Communication trade-offs and practical applicability:** The proposed method is inherently more computationally intensive, as it is based on an ensemble of M models. Its applicability depends on the specific application setting and the base model (e.g., NN architecture) used for training in the federated setting. Regarding the trade-off between computational costs and prediction performance, it is worth mentioning that our proposal substantially improves over existing methods with just a couple of mixtures (see Table 1 and Figure 2 of the manuscript). The reviewer suggested additional exploration of the computational and communication trade-offs. We provide the following insights in response. ***2.1) Comparison with FedFisher:*** FedFisher is our direct competitor. In Appendix B.4, we extend FedFisher to a multi-round setting. By letting FedFisher train over 5 rounds, its total communication and computation costs equal those of FedBEns with 5 mixture components. Our method proves competitive despite relying on a single communication round. We performed an additional experiment in which we equalized the client computation with FedFisher by running FedBEns with 5 mixture components on clients' training datasets that are 5 times smaller. Remarkably, FedBEns performs better than FedFisher in most scenarios (see the table below and the FedFisher K-FAC results in Table 1 of the manuscript for comparison).
**FedBEns, 5 mixtures Kron, REDUCED DATASETS**

| Dataset | Accuracy [%] (Avg ± STD, 3 seeds) |
|-|-|
| **Alpha = 0.05** | |
| FMNIST | 50.87 ± 4.46 |
| SVHN | 59.12 ± 1.42 |
| CIFAR10 | 45.38 ± 0.62 |
| **Alpha = 0.4** | |
| FMNIST | 77.95 ± 0.31 |
| SVHN | 81.15 ± 0.25 |
| CIFAR10 | 53.82 ± 1.16 |

***2.2) Other tasks:*** To further explore the accuracy-cost trade-off for practical applications, we conducted a more challenging experiment with a ResNet-18 on CIFAR100. We followed the experimental setting of the paper; however, we limited FedBEns to only 2 mixtures and Kronecker factorization. Local training lasted for 100 epochs. The table below shows that, at the cost of doubling the computation/communication costs w.r.t. FedFisher, FedBEns achieves a higher gain in accuracy (doubled or even more).

| Metric | FedBEns M=2 | FedFisher | RegMean | DENSE | OTFusion | Fisher-Merge |
|:-|-:|-:|-:|-:|-:|-:|
| **Accuracy**, CIFAR100 (alpha=0.05) | 8.56±0.47 | 3.14±0.23 | 3.01±0.54 | 5.88±1.07 | 4.79±0.19 | 2.81±0.33 |
| **Accuracy**, CIFAR100 (alpha=0.4) | 16.95±0.67 | 4.79±0.31 | 4.85±1.16 | 13.92±0.54 | 4.74±0.56 | 3.87±0.48 |
| **Communication Costs (MB)** | 716 | 358 | 93 | 47 | 47 | 94 |
| **Local runtime (min)** | 28.0 | 14.0 | 9.3 | 9.2 | 9.2 | 10.1 |
| **Server runtime (min)** | 2.3 | 1.2 | 0.6 | 10.4 | 0.2 | 0.3 |

***2.3) System heterogeneity:*** Another comment regarding the practical deployment of our approach is that the number of mixtures can be customized on the basis of each client's computational power. As an illustrative example, we considered CIFAR-10 with 5 clients: 3 employ a single Laplace approximation (M=1) and 2 are able to exploit three mixtures (M=3). In both cases, the final model returned to the clients consists of a single model. The additional information coming from the clients with more computational power benefits the entire federation and leads the server to find a better global model, especially in heterogeneous settings.
The table below shows the mean accuracy over five seeds for FedBEns in two settings: all clients use a single component (M=1 ALL) versus the customized setup described above. |Dataset|Accuracy [%] (Avg ± STD, 3 seeds)|| |-|-|-| | | **M=1 ALL**| **M CUSTOMIZED**| |**Alpha = 0.05**| | | |FMNIST| 57.02 ± 4.73| 59.97 ± 1.44| |SVHN| 66.11 ± 0.77| 68.39 ± 1.57| |CIFAR10| 39.89 ± 1.71| 42.32 ± 2.25| |**Alpha = 0.4**| | | |FMNIST| 77.07 ± 0.29|78.31 ± 1.76| |SVHN| 82.86 ± 0.21|82.75 ± 1.41| |CIFAR10| 53.97 ± 0.86|54.68 ± 1.88| In the case of acceptance, we will integrate these additional results into the paper. **3) Quite limited novelty:** We agree with the reviewer that our proposal relies on concepts commonly used in the Bayesian learning landscape. However, we would like to emphasize that our key contribution lies in recognizing how these ideas can significantly improve FL, specifically in a One-Shot setting. In particular, we highlight the importance of explicitly capturing different modes of the posterior rather than framing the FL task purely as an optimization problem. Additionally, we believe that, by building upon a well-studied theoretical framework, our method, also thanks to its inherent simplicity, is both principled and robust. Moreover it brings, as pointed out by the reviewer, very strong empirical results. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. Although undeniable that your method gets good results against one-shot methods, the fact that its communication and computation costs are much more similar to "M-shot methods", means that I am really struggling to view the respective benchmarks given in the main body of the paper as fair comparisons. I am less impressed with the numerical results than I was after the initial review. 
Regarding your response to **Computational/Communication trade-offs and practical applicability**, I can understand this, but this is like reasoning that M-shot methods are worth it over one-shot methods; except in this instance, the method is being *proposed* as a one-shot method. I will keep my score as is, but am hesitant to recommend acceptance of the paper given my current understanding of the algorithm complexity and its comparison to existing literature. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for engaging in a discussion and providing feedback. In the following, we provide some comments hoping to clarify the concerns regarding: 1) Fairness of the comparison with SOTA One-Shot methods; 2) Why a One-Shot approach is worthwhile even when client costs are comparable to those of Few-Shot methods. **1) Comparison Fairness** We emphasize that the evaluated baselines also differ in communication and computation costs, with more resource-intensive methods generally performing better (see the first table in our rebuttal to reviewer wHXT and Tab. 1 in the paper). For example, FedFisher transmits roughly 5x more data than DENSE, and we have checked that a 5-shot DENSE outperforms FedFisher in terms of accuracy (up to 11% on CIFAR10, 5 clients, $\alpha=0.1$). We do not think this would be a valid reason to blame FedFisher. In addition, the fairness of a comparison between One-Shot and Few-Shot approaches is debatable, as they rely on fundamentally different assumptions: One-Shot methods operate without the server's feedback, while multi-round approaches exploit it to refine the global model. Thus, a direct comparison based solely on resource consumption overlooks this distinction. Our comparison with multi-round FedFisher was mainly performed to check whether, even in a setting that inherently advantages multi-round approaches, our method gives competitive results, *despite* being a One-Shot approach.
The positive outcome even surprised us, as we expected FedFisher to perform significantly better in this scenario. We acknowledge that FedBEns can be costly, particularly with many mixtures. However, we regard the number of mixtures as a tunable parameter to balance the performance-cost trade-off in specific application scenarios. The accuracy of FedBEns with a single mixture (M=1) is comparable to, or slightly better than, the baselines across all datasets and heterogeneity levels (compare Fig. 2 and Tab. 1 in the paper), and FedBEns significantly outperforms the baselines already for M=2. In terms of computational/communication costs (see the first table in the rebuttal to reviewer wHXT), FedBEns (M=1) is comparable to FedFisher and only marginally more demanding than RegMean, the best-performing methods aside from FedBEns.

**2) Advantages of a One-Shot approach vs Few-Shot approaches, even when client costs are comparable**

While FedBEns with M mixtures and FedFisher with M rounds entail similar per-client communication/computation costs, the overall time a client needs to participate in the training may differ significantly due to 1) server-to-client transmissions, 2) the straggler effect. After discussing these issues, we comment on the importance of a shorter client participation time.

*Server-to-client transmissions.* Assuming equal server/client uplink and downlink capacities, the communication time for M-round FedFisher scales as 2M, since the model must be sent from client to server and back in each round. In contrast, FedBEns communication time is halved since mixtures are sent once and then aggregated. If communication is the bottleneck (as is often the case in FL systems), training M-round FedFisher would require each client to remain available for roughly twice as long as FedBEns with M mixtures. (Note that FedBEns final model does not change if a client leaves after having sent its mixtures). 
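The transmission-count argument above can be illustrated with a toy participation-time model; a minimal sketch, where the function names, the per-transmission time, and the local training time are all illustrative assumptions rather than measurements from the paper:

```python
def fedfisher_participation_time(M, t_comm, t_local):
    """Toy model of M-round FedFisher: each round the client trains locally,
    uploads its update, then waits for the server's broadcast (2 transmissions)."""
    return M * (t_local + 2 * t_comm)

def fedbens_participation_time(M, t_comm, t_local):
    """Toy model of FedBEns with M mixtures: each mixture is trained and
    uploaded once; the client never waits for a server-to-client transmission."""
    return M * (t_local + t_comm)

# In the communication-bound regime (negligible local training time),
# the client must stay available roughly twice as long under M-round FedFisher.
ratio = (fedfisher_participation_time(5, t_comm=10.0, t_local=0.0)
         / fedbens_participation_time(5, t_comm=10.0, t_local=0.0))
```

With a nonzero local training time the ratio drops below 2, which matches the text: the factor-of-two gap is specific to the communication-bound case.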
*Straggler effect.* The reasoning above assumes homogeneous clients and synchronization. However, real-world systems are affected by stragglers, i.e., clients whose updates arrive significantly later than those of others [1]. Stragglers slow down the entire system, as the server must wait for their inputs before generating a new model to distribute. This, in turn, forces other clients to wait for the updated model, further increasing their participation time. In the case of one-shot algorithms, clients can leave as soon as they provide their contributions.

*Importance of a shorter participation time.* In a cross-device setting, clients often exhibit volatile participation patterns. Therefore, minimizing each client's required participation time is critical: it may determine whether a client contributes to the training or drops out before completion, thereby preventing the federation from leveraging its dataset and degrading the final model’s quality [1]. Finally, we observe that in FedBEns, each client can transmit each mixture as soon as it is computed. As a result, even if a client disconnects before completing all M mixtures, the system benefits from its partial contribution (in an additional experiment in line with those of the first rebuttal, ‘System heterogeneity’ section, 2 clients with 3 mixtures each lose up to 4% accuracy vs. 5 clients with 1 mixture, and up to 7% vs. the customized scenario).

We thank the reviewer again; we will expand this discussion in the paper if accepted.

[1] Kairouz et al. "Advances and open problems in federated learning." Foundations and Trends in Machine Learning, ‘21
Summary: This paper focuses on one-shot Federated Learning, where the model is aggregated in a single communication round. The authors provide an analysis through the lens of Bayesian inference and then propose a method to leverage the inherent multimodality of local loss functions to find better global models. Claims And Evidence: na Methods And Evaluation Criteria: na Theoretical Claims: na Experimental Designs Or Analyses: na Supplementary Material: na Relation To Broader Scientific Literature: na Essential References Not Discussed: na Other Strengths And Weaknesses: Overall, the results look promising. However, it would be beneficial to include discussions on the following aspects: - computation: The algorithm involves additional server-side training. It would be fair to compare its computational cost with other methods, as some model fusion methods don't require additional training. - Figure 1 is not very clear or easy to understand. For example, where is the global loss in the left figure? Are both figures plotted in the same way? Clarifying these points would improve readability. - It would be interesting to see whether this method converges faster when trained on the server. Other Comments Or Suggestions: na Questions For Authors: na Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: **1) Additional server-side computation:** We appreciate the reviewer’s observation—this is indeed an important point. Our method does entail increased server-side computation as the number of mixture components grows. However, we would also like to clarify that some of the baselines considered can be similarly demanding in terms of server-side resources. For instance, DENSE involves both synthetic data generation and knowledge distillation at the server, which results in a training time comparable to that of FedBEns with 5 mixture components. Furthermore, we would like to highlight that FedBEns outperforms DENSE even when using fewer mixture components (compare the results for FedBEns in Figure 2 with those in Table 1 in the manuscript) while requiring less server-side training time. To illustrate this more concretely, we provide below a table—also to be included in the paper—that reports the server training time for the CIFAR-10 experiment with 5 clients. The number of mixture components used in FedBEns is denoted as M.

|Method|Server average execution time (seconds)|
|-|-|
|FedBEns (Kron)|M=5: 173.5, M=4: 113.5, M=3: 65.9, M=2: 28.5, M=1: 7.1|
|DENSE|170.2|
|FedFisher|13.4|
|RegMean|5.1|
|OTFusion|1.1|
|FisherMerge|0.9|

Our approach increases server computation but improves predictive performance. Whether this additional computational cost is worth it depends on the practical context, and it can be justified if the server has enough computational power. For an additional discussion on the computational/communication cost, please refer to answer 1) to reviewer wHXT.

**2) Figure 1, clarification:** The two subfigures in Figure 1 illustrate different functions, as also indicated by the respective titles at the top of each subfigure. The left plot shows the local loss functions for client 1 (orange curves) and client 2 (blue curves), each with its own optimum (represented by orange and blue points, respectively). 
The right plot depicts the global loss function (green curve), obtained using Eq. (2) by combining the losses of the two clients from the left plot. The key takeaway is that, when estimating the global loss, capturing the secondary optimum is more crucial than the primary one, emphasizing the need for a multi-modal approach. We will modify the figure caption to make it clearer. **3) Faster convergence when trained on the server?:** If we have correctly understood the reviewer’s point, our method can indeed also be used in a centralized setting—that is, where all training data is available on the server. However, in this case, the server’s role in merging different posteriors becomes redundant, as there is no need to combine client-specific posteriors. Naturally, centralized training also eliminates communication overhead, providing an additional efficiency advantage. On the other hand, the federated approach could benefit from the inherent parallelism of distributed data processing. Overall, the relative training speed between centralized and federated setups depends on the interplay of these three factors, as well as potential differences in the hardware available in each case. As an illustrative example, in a centralized setting and using the same hardware described in Sec. 5.3, on CIFAR10, computing one mixture component over the entire dataset would require 129.5 seconds for training the Neural Network, plus 25 seconds for computing the Hessian. In a federated setting, the time to compute a client’s mixture component would decrease proportionally with the number of clients, due to parallelism. However, aggregating the results incurs a cost that scales linearly with the number of clients, as computing the global log-posterior in Eq. (4) requires summing over the clients' posteriors. 
Assuming identical hardware at the server and at the clients and ignoring communication delays, the federated approach on CIFAR10 with 5 mixtures is more efficient up to $\sim 20$ clients. This threshold decreases when communication overhead is included or when the centralized server has superior computational capabilities. Finally, accuracy increases with the number of mixtures also in a centralized setting. As an example, we report the average accuracy (3 seeds) for CIFAR10 as a function of the number of mixtures M, with the same training details reported in the paper: M=1: 79.48%, M=2: 82.11%, M=3: 83.28%, M=4: 84.99%, M=5: 85.46%.
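Both the Figure 1 discussion (point 2 above) and the aggregation step in point 3 rest on the same operation: the global log-posterior is the sum of the clients' log-posteriors. A minimal 1-D sketch of this combination, using hypothetical bimodal client posteriors, a flat prior, and illustrative mixture weights (none of these numbers are from the paper), shows why the secondary modes matter:

```python
import math

def gauss(x, mu, sigma=1.0):
    """Gaussian density N(x; mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def log_post_client1(theta):
    # Client 1: primary mode at 0, secondary mode at 3.
    return math.log(0.7 * gauss(theta, 0.0) + 0.3 * gauss(theta, 3.0))

def log_post_client2(theta):
    # Client 2: secondary mode at 3, primary mode at 6.
    return math.log(0.3 * gauss(theta, 3.0) + 0.7 * gauss(theta, 6.0))

def log_post_global(theta):
    # With a flat prior, the global log-posterior is (up to a constant)
    # the sum of the clients' log-posteriors (product of local posteriors).
    return log_post_client1(theta) + log_post_client2(theta)

# Grid search for the global mode: it lands on the *secondary* mode (3),
# shared by both clients, not on either client's primary mode (0 or 6).
grid = [i / 100 for i in range(-200, 801)]
global_mode = max(grid, key=log_post_global)
```

A unimodal (single-Laplace) approximation of each client would keep only the primary modes at 0 and 6 and miss the shared mode at 3 entirely, which is exactly the failure mode the mixture approach avoids.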
Summary: The paper introduces FedBEns, a one-shot federated learning (FL) algorithm using Bayesian inference to address multimodal local loss functions. It approximates local posteriors with a mixture of Laplace approximations (GMM) and aggregates them to estimate the global posterior. The server identifies global modes via SGD and performs ensemble predictions. Experiments on FashionMNIST, SVHN, and CIFAR10 show FedBEns outperforms baselines (e.g., FedFisher, RegMean) by up to 10% in accuracy, especially under high data heterogeneity (α=0.05). Claims And Evidence: Employing GMMs to model the multi-modality of each client's local loss is straightforward, and the proposed method is solid. Extensive numerical experiments support its validity. Methods And Evaluation Criteria: The method uses a mixture of Laplace approximations, which aligns with the goal of capturing multimodal posteriors in one-shot FL. Evaluation on standard datasets (FashionMNIST, SVHN, CIFAR10) with Dirichlet sampling for heterogeneity is appropriate. Theoretical Claims: Proposition 3.1 (Equation 2) claims the global posterior can be derived by combining local posteriors under conditional independence and same-prior assumptions. The proof (Appendix A) is correct, using Bayes’ theorem and factorization of likelihoods. The extension from unimodal to multimodal posteriors is straightforward. Experimental Designs Or Analyses: Experiments (Tables 1-4) vary heterogeneity and client numbers, with 5 seeds for robustness. The ablation studies on mixture size, Hessian approximations, temperature, and prior variance (Figures 2-5) are sound. Especially, the ablation study on the number of mixture components validates the superiority of employing multimodality instead of a unimodal Gaussian. However, the lack of statistical significance tests (e.g., t-tests) for accuracy comparisons may limit the reliability of superiority claims. 
In addition, the assumption of having validation data at the server is unrealistic in federated learning (FL), as FL typically assumes no raw training data exists at the server. If such data is available, it should be used for training rather than just validation. Supplementary Material: I reviewed all the contents in the supplementary material. Relation To Broader Scientific Literature: FedBEns builds on Bayesian FL by focusing on one-shot learning and multimodal posteriors, unlike FedPA’s multi-round MCMC approach. It extends Laplace approximations (MacKay, 1992) to mixtures, similar to the work in Eschenhagen et al. (2021). It contrasts with unimodal loss-based methods like FedFisher (Jhunjhunwala et al., 2024), emphasizing multimodal loss landscapes. Essential References Not Discussed: Please refer to "Relation to Broader Scientific Literature" Section Other Strengths And Weaknesses: The claim of computational efficiency is weakly supported—while a trade-off with mixture size is mentioned, no runtime or communication cost analysis is provided. Other Comments Or Suggestions: - Validation Set Usage: Why use a server-side validation set for hyperparameter tuning, given FL’s privacy constraints? If removed, how would performance change? - Computational Cost: Can you quantify the runtime and communication costs of FedBEns compared to baselines? - Does the proposed method work under feature heterogeneity such as domain shift? Questions For Authors: - Learnable Mixture Weights: Is it possible to make each mixture component weight learnable? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We are happy the reviewer appreciated the paper.

**1) Runtime/communication costs quantification:** We observe that the FedBEns per-client computational and communication costs are both linear in the number of mixtures M. On the contrary, in our implementation, the computation time at the server scales as M$^2$ (we have to find M modes of the global log-posterior through M SGD runs, and the cost of each evaluation of the global log-posterior scales linearly with M, see Eq. (4)). We provide experimental results for the CIFAR10 dataset with 5 clients on the hardware specified in Sec. 5.3. Times are averaged over 5 runs.

- Local training time: for each mixture component, 25.9s for training the NN and 5.0s for computing the Hessian with Kronecker factorization.
- Communication cost: for each mixture component, 17.8 MB (3.6 MB for the NN and 14.2 for the Hessian with Kronecker factorization).
- Server aggregation time: ($O(M^2)$). We report the execution time for different values of M: M=1: 7.1s, M=2: 28.5s, M=3: 65.9s, M=4: 113.5s, M=5: 173.5s.

We provide the communication and computation costs with the other baselines in the following table. For additional comments on this aspect and results comparing FedBEns and FedFisher under similar computational cost, see answer 2) to reviewer dxFK:

|Method|Client runtime (s)|Server runtime (s)|Communication cost (MB)|
|-|-|-|-|
|FedBEns(kron)|30.9$\cdot$M|7.1$\cdot$M$^2$|17.8$\cdot$M|
|FedFisher-KFAC|30.9|13.4|17.8|
|RegMean|26.0|5.1|7.3|
|FisherMerge|26.2|0.9|7.31|
|DENSE|25.9|170.2|3.65|
|OTFusion|25.9|1.1|3.65|

**2) Validation set usage:** We thank the reviewer for the interesting question. First, we note that FedFisher, RegMean, DENSE, OTFusion rely on a validation set at the server, both in their original papers and in our experiments. Our method uses a validation set during the server's gradient descent runs, to find the parameter configuration that achieves the best validation performance (Sec. 
5.3). We have performed an additional experiment where no validation dataset is used and the number of server epochs was fixed to 100. Remarkably, FedBEns still outperforms the considered baselines, which continue to take advantage of the validation dataset, with only a minor drop in accuracy (up to a few p.p.; see Table 1 in the paper and the supplementary table below). We will highlight this important point more clearly in the revised version.

**FedBEns, 5 mixtures Kron, NO VALIDATION**

|Dataset| Accuracy [%] (Average, 3 seeds)|
|-|-|
|**Alpha = 0.05**||
|FMNIST|59.15|
|SVHN| 70.26|
|CIFAR10|48.81|
|**Alpha = 0.4**||
|FMNIST|82.16|
|SVHN|84.69|
|CIFAR10|60.00|

**3) Feature heterogeneity:** We are not sure how to interpret the expression “feature heterogeneity.” If it refers to structurally different data across clients—e.g., clients having different numbers or types of features—this would require a different model architecture per client. Since our training method assumes a shared model architecture across all clients, it is not directly applicable to such a setting. In contrast, our method explicitly assumes *statistical heterogeneity* across client datasets—that is, data drawn from different underlying distributions. We do not impose strong assumptions on these distributions beyond conditional independence. If the reviewer refers to *domain shifts* after global model training, our method remains applicable in certain scenarios, such as covariate shift. In fact, FedBEns can be particularly beneficial in this context: when new clients join the federation with different data distributions, the approximate posterior produced by FedBEns can serve as a prior. This acts as a regularizer, helping to prevent the model from drifting too far when fine-tuned on data that differs significantly from the original training distribution. We believe that exploring this aspect more thoroughly would be an interesting direction for future work. 
**4) Learnable Weights:** A principled way to assign the weights would be to set them equal to the model evidence of each Gaussian component. We experimented with this approach and found that performance remained essentially unchanged, as the components’ model evidences were comparable to each other, in line with what has been found in [1]. Another possibility would be to let the server weigh each distribution, or all distributions belonging to a given client, based on the predictive performance of the model trained by that client. However, this approach may require the server to have explicit access to a dataset while, in our approach, it is not strictly needed (as discussed also in answer 2).

[1] Immer et al. “Scalable marginal likelihood estimation for model selection in deep learning”, ICML ‘21

**5) Statistical Significance:** All results (Tables 1 and 2) are statistically significant (p-value < 0.05), computed with a paired t-test between our models and the best-performing competitor.
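The paired t-test used in 5) can be sketched with the standard library alone. The per-seed accuracies below are hypothetical placeholders (not the paper's raw numbers), and 2.776 is the standard two-sided critical value of Student's t with 4 degrees of freedom at $\alpha = 0.05$ (matching 5 paired seeds):

```python
import math

def paired_t_statistic(a, b):
    """t statistic of a paired t-test between matched per-seed results a and b."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # unbiased sample variance
    return mean / math.sqrt(var / n)

# Hypothetical per-seed accuracies (placeholders, not the paper's numbers):
ours = [59.1, 60.4, 58.8, 59.9, 60.2]
best_competitor = [56.0, 56.9, 55.4, 56.5, 56.8]

t = paired_t_statistic(ours, best_competitor)
# Two-sided critical value for df = n - 1 = 4 at alpha = 0.05.
T_CRIT = 2.776
significant = abs(t) > T_CRIT
```

Pairing by seed removes the shared per-seed variation (data split, initialization), which is why it is the appropriate test when both methods are run on the same seeds.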
A Classification View on Meta Learning Bandits
Accept (poster)
Summary: This paper studies a meta-learning approach to multi-armed bandits (MAB), where multiple bandit instances (tasks) are drawn from an unknown prior distribution. The key contribution is formulating meta-learning bandits as a classification problem, leveraging a novel complexity measure called the classification coefficient. Claims And Evidence: Yes, most claims made in the submission are supported. However, 1. There is no lower bound analysis to show that the classification coefficient is the best or tightest complexity measure for this setting. The theoretical justification for why this measure is superior to alternative complexity metrics (e.g., information gain in latent bandits, KL divergence of reward distributions) is missing. Methods And Evaluation Criteria: The proposed method is well-motivated. However, there is no analysis of how regret scales with increasing task diversity or task ambiguity. Theoretical Claims: The high-level proof strategy follows standard approaches: First bound the number of rounds required for correct classification; Then analyze regret contributions from both classification and exploitation phases. Mostly correct, but the paper does not provide a lower bound to show whether this regret bound is tight or optimal. And the paper does not analyze cases where tasks are not well-separated, which could impact the validity of theoretical guarantees. Experimental Designs Or Analyses: 1. The experiments only use synthetic datasets, which are useful but do not demonstrate real-world applicability. The task distributions and separability conditions are artificial, making it unclear how well the method generalizes to practical applications. 2. The paper does not test real-world recommendation, healthcare, or RL-based MAB tasks, where meta-learning could provide significant benefits. 3. 
The theoretical results depend on task separability $\lambda$, but the paper does not systematically evaluate how different levels of $\lambda$ affect regret. Supplementary Material: No submission of supplementary material. Relation To Broader Scientific Literature: The approach aligns with previous work in latent bandits and meta-RL, where task identification plays a crucial role. Meta-learning in bandits aims to reuse knowledge from past tasks to improve future decision-making. Alternative approaches, such as latent variable models, clustering, or embedding-based meta-learning, are not explored. Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. The paper introduces the Explicit Classify then Exploit (ECE) algorithm, which consists of two phases: a. The algorithm first identifies the task by collecting data from different arms and using a decision-tree-based classification method. b. Exploitation Phase: Once the task is classified, the algorithm uses the optimal arm selection strategy corresponding to that task, minimizing exploration regret. 2. The paper introduces a classification-based perspective for meta-bandits, which is novel compared to traditional Bayesian or clustering-based approaches. Other Weaknesses: 1. ECE relies on decision trees for classification, which may not scale well in high-dimensional settings or when task distributions are complex. 2. No ablation studies to understand the effect of task separability $\lambda$ or decision tree depth on classification accuracy and regret. Other Comments Or Suggestions: 1. The intuitive explanation of the classification coefficient \( C_\lambda(M) \) is unclear and difficult to understand. 2. The method is introduced in a straightforward manner, lacking motivational analysis, which makes it feel like a direct application of classification methods. 3. 
Additionally, there is insufficient experimental visualization, such as task classification processes and decision boundaries. Questions For Authors: 1. No discussion on what happens when tasks overlap—does misclassification degrade performance significantly? 2. The algorithm assumes a fixed set of known tasks—how does it generalize to new, unseen tasks? 3. Is $C_{\lambda}(M)$ provably the best complexity measure for task classification in meta-bandits, or could a better bound be derived? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments. We provide below detailed replies. **Questions for authors** 1. Our analysis and experiments consider overlapping tasks [for “overlap”, we mean the bandits may have the same, or similar, reward distribution for some of the arms/contexts. If the reviewer was thinking of a different kind of overlap, we kindly ask them to please elaborate more on this]. The parameter $\lambda$ controls the degree to which at least one arm distribution *does not overlap* in any pair of bandits. Since the algorithm uses $\lambda$ to make the tests statistically robust, it suffers from the same misclassification rate with any $\lambda$, whereas the regret degrades as $O(\lambda^{-2})$ (see Theorem 3.1, 4.4) 2. Note that we also consider a setting where tasks are *not known* to the algorithm, which estimates them through samples (Section 4.1). The reviewer is right that the set of tasks is *fixed* in the paper: We do not provide results for generalization to tasks that are not seen in training. Arguably, generalization depends on “how” the test tasks differ from the training tasks. We conjecture that similar results could be obtained for the *lenient regret* (see https://arxiv.org/abs/2008.03959) when the test task is an $\epsilon$-perturbation of a training task. For more general discrepancies between training and test, the algorithm shall be re-designed to allow for additional test-time exploration, which is a nice direction for future works. 3. We proved that $C_{\lambda} (\mathbb{M})$ captures the complexity of an instance of our setting with both a lower bound (Theorem 3.3) and an upper bound (Theorem 3.1). This implies that $C_{\lambda} (\mathbb{M})$ is *as good as any other* complexity measure for this *specific* setting: Finite collection of separated bandits. For the latter setting, there might be other complexity measures that are equally good (we are not aware of). 
In other settings, there might be better complexity measures.

**Lower bound** We actually provide a lower bound where a factor $C_\lambda (\mathbb{M})$ appears explicitly (see Theorem 3.3 and discussion thereafter). We thank the reviewer for making us realize that the lower bound is not highlighted enough. We will make this result more central in the updated manuscript.

**Ablation study on the effect of separability** The reviewer mentions a few times that an analysis of the effect of $\lambda$ on empirical results is missing. We hear their concern: we will report more results for different $\lambda$ values in the Appendix. What we noticed from our experiments is the same effect that one can see by squinting at Figure 4 (a, b). In 4a, $\lambda$ is 0.4 and the elbow in the DT-ECE curve is very sharp. In 4b, $\lambda$ is 0.04 and the elbow is a lot smoother. With smaller $\lambda$ it takes more time to traverse the tree, both because each test requires more samples and because the tree might be deeper.

**Experimental design** We agree that experiments on real-world data better support real-world applicability. We will add to the manuscript an experiment on movie recommendations with the MovieLens dataset (https://grouplens.org/datasets/movielens/), along the same lines as the MovieLens experiment in Hong et al. 2020a (see response to R. 1Rko "experimental design").

**Other weaknesses:** *“ECE relies on decision trees for classification, which may not scale well in high-dimensional settings or when task distributions are complex”* We recall that the regret of ECE is not affected by the dimensionality of the problem (see Theorem 3.1). However, we believe the Reviewer means something else here: the additional complexity of computing and storing the decision tree. The computational complexity is polynomial in both $d$ (dimensionality) and $M$ (size of the set of tasks), where the factors of $d$ disappear when the tasks are known. 
The space complexity is $M 2^{C^* (\mathbb{M})} \leq M 2^M$, which may become intractable for a large set of tasks. In those cases, the tree can be traversed *online* without pre-computing it (like in Algorithm 1) with negligible additional space complexity. We will add space complexity and considerations on balancing interpretability and memory requirements in the updated manuscript. **Intuitive explanation of coefficient $C_\lambda (\mathbb{M})$:** We thank the reviewer for pointing out that the definition of the classification coefficient is difficult to understand. See the comment for all the reviewers in the response to Reviewer HVdH for an intuitive explanation. **Insufficient experimental visualizations:** Following the Reviewer’s suggestion, we will add visualizations of the decision trees learned by our algorithm in the Appendix. As an example, we provide the one for Figure 4a at https://anonymous.4open.science/r/anon_icml_rebuttal-90C9/empirical_tree.pdf. Especially, note the short depth of the tree even for a hard instance that maximizes $C_\lambda (\mathbb{M}) = M$.
Summary: The authors address a meta-linear latent contextual bandit setting, where there are M total possible bandit settings, and where there is separation between these M settings. Authors thus propose a classification view that leverages this separation: first, classify the test task as one of M settings; then, perform the greedy policy assuming the test task is correctly classified ("ECE": Explicit Classify then Exploit). Authors prove properties about the proposed method when assuming full knowledge of the bandit settings, including a regret bound that uses a novel measure of complexity. Authors also propose a practical implementation of ECE with decision trees. The resulting exploration policy is interpretable, which can be an important feature in practice. Authors also prove properties of the practical method. Authors demonstrate that the performance of the proposed method is on par with versions of TS and UCB, which are not interpretable. Claims And Evidence: Theoretical claims: - Theorems and lemmas have proofs in the appendix. Due to limited time, from spot checking, Theorem 3.1 seems to have no (serious) issues. - Occasionally claims are made casually in the text that cite another work, but readers could benefit from a proof sketch in the appendix (e.g. Eq 5). Empirical: - The proposed method performs on par with versions of TS and UCB: this looks fine to me. Methods And Evaluation Criteria: Methods: - The methods (with and without full knowledge of $\mathcal M$) seem very reasonable to me, given the assumed setting. - I am curious what kinds of realistic settings the assumptions could correspond to, and especially ones where the proposed method should perform particularly well. - I am curious about what values $N_{cls}$ takes on in practice. Evaluation: - The experiments are done on simple settings but the paper contribution is primarily theoretical. 
- One could say that because the proposed method is interpretable, unlike existing baselines, it adds value even if it does not outperform. This would be true assuming there do not exist other tree-based exploration policies for this setting. Theoretical Claims: I checked the proof of Theorem 3.2 for correctness and I did not find (serious) errors. Experimental Designs Or Analyses: Yes, I read over the experiments section and it looks sensible. Supplementary Material: - I reviewed Appendix C - I reviewed parts of Appendix A and B (proofs), but only Theorem 3.2 and Lemma A.2 in detail. Relation To Broader Scientific Literature: This paper proposes using tree-based classification for interpretable exploration in a meta-learning contextual bandit setting where the total number of settings is fixed ($M$) and the settings have separation. Perhaps I am not that familiar, but I don't often see meta-bandit papers with separation and with tree-based classification. Essential References Not Discussed: There are many papers that are tangentially related that I am familiar with (e.g. other meta-bandit papers) but I wouldn't consider any of them to be essential for discussion. Other Strengths And Weaknesses: Strengths - The paper is clearly written. - The proofs are well-organized and concise. - The paper does not overpromise or overhype. Weaknesses - The linear bandit assumption is strong. However, such assumptions are also common. Other Comments Or Suggestions: - Typos in the abstract, e.g. "asses" and "When human design strategies", and also elsewhere, e.g. "For instance, to minimize the number of times a treatment different from the optimal one is administered to a patient" is not a full sentence. - Algorithm 1 line 4 should be argmax rather than max - It seems like / is used to denote set-minus (e.g. line 252, 587), which in my impression is less common than \. 
- In Section 3.1 it would improve clarity to explain that the non-optimal arms are such that for $i\neq j$, there is some arm $a$ for which $\mu_i(a)=(1+\lambda)/2$ and $\mu_j(a)=(1-\lambda)/2$. (I assume this is true based on the rest of the section, but it is not explicitly stated.) - I think line 815 does not follow from line 812, but line 818 is still true. The KL should not disappear on line 815. - Where does $\hat m_t$ come from in line 593? - Why is $C(M)\leq C^*(M)$ in (5)? - In the abstract, part of the motivation is "When human design strategies, they aim for the exploration to be *fast*, since the patient’s health is at stake". This phrasing suggests something more urgent / is measured via something other than better asymptotics. I think this is a minor point. Questions For Authors: 1. What is $N_{cls}$ in the experiments? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We are glad to hear that the reviewer appreciated our work. We thank them for their thoughtful comments, useful suggestions, and for pointing out typos. We will make use of them to improve the manuscript. We address their questions below.

**Value of $N_{cls}$ in the experiments** We used $N_{cls} = C / \lambda^2$ in all the experiments, where $C$ is a constant. We set $C = 4$ after trying $C = 2, 4, 8, 16$ (results are not particularly sensitive to this value). Moreover, we implemented an adaptive sampling strategy that checks the confidence of the test at each sample, so that the sampling ends whenever the desired confidence is reached or $N_{cls}$ samples have been taken. The adaptive sampling also didn’t affect results substantially, but it makes the algorithm slightly more efficient in some settings. We thank the reviewer for pointing out that this information is missing in the text. We will add a specific section with experimental details to the Appendix.

**Other comments:**

- Why is $C (\mathbb{M}) \leq C^* (\mathbb{M})$ in (5)? Thank you for the opportunity to clarify. First, we provide below a detailed explanation of $C (\mathbb{M})$, which can help to understand the proof. Proof sketch: it is a standard argument for decision trees that $C(\mathbb{M})$ is a lower bound on $C^* (\mathbb{M})$. Consider any subset of hypotheses $S$ and suppose that, starting from this subset, the maximum number of hypotheses we can rule out in each round is $N$. Then the depth of any deterministic decision tree starting from $S$ must be at least $|S|/N$. Since $C^* (\mathbb{M})$ is the depth of a deterministic tree for the larger set of hypotheses, $|S|/N \leq C^*(\mathbb{M})$. This holds for all subsets of $[M]$, hence $C(\mathbb{M}) \leq C^*(\mathbb{M})$. Following the reviewer’s suggestion, we will add this sketch of the proof of eq. 5 in the Appendix. 
- *In Section 3.1 it would improve clarity to explain that the non-optimal arms are such that for $i \neq j$, there is some arm $a$ for which $\mu_i (a) = (1 + \lambda) / 2$ and $\mu_j (a) = (1 - \lambda) / 2$. (I assume this is true based on the rest of the section, but it is not explicitly stated.)* The reviewer is right! This is implied by the separation assumption (Asm. 1), but we agree with the reviewer that it is left very implicit. We will explicitly mention this in the text. ------ # **For all the Reviewers** **Intuitive explanation of $C_\lambda (\mathbb{M})$** Based on all the reviews, we understand that our exposition of $C_{\lambda} (\mathbb{M})$ could be improved with more intuition. We provide an intuitive explanation of its meaning here, and we hope the additional clarity will help the reviewers better appreciate this important contribution. Informally, the value of $C_\lambda (\mathbb{M})$ measures how much information we gain from a single split of the worst-case node of the decision tree (a small value means more information). To see this, let us look at the math: $$C_\lambda (\mathbb{M}) := \max_{S \in 2^{[M]}} \min_{\pi \in [K]} \max_{i \in S} \frac{|S|}{|S_{\lambda}^\pi (i)|}$$ Let us unpack each term. $S$ is a set of candidate bandits and $i$ stands for the true bandit within $S$. $\pi$ is an arm we *test* on the true bandit. $S_{\lambda}^\pi(i)$ is the subset of $S$ that we can eliminate from the set of candidates by pulling $\pi$ enough. Thus, the ratio $|S| / |S_{\lambda}^\pi(i)|$ measures the amount of information we gain from this split. In the extreme cases: - We eliminate a single bandit from $S$, so that $C_{\lambda} (\mathbb{M}) = M$ is large (little information gained); - We eliminate half of the bandits in $S$, and $C_{\lambda} (\mathbb{M}) = M / (M/2) = 2$ is small (a lot of information gained). Note that we cannot reduce $C_{\lambda} (\mathbb{M})$ further by eliminating more than half of the candidates, as $i$ is chosen worst-case to select the largest remaining set after the split. 
We will add this intuitive explanation together with extreme cases and visualizations in the updated manuscript.
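The extreme cases above can also be checked numerically on toy instances. The brute-force sketch below is illustrative only, not the paper's implementation: it assumes one concrete reading of $S_{\lambda}^{\pi}(i)$, namely the candidates in $S$ whose mean at the tested arm differs from the true instance's mean by at least $\lambda$.

```python
from itertools import combinations

def classification_coefficient(means, lam):
    """Brute-force C_lambda over all candidate subsets S (exponential in M,
    so only for toy instances). means[i][a] is the mean of arm a under
    bandit instance i. We take S_lambda^pi(i) to be the candidates j in S
    whose mean at the tested arm pi differs from the true instance i's by
    at least lam, i.e. those eliminable by pulling pi enough times."""
    M, K = len(means), len(means[0])
    best = 0.0
    for size in range(2, M + 1):
        for S in combinations(range(M), size):
            # min over test arms pi of the worst-case (over the truth i) split ratio
            val = min(
                max(
                    len(S) / max(1e-9, sum(
                        1 for j in S
                        if abs(means[j][pi] - means[i][pi]) >= lam
                    ))
                    for i in S
                )
                for pi in range(K)
            )
            best = max(best, val)
    return best

# Two well-separated bandits: one pull of either arm halves the candidate set.
print(classification_coefficient([[0.8, 0.2], [0.2, 0.8]], 0.5))  # 2.0
```

With a third instance that matches instance 0 on arm 0 and instance 1 on arm 1, the worst-case node only sheds one candidate per test and the coefficient grows to 3, matching the "little information" extreme.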
Summary: This paper presents a novel classification-based approach to meta-learning bandits. Contextual multi-armed bandits are widely used for sequential decision-making, but common bandit algorithms have issues like high regret and lack of interpretability. The authors consider a meta-learning setting of latent bandits with a separation condition. They introduce a classification coefficient to measure the complexity of the learning problem. The Explicit Classify then Exploit (ECE) algorithm is proposed, which classifies the test task and exploits the optimal policy of the classified task. To make the algorithm more practical, the Decision Tree ECE (DT-ECE) is developed. It is robust to misspecifications, only accesses samples from the context distribution, and provides an interpretable exploration plan. Experiments show that DT-ECE performs well compared to current bandit approaches. In conclusion, this classification view offers a new way to design interpretable and efficient exploration plans, and may inspire future research in more general problem settings. Claims And Evidence: While the authors aim to enhance the interpretability of bandit algorithms, their motivation lacks sufficient depth. The claim that bandit algorithms lack interpretability is controversial, as these algorithms, grounded in probability theory with well-defined mathematical assumptions, already possess a form of inherent interpretability. The paper fails to adequately justify the need for further interpretability. To strengthen the motivation, more real-world examples are essential. Methods And Evaluation Criteria: Considering only regret might be insufficient. In real-world applications, other factors like space complexity, which will be worse due to the tree, could also be important evaluation aspects that are not fully explored. Theoretical Claims: The paper's assumption of a stochastic environment when optimizing multiple candidate bandit instances is a flaw. 
In reality, these bandits often operate in non-stationary conditions. Each bandit might be at a different stage of convergence, leading to diverse current performances. Could the authors explain this point? Experimental Designs Or Analyses: The experimental datasets are simplistic, consisting entirely of synthetic data with relatively fixed parameters. This simplicity raises concerns about the generalizability of your findings. Real-world problems are often complex and variable, and the lack of diverse and realistic datasets means that it's difficult to assess how your proposed methods would perform in practical scenarios. The absence of scalability experiments is a significant drawback. The hyperparameters of the setting are fixed. Regarding the comparison with baseline methods, limiting the comparison to only one or two baselines is insufficient. There are many established methods in the meta-bandit literature, and not comparing your approach with a wider range of them makes it difficult to position your work within the existing research landscape. A more comprehensive comparison would provide a better understanding of the relative strengths and weaknesses of your proposed algorithms. Supplementary Material: Yes. I have read the proofs. Relation To Broader Scientific Literature: The study of bandits is related to many ML fields. Essential References Not Discussed: Qi Y, Ban Y, Wei T, et al. Meta-learning with neural bandit scheduler[J]. Advances in Neural Information Processing Systems, 2023, 36: 63994-64028. Sharaf A, Daumé III H. Meta-learning effective exploration strategies for contextual bandits[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2021, 35(11): 9541-9548. Xie M, Yin W, Xu H. AutoBandit: A Meta Bandit Online Learning System[C]//IJCAI. 2021: 5028-5031. 
Other Strengths And Weaknesses: The paper is replete with symbols which, while essential for the mathematical rigor of the research, lack sufficient explanation of their importance. For example, \(C_{\lambda}(\mathbb{M})\): why is it a core contribution? An explanation of each symbol's meaning would make the presentation clearer. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We want to thank the reviewer for their feedback. We reply to their comments below. **Claims and evidence** Interpretability is a crucial motivation of our paper and we want to make sure we are on the same page with the reviewer on this. Especially: - **Interpretability of bandit algorithms**: Translating a notion from supervised learning to the bandit setting, we say that exploration is interpretable if it can be represented through a decision tree with *small depth* (see Bressan et al. 2024 https://proceedings.mlr.press/v247/bressan24a Def. 2). Our work is the first to establish bounds on the depth of the decision tree for exploration (Lemma 4.3) - **UCB and TS are interpretable**: UCB and TS may also be represented through a decision tree, but its depth can be as large as the number of rounds. According to this notion, UCB and TS are *not interpretable*. See an example of how our algorithm builds a compact decision tree (https://anonymous.4open.science/r/anon_icml_rebuttal-90C9/empirical_tree.pdf from the experiment of Figure 4a), whereas an analogous tree for UCB or TS exploration is too large to be visualized. **Methods and evaluation criteria** Great point! Storing the full decision tree requires an additional $M 2^{C^* (\mathbb{M})} \leq M 2^M$ space in the worst case, which may be impractical in applications where the number of tasks $M$ is huge. In those cases, the tree can be traversed *online* without pre-computing it (like in Algorithm 1) with negligible additional space complexity. We will add space complexity and considerations on balancing interpretability and memory requirements in the updated manuscript. **Theoretical claims** Regarding “Each bandit might be at a different stage of convergence”: we are not sure we understand the reviewer’s concern here. Note that we interact with only one bandit at test time. Moreover, stationarity is orthogonal to whether the reward distributions are stochastic or adversarial. 
We target the stationary stochastic setting, just like the previous literature on latent bandits (with the exception of Hong et al 2020b), aiming for fast and interpretable exploration. This makes the approach *more* practical (than previous works) in some applications, e.g., when the candidate bandits represent user “types” in a recommendation system: It is often reasonable to assume that a user doesn’t change “type” during the interaction. We agree with the reviewer that some other applications are inherently non-stationary, and extending our results to them is a non-trivial direction for future work. **Experimental design** While we developed a method intended for practical settings, we remark that the core contribution of the paper is conceptual. However, to support the practicality of our method, we will add an additional experiment in which we compute the decision tree on the MovieLens dataset (https://grouplens.org/datasets/movielens/), similarly to Hong et al 2020a (see their Sec 5.2). Users are first clustered into types based on historical data. Then, the decision tree that we learn essentially prescribes what movies to recommend in order to classify the type of an unknown user, and then maximize their reward by providing recommendations specialized for the classified type. - **Hyperparameters are fixed.** We considered different values for the separation parameter and the numbers of arms and bandits. Is there any other parameter the reviewer would like to see explored? - **Baselines.** The aim of the experiments is not to outperform any prior bandit algorithm, as ours is the first method that is *interpretable* (in the sense above) and thus competes in a different field. If the reviewer thinks there is a baseline that is relevant to this field, we will be happy to include a comparison. **Essential references** Thanks for pointing out additional relevant work. We will be happy to include them in the paper with appropriate discussion. 
Qi et al and Xie et al are interesting but tackle orthogonal problems. S&D 2021 meta-learn exploration for contextual bandits, which is similar in nature to our setting. Their algorithm, MELEE, trains exploration by imitating an expert (optimal exploration) in simulation. There are crucial differences: - We do not assume access to an expert during training, but only to simulators - MELEE is not interpretable according to our notion (see above), as we cannot bound the depth of a decision tree representing its policy - MELEE’s regret goes to zero asymptotically, but the regret rate is implicit in their result **Other weaknesses** Thanks for letting us know that the symbols are sometimes hard to grasp. We will carefully revise all the symbols and make sure to convey enough intuition. See the comment for all the reviewers in the response to Reviewer HVdH for an intuitive explanation of the coefficient $C_{\lambda} (\mathbb{M})$.
Contrastive Private Data Synthesis via Weighted Multi-PLM Fusion
Accept (poster)
Summary: The paper introduces WASP, an approach for generating differentially private synthetic data by leveraging multiple pre-trained language models (PLMs) in a collaborative manner. It tackles (1) limited private samples, (2) noisy synthetic data, and (3) risky PLM selection. It employs a Top-Q voting mechanism to improve private data distribution estimation, uses contrastive prompts to reduce noise, and dynamically weights PLMs based on their performance. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: Using LLMs for data publishing Essential References Not Discussed: Yes Other Strengths And Weaknesses: (1) Pros The paper introduces a novel framework (WASP) that combines multiple PLMs to generate high-quality DP synthetic data, addressing key limitations of existing PE methods. The use of Top-Q voting and contrastive prompts is innovative and effectively improves the quality of synthetic data while maintaining privacy guarantees. The paper provides a thorough theoretical analysis of the differential privacy guarantees of WASP, including proofs and sensitivity analysis. (2) Cons While the framework is effective, the complexity of combining multiple PLMs and the iterative nature of the process may raise concerns about scalability, especially for large-scale datasets or when using computationally expensive PLMs (e.g., GPT-4). The paper does not provide a detailed analysis of the computational cost or time complexity of WASP compared to baseline methods. The paper focuses exclusively on text data, and it is unclear how well WASP would generalize to other types of data (e.g., images, tabular data). The paper assumes a fixed privacy budget across all iterations. However, in practice, the allocation of the privacy budget across iterations could be optimized to further improve the quality of synthetic data. 
Other Comments Or Suggestions: NA Questions For Authors: See cons. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate our reviewer's insightful comments that help us improve our work. **Q: Complexity of combining multiple PLMs and computational cost & complexity comparison.** First of all, as stated in Lines 105~107 on the left in our paper, no additional queries are made to PLMs under the same number of required synthetic samples $N$ in the WASP framework compared to baseline methods, although samples are generated in an iterative manner. As shown in Table G below, the computational cost of WASP is slightly higher than that of PE baselines, mainly due to the increased prompt length ($8$ in-context samples instead of $1$) and the "furthest histogram" calculation. Second, the primary computational cost is driven by synthetic sample generation, which accounts for nearly 3000 times the runtime of other operations (including DP Top-$Q$ Voting and PLM Importance Weighting). While this process is unavoidable, users can mitigate the impact of computationally expensive PLMs by choosing faster alternatives. Also, as shown in Table G, for large-scale datasets, WASP's computational cost remains manageable. Table G. Comparison of computational complexity and runtime (seconds) of WASP and PE series baselines within each iteration (averaged across iterations and also across different PLMs for PE series baselines). "Others" includes DP Top-$Q$ voting and PLM importance weighting, i.e. lines 7~10 in Alg. 1, with the latter being a tensor normalization operation on the Nearest Histogram vector with negligible time cost. 
|||Aug-PE|Aug-PE|Pre-Text|Pre-Text|WASP|WASP|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|||Generation|Others|Generation|Others|Generation|Others|
||Complexity|$O(N)$|$O(MN)$|$O(N)$|$O(MN)$|$O(N)$|$O(MN)$|
|IMDb|$L=1,M=100$|1.341$\times$10$^{4}$|5.846|-|-|1.970$\times$10$^{4}$|6.090|
||$L=10,M=300$|-|-|1.348$\times$10$^{4}$|7.153|1.963$\times$10$^{4}$|7.668|
|Yelp-Rating|$L=1,M=100$|1.539$\times$10$^{4}$|5.468|-|-|2.175$\times$10$^{4}$|5.528|
||$L=10,M=300$|-|-|1.561$\times$10$^{4}$|7.376|2.235$\times$10$^{4}$|7.608|
|Openreview-Rating|$L=1,M=100$|1.268$\times$10$^{4}$|5.655|-|-|1.949$\times$10$^{4}$|5.871|
||$L=10,M=300$|-|-|1.215$\times$10$^{4}$|7.713|1.982$\times$10$^{4}$|7.911|

**Q: Other data modalities.** We believe that the WASP framework can also be extended to other modalities. WASP's key components, including DP Top-$Q$ voting and PLM importance weighting, are general operations which can be directly applied to other data modalities. However, the process for synthetic data generation differs by modality. For example, for image generation [7,8], the applied models and APIs are different from text, with diffusion models and RANDOM_APIs often used to modify existing samples with a hyper-parameter controlling the variation degree. Therefore, non-trivial adaptations should be made to the generation procedure. Similarly, for tabular samples [9,10], non-trivial modifications are also required for crafting proper generation prompts and pipelines. As a case in point, the original PE method [7] focuses exclusively on image data, and Aug-PE [11] makes non-trivial adaptations with carefully designed generation techniques to generalize it onto text data. Given the extensive work required, we think generalizing WASP to other modalities deserves a separate paper/papers. **Q: Further optimization of privacy budget allocation.** We thank our reviewer for pointing out an interesting future step for our work. 
Since our primary focus lies in the fusion of multiple PLMs for private synthetic data generation, we leave this as a promising direction for future work. --- [7] Zinan Lin et al., Differentially Private Synthetic Data via Foundation Model APIs 1: Images, ICLR 2024. [8] Kecen Li et al., PrivImage: Differentially Private Synthetic Image Generation using Diffusion Models with Semantic-Aware Pretraining, USENIX Security 2024. [9] Mikel Hernandez et al., Synthetic data generation for tabular health records: A systematic review, Neurocomputing 2022. [10] Eugenia Papadaki et al., Exploring innovative approaches to synthetic tabular data generation, Electronics 2024. [11] Chulin Xie et al., Differentially Private Synthetic Data via Foundation Model APIs 2: Text, ICML 2024.
Summary: The paper proposes a novel framework called WASP, designed to generate synthetic data that mimics real private datasets while ensuring DP. WASP addresses three key challenges in existing methods: limited private samples, noisy synthetic data, and the risk of selecting the wrong PLM. It uses a Top-Q voting mechanism to estimate private data distributions more accurately, leverages contrastive learning to improve synthetic data quality, and dynamically weights multiple PLMs to mitigate model bias. The framework is tested on six datasets with both open-source and closed-source PLMs, demonstrating superior performance over existing methods. Claims And Evidence: The claims made in the submission are generally supported by clear and convincing evidence. However, while WASP performs well across multiple PLMs, the claim of being "PLM-agnostic" might be overstated. The ablation experiments primarily focus on a limited set of PLMs, specifically three GPT-based models when testing with K=2 PLMs. This limitation in the scope of PLMs tested raises questions about how well WASP would perform with a broader range of PLMs, including those from different architectures or vendors. Methods And Evaluation Criteria: The proposed method, WASP, and its evaluation criteria generally make sense for the problem. Theoretical Claims: No critical errors were found in the provided proofs. Experimental Designs Or Analyses: The experimental design in WASP demonstrates methodological rigor. It tests 6 NLP tasks including sentiment analysis, rating/field classification with 6 open-source and 3 closed-source PLMs. However, there are some potential issues: 1. Comparison Fairness: Aug-PE uses single PLMs, while WASP benefits from multi-PLM fusion; no comparison to Aug-PE with equal compute/API budgets. 2. Task Scope: Limited to classification; no validation on generation/sequence tasks (summarization). Supplementary Material: The supplementary materials are comprehensively reviewed. 
The supplementary materials provided essential technical validation for the theoretical soundness of the DP mechanisms, practical implementation details, extended empirical evidence beyond the main results, and methodological differentiation from prior work. Relation To Broader Scientific Literature: 1. Enhanced PE Methods - Prior: PE (Lin et al., 2024; Xie et al., 2024) used Top-1 voting, struggling with data scarcity. - WASP: Introduces Top-Q voting with decaying weights, improving sensitivity analysis (Δ=2 → Δ=4) and enabling robust distribution estimation with limited samples. 2. Contrastive ICL + DP Synthetic Data - Prior: Contrastive ICL improved model responses (Gao & Das, 2024). - WASP: First to combine contrastive ICL with DP guarantees, using low-quality samples as negative examples to reduce noise (addressing Xie et al., 2024's "noisy data" issue). [1] Xie, C., Lin, Z., Backurs, A., Gopi, S., Yu, D., Inan, H. A., Nori, H., Jiang, H., Zhang, H., Lee, Y. T., et al. Differentially Private Synthetic Data via Foundation Model APIs 2: Text. In Forty-first International Conference on Machine Learning, 2024. [2] Lin, Z., Gopi, S., Kulkarni, J., Nori, H., and Yekhanin, S. Differentially Private Synthetic Data via Foundation Model APIs 1: Images. In The Twelfth International Conference on Learning Representations, 2024. [3] Ye, J., Gao, J., Wu, Z., Feng, J., Yu, T., and Kong, L. ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback. In Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 3671–3683, 2022. Essential References Not Discussed: As far as I know, there are no related works essential to understanding the (context for the) key contributions of the paper that are not currently cited/discussed in the paper. Other Strengths And Weaknesses: It would be clearer to show the ablation results if the authors added another comparison without both contrastive prompting and importance weighting when conducting the ablation study in Table 4. 
Other Comments Or Suggestions: A more thorough comparison with other DP methods, such as DP-SGD, would offer a clearer understanding of WASP's advantages. Additionally, presenting a time cost comparison between the proposed method and standard PE would strengthen the evaluation. Questions For Authors: 1. Is the contrastive learning process efficient in terms of computational resources, especially when dealing with large datasets or complex tasks? 2. The federated DP analysis assumes ≤8 samples/user for Δ=32 sensitivity. What happens if users contribute more samples (e.g., 100/user)? Does WASP’s noise scale linearly, making it impractical, or are there optimizations? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks very much! **Q: Fair comparison with equal API budget.** We emphasize that, as stated in Lines 105~107 on the left in our paper, no extra query to the PLMs is incurred by WASP compared to single-PLM PE baselines when they are compared under the same number of required synthetic samples $N$. So we are making a fair comparison to Aug-PE with equal API budgets (number of queries) throughout our experiments. **Q: Validation on generation tasks.** We provide results on the question answering task with the SQuAD dataset in Table D. These results also confirm the effectiveness of our proposed WASP for generation tasks and will be added to our final version. Table D. Evaluation of STM performance (F1) on SQuAD dataset. The same setting is used as that in *Table 1* in the paper. ||Only Private|FuseGen|Aug-PE$_{GPT-2}$|Aug-PE$_{Llama-2}$|Aug-PE$_{Vicuna}$|Aug-PE$_{OPT}$|Aug-PE$_{ChatGLM3}$|Aug-PE$_{Flan-T5}$|WASP| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |F1|5.41|9.31|7.37|7.84|8.46|8.31|8.20|9.32|11.40| **Q: Ablation without both contrastive ICL and PLM importance weighting.** By removing both components, the final STM performances are $89.05$%, $58.72$%, $35.45$% for IMDb, Yelp-Rating and Openreview-Rating respectively. Compared with the results in *Table 4*, these results show that both components are vital in boosting the final performance of WASP. See full results in Table E1 in https://anonymous.4open.science/r/WASP/re.pdf . **Q: Comparison with DP-SGD.** We provide experimental results for first fine-tuning a single PLM with $M=100$ private samples under the same setting of *Table 1* in our paper with DP-SGD and then using the fine-tuned PLM for generation ("DP-SGD+Gen") in Table F (see Table F1 in https://anonymous.4open.science/r/WASP/re.pdf for more results). Results demonstrate the superiority of WASP compared to the DP-SGD method. We will add these to our final version. Table F. STM performance of WASP and "DP-SGD+Gen" with $N=6000,M=100,L=1$. 
|||GPT-2|Llama-2|Vicuna|OPT|ChatGLM3|Flan-T5|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|IMDb|DP-SGD+Gen ($K=1$)|87.44|84.63|84.93|81.47|83.18|89.14|
||WASP ($K=6$)|||89.52||||
|Yelp-Rating|DP-SGD+Gen ($K=1$)|50.04|49.95|57.46|55.68|45.79|60.85|
||WASP ($K=6$)|||61.21||||

**Q: Time cost comparison.** We compare the runtime of WASP and PE methods in Table G in the response to Reviewer ngfH below, due to the response length limit. Results show that the additional time overhead of WASP per iteration is minor compared to PE methods. Since WASP makes no extra PLM queries compared to PE methods, its runtime increase comes from longer prompts (8 vs. 1 in PE methods) for generation, and from the additional calculation of the "furthest histogram" for other operations during in-context learning sample selection. **Q: Efficiency of contrastive in-context learning.** Results in Table G in the response to Reviewer ngfH below show that increasing the total number of private samples $M$ or the task complexity has little impact on WASP's runtime, confirming its efficiency. **Q: Optimization for user-level DP when data parties contribute more samples.** First, for fair comparison in the federated setting, we strictly follow Pre-Text [4], assuming each data party controls no more than 8 samples to fit the on-device setting. If users contribute more, optimizations like norm clipping [5,6], an off-the-shelf and widely applied technique, can keep noise levels practical by capping the norm of the voting vectors $H^n_l,H^f_l$ given by data party $l$ at a preset bound $\zeta$, preventing the user sensitivity $\Delta$ from scaling linearly with the user's private dataset size. Under the same $\zeta$, using more private samples (e.g. 100/user) will not degrade performance compared to using fewer (e.g. 8/user). Moreover, if a single party controls a large number of samples (e.g. 
100), as shown in *Table 1* in our paper, the party alone can achieve good performance using sample-level DP without the need to collaborate with others. **Q: Results for more PLMs of different architectures with $K=2$ to support the PLM-agnostic claim.** Table H shows results for $6$ open-source PLMs. These results show that each pair's performance exceeds that of either participating PLM alone, supporting our PLM-agnostic claim. Table H. STM performance of $K=1$ (diagonal) and $K=2$ (others) of WASP with $N=6000,M=100$. ||GPT-2|Llama-2|Vicuna|OPT|ChatGLM3|Flan-T5| |:-:|:-:|:-:|:-:|:-:|:-:|:-:| |GPT-2|85.65|86.72|85.91|85.85|86.92|89.33| |Llama-2||85.82|85.92|86.23|86.97|89.45| |Vicuna|||82.90|85.80|86.18|89.30| |OPT||||84.32|86.22|89.32| |ChatGLM3|||||86.12|89.63| |Flan-T5||||||89.28| --- [4] Charlie Hou et al., PrE-Text: training language models on private federated data in the age of LLMs, ICML 2024. [5] Anda Cheng et al., Differentially private federated learning with local regularization and sparsification, CVPR 2022. [6] Fumiyuki Kato et al., Uldp-FL: Federated Learning with Across-Silo User-Level Differential Privacy, VLDB 2024. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. The authors have addressed the key concerns well. Overall, the rebuttal significantly improves the submission. I’m upgrading my score. --- Reply to Comment 1.1.1: Comment: We are truly grateful for your effort and your acknowledgement of our rebuttal. Thank you so much for upgrading the score! We will be sure to incorporate the rebuttal into our final draft for refinement!
Summary: This paper studies differentially private generation of synthetic data using LLM APIs. The general idea builds on a line of prior work, Private Evolution, which generates synthetic samples by resampling from the ones that are closer to the private dataset. This work improves upon existing works on three fronts: 1) it requires fewer private samples, 2) it reduces low-quality synthetic samples, and 3) it mitigates the model bias of the LLM APIs. The proposed method WASP uses (1) a Top-Q voting mechanism where private samples vote for the nearest and furthest synthetic samples, (2) dynamically weighted LLM-based data generation informed by private histograms, and (3) contrastive in-context learning to enhance the quality and relevance of generated synthetic samples. Experiments show SOTA results compared to Aug-PE when the number of private samples is small. Claims And Evidence: There are two claims. For privacy, the authors theoretically prove the differential privacy of the voting mechanism. For utility, the authors demonstrate the performance through experiments. The privacy proof is problematic and might require more clarity. See below for details. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Issue: Lemma D.3 is supposed to be the advanced composition theorem. But it is not cited correctly. The delta composition is missing. This could significantly weaken the privacy guarantee. Experimental Designs Or Analyses: Yes. Note that the private dataset has size 100, which is specifically designed to highlight the advantage of this proposed method. Supplementary Material: Some of the proof. Relation To Broader Scientific Literature: This falls under a new line of research for using LLM APIs to generate synthetic data that is similar to the private dataset. The citation is adequate. Essential References Not Discussed: No. Other Strengths And Weaknesses: I think the overall quality is good except for this mistake in the privacy proof. 
The contribution is incremental compared to prior work, but still represents good progress toward improving generation quality. Other Comments Or Suggestions: None. Questions For Authors: It would be good to see the experiment results after fixing this delta issue. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q: Lemma issue and results.** Thank you for pointing this out. Lemma D.3 should be amended with "with $\delta_{total}$ increased to larger than $T\times\delta$". Denote by $\delta_{total}$ and $\delta_{iter}$ the final $\delta$ and the per-iteration $\delta$, respectively. As we perform 5 iterations to collect the total synthetic dataset, with 4 iterations related to DP (the first iteration performs zero-shot generation without real-sample guidance), the final $\delta_{total}>4\cdot\delta_{iter}$. Therefore, as we applied $\delta_{iter}=1\times10^{-5}$ in our experiments, the final $\delta_{total}>4\times10^{-5}$ for all PE series baselines and WASP. Moreover, following Theorem 4.3 in [3], using $\delta_{iter}=1\times10^{-23}$ instead will guarantee overall $(4.0,1\times10^{-5})$-DP, which results in a noise scale roughly $2.14$ times as large as the one used in our original experiments in the paper. An experimental comparison using the original and new $\delta_{iter}$ is included in Table C, with the other settings kept the same as those for *Table 1* in our paper. These results demonstrate that, under the tighter privacy guarantee ($\delta_{iter}=1\times10^{-23}$), the performance decrease is only minor, indicating the robustness of WASP and the PE baselines. Table C. Comparison of using $\delta_{iter}=1\times10^{-5}$ (original experimental results) and $\delta_{iter}=1\times10^{-23}$ (satisfies $\delta=1\times10^{-5}$) with $4$ iteration steps related to DP. $\epsilon=4.0$ is fixed as the final combined DP budget. Experiments are performed using $6$ open-source PLMs with $L=1,M=100$. 
|||Aug-PE$_{GPT-2}$|Aug-PE$_{Llama-2}$|Aug-PE$_{Vicuna}$|Aug-PE$_{OPT}$|Aug-PE$_{ChatGLM3}$|Aug-PE$_{Flan-T5}$|WASP|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|IMDb|$\delta_{iter}=1\times10^{-5}$|85.38|85.77|82.76|83.86|85.82|89.00|89.52|
|IMDb|$\delta_{iter}=1\times10^{-23}$|84.88|85.30|82.04|83.52|85.22|88.83|89.18|
|Yelp-Rating|$\delta_{iter}=1\times10^{-5}$|45.28|47.42|54.42|50.81|55.17|58.69|61.21|
|Yelp-Rating|$\delta_{iter}=1\times10^{-23}$|45.03|47.10|54.09|50.47|54.97|58.61|61.05|

--- [3] Peter Kairouz et al., The Composition Theorem for Differential Privacy, ICML 2015.
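The quoted ~2.14 noise-scale factor can be sanity-checked with the classical Gaussian-mechanism calibration $\sigma = \Delta\sqrt{2\ln(1.25/\delta)}/\epsilon$. This is only a rough proxy (the paper's accounting follows Theorem 4.3 of [3], so the constants differ slightly), and the fixed per-iteration $\epsilon$ below is a hypothetical placeholder:

```python
import math

def gaussian_sigma(eps, delta, sensitivity=1.0):
    # Classical Gaussian-mechanism noise calibration; used here only
    # as a rough per-iteration noise-scale proxy, not the exact
    # accounting from the paper.
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

eps_iter = 1.0  # hypothetical fixed per-iteration epsilon
ratio = gaussian_sigma(eps_iter, 1e-23) / gaussian_sigma(eps_iter, 1e-5)
print(round(ratio, 2))  # tightening delta from 1e-5 to 1e-23 inflates sigma by ~2.1x
```

Since $\sigma$ scales with $\sqrt{\ln(1.25/\delta)}$ at fixed $\epsilon$, shrinking $\delta$ by 18 orders of magnitude only roughly doubles the noise, consistent with the minor performance drop in Table C.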
Summary: Two of the main strategies for generating private synthetic text or image data are private fine-tuning and private evolution. Private fine-tuning can work well, but it requires access to the parameters of a generative model (many of which are closed-source) and the computational resources to fine-tune the model. Private evolution uses API access to a generative model in order to generate private synthetic data. These methods sample some synthetic data, then privately compare the synthetic data to the real data, then query the API to create new data based on the feedback generated by the real data. The two main contributions in this work are the weighted multi-PLM fusion and the contrastive prompting. The weighted multi-PLM fusion takes advantage of the fact that there are many pretrained language models available and that some of them will be better suited for generating particular types of data. The method developed by the authors (WASP) starts by querying several generative models and, as it proceeds, generates fewer samples from the models that are not performing well for a given task and more samples from the models that are performing well for that task. Contrastive prompting modifies the PE framework by privately measuring which of the synthetic data are most similar to the real data and which synthetic data are least similar to the real data points. This is used to construct contrastive prompts like “generate synthetic data that are more similar to this <good example> and less similar to this <bad example>”. The authors perform several experiments and ablations to demonstrate how the various components of WASP contribute to its performance and how WASP compares to baseline methods. The authors also extend WASP to a federated setting. These experiments show that WASP outperforms previous techniques across various metrics.
Claims And Evidence: Aug-PE performs poorly when using the “incorrect” PLM - The authors show that the performance of Aug-PE depends on the PLM that is being used and that the best PLM to use for one dataset is not necessarily the best for every dataset. This claim is relatively well supported and is a good motivation for multi-PLM fusion. This is further supported by the ablation study showing that performance improves with the number of PLMs included in WASP. WASP’s advantage over Aug-PE extends to the federated setting - The authors show that federated-WASP performs better than PreText (which is a version of Aug-PE that works in the federated setting). Contrastive prompting and Multi-PLM fusion are both important to the performance of WASP - This claim is supported by an ablation study summarized in Table 3. However, I am dubious about the result for the importance of contrastive prompting. I have a question about this in the questions section of the review. Top-Q voting performs better than top-1 voting - Table 5 shows, for 2 datasets, that performance improves as Q increases. Methods And Evaluation Criteria: The proposed methods and evaluation criteria do make sense for the task at hand. Theoretical Claims: I reviewed the claim that WASP satisfies $(\epsilon, \delta)$-DP; this uses standard DP results such as the privacy of the Gaussian mechanism and composition of multiple Gaussian mechanisms. This claim is correct. Experimental Designs Or Analyses: I am curious about the ablation results in Table 3 and would like to see these results for the other datasets as well if possible. I also have a question about how this experiment was performed in the questions section. Supplementary Material: I reviewed the entire supplementary material, including the additional experiments, privacy proof, full algorithm descriptions, and the contrastive and non-contrastive prompts.
Relation To Broader Scientific Literature: DP synthetic text is a rapidly developing field that has recently become much more practical with the development of LLMs. This work could be an important step that improves private evolution, which is one of the primary techniques being explored for this problem. Essential References Not Discussed: I do not believe that there are essential references not discussed here. Other Strengths And Weaknesses: Strengths: The PLM fusion technique is intuitive and well supported by the experiments; I am confident that this technique increases the quality of the DP synthetic data. The evaluation of these methods is very comprehensive across several popular datasets and many PLMs. Weaknesses: I am not sure that the contrastive prompting is beneficial, especially if the change in histogram sensitivity is not accounted for in the ablation. I am worried that much of the performance gains could come from PLM memorization. If the various PLMs memorized some data sources more than others, then this could be the reason multi-PLM fusion is beneficial, not because the PLMs are better suited for the task in some other way. The relative insensitivity to epsilon makes me worry about this more. Other Comments Or Suggestions: No other comments Questions For Authors: The x-axis of Figure 1.b is not labeled or explained in the caption; what is this plot? For the ablation study about contrastive prompting, was the noise scale adjusted for the trials without contrastive prompting? The cost of the contrastive prompting is that you must make an additional measurement of the private data to generate the furthest histogram. If you do not apply contrastive prompting, you only have to measure the nearest histogram, so the sensitivity is reduced and you can make the measurement with less noise. This is not discussed in the ablation section, and if it was not done, this would not be a fair comparison.
The fact that FuseGen can perform well for IMDb and Yelp-category without looking at the private data is very surprising; can you explain this? Table 6 shows that the performance of these methods is relatively insensitive to the privacy budget (especially on IMDb); does this imply that the PLMs may have memorized some of these datasets and most of the performance comes from that? In Figure 6, what is the difference between the Aug-PE line and the w/o Con line? The various commercial PLMs have different costs; could your fusion framework be modified to trade off performance and API costs? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate these insightful comments from our reviewer, which help us improve our work. **Q: Different noise scale for different function sensitivity considering contrastive prompting.** You are absolutely right, and we indeed applied different noise scales according to the different sensitivity values of the different methods in our ablation study. Specifically, while the noise applied in WASP uses a sensitivity of 4 (see Theorem D.2), the noise scale was halved (from 9.690 to 4.845) for the "w/o contrastive prompting" ablation, as the sensitivity was halved, to ensure a fair comparison. Thus, histogram sensitivity changes have been accounted for in the ablation. We will clarify this in the final version. **Q: Good performance of FuseGen and origin of WASP's performance.** The success of FuseGen relies on the fusion of several PLMs, which provides a broader and more diverse knowledge base for training an STM that generalizes better on the real dataset compared to using a single PLM. As shown in Table A below and in [2], FuseGen boosts the performance of the trained STM compared to the best-performing single PLM for the task (Flan-T5 for both tasks in Table A), to a larger extent for more common tasks like IMDb, resulting in superior performance. Therefore, fusing the knowledge from multiple PLMs is the first source of WASP's performance improvement. What's more, the addition of private samples also contributes to the final performance of WASP. The previous work ZeroGen [1] explores the performance of an STM trained on the synthetic dataset given by a single PLM without the help of private data, which reveals the base level of knowledge "memorized" in a single PLM. As shown in Table A, using Flan-T5, ZeroGen achieves a performance around $2.5$% less than Aug-PE for both datasets with private data completely exposed ($\epsilon=\infty$).
Similar patterns can be observed in the multi-PLM setting by comparing FuseGen and WASP with $\epsilon=\infty$. Therefore, although different PLMs may have different levels of knowledge about each dataset (as shown by the ZeroGen results here and in [2]), the involvement of even a small amount of private samples consistently helps in further boosting the STM performance. Moreover, the "insensitivity to privacy budget" of our WASP (and Aug-PE) in *Table 6* demonstrates the robustness of our proposed method.

Table A. Comparison of STM performance of different methods. Flan-T5 is used for ZeroGen and Aug-PE as they are single-PLM methods, while $K=6$ is used for FuseGen and WASP.

||ZeroGen|Aug-PE ($\epsilon=\infty$)|FuseGen|WASP ($\epsilon=\infty$)|
|:-:|:-:|:-:|:-:|:-:|
|IMDb|87.06|89.48|89.07|89.96|
|Yelp-Rating|57.08|59.62|57.96|62.02|

**Q: Meaning of x-axis in Figure 1.b.** Sorry, the x-axis of *Figure 1.b* should be the task performance (accuracy) of the STM trained on the synthetic dataset. We will add this to our final version. **Q: Lines in Figure 6.** As stated in the caption of *Figure 6*, top-Q voting ($Q=8$) is used for "w/o Con" whereas $Q=1$ is used for "Aug-PE". Note that, to guarantee the same privacy budget, the noise scale also increases (to twice its original value) as $Q$ increases from $1$ to $8$. The difference between "w/o Con" and "Aug-PE" demonstrates the effectiveness of applying top-$Q$ voting with $Q>1$. **Q: Trade-off between performance and cost.** Yes, WASP can be modified to take API costs into account by adjusting the importance weighting function of each LLM with its associated API cost. For example, assuming each PLM $k$ costs $v_k$ per query on average, a trade-off function $w_k = U(w_k,v_k)$ can be applied to balance cost and performance and obtain the final PLM weight $w_k$. To keep $w_k,v_k$ at the same level, we can first normalize $v_k$ so that each $v_k$ ranges from $0.0$ to $1.0$. $U$ can be selected according to users' needs, e.g.
$U(w_k,v_k)=\lambda w_k + (1-\lambda) v_k$ with a tunable parameter $\lambda$. Finally, all the $w_k$ after adjustment need to be normalized as $w_k=w_k/\sum_{k'=1}^K w_{k'}$. **Experimental Designs Or Analyses: Other dataset results with closed-source PLMs in *Table 3*.** We show results for IMDb on closed-source PLMs in Table B, which also show the superiority of WASP compared to baseline methods.

Table B. STM performance using $M=100$ with $3$ closed-source PLMs ($K=3$ for WASP) under the same DP setting used in *Table 3*.

||Only Private|FuseGen|Aug-PE$_{GPT-3.5}$|Aug-PE$_{GPT-4}$|Aug-PE$_{GPT-4o}$|WASP|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|ACC|50.00|84.12|84.16|83.28|85.82|86.34|

---
[1] Jiacheng Ye et al., ZeroGen: Efficient Zero-shot Learning via Dataset Generation, EMNLP 2022.
[2] Tianyuan Zou et al., FuseGen: PLM Fusion for Data-generation based Zero-shot Learning, EMNLP 2024.
---
Rebuttal Comment 1.1: Comment: Thank you for your response, I have adjusted my score based on your clarifications. I hope that the details you have provided me about how the privacy noise is adjusted in the ablations make it into further drafts of the manuscript.
---
Reply to Comment 1.1.1: Comment: Dear reviewer, thank you very much for taking the time to reconsider the score. We sincerely appreciate your feedback and will make sure to incorporate the clarifications regarding privacy noise adjustment into the next version of the manuscript.
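The cost-performance trade-off $U(w_k,v_k)$ sketched in this rebuttal thread can be made concrete. The sketch below is illustrative only: the linear form of $U$, the choice to reward cheaper models via $(1-v_k)$, and the example numbers are all assumptions, since the rebuttal leaves $U$ to users' needs.

```python
# Hypothetical sketch of the cost-aware weight adjustment described in the
# rebuttal: normalize per-query costs v_k to [0, 1], blend them with the
# performance weights w_k via a linear trade-off U, then renormalize.

def cost_aware_weights(perf_weights, api_costs, lam=0.5):
    lo, hi = min(api_costs), max(api_costs)
    v = [(c - lo) / (hi - lo) if hi > lo else 0.0 for c in api_costs]
    # Assumption: reward *cheaper* PLMs by using (1 - v_k); the rebuttal's
    # example U(w_k, v_k) = lam * w_k + (1 - lam) * v_k leaves this open.
    u = [lam * w + (1 - lam) * (1 - vk) for w, vk in zip(perf_weights, v)]
    total = sum(u)
    return [x / total for x in u]

# Three PLMs: the strongest one is also the most expensive per query.
w = cost_aware_weights([0.5, 0.3, 0.2], [0.03, 0.002, 0.001], lam=0.5)
print(w)  # weights still sum to 1; the expensive PLM is down-weighted
```

Shrinking `lam` toward 0 prioritizes cost over measured performance; `lam=1` recovers the original performance-only weights.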
Data-driven Design of Randomized Control Trials with Guaranteed Treatment Effects
Accept (poster)
Summary: The paper introduces a novel two-stage design for randomized controlled trials (RCTs) that aims to improve efficiency compared to traditional single-stage designs. In the first stage, all treatment arms are explored uniformly, and a data-driven screening procedure prunes those with low estimated effect. In the second stage, the remaining arms are re-sampled to compute high-probability lower bounds (or certificates) on their treatment effects. The authors provide a theoretical analysis that shows that, under certain conditions, a top-k policy is nearly optimal, and also extend the design to incorporate Bayesian priors. Claims And Evidence: The paper is clearly written, and the idea is original. The theoretical claims seem justified, at least to the best of my knowledge. I have reservations about the experimental results. The authors claim that two-stage designs outperform single-stage approaches, yet it is unclear in what sense this improvement is measured. There is limited evidence provided that these designs enhance sample size efficiency or statistical precision—key factors in classical randomized trials, where the width of the asymptotic confidence intervals is the primary concern. Methods And Evaluation Criteria: 1. **Evaluation Metrics:** The experiments focus exclusively on evaluating the certificate proposed in the paper. In practice, the primary goal in running trials is to reduce the sample size as much as possible—that is, to improve statistical efficiency. It is not clear from the experiments how much sample size saving the method can offer. It would be helpful to see comparisons that directly measure the reduction in sample size needed to achieve a desired level of statistical precision (e.g. confidence interval width). 2. **Single-Stage Baseline:** The paper does not explain in detail how the single-stage baseline is implemented, in particular what estimator is used to compute the treatment effects.
One might expect that using an Augmented Inverse Probability Weighting (AIPW) estimator, rather than the classic empirical mean, could provide much stronger results. In fact, if one had access to a perfect model for $E[Y|X]$, the AIPW estimator should achieve the same efficiency as the proposed method by effectively imputing outcomes for patients assigned to "bad" arms. Clarification on this point and a comparison with a more advanced baseline would be valuable. Theoretical Claims: I did not check the correctness of any proofs. Experimental Designs Or Analyses: N/A Supplementary Material: I read the appendix to find the experimental details, but could not find a section describing the experimental setup. Relation To Broader Scientific Literature: It would be valuable to discuss the methods that focus on improving efficiency within the single-stage design, e.g., [1-5]. In particular, commenting on whether this line of work can achieve the same efficiency as a 2-stage design in some scenarios. This would give a more comprehensive view of the trade-offs involved in different trial designs. [1] Rickard Karlsson, Guanbo Wang, Jesse Krijthe, and Issa Dahabreh. Robust integration of external control data in randomized trials. [2] Lauren Liao, Emilie Højbjerre-Frandsen, Alan Hubbard, and Alejandro Schuler. Prognostic adjustment with efficient estimators to unbiasedly leverage historical data in randomized trials. [3] Pierre-Emmanuel Poulet, Maylis Tran, Sophie Tezenas du Montcel, Bruno Dubois, Stanley Durrleman, and Bruno Jedynak. Prediction-powered inference for clinical trials. [4] Piersilvio De Bartolomeis, Javier Abad, Guanbo Wang, Konstantin Donhauser, Raymond M. Duch, Fanny Yang, and Issa J. Dahabreh. Efficient Randomized Experiments Using Foundation Models. [5] Alejandro Schuler, David Walsh, Diana Hall, Jon Walsh, and Charles Fisher.
Increasing the efficiency of randomized trial estimates via linear adjustment for a prognostic score Essential References Not Discussed: To my knowledge, all the essential references are discussed. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: I have a few questions regarding the experimental evaluation that I hope you could clarify: 1. **Evaluation Metric and Sample Size Savings:** - Could you elaborate on why the experimental evaluation focuses only on the certificate as the performance metric? - How does the certificate translate into tangible sample size savings or improved statistical efficiency in real-world RCT settings? - Can you provide insights or additional experiments that show the efficiency gains from the proposed method as the fixed sample size budget increases? 2. **Details on the Single-Stage Baseline Implementation:** - Could you clarify the implementation details of the single-stage baseline used in your experiments? - What estimator is used to compute the treatment effects in the single-stage setting? - Have you considered or benchmarked the performance of alternative estimators, such as the AIPW estimator? - In scenarios where a near-perfect model for $E[Y|X]$ is available, an AIPW estimator achieves the efficiency lower bound by imputing the outcomes assigned to the "bad" arms. How do you expect the performance of a single-stage method using AIPW to compare with your proposed two-stage design in this setting? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer hoJN We thank the reviewer for their kind words and enthusiasm for our paper. We address your concerns below. ### **Claims and evidence/Questions For Authors:** >I have reservations about the experimental results. The authors claim that two-stage designs outperform single-stage approaches, yet it is unclear in what sense this improvement is measured. There is limited evidence provided that these designs enhance sample size efficiency or statistical precision—key factors in classical randomized trials, where the width of the asymptotic confidence intervals is the primary concern. >Could you elaborate on why the experimental evaluation focuses only on the certificate as the performance metric? The key point of our paper is that we are pruning arms, not trying to estimate the means of all interventions precisely. Our setting is designed for scenarios where the goal is to obtain a tight confidence interval for the *best* possible policy—not for all arms individually. In fact, our lower bound explicitly characterizes the width of the confidence interval for the best policy only. We intentionally do not aim to produce tight confidence intervals for lower-quality interventions, as they are ultimately discarded by our method. > It is not clear from the experiments how much sample size saving the method can offer. It would be helpful to see comparisons that directly measure the reduction in sample size needed to achieve a desired level of statistical precision (e.g. confidence interval width). >How does the certificate translate into tangible sample size savings or improved statistical efficiency in real-world RCT settings? Our method should be understood as providing a worst-best case scenario guarantee under a fixed budget. That is, given a fixed budget, we are able to offer a statistically valid guarantee on the performance of a high-impact treatment. 
>Can you provide insights or additional experiments that show the efficiency gains from the proposed method as the fixed sample size budget increases? Figure 2 in the paper illustrates the average value of our certificate as a function of the available budget. As shown in the plot, increasing the total budget reduces the gap between the sample splitting approach and the two-stage policy—demonstrating that sample splitting increasingly approximates the performance of the two-stage design as more data becomes available. >Could you clarify the implementation details of the single-stage baseline used in your experiments? >What estimator is used to compute the treatment effects in the single-stage setting? The estimate is given by the sample mean of each arm. In the first stage, we use uniform allocation across all $n$ original arms. We then compute the estimator using all collected data—no data is discarded or ignored in this process. >Have you considered or benchmarked the performance of alternative estimators, such as the AIPW estimator? >In scenarios where a near-perfect model for is available, an AIPW estimator achieves the efficiency lower bound by imputing the outcomes assigned to the "bad" arms. How do you expect the performance of a single-stage method using AIPW to compare with your proposed two-stage design in this setting? We do not incorporate features in our current design, and therefore the class of methods suggested by the reviewer does not apply in our setting. Nevertheless, we thank the reviewer for raising this point—it would indeed be interesting to explore how such methods compare in an extended version of this work where features/context are available. We thank the reviewer again for their kind words, and we look forward to clarifying any further questions that arise.
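To make the two-stage logic discussed in this thread concrete, here is a minimal simulation sketch. It assumes Bernoulli arms, uniform first-stage allocation, a fixed top-$k$ screen, and a Hoeffding-style lower confidence bound computed from stage-2 data only; these are illustrative choices, not the paper's exact algorithm (which also selects $k$ in a data-driven way).

```python
import math
import random

def two_stage_certificate(means, budget, k=2, split=0.4, delta=0.05, seed=0):
    """Stage 1: uniform exploration over all arms; keep the top-k by sample
    mean. Stage 2: spend the remaining budget on the survivors and certify
    the best one with a Hoeffding lower confidence bound."""
    rng = random.Random(seed)
    n = len(means)
    s1 = int(budget * split) // n                    # per-arm stage-1 pulls
    est1 = [sum(rng.random() < m for _ in range(s1)) / s1 for m in means]
    survivors = sorted(range(n), key=lambda i: est1[i], reverse=True)[:k]
    s2 = (budget - s1 * n) // k                      # per-arm stage-2 pulls
    certs = []
    for i in survivors:
        mean2 = sum(rng.random() < means[i] for _ in range(s2)) / s2
        # Single-arm Hoeffding bound; a union bound over the k survivors
        # would use delta / k instead of delta.
        certs.append(mean2 - math.sqrt(math.log(1 / delta) / (2 * s2)))
    return max(certs)

cert = two_stage_certificate([0.1, 0.15, 0.2, 0.55, 0.6], budget=2000)
print(cert)  # a high-probability lower bound close to the best arm's mean
```

Pruning weak arms in stage 1 concentrates stage-2 samples on the survivors, shrinking the confidence radius for the certified arm, which is the intuition the rebuttal describes.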
Summary: The paper studies a setup where one is trying to optimize the design of a randomized controlled trial to identify effective treatments more efficiently. Specifically, they describe a two-stage RCT design/algorithms to that end. ### update after rebuttal I thank the authors for their response, which helped me understand some of their results better. I still think UCB could be an interesting baseline in the two-step synthetic setting, where the underperforming treatments based on their upper/lower bounds can be eliminated after the first stage. Further, while I understand the authors present empirical results as to the performance with respect to the choice of first-stage budget $s$, it is important to have a more nuanced analytical understanding of it in the paper, since it will most likely depend on the inherent variability in the patient outcomes (hence should be chosen in a data-driven way itself). For these reasons, I have chosen to maintain my score. Claims And Evidence: Theoretical claims made in the paper are supported. They are mainly adaptations of existing results from the bandit literature and not substantial. It is not clear what the "optimal k" is, and whether the authors' algorithm finds it or not. Algorithm 1 is confusing in that the for loop goes from 1 to k, and then k is defined after that for loop, so that probably needs fixing. It seems like $k$ and $s_1$, which is the budget for the "first stage", are critical for the final output/success of the algorithm, but there is no recipe as to how to choose them. For that reason, I do not see how the results on the existence of a top-k algorithm are interesting/useful (i.e., how to choose $k$ or $s_1$ is more important). The sample splitting approach can be a big throw-off for practitioners. RCTs are already small in sample size, and throwing away part of the data after the first stage does not seem like the best use of data. Alternative approaches such as cross-fitting could be useful here.
Also, it is not clear how this approach would affect subsequent subgroup analyses, which are often the most interesting parts of RCTs. I believe thinking about this carefully and including discussions is critical, as sample splitting will hurt statistical power at that step, which already suffers a lot due to the small sizes of RCTs. Methodologically, the authors mention that sometimes it is not feasible to wait on the results of an RCT to guide next steps. This I think is another thing that limits the practical adoption of the algorithm proposed in this paper. Methods And Evaluation Criteria: see below (exp design... section) Theoretical Claims: I did not check in detail, although the results are a combination of standard bandit results and concentration inequalities that can be found in prob/stat textbooks. Experimental Designs Or Analyses: In Figure 4, the UCB method seems to work best by a margin. Can you comment on what the disadvantage of this method is? It is a fairly simple method and not complex at all. Is there a reason why some of the baselines in the real-world experiments (e.g., UCB) are not benchmarked against in the synthetic experiments? Supplementary Material: No Relation To Broader Scientific Literature: The paper uses/adapts existing methods/results to the two-stage RCT design problem. I do not see how its contributions would add to the broader scientific/ML literature. Essential References Not Discussed: There is significant room for improvement in covering the related work. The authors mention that the adaptive trial design literature is an active/big area, but they do not really do it justice when it comes to covering it. Off the top of my head I can think of [1-6] below, but there are probably others. The paper's coverage/positioning of its contributions is severely lacking in that regard. [1] Villar, S. S. and Rosenberger, W. F. Covariate-adjusted response-adaptive randomization for multi-arm clinical trials using a modified forward looking Gittins index rule.
Biometrics, 74(1):49–57, 2018. [2] Villar, S. S., Bowden, J., and Wason, J. Multi-armed bandit models for the optimal design of clinical trials: Benefits and challenges. Statistical Science, 30(2):199–215, May 2015. [3] Aziz, M., Kaufmann, E., and Riviere, M.-K. On multi-armed bandit designs for phase I clinical trials. arXiv e-prints, art. arXiv:1903.07082, March 2019. [4] Atan, O., Zame, W. R., and van der Schaar, M. Sequential patient recruitment and allocation for adaptive clinical trials. In Proceedings of The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1891–1900, Apr 2019. [5] Bornkamp, B., Bretz, F., Dette, H., and Pinheiro, J. C. Response-adaptive dose-finding under model uncertainty. Annals of Applied Statistics, 5:1611–1631, 2011. [6] Bretz, F., Gallo, P., and Maurer, W. Adaptive designs: The Swiss army knife among clinical trial designs? Clinical Trials, 14(5):417–424, 2017. Other Strengths And Weaknesses: see above Other Comments Or Suggestions: the running title needs fixing; in Algorithm 1, all lines are numbered 0 Questions For Authors: -- Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer S1mJ, We thank the reviewer for their kind words. We address your concerns below. ### **Questions/Weaknesses** >Theoretical claims made in the paper are supported. They are mainly adaptations of existing results from bandit literature and not substantial. We believe there are several novel theoretical contributions worth highlighting: First, to provide the bandit's approximation guarantee in Theorem 3.2, we developed a new characterization of optimal policies and proved how the problem can be reduced to a search over only top-k policies. Second, we would like to emphasize that our theoretical results supporting the Bayesian approach are not derived from bandit literature at all. Rather, these contributions are more closely related to submodularity and discrete optimization. > It seems like $s_1$ and $k$, [...] are super critical for the final output/success of the algorithm, but there is no recipe as to how to choose them. For that reason, I do not see how the existence of a top-k algorithm results are interesting/useful (i.e., how do I choose or is more important). > It is not clear what "optimal k" is, and if the authors' algorithm finds this or not. >[...] RCTs are already small in sample size and throwing some part of the data after the first stage does not seem like the best use of data.. To clarify, we demonstrate that the optimal policy, that is the policy that will yield the highest certificate, is a top-k policy. Here, the optimal $k$, which we denote as $k^*$, refers to the number of arms that such optimal policy selects for the second round. We agree with the reviewer that the selection of $s_1$ and $k$ is critical to the success of our algorithm. However, there appears to be a misunderstanding regarding the role of $k$: the purpose of our algorithm is precisely to **automatically** select a near-optimal value of $k$, rather than treating it as a user-specified input. 
To this end, we design our sample splitting algorithm to estimate a value of $k$ with approximation guarantees to the optimal $k$ ($k^*$). Regarding the choice of the first-stage budget $s_1$: in Figure 3, we analyze how shifting the budget between the first and second stages affects certificate performance. We find that allocating 20–50% of the budget to the first stage yields the best results. With this clarified, we would like to emphasize that concerns about statistical power in the first stage are somewhat misplaced. The point of our two-stage design is to improve statistical power in the second stage, which directly impacts the final certificate. > Algorithm 1 [...] the for loop goes from 1 to k, and then k is defined after that for loop, so that probably needs fixing We thank the reviewer for pointing out this typo. The upper limit of the for loop should be $n$, the original number of arms, instead of $k$. > It is not clear how this approach would affect subsequent subgroup analysis. This is a very insightful question. Our view is that by concentrating samples on the more promising arms, we also gain statistical power for conducting subgroup analyses of heterogeneous effects within those treatments. In other words, the approach naturally prioritizes subgroup analysis for the arms with the greatest overall potential. >Methodologically, authors mention that sometimes it is not feasible to wait on the results of an RCT to guide next steps. This I think is another thing that limits the practical adaptation of the algorithm proposed in this paper. As we point out in the paper, there are important scenarios—such as drug trials or government interventions with very slow feedback loops—where outcomes can only be observed after several months or even years. In such settings, fully adaptive trials are often infeasible.
In summary, while there is indeed a large and growing literature on adaptive trial designs, our work addresses a key practical limitation in their applicability, offering a principled alternative when adaptivity is constrained. ### **Experimental Designs Or Analyses:** >In Figure 4, UCB method seems to work the best by a margin. Can you comment on what is the disadvantage method? It is fairly simple method and complex at all. >Is there a reason why some of the baselines in the real-world experiments (e.g., UCB) are not benchmarked against in the synthetic experiments? Our designs are motivated by use cases where delayed outcomes make fully adaptive trials infeasible (e.g., deploying the UCB algorithm in practice). For this reason, UCB is not directly comparable to our setting, which is why it is not included in the benchmarks shown in Figure 1. However, in Figure 4, we do compare our approach to fully adaptive baselines in order to quantify the potential loss relative to an idealized, fully adaptive trial. This comparison highlights the performance tradeoffs while reinforcing the practicality of our method in constrained settings.
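For context on the fully adaptive baseline discussed in this thread, a minimal UCB1 sketch (illustrative only; the paper's UCB baseline may differ in details such as the confidence radius) shows why full adaptivity requires observing each outcome before making the next allocation, which is exactly what delayed feedback rules out:

```python
import math
import random

def ucb1(means, budget, seed=0):
    """Minimal UCB1: after pulling each arm once, repeatedly pull the arm
    with the highest optimistic index (sample mean + confidence radius).
    Each step needs the previous outcome, so delayed feedback breaks it."""
    rng = random.Random(seed)
    n = len(means)
    counts, sums = [0] * n, [0.0] * n
    for t in range(budget):
        if t < n:
            i = t                                    # initialization round
        else:
            i = max(range(n), key=lambda a: sums[a] / counts[a]
                    + math.sqrt(2 * math.log(t) / counts[a]))
        counts[i] += 1
        sums[i] += float(rng.random() < means[i])    # Bernoulli reward
    return counts

counts = ucb1([0.2, 0.5, 0.8], budget=3000)
print(counts)  # the best arm (index 2) receives the bulk of the pulls
```

A two-stage design, by contrast, only requires two decision points, at the cost of the performance gap quantified in Figure 4.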
Summary: The paper introduces a two-stage randomized controlled trial design to enhance the best possible treatment effect guarantee while reducing wasted resources on sub-optimal arms. In the first stage, a data-driven screening process eliminates low-impact treatments, and in the second stage, the focus shifts to establishing high-probability lower bounds for the best-performing treatment. This approach is simpler than existing adaptive frameworks and can be implemented in scenarios with limited adaptivity. The optimality of the top-k design is discussed, and a practical sample splitting method is developed to determine k. Empirical results demonstrate that this two-stage design outperforms single-stage approaches and is close to the fully adaptive approaches. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes. I like the methodology part of the paper, which is concise and practical. Theoretical Claims: I roughly checked the math but not step by step. That said, I feel the theoretical claims are reasonable and valid. Experimental Designs Or Analyses: Yes. The numerical experiments are well conducted and analyzed. Supplementary Material: Yes. I reviewed the appendix. Relation To Broader Scientific Literature: The work is highly related to the best arm identification literature but differs by not choosing ϵ-best arms that exhaustively identify all arms within a fixed distance of the best. Instead, it adaptively selects the number of arms to maximize the final certification strength, which has substantial value in practice. The design is still based on RCT, and the estimation only relies on the last stage, which avoids some critical concerns in the adaptive experimental design literature. Essential References Not Discussed: No Other Strengths And Weaknesses: The idea of the paper itself is not fancy, but I like its practical relevance and theoretical elegance. Other Comments Or Suggestions: No Questions For Authors: I have some questions and comments: 1.
The use of "treatment effect" in this paper is a bit confusing to me. Typically in RCT, there will be one control arm and many other treatment arms, so the treatment effect should be the gap between any treatment arm and the control arm. In this way, the calculation of the so-called treatment effect certificate should be adjusted --- If I understand the math correctly, the use of concentration inequality can be well extended given that everything is i.i.d; If I miss some key component and the adjustment is actually hard, the authors should change the notion of "treatment effect". 2. In the numerical experiments, it seems that two-stage sample split is worse than two-stage TS, especially the case without prior. I think this observation raises some concerns. It basically says that if we limit the design to be two-stage, actually we should follow TS, though it is tailored for minimizing regret. In addition, I am curious about the performance of fully adaptive TS. The reason is that if fully adaptive TS has the best performance (better than UCB), then there must be some theoretical essence about TS for maximizing certificate. 3. In the Appendix part C, the authors conduct multi-stage designs and obtain the certificate using all-stage data. I am wondering how the certificate is obtained given that the data is not i.i.d. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer HuaZ, We thank the reviewer for their kind words and enthusiasm for our paper! We address your questions below. ### **Questions/Weaknesses** >The use of "treatment effect" in this paper is a bit confusing to me. Typically in RCT, there will be one control arm and many other treatment arms, so the treatment effect should be the gap between any treatment arm and the control arm. In this way, the calculation of the so-called treatment effect certificate should be adjusted --- If I understand the math correctly, the use of concentration inequality can be well extended given that everything is i.i.d; If I miss some key component and the adjustment is actually hard, the authors should change the notion of "treatment effect". In the Bernoulli setting, the average treatment effect is precisely the mean of the distribution. This is a consequence of the model's outcome being one (say, for the treated) and zero (for the untreated). Nevertheless, we do agree about the need to be more precise in the other settings, and we thank the reviewer for the feedback (we will add clarifications in the camera-ready version). >In the numerical experiments, it seems that two-stage sample split is worse than two-stage TS, especially the case without prior. I think this observation raises some concerns. It basically says that if we limit the design to be two-stage, actually we should follow TS, though it is tailored for minimizing regret. In addition, I am curious about the performance of fully adaptive TS. The reason is that if fully adaptive TS has the best performance (better than UCB), then there must be some theoretical essence about TS for maximizing certificates. Thank you for your insightful observation regarding the Thompson sampling experiments. Our intention with these experiments was to demonstrate that our two-stage approach offers sufficient flexibility to accommodate non-uniform sampling strategies.
We appreciate the reviewer noting the effectiveness of Thompson sampling in computing a better certificate. We agree with this assessment and believe this likely occurs in scenarios where the variance in the distributions is substantial relative to the gaps between arms, which can indeed make rapid accumulation of empirical means challenging. >In the Appendix part C, the authors conduct multi-stage designs and obtain the certificate using all-stage data. I am wondering how the certificate is obtained given that the data is not i.i.d. The final certificate was obtained as in the experiments with two stages, with the caveat of only using data from the last stage. The reviewer is absolutely correct that in order to give a theoretical guarantee in this scenario we would need to correct for the dependency inherent in picking a particular subset of arms at each stage. However, we believe that we could deliver such a guarantee through only slight modifications which account for the lack of i.i.d.-ness, and that the final results would not differ significantly. ### **Other Comments Or Suggestions** We thank the reviewer for the thorough read of our paper and for identifying formatting mistakes, and will update our manuscript accordingly.
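To make the design discussed in this thread concrete, here is a minimal sketch of a two-stage screen-then-certify procedure in the spirit of the paper (hypothetical Bernoulli arms, uniform sampling in both stages, a Hoeffding-style bound with a union bound over survivors; the arm means, budget split, and bound details are our illustrative assumptions, not the paper's exact algorithm):

```python
import math
import random

def two_stage_certificate(means, budget, k, delta=0.05, seed=0):
    """Stage 1 screens down to the top-k arms; stage 2 spends the
    remaining budget on them and certifies a high-probability lower
    bound on the best surviving arm's mean using only stage-2 data."""
    rng = random.Random(seed)
    n_arms = len(means)
    n1 = budget // 2 // n_arms                  # per-arm pulls, stage 1
    stage1 = [sum(rng.random() < m for _ in range(n1)) / n1 for m in means]
    survivors = sorted(range(n_arms), key=lambda a: stage1[a])[-k:]
    n2 = (budget - n1 * n_arms) // k            # per-arm pulls, stage 2
    cert = -1.0
    for a in survivors:
        mean2 = sum(rng.random() < means[a] for _ in range(n2)) / n2
        # Hoeffding lower confidence bound, union-bounded over k arms
        cert = max(cert, mean2 - math.sqrt(math.log(k / delta) / (2 * n2)))
    return cert

cert = two_stage_certificate([0.1, 0.2, 0.3, 0.6, 0.65], budget=5000, k=2)
```

Shrinking k tightens the stage-2 bound (more pulls per survivor) but risks screening out the best arm; this is the tension the paper's choice of k navigates.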
Summary: This study proposes a two-stage RCT design aimed at improving efficiency in treatment effect estimation by reducing unnecessary resource allocation to sub-optimal treatments. The idea is pretty straightforward: use a top-K policy to screen out the inferior arms, then put more resources on the better arms. The authors derive optimal designs, demonstrate their feasibility through sample splitting, and provide empirical evidence of improved performance over single-stage approaches. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes. Theoretical Claims: I have checked the proofs of the main theorems (3.7 and 3.8), which look correct to me. Experimental Designs Or Analyses: Yes. I checked all the experiments. Some questions are raised in the weakness discussion. Supplementary Material: I checked through their supp materials to validate that they have provided the replication code. Relation To Broader Scientific Literature: This work is a fair addition to the literature on two-stage adaptive experiments. Essential References Not Discussed: Not aware of any. Other Strengths And Weaknesses: Strength: it is nice to see the top-K policy being applied to the two-stage experiment. Also interesting to notice the application of Bayesian thinking by incorporating priors. Weakness: 1. In Alg 1, there is a data splitting procedure. How much is this impacting the efficiency of the algorithm? Is it possible to do a cross-fit type of algorithm? Sample splitting is typically an inferior choice as data points in many RCTs are pretty expensive. 2. In practice, researchers are also making decisions about choosing $s_1$ and $s_2$. The authors had some discussion on the tension between these two parameters, but what is the practical guidance from the theory for the best choice of $s_1$ and $s_2$? 3. For incorporating the prior, there is always a $1-1/e$ gap guaranteed by the lower bound.
Is that saying (theoretically) incorporating prior is a slightly worse strategy compared with sample split? This is counter-intuitive as I would assume prior information on the parameters can provide more guidance for policy designing. 4. For simulation (Compare against Adaptive designs), Sample Split is actually doing close to (a little bit worse than) Two-stage Thompson sampling (TS). Is this a general phenomenon or simply due to the experimentation design? Other Comments Or Suggestions: 1. line 63, right column: the references follow an unnecessary punctuation 2. line 197, left column: what is the "=0" about? 3. line 166, right column: this line is a bit messed up Questions For Authors: Please see the weakness part. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer Kupv, We thank the reviewer for their kind words and enthusiasm for our paper. We address your concerns below. ### **Questions/Weaknesses** > In Alg 1, there is data splitting procedure. How much is this impacting the efficiency of the algorithm? Is it possible to do a cross fit type of algorithm? Sampling splitting is typically an inferior choice as data points in many RCTs are pretty expensive. We thank the reviewer for their valuable suggestion. We focus on the sample split algorithm for ease of presentation, but it can be easily extended to a cross-fit style of approach. We ran a small experiment on this, comparing the results of cross validation to those of sample splitting, using the original settings from Figure 1 (with equal budget between stage 1 and stage 2). We find that sample splitting performs as well as, if not better than, cross validation. Using cross validation with k=2 performs slightly worse (~1% worse), while for k=3,4,5, we find that it performs ~4% worse compared to sample splitting. This occurs because cross validation sacrifices test data points for training ones; for example, doing k=4 folds results in ¼ of the data being used for evaluation, and ¾ for some analog of training. As a result, we find that the number of evaluation data points is critical to reducing variability. > In practice, researchers are also making decisions about choosing $s_1$ and $s_2$. The author had some discussion on the tension between these two parameters, but what is the practical guidance from the theory for the best choice of $s_1$ and $s_2$ ? The reviewer raises a very important question. In Figure 3 of the paper, we compare the impact of shifting budget between $s_1$ and $s_2$ on the performance of various policies for certificate generation. We find that having the first stage occupy between 20-50% of the budget allows for optimal performance.
This occurs because the first stage needs to have a large enough size to eliminate sub-optimal arms, while not being so large that the second stage is small. Intuitively, the first stage should be just large enough so that any suboptimal arms can be eliminated. > For incorporating the prior, there is always a $1 - 1/e$ gap guaranteed by the lower bound. Is that saying (theoretically) incorporating prior is a slightly worse strategy compared with sample split? This is counter-intuitive as I would assume prior information on the parameters can provide more guidance for policy designing. We note that the $1-1/e$ bound is not against the absence of a prior, but rather the optimal selection of arms even with the prior. In essence, while our bound is worse, our comparison point is against a better opt value. Naturally, the reviewer is right to point out that a better prior will lead to better finite sample guarantees for the Monte Carlo approximation and thus probably a less pessimistic lower bound. > For simulation (Compare against Adaptive designs), Sample Split is actually doing close to (a little bit worse than) Two-stage Thompson sampling (TS). Is this a general phenomenon or simply due to the experimentation design? It is likely that for our particular scenario -- where the posterior properly captures the variance of the arms -- Thompson sampling is better than uniform sampling. However, this might not be the case if the gaps between arms are big enough. Such a scenario will lead to a faster accumulation of the empirical means than the convergence of Thompson sampling. ### **Other Comments Or Suggestions** We thank the reviewer for the thorough read of the paper and for pointing out these formatting mistakes. We plan to correct these in our final version of the paper. --- Rebuttal Comment 1.1: Comment: Thanks for the careful responses. 1.
To note, when I was saying cross-fitting, I was talking about: splitting data into two halves, U and V, then using U as training and V as validation, then switching the roles of the sets by using U as validation and V as training, then combining the results. I wasn't sure if you were referring to such a practice when mentioning "cross-validation". Cross-fitting is pretty common in analyzing properties of estimators while keeping full use of the data. 2. The suggestion for choosing $s_1$ and $s_2$ sounds good to me. 3. The clarification that we are comparing against a better optimal makes sense to me. 4. The explanation makes sense to me. I am happy to see maybe a more rigorous (theoretical) analysis/comparison of these methods in some future work. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We find your suggestions very useful, and we will incorporate them into our final manuscript. In response to your mention of cross-fitting, we agree with your definition and have experimented with that method, which we discussed in the earlier reply (under the name of "cross validation"). Let us know if there are any other questions we can answer.
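For concreteness, one way the cross-fitting idea from this exchange could be mechanized (a rough sketch under our own assumptions: two folds, top-k selection on one half, a Hoeffding lower bound on the other half, and a union bound to combine the folds; this is not the procedure analyzed in the paper):

```python
import math
import random

def crossfit_certificate(samples, k, delta=0.05):
    """Two-fold cross-fit: each half of the data selects the top-k
    arms and the other half certifies them; the two level-(delta/2)
    fold-wise bounds are combined with a union bound."""
    halves = [[s[:len(s) // 2] for s in samples],
              [s[len(s) // 2:] for s in samples]]
    certs = []
    for train, test in (halves, halves[::-1]):
        top = sorted(range(len(samples)),
                     key=lambda a: sum(train[a]) / len(train[a]))[-k:]
        for a in top:
            n = len(test[a])
            # Hoeffding bound at level delta/(2k): delta/2 per fold, k arms
            certs.append(sum(test[a]) / n
                         - math.sqrt(math.log(2 * k / delta) / (2 * n)))
    return max(certs)

rng = random.Random(0)
samples = [[1.0 if rng.random() < m else 0.0 for _ in range(1000)]
           for m in (0.3, 0.5, 0.7)]
cert = crossfit_certificate(samples, k=2)
```

Unlike k-fold cross validation, every data point is used once for selection and once for certification, which is the "full use of the data" property the comment highlights.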
Training Flexible Models of Genetic Variant Effects from Functional Annotations using Accelerated Linear Algebra
Accept (poster)
Summary: This paper proposes WASP for phenotype prediction based on genomic data, which utilizes an iterative algorithm and an approximate inverse to optimize the objective efficiently. Experimental results on GWAS show its superiority over LD Score Regression. Claims And Evidence: Not very clear. 1. The introduction notes that researchers in phenotype prediction typically emphasize incorporating functionally informed priors. However, my understanding of this paper is that its primary focus is on improving the optimization efficiency of Eqs. (2) and (3). I did not observe any explicit methods for integrating or modeling such priors. 2. The paper mentioned that LD Score Regression is the SOTA method, which was published around 10 years ago. In the related work, there are some improved approximation methods in line 212 and some methods that utilize these priors in line 179. Did they improve the performance for phenotype prediction? Methods And Evaluation Criteria: For baselines, the use of only LD Score Regression is too weak. The paper mentioned a bunch of approximation approaches; they should be considered in verifying the efficiency of WASP. Theoretical Claims: There are no theoretical guarantees. Experimental Designs Or Analyses: The baselines are too weak. Supplementary Material: Yes. Appendix A introduces how to do simulations. Relation To Broader Scientific Literature: The Essential References Not Discussed: No Other Strengths And Weaknesses: Weakness: There is no clear framework of the proposed WASP. No theoretical guarantee for the convergence. Only one dataset and one baseline. Other Comments Or Suggestions: No other comments. The paper would benefit from clearer organization. Questions For Authors: For this paper, I am still not clear on how the informed priors are involved to improve prediction accuracy. Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your review. In our paper, we train the first large scale models on recently released huge genomics and functional datasets, enabled by modern linear algebra techniques. Below we address your points on additional baselines and downstream uses of our model. **On weak baselines**. LD score regression is the de facto framework that is currently used for modeling genetic variability due to its tractable computational demands. Its simplest version – where f is assumed to be constant -- was indeed published ~10 years ago as we cited, but there has been continued research on it ([example](https://doi.org/10.1101/2025.03.07.25323578)). Indeed, no prior work has been able to train at the scale that we have! Even in the original LD score regression paper, they considered a simple model that could be fit with a closed form formula. Therefore, we devised a gradient descent method to train a large model using LD score regression. As we show, this unfortunately does not work well, necessitating exact likelihood inference and therefore our development of WASP. The fact that there is no better alternative method to LD score regression highlights the importance and impact of our contribution. Next, we clarify that the methods in lines 179 and 212 are not competitors to WASP. The methods in line 179 take pre-trained functionally-informed priors and use them to improve genetics tasks – they therefore complement our development of WASP, which is a method for pre-training functionally-informed priors. Moreover, we build methods for fast linear algebra operations on large LD matrices. The methods on line 212 include other methods for speeding up linear algebra problems in genetics; genetics is a broad field and these techniques do not solve the problems involved in functionally-informed priors. In particular, they approximate $R$ without necessarily describing how to invert the matrix in the WASP likelihood, $A_\phi^{(i)}$. 
As an example, we cite Berisa and Pickrell (2016), who discovered the block-diagonal approximation of $R$ to learn about biophysical processes in the cell. We adapted their discovery to devise our mini-batching strategy for WASP. **On one dataset.** UKBiobank is the only dataset that satisfies two considerations: 1) the summary association and LD data is public while other databases such as China Kadoorie Biobank are not, 2) UKBiobank is a huge dataset of hundreds of thousands of individuals that we expect to benefit from a larger model. Put differently, UKBiobank is a foundational dataset similar to OpenWebText. In contrast to vision models, where there are a myriad of datasets (CIFAR, SVHN, ImageNet, etc.), the genetics datasets are naturally much larger and most publications focus on fitting to the largest amount of data available. **On informed priors and disease prediction accuracy.** The improved posterior that WASP provides can be used for a myriad of downstream tasks such as disease prediction (simply take a MAP estimate of the effect of each variant) but also for causal variant identification, or to interpret the importance of the diverse functional annotations that we used (Phylo, Encode, Fantom). Indeed, the goal of WASP is to solve the machine learning problem of fitting the data in Eqn 2 as accurately as possible and, in our previous work section, we highlight some of the many efforts that use the functionally informed priors (that result from solving the prediction problem) for the downstream applications that we just mentioned. **On lack of framework and theoretical convergence.** Succinctly, WASP proposes a loss function (Eqn 2) and a computationally efficient way to train the loss in section 4. The approximation that we used for mini-batching has been well studied, as cited (Berisa & Pickrell, 2016; Salehi Nowbandegani et al., 2023), and all the convergence of our linear algebra techniques is well-understood (Saad 2011 or Hobgen, 2013).
Additionally, WASP is a consistent estimator as explained on the first point [here](https://openreview.net/forum?id=oOtdWiLb1e&noteId=Uw5dMoGCtr). Thus WASP is a principled method for computing the likelihood in Eqn 2. If your theoretical convergence concern is about our use of neural networks then that is an empirical question that we address with the performance on held-out data.
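To illustrate the kind of linear algebra being accelerated, here is a toy-scale sketch of solving $A^{-1}\beta$ for a banded, positive-definite matrix of the form $A = RFR + \sigma^2 R + \epsilon I$ with preconditioned conjugate gradients (the matrices and the simple Jacobi preconditioner are our own stand-ins; the paper uses real LD data and a structured preconditioner):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
m, band = 400, 10

# Synthetic banded, positive-definite stand-in for the LD matrix: R = L L^T
L = np.eye(m)
for d in range(1, band + 1):
    L += np.diag(np.full(m - d, 0.3 / d), -d)
R = L @ L.T

F = np.diag(rng.uniform(0.1, 1.0, m))      # per-variant prior variances f
A = R @ F @ R + 0.5 * R + 1e-3 * np.eye(m)
beta = rng.normal(size=m)

# Jacobi (diagonal) preconditioner as a simple stand-in for the
# structured preconditioner described in the paper
M = diags(1.0 / np.diag(A))
x, info = cg(A, beta, M=M)                 # iterative solve, no explicit inverse

direct = np.linalg.solve(A, beta)          # dense O(m^3) reference
```

At real scale, only matrix-vector products with the banded $A$ are needed, which is the source of the quadratic-versus-cubic savings the rebuttal refers to.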
Summary: The paper addresses the challenge of predicting how genetic variants affect phenotypes from large datasets. LD score regression makes simplifying assumptions to avoid computationally expensive linear algebra across genomic metrics. Their method leverages preconditioned linear algebra and GPU acceleration for improved results. Claims And Evidence: Yes. This paper is out of my area of expertise, so I may not be the best judge. Methods And Evaluation Criteria: Performance is evaluated using semi-synthetic data (controlled experiments) and real-world GWAS data (UK Biobank). The comparison is mainly with LDSR; there could potentially be other baselines, like VAEs? Is this part of the literature? Theoretical Claims: The maths makes sense. Experimental Designs Or Analyses: The authors perform experiments on semi-synthetic data and real data. They are well motivated; however, variances are missing. Some of the results can be very close; running multiple seeds might allow for better comparison. Supplementary Material: Went through Appendix A. Relation To Broader Scientific Literature: The contributions of this paper fit into the broader landscape of genome-wide association studies (GWAS) and Bayesian modeling. Essential References Not Discussed: na Other Strengths And Weaknesses: S1. The authors aim to tackle a realistic problem statement and offer a solution in O(n^2) as compared to the slower O(n^3). S2. Instead of using precomputed LD scores, WASP directly optimizes the marginal likelihood, which theoretically leads to better parameter estimates. W1. They use chromosome-level mini-batching, but it's unclear how it performs on whole-genome datasets. W2. The paper is missing uncertainty quantification. Other Comments Or Suggestions: - The different metrics, such as BMI, asthma, etc., do not have an introduction in the paper. Some context might be more helpful to better understand the significance of the numbers.
To a layperson (aka me, I thought BMI is supposed to be between 18-25 but the numbers are much higher here) Questions For Authors: 1. The paper is motivated by genomic studies; however, it mainly focuses on chromosome-level analysis rather than whole-genome scalability. Is this a good pseudotask? 2. Are non-Gaussian priors explored? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review! In our paper, we train the first large scale models on recently released huge genomics and functional datasets. Below we address your points on additional baselines and extensions of our model. **On additional baselines.** Note there are no methods that have been able to train models at the scale we do. Even in the original LD score regression paper, they considered a simple model that could be fit with a closed form formula. Therefore we devised a gradient descent method to train a large model using LD score regression. As we show, this unfortunately does not work well, necessitating exact likelihood inference and therefore our development of WASP. Indeed there are variational inference methods for fitting these models among our citations. However, these models were at best used to fit a handful of hyperparameters, sometimes using grid search; there is no straightforward way to adapt them to scalably fit a large scale model using gradient descent. In particular, this is because one must update the posteriors over the effect sizes over the whole genome before taking a step to update the hyperparameters. **On whole-genome datasets.** Our mini-batching on the SNP level, similar to SGD, is used to accelerate training. However, our resulting model applies to the whole genome. Note we showed that LDSR makes an extreme version of the WASP mini-batching assumption (Sec. 4.1) and is still used for interpreting the whole genome! What is the connection between models that predict disease from a whole genome and WASP, which is trained on subsets of chromosomes? We begin with a whole-genome generative process that explains the traits of each individual $y$, (Eqn. 1) and we multiply by $X$ to get a generative model of associations on chromosomes $\hat\beta$ (Eqn. 2). The likelihood of these two models is only a constant away from each other, so doing well on one is equivalent to doing well on the other. 
In particular, we explain below how to interpret our metrics in Tables 1 and 2 as our ability to do whole-genome prediction using the held-out chromosomes. **On more seeds**. We didn’t provide multiple runs as fitting each model is computationally expensive. Here, we ran two more seeds for Enformer trained on BMI via LDSR and WASP. We saw a standard deviation in log likelihood of 14 for LDSR and 8 for WASP, which is much smaller than the differences in the performance of different methods: 96. Nevertheless, we will report error bars for all reported numbers in future drafts. **On non Gaussian priors**. Although the Gaussianity assumption is by far the most popular in the population genetic literature, in our citations we include references to methods with sparsity-inducing non-Gaussian priors. Applying modern linear algebra techniques like WASP could help us train large models with these priors in future work! **On metric interpretation**. We take, for each variant $m$ in held-out chromosomes 21 and 22, its observed association with BMI, height, and asthma $\hat\beta_m$; then we try to explain this data using our Gaussian model in Eqn 2. The metric is the difference likelihood of these associations under a trained prior $f$ vs a model that assumes there’s no genetic effect on these traits ($f_m=0$ for all $m$). It can therefore be interpreted as how well our trained models explain the associations on held-out chromosomes. Another interpretation is the likelihood of how well we could predict traits in the UKBiobank cohort $y$ if we were to only use held-out chromosomes 21 and 22 for our predictions. This is because the models in Eqns 1 and 2 are equivalent. In practice, to do whole-genome estimation, we would train a different model holding out each chromosome and combine their predictions on each held-out chromosome; the likelihood of such a “whole-genome” model would be the sums of likelihoods like those in tables 1 and 2. 
This is why the numbers we show are not in the regular BMI range. We will elaborate on this explanation in the paper.
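The held-out likelihood metric described above can be sketched at toy scale: score synthetic "observed associations" under the data-generating prior versus the null prior $f_m = 0$ (here $R = I$, i.e. no LD, and the prior variances are made up purely for illustration):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
m = 500
R = np.eye(m)                       # no LD, for simplicity
f_true = rng.uniform(1.0, 3.0, m)   # per-variant prior variances
sigma2 = 1.0

# Held-out "observed associations" from the Eqn-2 model:
# beta_hat ~ N(0, R F R + sigma^2 R)
cov_true = R @ np.diag(f_true) @ R + sigma2 * R
beta_hat = rng.multivariate_normal(np.zeros(m), cov_true)

def loglik(f):
    cov = R @ np.diag(f) @ R + sigma2 * R
    return multivariate_normal(np.zeros(m), cov).logpdf(beta_hat)

# Reported metric: likelihood gain of the trained prior over the
# no-genetic-effect null f_m = 0, as in Tables 1 and 2
delta_ll = loglik(f_true) - loglik(np.zeros(m))
```

A positive difference means the prior explains the held-out associations better than assuming no genetic effect, which is how the table entries should be read.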
Summary: This paper introduces WASP, a method leveraging accelerated linear algebra to train flexible neural network models for predicting genetic variant effects from functional annotations. By employing banded LD matrix approximations and a structured preconditioner, WASP efficiently handles large-scale genomic data while avoiding costly matrix inversions. Experimental results demonstrate superior performance over LD score regression in predicting phenotypic associations and recovering causal variants. The approach enables training larger models with richer genomic features, advancing functionally informed priors for polygenic trait analysis. ## update after rebuttal. The authors have provided additional results and explanations. I have raised the score accordingly. Claims And Evidence: The authors use semi-synthetic data to validate WASP, but the data is generated by an Enformer-based model, which biases the evaluation in favor of neural networks and does not prove real-world generations. A stronger validation would involve testing on data generated by different models or real causal variants. Methods And Evaluation Criteria: Yes. Theoretical Claims: I have reviewed the formulas and theoretical claims presented in the paper. Based on my examination, I did not find any major flaws or inconsistencies in the mathematical derivations. Experimental Designs Or Analyses: The paper lacks a comprehensive model comparison that focuses on both computational complexity and performance. In Figure 2, the authors compare the calculation complexity on WASP and other methods (no preconditioner, Nystrom and Cholesky). However, in Table 1, the authors compare the performance on WASP and LDSR. To provide a more balanced comparison, the authors may consider adding a computational complexity evaluation on LDSR. 
This would allow them to demonstrate that WASP outperforms LDSR not only in terms of performance but also in computational efficiency, presenting a more holistic view of its advantages across all relevant aspects. Supplementary Material: I have reviewed the supplementary material, which primarily includes details of the experiments and the data collection process. I did not find any issues or additional points of concern in the supplementary material. Relation To Broader Scientific Literature: By adapting iterative algorithms and structured preconditioners from Gaussian process literature, WASP overcomes the bottleneck of LD matrix inversion to accelerate linear algebra operations. Essential References Not Discussed: I did not identify any major omissions in the reference. Other Strengths And Weaknesses: This paper effectively integrates neural network training with modern fast linear algebra techniques, providing a computationally efficient approach to GWAS data analysis. This combination is a promising direction for accelerating model training and improving scalability in genomic studies. However, the experimental results and analysis can be more diversified and detailed. One potential limitation is the lack of discussion on the impact of sliding window algorithm and the window size on the final result. The method assumes that only variants inside the window are considered, but this choice could slightly influence model accuracy and interpretation. A deeper analysis of how window size - especially in regions spanning different chromosomes or chromosomal structures - affects performance would strengthen the paper’s claim. Other Comments Or Suggestions: The writing of the paper still needs to be improved. For example, there is a lack of punctuation: “however there are orders of…” should be “however, there are orders of…”; “Unfortunately this is numerically” should be “Unfortunately, this is numerically”. 
Typo errors: “we the use well established” should be “we then use well established”. The tables in the paper are labeled as “Table 1” and “Table 2”, but in the text they are referred as “Table 6.3”, which might lead to confusion. Questions For Authors: How exactly is WASP incorporated into the model training process? The paper provides mathematical formulas showing how WASP works. However, it does not clearly specify when, where and how WASP is applied in the training pipeline. Is it used during forward propagation, back propagation, loss computation, or another stage? A detailed explanation or a pipeline diagram would greatly improve clarity. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed and insightful review. We ran your suggested experiments, which have significantly strengthened our paper. **On semi-synthetic simulations.** You raised a fair point that having a randomly initialized Enformer as a ground-truth model might benefit WASP. Thus, we re-ran all the semi-synthetic simulations but now set a biologically-informed model of inheritance as the ground-truth function. That is, we considered a threshold-based model in which, whenever the sum of a track in a window around a variant is above a threshold, that track adds a multiplicative increase to the effect size: $$f_{\theta, m}=\sum_{d}\mathbb{1}\left(\left(\sum_{w}C_{\mathrm{track}, m, d, w} \right)> \mathrm{thresh}_d\right)\times\mathrm{enrich}_d.$$ We chose the parameters such that $f$ is sparse (mostly only 4 tracks pass the value $\mathrm{thresh}_d$) and multi-modal (with randomly selected multipliers $\mathrm{enrich}_d$). Indeed, given the flexibility of the Enformer NN, WASP approximates the correct $f$ much better than the other alternatives used in practice that we compare to, as seen in this [figure](https://drive.google.com/file/d/1mf-xaMQnK3znZLDeFI1nIeTT5FDkF4rL/view?usp=sharing). This experiment better highlights the impact of having a flexible neural network model. **On computational complexity.** WASP is roughly 36% more computationally expensive than LD score regression due to the likelihood computation. However, the discrepancy between WASP and LD score regression is not that large, as the likelihood is not as expensive as the forward and backward passes of the NN that both LDSR and WASP have to do. Moreover, increasing the model size would decrease the gap. We will discuss this runtime consideration in the paper. Also, to clarify, the goal of section 4 and, in particular, Figure 2, is to discuss alternative techniques to most efficiently compute the likelihood for WASP. **On the effect of window size.**
We make two distance-based approximations: (1) we break chromosomes into windows of size 1,000,000 and treat observed associations in each window as independent of other associations, and (2) there is no LD for variants further than 1,000,000 positions away from each other. Assumption (1) is mostly of no concern as WASP would still be a consistent statistical estimator, that is, given enough data the estimator will converge to the correct optimum. Intuitively, this is the case as we train our model using parts of the evidence and treat all the other variables as missing (a proof can be found in the LD score regression paper). Moreover, we do not expect our method to be very sensitive to this parameter, and shrinking the window size should interpolate between the behavior of WASP and LDSR since LDSR is WASP with a window size of 1 (section 4.1). Following prior work, we took a very conservative approach to assumption (2) by assuming no LD between variants further than 1M positions apart. However, to probe your question, we trained a model that incorporates a stronger assumption where variants that are more than 100,000 positions from each window have no LD with variants inside the window. We noted that there is no significant drop in performance on the test set: the likelihood only dropped by 14, while training with LDSR instead of WASP drops the likelihood by 96, and using a linear rather than an Enformer architecture drops the likelihood by 51. Finally, we do find this analysis to be interesting, and we'll expand on it in the paper with other window sizes. **WASP in the training process.** WASP is involved in the forward and backward passes of the loss computation. Given a mini-batch of $\beta$ and tracks (where the mini-batching could arguably be considered part of WASP), WASP computes the log determinant and quadratic terms of the loss using fast linear algebra methods. For the backward pass, WASP uses a stochastic estimator of the loss gradient.
Below we’ve added pseudocode to illustrate these two steps.

// Pseudocode for WASP fwd
1: $\log(f)$ = NN(geno, anno) // compute NN output for mini-batch
2: $A = R F R + \sigma^2 R + \epsilon I$ // form matrix, section 4.1
3: $P$ // compute preconditioner from section 4.4
4: $\log(|A|) = \mathrm{SLQ}(A, P)$ // compute, see section 4.3
5: $A^{-1} \beta = \mathrm{CG}(A, P, \beta)$ // compute, see section 4.3
6: $\ell = \log(|A|) + \beta^T A^{-1} \beta$ // compute log-likelihood
7: return $\ell$

// Pseudocode for WASP bwd
1: $u_i \sim N(0, I)$ // sample $M$ probes
2: $\frac{1}{M} \sum_i \mathrm{VJP}(u_i, A, A^{-1} u_i) = \frac{1}{M} \sum_i u_i^T \nabla A (A^{-1} u_i) \approx \nabla \log(|A|)$ // section 4.3
3: $\mathrm{VJP}(-\beta^{T}A^{-1}, A, A^{-1}\beta) = -(A^{-1} \beta)^T \nabla A (A^{-1}\beta) = \nabla (\beta^{T} A^{-1} \beta)$ // section 4.3
4: return $\frac{1}{M} \sum_i u_i^T \nabla A (A^{-1} u_i) - (A^{-1} \beta)^T \nabla A (A^{-1}\beta)$ // gradient of $\ell$
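As a purely illustrative companion to the pseudocode above, the following is a minimal NumPy sketch of two of the ingredients: the CG solve of $A^{-1}\beta$ (forward, step 5) and a Hutchinson-style stochastic trace estimate of one coordinate of $\nabla \log(|A|)$ (backward, step 2). All matrices, shapes, and constants here are toy stand-ins that we made up for the sketch; the actual WASP implementation, the SLQ routine, and the preconditioner are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # toy problem size

def cg_solve(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradients for SPD A: returns an approximation of A^{-1} b."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy stand-ins for the quantities in the pseudocode (made-up values):
G = rng.standard_normal((n, n))
R = G @ G.T / n + 0.5 * np.eye(n)      # symmetric PD stand-in for the LD matrix
F = np.diag(1.0 + rng.random(n))       # diagonal of positive per-variant effects
sigma2, eps = 0.5, 1e-3
A = R @ F @ R + sigma2 * R + eps * np.eye(n)
beta = rng.standard_normal(n)

# Forward, step 5: A^{-1} beta via CG, with no explicit matrix inverse.
Ainv_beta = cg_solve(A, beta)

# Backward, step 2, specialized to the derivative w.r.t. sigma2 (so dA = R):
# Hutchinson probes estimate tr(A^{-1} R) = d log|A| / d sigma2.
M = 1000
probes = rng.standard_normal((M, n))
grad_logdet_est = np.mean([u @ R @ cg_solve(A, u) for u in probes])
```

In the full method these operations are, per the pseudocode, preconditioned and batched; the sketch only shows that the log-likelihood gradient can be computed from matrix-vector products alone.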
Summary: The paper introduces WASP, a method for training large-scale neural network models to predict the effects of genetic variants from functional annotations. The paper begins by introducing linear models for variant effects, the need for good priors, and how they can be fit using anonymous summary statistics. This introduces the challenge of LD matrix $R$ inversion. The paper then introduces a batching approach and a preconditioner for computing the likelihood, overcoming the computational challenge. An Enformer model for $f_{\theta}$ is then trained on large-scale functional genomic data from ENCODE, FANTOM, PhyloP and ESM-2. The resulting WASP method is validated using UKBB data against alternative approaches, demonstrating improved performance. ## update after rebuttal The authors provided additional experiments and clarifications. My score is unchanged at 4 as this is a good paper that should be accepted. Claims And Evidence: The idea for batching turns out to be quite straightforward: chunk the genome into blocks, by assuming block-diagonal LD structure. Then again assume locality, with distant variants not correlating. Both assumptions are presented as-is with relevant citations of prior work. I still wonder whether there is any sensitivity to the distances used here? Methods And Evaluation Criteria: The benchmarking approach is adequate, with realistic UKBB data used. Empirical evidence for the used assumptions would still be great. Theoretical Claims: N/A Experimental Designs Or Analyses: - How was the size of the Enformer network optimized? Is this a critical parameter - the experiment shows that making it smaller reduces performance, but what happens in the other direction - is there a risk of overfitting, and if so, how would we detect it. Supplementary Material: Not reviewed Relation To Broader Scientific Literature: The paper introduces related work and makes a clear case for what limitations WASP seeks to overcome.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: - The paper is very well written and introduces all required concepts to clearly motivate the need for WASP. The method itself is elegant, breaking down a numerically intractable problem into chunks that are amenable to minibatch neural network training approaches. Other Comments Or Suggestions: N/A Questions For Authors: - It is unclear to me if there is a contribution in 4.3. Is my understanding correct that this is in essence a rationale of what prior work to leverage (without introducing new concepts)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your support and thoughtful review! Below we discuss new experiments that address the sensitivity to the LD window distance cutoff and larger model sizes, and clarify the purpose of Section 4.3. **On distance sensitivity and assumptions.** We make two distance-based approximations: (1) we break chromosomes into windows of size 1,000,000 and treat observed associations in each window as independent of other associations, and (2) there is no LD for variants further than 1,000,000 positions away from each other. Assumption (1) is mostly of no concern, as WASP would still be a consistent statistical estimator; that is, given enough data the estimator will converge to the correct optimum. Intuitively, this is the case because we train our model using parts of the evidence while treating all the other variables as missing (a proof can be found in the LD score regression paper). Moreover, we do not expect our method to be very sensitive to this parameter, and shrinking the window size should interpolate between the behavior of WASP and LDSR, since LDSR is WASP with a window size of 1 (Section 4.1). Following prior work, we took a very conservative approach to assumption (2) by assuming no LD between variants more than 1M positions apart. However, to probe your question we trained a model that incorporates a stronger assumption, where variants that are more than 100,000 positions from each window have no LD with variants inside the window. We noted that there is no significant drop in performance on the test set: the likelihood only dropped by 14, while training with LDSR instead of WASP drops the likelihood by 96, and using a linear rather than an Enformer architecture drops the likelihood by 51. Finally, we do find this analysis interesting and will expand on it in the paper with other window sizes. **On Enformer size.** We picked the size of a network that we could fit with reasonable compute.
In principle, training a large-enough model for long enough should result in overfitting. Based on your question and our computational resources, we trained a larger Enformer model on the BMI data (we increased the number of attention layers from 2 to 8) and noted slight improvements on the test likelihood (it increased by 11), suggesting that we are not yet overfitting. Indeed, it appears that there is more potential in leveraging the huge datasets collected in large genetics efforts with even larger models. Measuring the scaling laws of these models could therefore be an interesting direction for future work. **On purpose and contributions in Section 4.3.** Indeed, Section 4.3 is a review of iterative methods for large matrix computations. In particular, it introduces the identities for the gradients of the matrix inverse and log determinant; these formulas have been previously introduced in [1,2,3]. The construction of the preconditioner in Section 4.4, however, is ours. We’ll clarify our contributions and prior work in Section 4.3. [1] Gardner et al. 2018. GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration. [2] Saad 2003. Iterative Methods for Sparse Linear Systems. [3] Saad 2011. Numerical Methods for Large Eigenvalue Problems.
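Since the role of the preconditioner may be unfamiliar to some readers, the following is a generic NumPy sketch (our own illustration, not the Section 4.4 construction) of preconditioned CG with a simple Jacobi (diagonal) preconditioner on a made-up ill-conditioned SPD system. It shows why a preconditioner that captures the dominant structure of $A$ can sharply reduce the iteration count.

```python
import numpy as np

def pcg(A, b, precond=None, tol=1e-10, max_iter=10_000):
    """Preconditioned CG for SPD A; precond maps a residual r to P^{-1} r."""
    precond = precond or (lambda r: r)  # identity preconditioner = plain CG
    x = np.zeros_like(b)
    r = b.copy()
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

rng = np.random.default_rng(0)
n = 80
S = rng.standard_normal((n, n))
# Made-up ill-conditioned SPD matrix: strong diagonal with a wide dynamic range.
A = np.diag(np.logspace(0, 4, n)) + 0.01 * (S @ S.T)
b = rng.standard_normal(n)

x_plain, it_plain = pcg(A, b)                              # plain CG
d = np.diag(A)
x_jac, it_jac = pcg(A, b, precond=lambda r: r / d)         # Jacobi-preconditioned CG
```

On this toy system the Jacobi variant converges in far fewer iterations than plain CG, which is the same qualitative effect a good preconditioner buys in the likelihood computation.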
Model Selection for Off-policy Evaluation: New Algorithms and Experimental Protocol
Reject
Summary: The paper proposes a new method for model selection in the offline policy evaluation (OPE) setting. That is, given multiple policies from an offline RL algorithm, the question is which model (either a value function or simulator MDP) among many is best for evaluating these policies? The authors propose new criteria for determining the best model and wrap their solution in the same tournament structure used in a previous work. They prove bounds on the error of their model selection algorithms in the realizable case, provide empirical evidence in an offline version of the hopper environment that their approach does not have catastrophically high errors like previous approaches, and perform some interesting ablations. *** Post-rebuttal update *** I have read the authors’ response and the other reviews. However, the authors misunderstood my question about an ensemble (I meant to end the tournament early, with say 4 models, and then average their votes). And their description of the MB-MF comparison again somewhat misunderstood my question, because I was asking about the comparison of the new protocol to BVFT, which is the direct ancestor, not a full comparison of model-based and model-free methods. Overall I agree with the other reviewers that the paper lacks clarity about exactly how the algorithm works and is not focused on the "main result", so I've downgraded my score. Claims And Evidence: Overall, I’m favorable to this paper. The approach is a somewhat incremental change to some of the previous works, but the empirical evidence shows significant improvement when making these changes, and the solutions (particularly the use of LSTDQ) are certainly non-trivial. In particular, I like that the authors were able to show that while in some cases other methods marginally outperform LSTD-Tournament, they each have bad failure cases that LSTD-Tournament seems to avoid.
I also like the experimental protocol the authors set up, which I think is an important contribution of the paper to the model selection literature. I do think there are some places where the paper could be improved, though, and I have listed some of them in the specific sections below. Methods And Evaluation Criteria: The focus of the whole paper is on picking a single model to use in the policy evaluations. But why is picking a single model a good idea? In the realizable case, sure, it’s great to pick the right model, but there is some probability you won’t, and in the unrealizable case it seems like picking an ensemble of “good but not right” models would likely be the best strategy. I saw no discussion of the benefits of finding an ensemble of possible models, which seems like something this method could be used for. Theoretical Claims: The model-based approaches proposed in this work seem somewhat ad hoc. The regression-based selector and sign-flip rules are both presented somewhat “out of the blue,” and while theoretical results are proven about both, the bounds seem fairly large/loose and are not compared to bounds on the previous methods, so it is unclear what reviewers should be taking from the theorems. That is, the bounds are polynomial, but are they better? Experimental Designs Or Analyses: The fact that both of these new approaches are outdone by a model-free approach also brings their efficacy into question, as sign-flip curves in Figure 3 are often flat or trending in the wrong direction with increased data. There isn’t really an explanation for that, and it seems, frankly, like the paper would be stronger with an in-depth analysis of the model-free approach and less focus on the model-based ones. Supplementary Material: skimmed Relation To Broader Scientific Literature: In the model-free case, LSTD-Tournament is certainly a refinement on BVFT, with the replacement of Q-pi abstractions by a more computationally efficient LSTDQ analysis.
But I was disappointed that the authors left “a detailed description of their differences to future work”. LSTD-Tournament is the best algorithm in the current paper, and arguably the only one that really stands out, given that it outperformed the model-based approaches. And the algorithm structure itself is very similar to BVFT except for the comparison rule at the innermost loop of the approach. Reviewers are left to wonder how much of a change the new approach really is relative to BVFT, and relegating that comparison to future work makes judging the novelty of the top algorithm in this paper much more difficult. Essential References Not Discussed: The paper has appropriate references. Other Strengths And Weaknesses: Again, I still have a favorable view of this paper. The math seems right, the experimental protocol is well thought out, and there is a clear improvement in LSTD-Tournament in the experiments, but the factors above limit the scope and breadth of the result. Other Comments Or Suggestions: None Questions For Authors: See the individual boxes above for particular questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the contributions of our work and the helpful comments for improvement. &nbsp; --- **Why is picking a single model a good idea? ensemble?** First, it is not clear what an "ensemble" of models means in the context of OPE: do we simply average the final prediction, or do we average the transition dynamics (like in [MJTS20])? How do we weight each model? We are not aware of widely accepted ways of using ensembles in the context of OPE. Second, there are ways to reduce the "ensemble" idea to our setting. In particular, if by "ensemble" we mean averaging transitions (like [MJTS20]), then an ensemble of "base MDPs" is simply a well-defined MDP itself. There is no reason why one cannot add such a model to $\mathcal{M}$. Furthermore, if the weights for averaging "base" models are unknown, we may also consider $\mathcal{M}$ to be a set of models with different weights. That said, if the unknown weight space is continuous, this is more of a learning problem than a model selection problem, as our methods are not computationally efficient for continuous $\mathcal{M}$. A plausible solution is to run other methods to learn such ensemble models in a separate learning phase, and apply our method in the selection phase. &nbsp; [MJTS20] Sample Complexity of Reinforcement Learning using Linearly Combined Model Ensembles. &nbsp; --- **Model-based (MB) approaches: "ad hoc", "out of the blue", "out-done by model-free"** The motivation for considering MB is that it has access to more information than MF algorithms (if you have $\mathcal{M}$, you can induce the value functions, but not vice versa), so it is natural to expect that by leveraging the additional information in $\mathcal{M}$ one can potentially do better than MF. This is what we expected before running the experiments, and it was surprising to see that LSTD-Tournament works better than the MB methods, which we find interesting to report.
As another side product, we wanted to compare to [ZDMAK23], which is one of the very few existing methods for MF selection. However, they require a helper function class in addition to $\mathcal{Q}$. The regression-based algorithm is essentially a clever way to implement [ZDMAK23], where the helper class ($\mathcal{G}\_i$) can be constructed from $\mathcal{M}$. --- **the [MB] bounds seem fairly large/loose** **I was disappointed that the authors left [comparison between MF and MB guarantees] to future work** OK, this is a very tricky question. The MB guarantees (Theorems 4 and 5) are actually **very standard** (Line 230R). You can find very similar analyses in the OPE literature under the standard "Bellman completeness" assumptions for function approximation ([XJ21;XCJMA21]). The reviewer described these bounds as "loose", but these are really the dream results (in terms of cleanness and interpretability) you would want in RL theory. We obtain them thanks to the additional information available in $\mathcal{M}$. What is more "unusual" is Theorem 2 for LSTD-Tournament, which is largely a direct corollary of Theorem 1, the guarantee of LSTDQ. The key difference between Theorems 4,5 vs. 1,2 is the definition of coverage: Theorems 4 and 5 use the standard $C^\pi$ (called "concentrability" [AMS08]), whereas Theorems 1 and 2 use a matrix singular value that is highly specialized to LSTDQ. As far as we know, how to properly characterize the behavior of this value as a coverage parameter and how it compares to $C^\pi$ are largely open questions. While we would certainly like to study this question, it is a fundamental issue that exists at a much more basic level. That is, you can ask the same question without even talking about model selection: just consider 3 basic OPE algorithms for learning $Q^\pi$: 1. FQE (under Bellman completeness) 2. LSTDQ (under realizability) 3.
Abstraction (under $Q^\pi$-irrelevance, which is realizability for a piecewise-constant class) Their guarantees can be found in the literature and depend on different coverage parameters. Again, no one knows how they compare. Both LSTDQ and abstraction have very algorithm-specific definitions of coverage that are very hard to parse and interpret. [JRSW24] recently made progress in understanding this for abstraction and BVFT ("aggregated concentrability"), showing it can be exponentially worse than $C^\pi$ in some cases (but there are also trivial cases where it's much better). The situation for LSTDQ and LSTD-Tournament is likely similar. [JRSW24] Offline Reinforcement Learning: Role of State Aggregation and Trajectory Data. So without delving further into this rabbit hole, let us just say that this comparison is a conceptual mess, and a clean and elegant answer might not even exist. Your review expressed "disappointment", which seems a pretty strong sentiment, so we want to offer an explanation.
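For readers less familiar with LSTDQ, the building block discussed above, here is a textbook-style sketch on a made-up linear toy problem (features, data, and the planted weight vector are all hypothetical; this is not the paper's LSTD-Tournament). With linear features $\phi$, LSTDQ posits $Q^\pi(s,a)=\phi(s,a)^\top w$ and solves the empirical fixed-point equation $\sum_i \phi_i(\phi_i-\gamma\phi_i')^\top w=\sum_i \phi_i r_i$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, gamma = 4, 5000, 0.9

# Hypothetical toy data: phi holds features of (s_i, a_i); phi_next holds
# features of (s'_i, pi(s'_i)). We plant a linear Q^pi and generate rewards
# that satisfy the Bellman equation exactly (realizable, noiseless case).
phi = rng.standard_normal((n, d))
phi_next = rng.standard_normal((n, d))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
r = (phi - gamma * phi_next) @ w_true

# LSTDQ: solve  sum_i phi_i (phi_i - gamma * phi'_i)^T w = sum_i phi_i r_i.
A = phi.T @ (phi - gamma * phi_next)
b = phi.T @ r
w_hat = np.linalg.solve(A, b)
```

In this realizable, noiseless setting `w_hat` recovers `w_true` up to floating-point error; with real logged data one would plug in the dataset's transitions and the evaluation policy's next-action features instead.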
Summary: This paper studies the problem of model selection for OPE, where you have one evaluation policy and several candidate OPE estimates, and the goal is to find the best OPE estimate. The paper presents new OPE selection procedures for both model-free and model-based OPE methods, leveraging some new theoretical insights as to how the Q-function estimates and approximate models are related. The proposed approaches are then compared on Gym Hopper to show preliminary empirical results, following an experimental protocol that uses different procedures to generate candidate OPE and error bars by bootstrapping, which is different from past work. ## update after rebuttal In my opinion, there are promising aspects of this paper in its theoretical constructions, but the presentation clarity could be improved (e.g., L170-right after Theorem 2 assumes readers know what ε is without explaining it; the "sign-flip" version in Sec 4.2 is also briefly stated without detailed explanation). I wonder if there is too much content the authors want to include within the page limit. On the other hand, the experimental evaluation protocol is not fully justified (the rebuttal mentioned some good points, but these should be included in the paper itself), and the experimental results are somewhat limited and mixed. Therefore, I am maintaining my overall recommendation. Claims And Evidence: Mostly. See detailed comments below. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not check the proofs, but at a quick glance all theorems presented in the paper make sense. The construction of linear basis features (L178) to reduce the problem into LSTDQ is quite clever. Experimental Designs Or Analyses: The experiments were only on one environment (Hopper) but with variations on its parameters (gravity and noise). Supplementary Material: No. Relation To Broader Scientific Literature: Model selection for OPE is an important, often overlooked, problem.
Essential References Not Discussed: Tang & Wiens. Model Selection for Offline Reinforcement Learning: Practical Considerations for Healthcare Settings. Machine Learning for Healthcare Conference, 2021. Other Strengths And Weaknesses: See below. Other Comments Or Suggestions: See below. Questions For Authors: - In the introduction, the setting of "model selection of OPE" might need to be further distinguished from the more common setting of "using OPE for model selection of policies". In the abstract, the paper even started with the latter setting, but did not point out the difference clearly enough. Otherwise readers might be confused about which is which. A diagram may help. - Does the proposed framework work for importance sampling OPE? IS doesn't seem to fit either the model-based or the model-free settings described in the paper. - How realistic is it to induce different candidate value functions from "variations of the groundtruth environment" (L83)? Will it necessarily reflect what we get from offline data and multiple OPE methods? (also L324 right) - L115: are we assuming the "blackbox base algorithms" (which can be MC or TD etc.) produce the “correct” Q functions, like an oracle? - On L171-right, what is ϵ? It's not part of the result you show in Theorem 2. - For the model-based version (Sec 4 on page 4), do you only estimate the transition model (e.g. Eq 6)? What about the reward model? - In L272 Prop 3, it says if Qπ is in the set, then applying Tπ to it still gives Qπ. Essentially the "regression error" would be zero. But that doesn't necessarily suggest that smaller regression error is better, especially since these quantities are calculated on the dataset D, which may lack coverage (L223-right). Can you clarify or provide more justification for the regression-based selector? - L326 the claim of the alternative protocol being expensive: how expensive? Can you provide a big-O for a sense of scale?
- Weak experimental results: In Fig 3, the proposed method is very close to TD-sq, which seems best, even for larger N, but the theory section lacks any mention of TD-sq. - L434 "as predicted by theory" - I don't think this experiment is supported by the earlier theory; it tracks with the intuition that was stated on L420. - L436 the experiment on misspecification seems lacking and doesn't show any trends, so what's the takeaway? - There is no conclusion. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. We do not find major concerns in the review - can you let us know what the main factors are that led to your "weak reject"? &nbsp; --- **Importance sampling (IS)** We do not consider IS-based OPE. Vanilla IS does not need any function approximation, which is a major source of hyperparameters (e.g., neural architecture for TD). Its variants, such as doubly robust, require a Q-function as a control variate, which can be selected using our methods. &nbsp; --- **How realistic...to induce candidate value functions from "variations of the groundtruth environment"?** Great question; we intended to discuss this in the conclusion had we had more space. In practice, we will likely get candidate value functions by running methods like TD (say, with different neural architectures). When they produce inaccurate Q-functions, the errors are likely not that "uniform" compared to our generations (when we change the gravity, it changes in the same way in every state). Varying the environment in more complex ways (e.g., only changing gravity in certain regions) may mitigate this problem, which we leave for future investigation. In fact, prior works have explored generating candidate Qs by FQE with different architectures, which we also tried in an early stage. The problem, as discussed on Line 80 left and Ft 1, is that oftentimes all candidates are poor: unlike in reality, where we can devote much time to the problem at hand and come up with good candidate architectures, here we run a lot of experiments and simply don't have the energy to tailor candidate architectures to each problem instance. This is why we switched to the current protocol. While somewhat "unrealistic", its ability to carefully control the level of misspecification is an outstanding strength. Of course, it would be nice to perform empirical evaluation using both protocols if one has enough computation.
--- **Are we assuming the "blackbox base algorithms" (which can be MC or TD etc)...produce the “correct” Q functions, like an oracle?** What do you mean by "correct"? If you mean that every base algorithm must produce the correct $Q\_{M^\star}^\pi$, then that's certainly not the case, otherwise the selection is trivial. On the other hand, we do have the assumption (Line 101 Right) $Q^\pi = Q\_{M^\star}^\pi \in \{\mathcal{Q}\}$ for theoretical derivations, which means that one of the base algorithms must produce the true Q-function. As long as that's the case, other base algorithms can produce arbitrary functions **that need not be "correct" in any sense.** Extension to the case where only a good approximation of $Q^\pi$ can be found in $\{\mathcal{Q}\}$ is routine in RL theory (Line 95 Right), and we also explore this empirically in the "misspecification" experiments (which is partly why we include it despite the lack of "trends", to answer your other question). --- **L171, $\epsilon$** $\epsilon$ refers to the RHS of the bound in Theorem 2, and solving for $n$ we have $n=O(1/\epsilon^2)$. This is a standard sample-complexity discussion and we will clarify that. --- **Reward model in MB?** We assume the true reward is known in the derivation and experiments for simplicity, but it is a trivial extension to allow candidate models to have different reward models. --- **but that doesn't necessarily suggest that smaller regression error is better, especially...dataset...may lack coverage** You are absolutely right. The role of coverage is reflected in the $C^\pi$ term in Theorem 4, which is a standard way to characterize data coverage (see [CJ19;LVY19]). If the data lacks coverage, $C^\pi$ will be large and the guarantee vacuous. Note that **this will be an issue for every method.** When data lacks coverage, OPE and its model selection will be **_fundamentally hard_**.
Therefore, pretty much the only thing we can do is to come up with algorithms where $Q^\pi$ (or $M^\star$ in the model-based setting) has $0$ (or minimal) loss, and typically you would have a guarantee (like Theorem 4) when certain coverage assumptions are satisfied. Also note that $Q^\pi$ (and $M^\star$, resp.) is not even a loss minimizer in TD-sq in Eq.2 (and mb\_naive in Eq.6, resp.), which prevents them from enjoying such guarantees. So, the motivation for designing the regression-based estimators, as well as any other estimator (this is also the case for prior work like BVFT [XJ21]), is to achieve guarantees like Theorems 1, 4, 5, which are nontrivial. --- **L326 Expensive** We have a brief big-O discussion on L382L, but "expensive" here is more of a numerical concern. The major cost in Q-caching is simulation steps and policy calls (neural-net inference). For MF.G, the cost is data size (3200) * trajectories (128) * horizon (1024) * target policies (10) * candidate models (15) * $M^\star$ choices (3) ≈ 10^11. This runs for a few days to a week on a 4090 PC. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. My main concerns for not giving a higher rating include: - The theoretical results are interesting, but presentation- and clarity-wise it's a bit lacking. I'd say it's more difficult to follow than the BVFT paper. - Method for generating candidate value functions on Hopper-v4: lack of discussion of the limitations / unrealistic-ness / proof-of-concept nature of the setup. - Lack of evidence that the proposed new experimental protocol will in general produce the same conclusions as previously established protocols --- Reply to Comment 1.1.1: Comment: Thanks for the additional comments and summary. To respond to each of your points - We appreciate that you find the theoretical results interesting.
Since your original review did not mention clarity issues explicitly, we would appreciate it if you could be more concrete about where clarity is lacking so we can improve. - As mentioned in the rebuttal, the limitations of the protocol are something we are 100% happy and inclined to discuss had we had more space. We will surely add the discussion in revision when more space is granted, so we request that this point not be viewed as a major concern for the paper. - The reviewer seems to believe that we claim "the proposed new experimental protocol will in general produce the same conclusions as previously established protocols". Perhaps the reviewer thinks we propose the new protocol to be a "cheap and stable replacement" for old protocols. **This is NOT what we claim and we do not expect the conclusions to be identical.** As we mentioned in the rebuttal (the paragraph on "How realistic..."), candidate functions generated in the old protocol often have poorer and less controllable quality than in actual practice; in contrast, the functions generated in our case have controllable quality and enable additional interesting investigations (e.g., the gap experiments), but the errors can be unrealistic. **Neither is a perfect replication of reality and they are complementary to each other.** (We wrote in the rebuttal: "Of course, it would be nice to perform empirical evaluation using both protocols if one has enough computation.") We hope this clarifies the reviewer's misunderstanding of our claims, and we will edit the paper to clarify.
Summary: The work proposes two new model selection algorithms for off-policy evaluation. These methods are inspired by "batch value function approximation" (BVFT) and its shortcomings. The authors propose to use LSTDQ as the model/Q-function selector when following the steps of BVFT and dub the resulting algorithm "LSTD-Tournament". The authors then show that the algorithm can also be extended to the model-based setting. Further, to ensure fairer comparison of selectors, the authors propose an evaluation protocol. ## update after rebuttal I have read all reviews and am keeping my score. Claims And Evidence: The results presented in section 6 do not clearly support the claims made in the paper. Firstly, the authors' experimental setting uses Hopper-v4 with modifications to the used gravity and added noise to transitions. Instead of manual modifications to a gymnasium environment, I would recommend that the authors use published benchmarks that already add such modifications. The work by [Kirk et al. (2023)](https://dl.acm.org/doi/abs/10.1613/jair.1.14174) discusses various such environments for the purpose of evaluating zero-shot generalization capabilities (something which is closely aligned with this work). Out of these, I would recommend the [CARL](https://github.com/automl/CARL) collection of environments as it provides a broad variety of modifications suitable for the authors' experiments. This would also alleviate the need for the authors to define their own ranges of "gravities" for experimentation, resulting in improved reproducibility. The experimental results seem highly doubtful to me and are the main reason I vote for rejection. Comparing the results of MF.G (g=-30, $\sigma$=100.0) and MF.N (g=-30, $\sigma$=100.0) gives vastly different results even though the basic setup should be exactly the same. The gravity values and noise values are exactly the same.
As those are the only parameters that should differ between experiments, there is no reason why the baseline mb_naïve has the lowest OPE error in the MF.G setting and the highest in the MF.N setting. These inconsistencies can be traced through all experiments, which makes me doubt the validity of the experiments. It is also claimed that this naive baseline performs poorly in high-noise environments (lines 356-358, right column), but this is simply not the case. It is the best performing for $\sigma$=100.0 in MF.G g=-30, g=-3, MB.G g=-24 and second best in MB.G g=-30. The claims with respect to "simulator gaps" and "misspecification" are based on subsets of only 3 $\mathcal{M}$, which does not give a really meaningful comparison. Further, these experiments take the noisy variant of the transition dynamics instead of the changed gravities, which would give a more meaningful comparison as the transition dynamics are truly different and not just a "fuzzy" version of the "ground truth" environment. No reason is stated for why the remaining 12 experiments with varying gravities and noise levels are not reported in either an aggregate statistic or in full in the appendix. The need for a novel evaluation protocol does not seem well substantiated, and the text does not make it clear how the proposed setup differs from commonly used protocols. Methods And Evaluation Criteria: The selection methods do make sense. However, as stated in the prior section, the need for a different evaluation protocol does not seem to be well substantiated and, as differences to existing protocols are not clarified, I do not see how this is a claimed contribution of the work. Theoretical Claims: I did not check the proofs for correctness as the experiments already qualify the paper for rejection. Experimental Designs Or Analyses: For my critique of the experimental design, refer to the "Claims and Evidence" section.
Additional experimental design choices that are unjustified and thus questionable are the choice of horizon $H$ and number of MC rollouts $l$. Supplementary Material: I have reviewed sections A, C, D & E, though not as thoroughly as the main text. I was searching for missing results from the main text. Relation To Broader Scientific Literature: Without Appendix A, the work does not provide an adequate discussion of the broader related work. Surprisingly, the main text never refers to Appendix A. Essential References Not Discussed: I am unaware of any works that would crucially need to be discussed. Other Strengths And Weaknesses: I believe the idea of using LSTDQ in the fashion of BVFT is an interesting and promising idea that could prove very useful. Other Comments Or Suggestions: Adding aggregate statistics to summarize the results across gravities and noise levels might provide a clearer picture of the strengths and weaknesses of the proposed selectors. The results should also be contrasted with true performance values of policies in the target environments, as the OPE might not be meaningful in some settings. Take for example MF.G with g=-51. Most Methods achieve a low OPE, though this might be only due to the environment being so hard to solve with such high gravities, that all policies basically behave equally bad. Thus, providing true performance values in the environment will make it easier to understand if OPE values are meaningful or not. Further, in the zero-shot generalization for online RL literature, it has become standard practice to probe the "performance" with a random policy and an expert policy on the sampled gravities/noise values. These can then be used to properly normalize the performances and thus allow for providing a clearer picture of performance.
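The random/expert normalization suggested above can be sketched as follows (a hedged illustration only; the helper name `normalize_returns` and the probe values are hypothetical, not taken from the paper or any existing benchmark):

```python
def normalize_returns(returns, random_score, expert_score):
    # Min-max normalize raw returns against probe scores, so that
    # 0.0 corresponds to the random policy and 1.0 to the expert policy
    # on the sampled gravity/noise configuration.
    span = expert_score - random_score
    return [(r - random_score) / span for r in returns]

# Toy probe scores for one sampled environment variant (assumed values).
print(normalize_returns([10.0, 55.0, 100.0], random_score=10.0, expert_score=100.0))
# → [0.0, 0.5, 1.0]
```

With such normalized scores, OPE errors become comparable across gravity/noise settings of very different raw return scales.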
An example of such a protocol can be viewed in https://openreview.net/forum?id=o8DrRuBsQb Questions For Authors: Why do performances of the algorithms vary so drastically when the same gravity and noise values are applied? (For details see my comments in the "Claims and Evidence" section) Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. The main criticisms arise from technical misunderstandings, which we clarify first. --- ## Major Technical Misunderstandings **Comparing ...MF.G (g=-30, $\sigma$=100.0) and MF.N (g=-30, $\sigma$=100.0) gives vastly different results even though the basic setup should be exactly the same.** As mentioned clearly in Section 5.1, an experiment is determined by the ground truth $M^\star$ and the candidate model set $\mathcal{M}$, among other elements. The two experiments mentioned by the reviewer coincide in $M^\star$=(g=-30, $\sigma$=100.0), but **differ crucially in $\mathcal{M}$**: MF.G uses the "gravity grid" ($\mathcal{M}\_g$), and MF.N uses the "noise grid" ($\mathcal{M}\_n$); see Section 6.1. Algorithms generally behave differently when they are given different $\mathcal{M}$. **the remaining 12 experiments...are not reported.** If the reviewer thinks there **must be** 15 experiments because $|\mathcal{M}\_g|=15$, then that's incorrect. In Figure 2 top (MF.G), all 3 plots correspond to the same $\mathcal{M}\_g$ but 3 different $M^\star$. --- &nbsp; **[Varying noise levels creates] just a "fuzzy" version of the "ground truth" environment.** We respectfully disagree. In RL, dynamics is formally defined by the transition function $P(s'|s,a)$. **Models with the same gravity but different noise levels have different $P(s'|s,a)$,** so they are technically different. While describing them as "fuzzy versions" of the same MDP is neither rigorous nor helpful, the reviewer is perhaps concerned that they may be roughly the same, which would make the experiments in MF.N trivial. **This is simply not the case in Figure 2.** Had all the models in MF.N ($\mathcal{M}\_n$) been roughly the same, all methods would have nearly 0 OPE error, and it would be impossible to beat randomly selecting a model ("random") since all models produce the same prediction.
Moreover, theoretically mb\_naïve is known to be biased towards more deterministic models (Line 222 Left). (This is also why mb\_naïve is **deceptively good** in MF.N when $\sigma=10$.) Having candidate models of varying noise levels poses significant challenges to such simple methods. This also explains your confusion: **why the baseline mb\_naïve has the lowest OPE error in the MF.G setting and the highest in the MF.N setting** This is precisely because models in MF.G have similar levels of stochasticity (so the bias of mb\_naïve is not fully exposed), but those in MF.N have different levels of stochasticity. **It is also claimed that this naive baseline performs poorly in high-noise environments (Lines 356-358 Right) but this is simply not the case. It is the best performing for $\sigma$=100.0 in MF.G g=-30, g=-3, MB.G g=-24 and second best in MB.G g=-30.** We agree that the text here is too brief and perhaps misleading. We are mostly referring to the MF.N experiments when $\sigma$ is high. --- &nbsp; **Take for example MF.G with g=-51. Most Methods achieve a low OPE, though this might be only due to the environment being so hard to solve with such high gravities, that all policies basically behave equally bad.** We are afraid that the reviewer might be carrying concepts and mindsets for policy optimization (which is standard in empirical RL) into the policy evaluation problem. **The performance of target policies generally does not determine the difficulty of policy evaluation.** Even if a target policy has poor performance, the OPE algorithm still needs to correctly predict its (low) value so that the user does not deploy it in the real system. **There is no reason to believe that this will be generally easier than evaluating a good policy**; for example, if the offline data comes from a good policy, it can be more difficult to evaluate a poor policy than a good one due to lack of coverage.
--- &nbsp; **it has become standard practice to probe the "performance" with a random policy** Again, this is a practice for policy optimization, and we are doing evaluation. In fact, we did something similar in spirit: all plots show the "random" baseline that randomly picks one of the candidate models to predict the performance of the target policies. This helps rule out the degeneracy that OPE error is low simply because all candidate models predict the correct value. --- ## Misc **Providing true performance values in the environment will make it easier...** We did show this in Figure 1 (left). The plot shows the performance of 10 target policies across models in MF.G of different gravity values (x-label is a typo; should have been "gravity"). --- **subsets of only 3 $\mathcal{M}$** First, it's not "3 $\mathcal{M}$", but $|\mathcal{M}|=3$. This is due to our limited computational budget. On the other hand, this still results in a nontrivial model selection problem, as all methods still suffer nontrivial OPE errors in Figure 4.
Summary: The paper tackles the setting of off-policy evaluation (OPE). It analyzes different methods for OPE, model-free as well as model-based. The paper introduces the general setting with a short overview of related work and lists its contributions. The paper presents a short overview of preliminary theory. The paper introduces a new model-free selector, follows with a section on model-based selectors, and states a model-based experiment protocol with an exemplification of it. ## update after rebuttal Score increased. See rebuttal comment. Claims And Evidence: The paper claims to develop new model-free and model-based selectors with theoretical guarantees. On the model-free side the LSTD tournament is introduced, which merges the ideas of LSTDQ and BVFT. The connection and interpretation of Theorem 2 relative to its surrounding descriptions are not clear. The sign-flip average Bellman error is introduced, but it is hard to follow why. The paper claims to develop a new experimental protocol for experimental evaluation. The new experimental protocol is introduced, but it is not clear how one would actually apply it in practice. The exemplification of the protocol is hard to follow. Methods And Evaluation Criteria: The paper evaluates its new contribution with an exemplification on the Hopper environment and some adjusted variants of Hopper. In general such a demonstration seems fitting to the presented claims. It is not comprehensible what the results demonstrate. The evaluation criteria are hard to follow. Theoretical Claims: There are four theorems presented in the paper. I did roughly check the proofs in the appendix. Besides several formatting problems, e.g., 821 lem:model..., it looks fine. Experimental Designs Or Analyses: The experiments are done on Hopper and some variants of it. The experiments seem to be designed appropriately, but it is hard to follow what is done and why.
Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The paper is heavily related to [XJ21] for BVFT and [MPW23, PKBK23] for LSTDQ; it relates to several model-based OPE publications. Essential References Not Discussed: I am missing an essential reference and discussion about the "current way to do OPE". Other Strengths And Weaknesses: Overall, the paper presents relevant and interesting bits of information, but in its current state comprehensibility is not given. The paper is thus in need of a major revision. The central thread of the paper is extremely hard to follow. There are several reasons for this. The paper introduces multiple independent claims that are not closely woven together. The motivation for putting those claims together in a single paper instead of multiple different papers is not clear. There is no conclusion section; the main results are in a subsection which does not end the paper and is hard to comprehend. The paper would benefit a lot from the following elements: Emphasizing the motivation why the presented claims make sense to be published together. Introducing a central thread that is easy to follow and helps with connecting the subtopics. Adding a conclusion that helps to wrap up the paper and clarifies what to take home from the paper from the authors' perspective. Refactoring most subsection and paragraph titles to be more intuitive. Referencing them along a central thread would be helpful as well. Furthermore, the following points are noteworthy: The current citation style does not fit the ICML citation style in my understanding. There are a lot of formatting problems and typos in the paper. The labels for Figures 2 and 3 could be improved. The figure formatting could be streamlined. Other Comments Or Suggestions: It seems unusual to not name the environment in the abstract. It is well known and there is only one environment used in the paper.
There are inconsistencies in the paper, e.g., about the contributions: two-fold (025) vs. 4-fold (055). The abbreviation "w.p." is unusual, thus confusing. Questions For Authors: It seems that one of the main selling points of the paper should be that the reader should learn about a well-working OPE framework. This could enable the reader to tackle their OPE task at hand. The reader could feel confident to do this by having clear instructions at hand and a demonstration of those instructions. - First, do you agree with this point? - Where would I as a reader find this and how can I follow along the demonstration? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. --- &nbsp; **The paper presents relevant and interesting bits of information.** **In general such a demonstration seems fitting to the presented claims.** We are glad that the reviewer finds new insights in our paper and that they are supported with evidence. --- &nbsp; **It analyzes different methods for OPE, model-free as well as model-based.** **... the reader should learn about a well-working OPE framework** The reviewer misdescribes our work as analyzing different OPE methods. Quite differently, we study model selection over OPE, i.e., selecting between different "base" OPE methods that either produce Q-functions or dynamics models (Line 80, right column). &nbsp; **I am missing an essential reference and discussion about the "current way to do OPE".** As mentioned above, we are working on the model selection problem, which is "one level up" from (albeit closely related to) the OPE problem. Mathematically, we treat base OPE problems as "oracles" that produce either Q-functions or dynamics models (Line 80, right column), which covers a wide range of practical OPE methods considered in the literature (see Appendix A for relevant references about OPE and model selection in OPE). This way, we do not need to worry much about how OPE is done, but focus on selecting from the output of OPE algorithms. --- &nbsp; **Do you agree with this point [that "the main selling points... should be... a well-working OPE framework"]?** Again, we do not contribute to OPE itself but study its model selection problem. It is unclear to us whether this confusion has affected the reviewer's understanding of the work, or whether the reviewer actually understood the difference (between OPE and model selection over OPE) and was simply not careful when summarizing and describing our work, since the review does not provide further technical comments.
&nbsp; **This could enable the reader to tackle their OPE task at hand.** It seems that the reviewer feels the paper is like a "practical guide/manual" for OPE. (An example of this would be [VJY21] in our citation.) It is not. Given how under-investigated this problem is, we are far from the "practical guide" phase of research; the focus of our work is novel theoretical derivations and comprehensive empirical evaluation of the methods. **The "protocol" is not about practical usage**, but about how we empirically test and understand these algorithms in simulation environments. --- &nbsp; **the paper introduces multiple independent claims that are not closely woven together.** We hear you, and we were very aware that the paper is unconventional in this aspect when we submitted the work. The paper is indeed filled with little bits of new insights here and there. That said, we do think the paper has a clear theme: progress in multiple dimensions on the model selection problem for OPE, which is heavily under-investigated (see Appendix A for relevant works). In particular, we design novel selection algorithms in Sections 3 and 4. Then, naturally one would want to evaluate the algorithms empirically, but existing frameworks/protocols for doing so have many problems, as briefly explained on Line 78, Left Column. So we also propose new protocols that allow more comprehensive empirical understanding of the selection algorithms. We feel these components are naturally connected and valuable to anyone interested in the model selection problem. We indeed considered the reviewer's implicit suggestion that it might be better to "slice" the work into "multiple papers" (*"The motivation for putting those claims together in a single paper instead of multiple different papers is not clear."*) After careful consideration, we still believe the work is best presented in its current form.
For example, one could consider taking out the new selection algorithms and their analyses and publishing a purely theoretical paper; however, while the new algorithms take novel insights to come up with, once they come to mind, their theoretical properties naturally follow from existing analyses, so the theoretical contributions alone are likely too thin for a typical conference paper. --- &nbsp; **"is unclear" / "hard to follow"** The review ends several comments with phrases like "unclear" and "hard to follow". Without further concrete comments, it is very difficult for us to understand what is unclear. Perhaps OPE and its model selection do not fit your background and/or it's not a problem that interests you; otherwise, we would like to hear more concrete comments that can help us improve the paper. --- &nbsp; **Conclusion** We will add one. The omission was simply due to the space limit. **w.p.** This means "with probability", which we spell out in Theorem 1. This is a very standard and widely used abbreviation in ML theory. --- Rebuttal Comment 1.1: Comment: Thank you for clarifying several aspects of your research. I appreciate the time and effort you've taken to address my concerns. Here are some further thoughts based on your responses: My main problem with the paper lies in the overall structure. Those "little bits of new insights here and there" are very difficult to combine into a comprehensible paper. The idea to publish a paper on "progress in multiple dimensions on the model selection problem for OPE" is a very difficult task to tackle. I generally respect the authors' motivation to do so, but I really think in this case it hurts the clarity and comprehensibility too much. To slice the work into multiple papers is indeed what I would recommend. If parts of the new slices get too thin for a typical conference paper, it might be worthwhile to extend some directions of the work first. Reading the other reviews strengthens this belief.
I think the omission of a conclusion due to a space limit - even if planned for the final version - is bad practice as it is easily the most frequently read part of the paper. While two other reviews have also been critical, one review is more favorable, highlighting the value of your contributions. I am considering adjusting my score from 1 to 2. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for sharing further thoughts and considering raising the score on our paper. While we do think the paper deserves to be published in its current form, we understand your concerns given the somewhat unnatural structure and organization of the paper.
Minimax Optimal Regret Bound for Reinforcement Learning with Trajectory Feedback
Accept (poster)
Summary: This paper studies the setting of tabular MDPs with trajectory feedback, i.e., every round after executing a policy the learner obtains the entire trajectory with the total reward along the trajectory. The per-step reward is not given to the learner. This paper provides an algorithm for this setting and also proves that the algorithm achieves an $\tilde{O}(\sqrt{SAH^3K})$ regret upper bound, which also matches the lower bound, asymptotically. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, I checked all proofs. They all look correct to me. Experimental Designs Or Analyses: No experiment is provided in this paper. Supplementary Material: I went through all the supplementary materials, including the proofs. Relation To Broader Scientific Literature: The closest literature to this paper is (Efroni et al., 2021), which proposes the setting of learning MDPs with trajectory rewards. In the previous paper, they show a regret upper bound of $O(\sqrt{S^2A^2H^4K})$, which is suboptimal. This paper improves this rate to $O(\sqrt{SAH^3K})$. Essential References Not Discussed: No. As far as I know, this paper includes all related references. Other Strengths And Weaknesses: Strengths: 1. This paper is well-written, except for a few typos I pointed out below. 2. The paper provides a tight characterization of the regret. Weaknesses: 1. This paper studies the setting of tabular MDPs, which is quite restrictive. Additionally, this paper merely tightens the bounds provided in Efroni et al. (2021), which is incremental. 2. I am concerned about the novelty of this paper. The results are good results, and they match the lower bound, which is also good, but by looking at the proving techniques, it feels that everything is well developed in previous literature and nothing feels special to me. 3. The upper bound of the algorithm requires a large "burning in" phase, which, I believe, still has room for improvement. Other Comments Or Suggestions: Typos: (1) in Eq.
(1), $(Y^t - \phi_{\tau^t}^T r)$ to $(Y^t - \phi_{\tau^t}^T r)^2$ (2) On the right side of Line 175, $d^\pi_P$ to $d^\pi_P(\tau)$ (3) In Line 722 (Line 6 of Algorithm Raw-Exploration), missing symbol "\leftarrow" Suggestions: 1. The authors mentioned that preference-based learning can be a possible application of RL with trajectory feedback. It would be better to write down explicitly what the results in this paper imply for preference-based learning. Questions For Authors: I have the following questions: 1. Regarding the "burning in" term in the regret, i.e., the term that does not scale with K, what is the best you can obtain? 2. Is it possible to generalize the results to linear MDPs, or MDPs with function approximation, since the tabular MDP is the most restrictive? 3. The algorithm designed in this paper first learns the transition model $P$, then learns the reward model $r$. Is it possible to learn these two simultaneously? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for the reviewer's detailed comments and constructive suggestions. Below we present our response. **Regarding significance:** We would like to stress that even for RL in the standard setting, minimax optimal regret bounds have not been obtained until recent years, and the burn-in terms were not fully optimized until very recently. Therefore, as the first work that achieves a minimax optimal regret bound for RL with trajectory feedback, we believe our work is non-trivial in terms of technique. **Regarding technical novelty:** While the algorithmic framework depends on previous work, the key technical observation is that the number of possible trajectories $O((SA)^H)$ is smaller than that of possible policies $O(A^{SH})$. This is why we can remove the extra $\sqrt{S}$ factor compared to the previous work (Efroni et al., 2021). **Regarding the large burn-in terms:** Indeed our current regret bound has large burn-in terms, and we would leave improving those terms as a future direction. The current term is $S^{24}A^4H^{32}$. The main reason for such a large burn-in term is the need to learn the reference model. This term can be significantly reduced under the assumption of an effective reference model. **Regarding other comments:** **About the typos:** Thanks for pointing out the typos. We will fix the typos accordingly in the next revision. In the second typo, we use $d_{P}^{\pi}$ to denote the feature vector with respect to $\pi$, which does not depend on the trajectory $\tau$. **About the preference-based learning:** Preference-based RL (PbRL) is another RL paradigm to deal with the lack of reward signals. We will discuss the similarities and differences between the two settings in the next revision. **Regarding the questions:** **About the exact burn-in term:** Following the current analysis, the final regret bound is $\tilde{O}(\sqrt{SAH^3K}+S^{24}A^4H^{32})$.
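To make the counting observation above concrete (that there are far fewer trajectories than deterministic policies), a quick numeric check with small illustrative values of $S$, $A$, $H$, which are assumed here and not taken from the paper:

```python
S, A, H = 5, 2, 3  # illustrative tabular sizes (assumed)

# A trajectory is a length-H sequence of (state, action) pairs.
num_trajectories = (S * A) ** H
# A deterministic policy picks one of A actions at each of the S*H (state, step) pairs.
num_policies = A ** (S * H)

print(num_trajectories, num_policies)  # → 1000 32768
```

Even at these tiny sizes the policy count dominates, which is why a union bound over trajectories instead of policies saves a $\sqrt{S}$ factor.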
**About the extensions to other settings, e.g., linear MDP and MDP with function approximation:** At the moment, we are not aware of any possible extensions to linear MDP or MDP with function approximation. We believe these problem settings should enjoy some advantages of low-rank structure so that we can construct a feature space with lower dimensions. For example, in linear MDP, we can take $(\phi_{s_1,a_1}, \phi_{s_2,a_2},..., \phi_{s_H,a_H})$ as a $dH$-dimensional unified feature to learn the reward kernel $(\theta_{1}, \theta_2,..., \theta_H)$. However, the exploration and policy design over this feature space is more complicated compared to the counterparts in the tabular case. It requires more effort to solve the experimental design problem over these complicated structures. **About learning the reward and transition kernel simultaneously:** Thanks for the interesting question. Although we cannot establish this definitively, we conjecture the answer is negative in the worst case. The problem could be reformulated as finding a mixed policy $\bar{\pi}$ to meet the two following conditions simultaneously (we set $\phi(\tau)$ to be $\phi_{\tau}$ for display): $$\min_{\bar{\pi}\in \Delta^{\Pi}}\max_{\pi\in \Pi}\sum_{\tau}Pr_{\pi,P}[\tau]\phi^{\top}(\tau)\Lambda(\bar{\pi})^{-1}\phi(\tau)=O(SAH)$$ $$\min_{\bar{\pi}\in \Delta^{\Pi}}\max_{\pi\in \Pi}\sum_{s,a,h} \frac{d_{P}^{\pi}(s,a,h)}{d_{P}^{\bar{\pi}}(s,a,h)}=O(SAH).$$ For the first problem, the target is to maximize $\log(\det(\Lambda(\bar{\pi})))$ where we recall that $\Lambda(\bar{\pi}) = \sum_{\tau}Pr_{\bar{\pi},P}[\tau]\phi(\tau)\phi^{\top}(\tau)$. For the second problem, the target is to maximize $\log(\prod_{s,a,h}d_{P}^{\bar{\pi}}(s,a,h))=\log(\det(diag(\Lambda(\bar{\pi}))))$, where $diag(\Lambda)$ denotes the matrix obtained by setting all non-diagonal elements of $\Lambda$ to 0. As a result, the two optimization problems are substantially different when $h\geq 2$.
So we conjecture that the answer to your question is probably negative in the worst case.
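As a toy numeric illustration of the point above, that maximizing $\log(\det(\Lambda))$ and maximizing $\log(\det(diag(\Lambda)))$ are genuinely different objectives, consider the following sketch (the 2x2 matrices are assumed for illustration only, not taken from the paper):

```python
def det2(m):
    # Determinant of a 2x2 matrix given as nested lists.
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Two toy design matrices with identical diagonals but different off-diagonals.
corr = [[2.0, 1.0], [1.0, 2.0]]
diag = [[2.0, 0.0], [0.0, 2.0]]

# det(diag(.)) treats both matrices identically (2 * 2 = 4),
# while det(.) distinguishes them (3 vs 4), so the two objectives
# can prefer different mixed policies.
print(det2(corr), det2(diag))  # → 3.0 4.0
```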
Summary: This paper investigates reinforcement learning with trajectory feedback, where the agent receives only the cumulative reward for an entire trajectory rather than individual state-action rewards, while still observing all visited state-action pairs. The authors establish the first asymptotically nearly optimal regret bound of $O(\sqrt{S A H^3 K})$ for this setting, matching the asymptotically optimal regret bound in standard RL. To achieve this, they construct a tighter confidence region for rewards by leveraging the structure of the linear bandit instance associated with RL with trajectory feedback. Claims And Evidence: All theoretical claims are followed by proofs in the appendix or the main paper. Methods And Evaluation Criteria: As a theoretical paper, the proposed algorithm seems well suited for the problem. Theoretical Claims: I've reviewed some of the proofs, and while most appear correct, I would appreciate further clarification on the proof of Lemma B.1. This lemma seems to play a key role in improving the dependence of the regret bound on $S$. However, the definition of $\lambda(\bar{\pi}, \pi)$ is unclear. How do the authors define the probability that $\bar{\pi}$ "distributes" over $\pi$? Additionally, the interpretation of the partial derivative of $F$ with respect to this quantity is not well explained. These concepts may be standard in this type of analysis, but it would be helpful to reintroduce them rigorously, especially for readers in RL who may not be familiar with this. Experimental Designs Or Analyses: not applicable. Supplementary Material: I checked some of the proofs, in particular for Lemma B.1; see the comment above on theoretical claims. Relation To Broader Scientific Literature: Efroni et al. (2021) established a regret bound of $\tilde{O}(\sqrt{S^2 A^2 H^4 K})$ for RL with trajectory feedback, which holds for all $K > 0$, not just asymptotically.
The authors suggest that refining the analysis could improve this bound to $\tilde{O}(\sqrt{S^2 A H^3 K})$, but reducing the dependence on the number of states requires a fundamentally new approach. This paper presents the first algorithm that asymptotically achieves the lower bound of $\tilde{O}(\sqrt{S A H^3 K})$ for RL with trajectory feedback, matching the optimal rate in standard RL, though the bound holds only in the asymptotic regime (for large $K$, not all $K > 0$). Like Efroni et al. (2021), the algorithm leverages the connection between RL with trajectory feedback and linear bandits but introduces a novel confidence region around rewards. Additionally, it employs a policy elimination method to learn the transition kernel, similar to the approach of Zhang et al. (2022b). Essential References Not Discussed: To the best of my knowledge, the related work is clearly presented. Other Strengths And Weaknesses: **Strengths**: - The paper is generally well-written. - The technique of building the confidence region around the reward estimation by considering trajectories, rather than defining it for each deterministic policy and applying a union bound, enables an improved dependence on the number of states in the regret bound and appears to be a novel approach. - This work is the first to achieve an asymptotically near-optimal regret bound for this problem. **Weaknesses**: - Some parts of the paper require further clarification, such as the proof of Lemma B.1 and the precise application of the classical Kiefer-Wolfowitz theorem in this setting (as claimed on page 5, line 224). - The proposed algorithm is not computationally efficient, whereas some existing approaches, such as UCBVI-TS from Efroni et al. (2021), are computationally efficient but yield worse regret bounds ($O(H^7 S^4 A^3 K)$). - The regret bound established in this work holds asymptotically in $K$ and does not apply for all $K > 0$, unlike the bound in Efroni et al. (2021).
Other Comments Or Suggestions: - The phrase on page 6, lines 282-284, may be lacking some connectors, which makes the formulation unclear: e.g., "to the L1 norm" - Page 4 line 208: $Y_t$ should be $Y^t$? Questions For Authors: Why did the authors choose to use an elimination-based online batch learning process rather than using the new confidence region directly with a linear bandit algorithm? I may be overlooking a key step and would appreciate further clarification. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for the reviewer's detailed reviews and valuable suggestions. Below we present our response. **Regarding Lemma B.1:** In Lemma B.1, $\bar{\pi}$ represents a mixed policy. That is, by following $\bar{\pi}$, the learner takes each deterministic policy $\pi\in \Pi$ with probability $\lambda(\bar{\pi},\pi)$. As a result, $F(\bar{\pi})$ could be regarded as a multi-variable function with respect to the probability vector $[\lambda(\bar{\pi},\pi)]_{\pi\in \Pi}$. We will revise the proof accordingly in the next version. **Regarding clarification of KW theorem:** In the proof, we do not directly use the KW theorem. Instead, we generalize the KW theorem to a distributed version in Lemma B.1. In the next revision, we are planning to present the original version of the KW theorem together with our generalized version to make the difference clear. **Regarding computational inefficiency:** We admit that the current algorithm is computationally inefficient, and indeed, devising an algorithm with a minimax optimal regret bound and polynomial running time for RL with trajectory feedback is an intriguing open problem. However, we notice that obtaining statistically (nearly) optimal algorithms is usually the first step for completely settling problems in machine learning theory, and we believe our techniques would be useful for later developments in the theory of RL with trajectory feedback. **Regarding large burn-in terms:** Indeed our current regret bound has large burn-in terms, and we would leave improving those terms as a future direction. On the other hand, we would like to stress that even for RL in the standard setting, minimax optimal regret bounds have not been obtained until recent years, and the burn-in terms were not fully optimized until very recently. Therefore, as the first work that achieves a minimax optimal regret bound for RL with trajectory feedback, we believe it is reasonable to have large burn-in terms in our bound.
Moreover, we believe our new techniques are crucial for fully understanding the regret bound of RL with trajectory feedback. **Regarding other comments and suggestions:** **About lines 282-284:** We will rewrite this sentence as *"Instead of approximating $P$ by $p$ under the $L_1$-norm, it is required that the trajectory distribution under $P$ be covered by that under $p$, up to a constant ratio."* **About line 208:** Thanks for pointing out the typo. We will fix it accordingly. **Regarding the elimination-based online batch learning:** In this work, the fundamental problem is still reinforcement learning. More precisely, even assuming that the reward is known, we are required to learn the transition kernel. This step cannot be trivially implemented using a linear bandit algorithm. As a result, we choose to use a linear bandit algorithm to learn the reward function and an RL algorithm to learn the transition kernel. Two further reasons to apply the elimination-based batch learning are: (1) Designing an algorithm that can simultaneously learn both the reward function and transition kernel presents significant challenges. As a result, we have to learn the reward and transition kernel separately; (2) By batch learning, we can efficiently reduce the statistical dependencies between different batches, which makes the analysis less complicated. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my concerns and for committing to clarify some parts of the paper (such as Lemma B.1 and the use of KW theorem). I will keep my positive score.
Summary: This work considers the problem of online learning in a tabular finite-horizon MDP with stochastic rewards and aggregate bandit feedback, where the agent observes only the sum of rewards she collected after each episode. An algorithm based on policy elimination is proposed, which builds on the linear bandits perspective of the problem. This approach, while not computationally efficient, achieves nearly minimax optimal regret in all problem parameters (assuming the number of episodes $K$ is sufficiently large). One of the main observations used to design the algorithm is that the number of possible trajectories in the MDP ($(SA)^H$) is much smaller than the number of "arms", i.e., deterministic policies ($A^{SH}$). Thus, building on confidence regions based on trajectories rather than policies leads to dependence on $\sqrt S$ rather than the $S$ that would result from a vanilla linear bandits algorithm, even in the known-dynamics case. ## update after rebuttal I chose to increase my rating to accept. After further consideration, and taking into account that none of my concerns were major, I think there is more than enough in this work to merit acceptance. Claims And Evidence: Yes Methods And Evaluation Criteria: N/A Theoretical Claims: I went over proofs of Lemmas 5.2 and 5.3, and the proof overview of Theorem 5.6. I didn't find any significant issues. Experimental Designs Or Analyses: N/A Supplementary Material: The parts relevant to the proofs I checked. Relation To Broader Scientific Literature: The problem was initially proposed by Efroni et al. (2021). Later, Cohen et al. (2021) studied the adversarial setting, and Cassel et al. (2024) studied the stochastic linear MDP setting. All the previous approaches provide $\widetilde O(\sqrt K)$ regret in the tabular stochastic setting, but with suboptimal dependence on the rest of the problem parameters $S,A,H$. However, they are computationally efficient. 
The approach proposed in this work gives optimal dependence on all parameters (assuming $K$ large enough), but is not computationally efficient. Essential References Not Discussed: None that I know of. Other Strengths And Weaknesses: **Strengths** * The approach yields a (near) minimax optimal regret bound for the problem. * The technical overview and presentation of the algorithm are well written and relatively clear. **Weaknesses** * The algorithm is not computationally efficient, and the burn-in period (requirement on $K$ for minimax optimality) is quite demanding. Ultimately, establishing the optimal dependence on problem parameters is secondary in importance, even more so for a non-computationally efficient approach. Further, this is essentially the first paper that proposes a non-computationally efficient approach to the problem. That said, it is clear the problem is far from trivial and it seems this work offers some valuable insights. The algorithmic approach is pretty elegant and intuitive, at least at a high level. Other Comments Or Suggestions: Algorithms * Algorithm 5 line 3 computes $c(s,a,h)$ (the occupancy of $\bar\pi$ wrt $p$?) but it does not seem to be used anywhere. * Algorithm 2 does not pass a $\delta$ (lines 3,6) to Raw-Exploration (Algorithm 6) * I suggest you add a "Conditioned on parameter settings in Section A,..." to the relevant Lemma statements. - In the proof overview of Theorem 5.6, should $\tilde y=2\epsilon_0$ be $\tilde y=2\epsilon_\ell$? Lemma 5.2 - In the proof, $\tilde P \to \hat P_2$ (or change the Lemma statement to be about $\tilde P$) - Eq. 28 is not synced with Lemma 5.3 (e.g., $\log^2(T/\delta) \nleq \log (T) \log(1/\delta)$) - Also in Eq. 28, the last inequality seems to follow from a condition on $K$ that is not mentioned in the statement of the Lemma. 
Lemma 5.3 - “By Lemma D.2 … it holds that $R \in \mathcal R$” is this the $\mathcal R$ defined in line 13 of Algorithm 4, when invoked with $(p, T, \Pi)$ of the current Lemma? Better to be explicit about this to help the reader. Also, $\mathcal R$ denotes your reward distribution, which is unrelated. - In Eq. 30, same comment as for Lemma 5.2, $\log^2(Z/\delta) \nleq \log (Z) \log(1/\delta)$ for general $Z$. - There seems to be a sum over $\tau$ missing in Eqs. 31 and 32. Misc * "To circumvent issues mentioned above, practitioners often rely on heuristics (e.g., reward shaping (Ng et al., 1999) or reward hacking (Amodei et al., 2016))." - Amodei et al. propose methods to **avoid** reward hacking. Reward hacking refers to the scenario where the agent exploits an inaccurate reward signal. * "The optimal Q-function and V-function are given by ..." Why use $\sup$ and not $\max$ for $Q^*$? * "we write the inner product $x^\top y$ as $xy$ for simplicity" - I would suggest revising this decision; it creates room for confusion with scalar multiplication. * "the regret stemming from learning $\tilde P$ can be bounded by..." At this point it is not clear what the reference transition kernel is, so the sentence does not convey much information. Questions For Authors: I don't have any important questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thorough evaluation and constructive feedback. Below we present our response. **Regarding the computational issue:** We admit that the current algorithm is computationally inefficient, and indeed, devising an algorithm with a minimax optimal regret bound and polynomial running time for RL with trajectory feedback is an intriguing open problem. However, we note that obtaining statistically (nearly) optimal algorithms is usually the first step toward completely settling problems in machine learning theory, and we believe our techniques will be useful for later developments in the theory of RL with trajectory feedback. **Regarding the burn-in terms:** Indeed, our current regret bound has large burn-in terms, and we leave improving those terms as a future direction. On the other hand, we would like to stress that even for RL in the standard setting, minimax optimal regret bounds were not obtained until the last few years, and the burn-in terms were not fully optimized until very recently. Therefore, as the first work that achieves a minimax optimal regret bound for RL with trajectory feedback, we believe it is reasonable to have large burn-in terms in our bound. Moreover, we believe our new techniques are crucial for fully understanding the regret bound of RL with trajectory feedback. **Regarding other comments and suggestions:** **About $c(s,a,h)$:** We are sorry for this mistake. $c(s,a,h)$ is only used in the analysis for Algorithm 5. We will move the definition of $c(s,a,h)$ to Appendix D.4 in the next revision. **About $\delta$ in Algorithm 2:** We will use $\delta$ as a common parameter across all algorithms and delete $\delta$ from the input of Algorithm 6. **About parameters in Appendix A:** Thanks for the suggestion. We will refine the lemma statements to improve clarity. **About the proof overview of Theorem 5.6:** Thank you for pointing out the typo. It should be $\tilde{y}=2\epsilon_{\ell}$. 
**About $\tilde{P}$ in Lemma 5.2:** Thank you for pointing out the typo. We will replace $\tilde{P}\to \hat{P}_2$ accordingly. **About Eq.(28):** Here we use the fact that $\log(A)\log(B)=O(\log^2(AB))$ for $A,B\geq 1$. We have revised the inequality and reset the value of $\epsilon_0$ as $\epsilon_0=90000\log^3(\frac{SAHK}{\delta})\left( \frac{SAH^2}{K^{\frac{1}{4}}}+\frac{S^4AH^6}{K^{\frac{1}{2}}} \right)$. There should be an extra $H$ factor on the right-hand side of Eq.(28) to make the inequality hold. The revised version of Eq.(28) is as follows. $$\max_{\pi\in \Pi_{\mathrm{det}}}| W^{\pi}(\hat{R},P) - W^{\pi}(R,P)|\leq H\sqrt{\log(SAH)\log(16/\delta)}\left(b_1+ 325\sqrt{\frac{SAH\log(K)\log(8SAH/\delta)}{\bar{K}_1}} \right)\leq 1000\log^2\left(\frac{SAHK}{\delta}\right)\cdot \left(\frac{SAH^2}{K^{\frac{1}{4}}} + 4SAH^2b_1 \right).$$ We remark that this $H$ factor is in the second-order term, so the final regret bound is still asymptotically optimal. **About $\mathcal{R}$:** We are sorry for the abuse of notation. You are correct that $\mathcal{R}$ is the confidence region defined in line 13 of Algorithm 4. The proof of Lemma D.2 is under the proof of Lemma 5.2, which studies the properties of Algorithm 4. We will make this clear in the next revision. We will also replace $\mathcal{R}$ with $\mathsf{R}$ when introducing the reward distribution. **About Eq.(30), Eq.(31) and Eq.(32):** Thanks for pointing out the typos. We will fix them accordingly. **About the term "reward hacking":** We will remove the term "reward hacking" from this paragraph. **About $\sup$:** We will replace $\sup$ with $\max$. **About $xy$ and $x^{\top}y$:** We will fix the notation for $\sup$ and $x^{\top}y$. We only use $xy$ as a shorthand for $x^{\top}y$ in Appendix D.4 to reduce the complexity of the notation. 
**About "the regret stemming from learning $\tilde{P}$ can be bounded by ...":** We will mention that *"$\tilde{P}$ serves as an efficient tool to help design the exploration policy"* before this sentence.
Summary: This work studies Reinforcement Learning with only Trajectory Feedback, where the agent does not observe the reward for each individual step separately. Under this setting, the author proposes a novel algorithm based on the arm-elimination method over all possible deterministic policies and achieves a near-optimal regret bound, which matches the lower bound for the setting with single-step rewards up to logarithmic factors. Claims And Evidence: The author provides a clear claim of the result in the theorems and includes a proof sketch to outline the key steps of the theoretical analysis. Methods And Evaluation Criteria: The main contribution of this work focuses on the theoretical analysis of regret and does not include experiments. Theoretical Claims: The author provides a clear proof sketch for the case where the transition probability is already known. In this setting, the author utilizes the observation that the number of trajectories is significantly smaller than the number of deterministic policies and represents the reward of each policy as a linear combination of the rewards for each trajectory. By leveraging this method, the author reduces the problem to a linear bandit problem, achieving better performance with a lower-dimensional representation. However, the correctness of the analysis when the transition probability is unknown is not entirely clear, and I have several concerns: 1. As the author mentions in the regret guarantee, the upper bound matches the lower bound for a more powerful learner with single-step rewards, suggesting that the main challenge in learning an MDP comes from estimating the transition dynamics. Given this, it seems unreasonable that the algorithm in Ref-Model only dedicates a small fraction of rounds to estimating the transition matrix and then primarily focuses on reward estimation using the estimated transition model for the majority of rounds. 
More explanation is needed to justify why the key challenge can be effectively addressed in the first $\sqrt{K}$ rounds. 2. In Lemma 5.2, the Ref-Model is claimed to achieve an approximate transition probability function with a small error $\epsilon_0$. According to the regret analysis in this first stage, $\epsilon_0$ is approximately $1/\sqrt{K}$. However, in general cases, it typically requires $O(1/\epsilon^2)$ rounds to obtain an $\epsilon$-optimal estimator. Thus, it seems unreasonable that the algorithm can achieve a $1/\sqrt{K}$-optimal approximation with only $\sqrt{K}$ rounds. Further clarification is needed on how this estimation is justified. Experimental Designs Or Analyses: The main contribution of this work focuses on the theoretical analysis of regret and does not include experiments. Supplementary Material: No, due to time limitations, I only reviewed the main paper and did not check the supplementary material. Relation To Broader Scientific Literature: This work mainly focuses on reinforcement learning with trajectory feedback; however, the proposed algorithm is highly computationally inefficient, making it primarily relevant for the theoretical analysis of reinforcement learning rather than practical applications. Essential References Not Discussed: This paper provides a comprehensive discussion of related work in linear bandits and trajectory-based reinforcement learning methods. Other Strengths And Weaknesses: 1. The proposed algorithm is highly computationally inefficient, making it primarily relevant for the theoretical analysis of reinforcement learning rather than practical applications. 2. For the non-dominant term, the dependency on $S, A, H$ is extremely large, making the regret guarantee meaningful only for an excessively large number of rounds $K$. Other Comments Or Suggestions: 1. The notation of $K$ and $T$ is confusing. 
As the authors mention in the introduction, there may be a gap between the number of episodes and the number of stages $H$. However, both notations are used in the algorithms, e.g., Algorithm 1 invokes the Traj-Learning oracle with parameter $K$, while the Traj-Learning algorithm itself is introduced with the notation $T$. It would be better to unify them. 2. For the non-dominant term in Theorem 5, there is still a dependency on $K$. It would be better to explicitly separate the effect of $K$ in the non-dominant term to provide a clearer understanding of its impact on the regret bound. Questions For Authors: 1. As the author mentions in the regret guarantee, the upper bound matches the lower bound for a more powerful learner with single-step rewards, suggesting that the main challenge in learning an MDP comes from estimating the transition dynamics. Given this, it seems unreasonable that the algorithm in Ref-Model only dedicates a small fraction of rounds to estimating the transition matrix and then primarily focuses on reward estimation using the estimated transition model for the majority of rounds. More explanation is needed to justify why the key challenge can be effectively addressed in the first $\sqrt{K}$ rounds. 2. In Lemma 5.2, the Ref-Model is claimed to achieve an approximate transition probability function with a small error $\epsilon_0$. According to the regret analysis in this first stage, $\epsilon_0$ is approximately $1/\sqrt{K}$. However, in general cases, it typically requires $O(1/\epsilon^2)$ rounds to obtain an $\epsilon$-optimal estimator. Thus, it seems unreasonable that the algorithm can achieve a $1/\sqrt{K}$-optimal approximation with only $\sqrt{K}$ rounds. Further clarification is needed on how this estimation is justified. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for the reviewer's detailed assessment and helpful suggestions. Below we present our response. **Regarding your concerns about correctness:** 1. In the first $K_0=O(\sqrt{K})$ episodes (other factors ignored), the main target is to identify the infrequent state-action-state triples with probability $O(\sigma_0)=O(K^{-1/2})$. For the remaining triples, we want to compute a $(3,\sigma_0)$-approximate estimate of the transition probability $P_{s,a,h}$. Note that this approximate transition matrix is only for designing the exploration policy, not for computing the near-optimal policy. Hence, computing such an approximate transition matrix requires only $O(\sqrt{K})$ episodes. 2. We set $\epsilon_0 = O(K^{-1/4})$ ignoring other factors (please refer to Appendix A). You are correct that it requires $1/\epsilon_0^2$ episodes to learn an $\epsilon_0$-optimal policy. Indeed, we compute this $\epsilon_0$-optimal policy to improve the efficiency of exploration in the following episodes. In the main algorithm, we use $K_0=O(\sqrt{K})$ episodes to identify the triples with probability $O(K^{-1/2})$. If we apply direct exploration, the regret in this stage might be $K_0H$, which violates the minimax bound $O(\sqrt{SAH^3K})$ due to the dependencies on $S,A$ and $H$. Instead, we first compute the set of $\epsilon_0$-optimal policies, and then conduct exploration within this policy set. In this way, the regret due to exploration is bounded by $O(K_1 H + K_0 \epsilon_0) = O(\sqrt{SAH^3K})$. **Regarding the computational inefficiency:** We admit that the current algorithm is computationally inefficient, and indeed, devising an algorithm with a minimax optimal regret bound and polynomial running time for RL with trajectory feedback is an intriguing open problem. 
However, we note that obtaining statistically (nearly) optimal algorithms is usually the first step toward completely settling problems in machine learning theory, and we believe our techniques will be useful for later developments in the theory of RL with trajectory feedback. **Regarding the large dependency on $S,A,H$:** Indeed, our current regret bound has large burn-in terms, and we leave improving those terms as a future direction. On the other hand, we would like to stress that even for RL in the standard setting, minimax optimal regret bounds were not obtained until the last few years, and the burn-in terms were not fully optimized until very recently. Therefore, as the first work that achieves a minimax optimal regret bound for RL with trajectory feedback, we believe it is reasonable to have large burn-in terms in our bound. Moreover, we believe our new techniques are crucial for fully understanding the regret bound of RL with trajectory feedback. **Regarding other comments or suggestions:** We are sorry for the typos. The $T$ notation in Section 5 denotes the number of episodes, not the number of steps. We will use $\check{K}$ to replace $T$ throughout Section 5 and the corresponding analysis in the next revision. We will also separate $K$ from other factors. The final regret bound would be $\tilde{O}(\sqrt{SAH^3K}+S^{24}A^4 H^{32})$. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal; it addresses my concern regarding the algorithm. As the authors mentioned, the approximate transition matrix is only used for designing the exploration policy. There is a subsequent exploration stage in Algorithm 5, and the updated policy is based on newly collected data (Algorithm 5, Line 7). Additionally, in Algorithm 1, all Traj-Learning is based on the reference model computed during the first stage. 
It would be interesting to know whether replacing it with the estimated transition probability function from Algorithm 5 during the iterative structure would help reduce the burn-in terms. Overall, I will maintain my positive score. --- Reply to Comment 1.1.1: Comment: Thanks for the follow-up question. **Regarding improving the burn-in terms using the new estimated transition matrix:** We think the answer is no. In our current analysis, a $(3,\sigma_{0})$-approximate transition matrix suffices for designing the exploration policy. A better approximation of the transition matrix does not improve the efficiency of policy design.
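As an aside for readers, the counting observation emphasized in the reviews of this paper (the number of possible trajectories, $(SA)^H$, is far smaller than the number of deterministic policies, $A^{SH}$) can be checked numerically. The parameter values below are illustrative, not taken from the paper:

```python
import math

# Illustrative check: the trajectory count (S*A)^H grows much more
# slowly than the deterministic-policy count A^(S*H).
S, A, H = 10, 5, 20  # arbitrary example values

log10_trajectories = H * math.log10(S * A)   # log10((S*A)^H)
log10_policies = S * H * math.log10(A)       # log10(A^(S*H))

print(f"trajectories ~ 10^{log10_trajectories:.0f}")
print(f"policies     ~ 10^{log10_policies:.0f}")
```

Working in log scale avoids overflow; for these values the policy count exceeds the trajectory count by more than a hundred orders of magnitude, which is why confidence regions over trajectories give the smaller $\sqrt{S}$ dependence.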
Volume Optimality in Conformal Prediction with Structured Prediction Sets
Accept (poster)
Summary: The submission studies volume minimization in conformal prediction. It proposes a dynamic programming algorithm to construct prediction sets with marginal (and approximate conditional) coverage. The sets returned by the method have provably optimal volume within the class of unions of k intervals. Experiments on synthetic datasets support the claim that the proposed method minimizes the interval length compared to existing alternatives. Claims And Evidence: Evidence supports claims. Methods And Evaluation Criteria: Experiments support the theoretical claims made in the paper. Experiments on real-world datasets would support the practical utility of the proposed method. Theoretical Claims: Proofs of Theorem 2.1 and Proposition 2.3 Experimental Designs Or Analyses: Experiments use synthetic distributions published in existing literature. Supplementary Material: The proofs discussed above. Relation To Broader Scientific Literature: Volume optimization is a topic that has gained significant attention in conformal inference literature. The submission bridges concepts of dynamic programming with distributional conformal prediction, which are well-established in their respective sub-areas. Essential References Not Discussed: All essential references are discussed. Other Strengths And Weaknesses: **Strengths** * The paper is well-written * The topic of volume optimization in conformal inference is timely * Being able to construct sets that are unions of intervals is important for multimodal distributions **Weaknesses** * Experiments on real-world datasets would strengthen the utility of the proposed method I will expand on a few comments and questions and I am looking forward to discussing with the authors! Other Comments Or Suggestions: **Prior works on volume minimization** *Kiyani et al, 2024 "Length Optimization in Conformal Prediction"* also studies optimality of prediction sets with an efficient algorithm for hypothesis classes with bounded complexity. 
It would be important to discuss the relation between the theoretical findings in this submission and this previous work. Furthermore, comparing the two algorithms would strengthen evidence. *Teneggi et al, 2023 "How to Trust Your Diffusion Model: A Convex Optimization Approach to Conformal Risk Control"* also considers mean interval length minimization for conformal risk-control in high-dimensional settings. **Connections with hypothesis testing** Could the authors expand on this connection? **Proposed algorithm** A brief high-level description of the intuition behind the DP algorithm in the main text would increase accessibility of the submission to a broader audience. Is this algorithm optimal in terms of complexity? Or could, for example, the last step of selecting the solution with minimum volume be avoided? In Proposition 2.3, what is the role of point 1? How does this property of the algorithm play a role in the rest of the contributions? **Experiments** A comparison of runtimes would help practitioners understand the tradeoffs between the different methods. -- **Minor Comments** * Lines 27-30, right column: CP also provides a coverage upper-bound, which guarantees the sets are not trivial. * Lines 119-127, right column: I understand $P^{n+1}$ is the measure of the test point. For $P^{n} \times \lambda$, does $P^{n}$ mean the measure of **all** previously observed points? * Lines 292-294, right column: it is true that the construction is only needed for the calibration points, but it also needs to be performed for each new point at inference, which can be expensive. Questions For Authors: I do not have any further questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for detailed suggestions and providing more related works. We will ensure to make these changes to improve our paper and include discussions of related works. - Experiments on real-world datasets would strengthen the utility of the proposed method We implement our method and baselines on real-world datasets (see response to Reviewer UBed for details). We also include the runtime comparison. - A comparison of runtimes would help practitioners understand the tradeoffs between the different methods. We will include the runtime comparison in the revision. - Kiyani et al, 2024 "Length Optimization in Conformal Prediction" also studies optimality of prediction sets with an efficient algorithm for hypothesis classes with bounded complexity. It would be important to discuss the relation between the theoretical findings in this submission and this previous work. Furthermore, comparing the two algorithms would strengthen evidence. We have a discussion about Kiyani et al, 2024 in Appendix A. We will include the comparison with them in the revision. - Teneggi et al, 2023 "How to Trust Your Diffusion Model: A Convex Optimization Approach to Conformal Risk Control" also considers mean interval length minimization for conformal risk-control in high-dimensional settings. We will discuss Teneggi et al, 2023 in the related work section. - Connections with hypothesis testing: Could the authors expand on this connection? Since both coverage and volume are two measures of sets (one is the data generating probability and the other is the Lebesgue measure), the volume optimality problem subject to coverage property is naturally connected to hypothesis testing between the two measures. This is exactly how we prove the impossibility result in the unsupervised setting. Such a connection also holds when one considers conditional coverage and conditional volume optimality, and a similar impossibility result would hold. 
More generally, one could consider (conditional) coverage and (conditional) restricted volume optimality. This is exactly the conformal prediction framework that we consider in the paper. While we proved that the proposed DP method satisfies both guarantees, it would be interesting to also study the sample complexity of the problem. To be more specific, one could ask if the coverage slack of order $\sqrt{(k+\log n)/n}$ in Theorem 2.5 is optimal or not. We believe the lower bound approach for this problem could also be tackled via the hypothesis testing connection. This will be a very interesting direction to explore in the future. - Is this algorithm optimal in terms of complexity? Or could, for example, the last step of selecting the solution with minimum volume be avoided? The final step of selecting the minimum volume solution can indeed be avoided by maintaining the best solution directly within the DP table. However, this does not improve the overall time complexity since the last step takes only $O(kn/\gamma)$ time. In practice, it is still possible to approximately implement this algorithm more efficiently through various tricks, for example by bucketing the samples. - In Proposition 2.3, what is the role of point 1. ? How does this property of the algorithm play a role in the rest of the contributions? Points 1 & 2 jointly guarantee that the output of the DP algorithm achieves the coverage and the restricted volume optimality with respect to the empirical distribution. Then, together with the uniform concentration, it leads to the restricted volume optimality with respect to the distribution of the testing sample in Point 2 of Theorem 2.5. - Lines 27-30, right column: CP also provides a coverage upper-bound, which guarantees the sets are not trivial. You are correct. We will change the wording here in the revision. - Lines 119-127, right column: I understand $P^{n+1}$ is the measure of the test point. 
For $P^{n} \times \lambda$, does $P^{n}$ mean the measure of all previously observed points? We have training data $X_1, …, X_n$ and testing data $X_{n+1}$. The $P^{n+1}$ is the joint distribution of all $n+1$ samples. In $P^{n} \times \lambda$, $P^n$ means the joint distribution of the training data with $n$ samples. - Lines 292-294, right column: it is true that the construction is only needed for the calibration points, but it also needs to be performed for each new point at inference, which can be expensive. The computational bottleneck is the construction of the nested system $\{S_j(x)\}_{j\in[m]}$. It needs to be constructed for all calibration points $\{X_{n+1},...,X_{2n}\}$, together with the testing point $X_{2n+1}$. The $n+1$ nested systems are sufficient to evaluate the conformity score at each $y\in\mathbb{R}$. On the other hand, if one has more than one testing point, say $X_{2n+1}, …, X_{2n+r}$, then the nested systems will need to be computed for each individual one, and you are correct that it may be expensive when $r$ is large. --- Rebuttal Comment 1.1: Comment: I sincerely thank the authors for their thoughtful consideration of the comments raised by all reviewers! The real-data results strengthen evidence in support of the proposed method, and the authors' response clarified my questions. I am happy to recommend acceptance, I kindly ask the authors to address the outstanding concerns by Reviewers hJ91 and UBed regarding title and relation with existing works in the revised version of the paper, which are important points.
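As background for the computational discussion above, here is a minimal sketch of the split-conformal calibration step that this style of method builds on. The absolute-residual conformity score and the constant point predictor are illustrative stand-ins, not the paper's nested-system score:

```python
import numpy as np

def conformal_quantile(cal_scores, alpha=0.1):
    """Quantile of calibration scores at level ceil((n+1)(1-alpha))/n,
    which yields (1 - alpha) marginal coverage under exchangeability."""
    n = len(cal_scores)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(cal_scores, q_level, method="higher")

rng = np.random.default_rng(0)
# Calibration residuals |Y - mu(X)|; here mu is a constant predictor
# at 0, purely for illustration.
cal_scores = np.abs(rng.normal(size=2000))
q = conformal_quantile(cal_scores, alpha=0.1)

# Empirical marginal coverage on fresh draws: the interval is [-q, q].
test = rng.normal(size=5000)
coverage = np.mean(np.abs(test) <= q)
print(f"q = {q:.2f}, empirical coverage = {coverage:.3f}")
```

The coverage should land near the nominal 90% level; the paper's contribution replaces the single interval $[-q, q]$ with a volume-optimal union of $k$ intervals built from the nested system.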
Summary: This paper studies the problem of providing guarantees regarding the efficiency (ie, small size/volume and thereby informativeness) of conformal prediction (CP) sets. The paper first presents an impossibility result for volume optimality where any CP method can only find a trivial solution. Then, the paper introduces a definition of “restricted volume optimality” where the CP sets are restricted to belong to a set family of finite VC-dimension, specifically sets that are a union of $k$ intervals. The main algorithm presented leverages dynamic programming and a conformity score based on distributional conformal prediction (Chernozhukov et al., 2021) and nested prediction sets; analysis is given regarding approximate conditional coverage and conditional restricted volume optimality when a ‘good’ estimator of the conditional CDF is available. ## update after rebuttal My primary concern was the lack of real-data results--the authors addressed this in their rebuttal with some experimental results on standard UCI tabular datasets, so I raised my score to weak/borderline accept. Unfortunately, the authors did not respond to the remaining secondary questions/concerns I had, so I'll restate those here, to highlight that they were *not* resolved--while I decided to raise my score before hearing back from the authors because I overall think reasons to accept outweigh those to reject, it would have been nice to get clarity on these, to feel more certain about my recommendation.... A couple follow-up questions: (a) Can you confirm what the target coverage was for the supervised setting UCI datasets? It looks like it was 70%, but just checking. (b) If so, why are the new experiments being run with different target coverage levels? **One might worry that multiple target coverage levels were tried for each, and only the best results are reported. 
In a final version, I think it'd be helpful/important to provide at least a couple examples where the experiments are repeated with a grid of different target coverages (or, even a calibration plot over these values), to see how results vary (or not) accordingly.** Especially since end-users are typically most interested in higher target coverage levels, eg, 90-95%. **My last remaining concerns were regarding the title (see Claims and Evidence) and references (see Essential References Not Discussed and Relation To Broader Scientific Literature).** These have not been addressed by the authors' response, though perhaps due to character limits. I hope that the authors will (1) consider updating the title as I have suggested, especially as reviewer hJ91 had a similar concern about the title being misleading and (2) incorporate further discussion of some related work on other approaches to improving efficiency of prediction sets. Claims And Evidence: The main claims appear to be reasonable and supported by evidence. However, I think that the title (which can be viewed as a claim itself) is perhaps too broad for the scope of the paper, in two ways: (1) Given that a number of other works have studied optimizing for CP efficiency (e.g., by selection, designing conformity scores, aggregating prediction sets, analyzing efficiency, etc), and given that the main contributions of the paper focus on the notion of “restricted volume optimality,” I think it would be more accurate if “Restricted” were added to the title, eg either as “Restricted Volume Optimality in …” or “Volume Optimality in Conformal Prediction with Restricted Prediction Sets.” (2) Secondly, in my view I’d suggest removing “Structured” from the title, or at the very least it should be clarified in the paper what is meant by structured (i.e., it seems that it means “restricted,” in the sense defined). 
That is, it appears that the word “structure/structured” is only ever used in the title and in one subheading--it appears to never be used in the main text of the paper even to explain what is meant by it. When I first saw the paper’s title, I had thought that “Structured” meant multidimensional labels as in structured-prediction tasks, and I think that others could similarly have misaligned expectations. Methods And Evaluation Criteria: A main limitation of the paper’s evaluation is that the experiments appear to only evaluate on synthetic datasets (e.g., Gaussian data, Censored Gaussian, Mixture of Gaussians, ReLU-Transformed Gaussians). The paper’s evaluation would be strengthened by also including real datasets, even real but simple tabular datasets, e.g., from the UCI ML repository. Including evaluation on real data is arguably especially important for this paper as some of the methods rely on kernel density estimation (KDE), which will be expected to perform well for synthetic Gaussian or mixture-of-Gaussian data, but could perform worse on data with more complex distributions. Theoretical Claims: I briefly looked over the proofs for the main theoretical claims, ie, for the impossibility result (Theorem 2.1), marginal coverage and restricted volume optimality (Theorem 2.5), and approximate conditional coverage/conditional restricted volume optimality (Theorem 3.3). They seem reasonable, leveraging some standard CP arguments and concentration inequalities. Experimental Designs Or Analyses: I reviewed the main experiment settings. As I mentioned previously, a main limitation of the paper is that the experiments appear to only evaluate on synthetic datasets (e.g., Gaussian data, Censored Gaussian, Mixture of Gaussians, ReLU-Transformed Gaussians). Evaluation on real datasets would strengthen the evaluation. Supplementary Material: I briefly looked over most of the supplementary materials, though did not closely check details. 
I.e., I briefly looked over the related work (Appendix A), additional experiments (Appendix B), and Additional Proofs (Appendix D). Relation To Broader Scientific Literature: Whereas the majority of the conformal prediction literature focuses on proving guarantees for coverage with efficiency (volume optimality) only evaluated empirically, this paper focuses on providing theoretical analysis for a restricted form of volume optimality. There are several prior works that study the question of volume optimality in CP, which the authors mention in the introduction and discuss more thoroughly in the Related Work (Appendix A). It should be noted that the authors focus on split conformal prediction, and thus their algorithms and main results (Theorem 2.5) and (Theorem 3.3) should be noted to provide optimality only within the split conformal framework, although as they note it seems that much of the analysis could extend to full conformal. (I.e., the restricted volume optimality results include the size of the calibration set $n$ in the guarantee; in full conformal, the data are used more efficiently and so the calibration set would effectively be larger; relatedly, cross-validation-style CP methods also make efficient use of the data, but would likely require further analysis.) It may be relevant for the authors to mention in the “Other Related work” section (or may simply be of interest to the authors for future work) the literatures on (1) aggregating prediction sets to achieve greater efficiency, which often takes a form of cross-validation-style CP, and on (2) selecting prediction sets based on efficiency. The following are relevant references that the authors may be interested in looking at or mentioning to acknowledge this broader literature on improving the efficiency of CP sets: *Papers on cross-validation-style aggregation of split CP sets (which empirically tend to be more efficient than split CP sets):* - Vovk, V. Cross-conformal predictors. 
Annals of Mathematics and Artificial Intelligence, 74(1):9–28, 2015.
- Vovk, V., Nouretdinov, I., Manokhin, V., and Gammerman, A. Cross-conformal predictive distributions. In Conformal and Probabilistic Prediction and Applications, pp. 37–51. PMLR, 2018.
- Barber, R. F., Candès, E. J., Ramdas, A., and Tibshirani, R. J. Predictive inference with the jackknife+. The Annals of Statistics, 49(1), 2021.
- Kim, B., Xu, C., and Barber, R. (2020). Predictive inference is free with the jackknife+-after-bootstrap. Advances in Neural Information Processing Systems, 33, 4138-4149.
- Prinster, D., Liu, A., and Saria, S. Jaws: Auditing predictive uncertainty under covariate shift. Advances in Neural Information Processing Systems, 35:35907–35920, 2022.
- Prinster, D., Saria, S., and Liu, A. Jaws-x: addressing efficiency bottlenecks of conformal prediction under standard and feedback covariate shift. In International Conference on Machine Learning, pp. 28167–28190. PMLR, 2023.

*Other papers on aggregating/selecting CP sets based on efficiency:*
- Liang, R., Zhu, W., and Barber, R. F. (2024). Conformal prediction after efficiency-oriented model selection. arXiv preprint arXiv:2408.07066.
- Yang, Y., and Kuchibhotla, A. K. (2024). Selection and aggregation of conformal prediction sets. Journal of the American Statistical Association, 1-13.

*Miscellaneous other paper with a length-optimality result:*
- Teneggi, J., Tivnan, M., Stayman, W., and Sulam, J. (2023, July). How to trust your diffusion model: A convex optimization approach to conformal risk control. In International Conference on Machine Learning (pp. 33940-33960). PMLR.

Essential References Not Discussed: In my view, a more comprehensive related work section would cite many of the references I provided in the “Relation To Broader Scientific Literature” section to at least acknowledge this related literature on designing CP sets for efficiency (i.e., either by aggregating, selecting, or optimizing). 
However, among those references, perhaps the most relevant is Liang et al. (2024), as well as a couple of references on cross-validation-style CP/aggregating prediction sets, to acknowledge that literature. Other Strengths And Weaknesses: *Other strengths:* Overall, the paper provides an interesting perspective on the efficiency of CP sets, with potentially promising methods. *Other weaknesses:* Occasionally the language is a bit informal in a way that can distract or be ambiguous, for instance the authors state that they “believe” XYZ when perhaps “conjecture” would be more appropriate. For instance, they state “Though not explicitly stated in (Chernozhukov et al., 2021), we believe that the DCP procedure essentially achieves (12) for k = 1” and “We believe the comparison between the full conformal versions of the two methods will lead to the same conclusion.” Other Comments Or Suggestions: NA Questions For Authors: Overall, as I’ve mentioned, my main concern for this paper is that I think some experiments on real datasets (even simple tabular UCI datasets) should be provided, in addition to the synthetic-data experiments. Secondarily, addressing my comments on mentioning other related literature on combining CP sets, as well as revising certain informalities in the writing, would improve it in my view. If some real-data experiments (e.g., on say 3 UCI datasets) were added with results that supported those of the current experiments, I would consider improving my evaluation of the paper; addressing the secondary concerns around mentioning related literature and revising the writing would further help. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for the detailed and constructive feedback and the additional references that we missed. We will implement these changes in the revision of the paper. We evaluated our method and different baseline methods on several real-world datasets in both unsupervised and supervised settings, and observed improved or competitive volumes in both settings. For the unsupervised setting, we implement our conformalized DP method and the baseline conformalized KDE method on two real-world datasets (Acidity and Enzyme) used in the density estimation literature (Richardson and Green (1997)). We set the target coverage at 80%, and repeat the experiments 50 times. We report the average and s.d. for volume and empirical coverage. We observe that our conformalized DP outputs a smaller-volume prediction set than the KDE with the best bandwidth for almost all k>=2.

### Acidity Dataset

Conformalized KDE Results

| Bandwidth | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 |
|-----------|-------|-------|-------|-------|-------|
| **Volume** | 2.4927 ± 0.1960 | 2.3934 ± 0.2044 | 2.5749 ± 0.2571 | 2.7617 ± 0.2038 | 2.8196 ± 0.2587 |
| **Empirical Coverage** | 0.8401 ± 0.0226 | 0.8092 ± 0.0261 | 0.8133 ± 0.0315 | 0.8013 ± 0.0256 | 0.8021 ± 0.0294 |

Conformalized DP Results

| k | 1 | 2 | 3 | 4 | 5 |
|---------|-------|-------|-------|-------|-------|
| **Volume** | 2.5999 ± 0.1937 | 2.3627 ± 0.1755 | 2.4099 ± 0.1832 | 2.3615 ± 0.1452 | 2.2172 ± 0.1349 |
| **Empirical Coverage** | 0.8121 ± 0.0476 | 0.8507 ± 0.0224 | 0.8552 ± 0.0245 | 0.8582 ± 0.0196 | 0.8341 ± 0.0253 |

---

### Enzyme Dataset

Conformalized KDE Results

| Bandwidth | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 |
|-----------|-------|-------|-------|-------|-------|
| **Volume** | 0.9826 ± 0.0969 | 1.4640 ± 0.0940 | 1.5218 ± 0.1332 | 1.4526 ± 0.1809 | 1.3867 ± 0.1799 |
| **Empirical Coverage** | 0.8081 ± 0.0217 | 0.8056 ± 0.0231 | 0.7999 ± 0.0268 | 0.7962 ± 0.0272 | 0.7971 ± 0.0236 |

Conformalized DP Results

| k | 1 | 2 | 3 | 4 | 5 |
|---------|-------|-------|-------|-------|-------|
| **Volume** | 1.2207 ± 0.1188 | 0.8997 ± 0.1414 | 1.0118 ± 0.1975 | 1.0131 ± 0.1878 | 1.0520 ± 0.2002 |
| **Empirical Coverage** | 0.8101 ± 0.0461 | 0.8144 ± 0.0353 | 0.8416 ± 0.0390 | 0.8458 ± 0.0379 | 0.8558 ± 0.0353 |

For the supervised setting, we implement our method and the baseline methods on four UCI datasets: Abalone, AirFoil, WineQuality, and Superconductivity. These four real-world datasets are used in conformal prediction by Dhillon, Deligiannidis, and Rainforth (2024). We observe that our method outputs prediction sets with smaller volumes than the baselines on two datasets, WineQuality and Superconductivity. It provides competitive results on the other two datasets, Abalone and AirFoil. The most prominent example showing the advantage of using unions of several intervals is the dataset WineQuality. In this dataset, the label space Y is the set of integers from 0 to 10 for the quality of wine. The conformalized DP with 5 intervals clearly dominates all other methods, including our own with k=1, due to the multimodality of the prediction. 
### Abalone

| Method | Emp Cov | Avg Vol | Runtime (s) |
|-------------------------|--------|---------|-------------|
| CQR | 70.694 | 3.820 | 44.119 |
| DCP-QR | 69.737 | 3.788 | 46.851 |
| DCP-QR* | 68.780 | 3.579 | 48.201 |
| ConformalizedDP (k = 1) | 69.737 | 3.692 | 166.875 |
| ConformalizedDP (k = 5) | 68.182 | 3.611 | 659.637 |

### AirFoil

| Method | Emp Cov | Avg Vol | Runtime (s) |
|-------------------------|--------|---------|-------------|
| CQR | 74.751 | 5.895 | 10.509 |
| DCP-QR | 74.086 | 5.388 | 9.667 |
| DCP-QR* | 72.425 | 5.416 | 9.205 |
| ConformalizedDP (k = 1) | 70.764 | 5.289 | 51.588 |
| ConformalizedDP (k = 5) | 74.086 | 6.275 | 224.675 |

### WineQuality

| Method | Emp Cov | Avg Vol | Runtime (s) |
|-------------------------|--------|---------|-------------|
| CQR | 71.000 | 1.130 | 104.084 |
| DCP-QR | 69.538 | 0.826 | 109.458 |
| DCP-QR* | 69.538 | 0.732 | 109.746 |
| ConformalizedDP (k = 1) | 72.154 | 0.786 | 302.068 |
| ConformalizedDP (k = 5) | 73.385 | 0.186 | 1031.165 |

### Superconductivity

| Method | Emp Cov | Avg Vol | Runtime (s) |
|-------------------------|--------|---------|-------------|
| CQR | 70.397 | 13.722 | 3160.658 |
| DCP-QR | 71.197 | 13.803 | 2998.426 |
| DCP-QR* | 71.197 | 12.070 | 3206.956 |
| ConformalizedDP (k = 1) | 70.938 | 11.808 | 3809.463 |
| ConformalizedDP (k = 5) | 70.774 | 12.632 | 6163.017 |
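For concreteness, the two metrics reported in the tables above (empirical coverage and average volume) can be computed from interval-union prediction sets as in the following small sketch. This is our own illustrative helper, not the authors' code; the name `evaluate_prediction_sets` and the `(lo, hi)` interval encoding are assumptions for illustration only.

```python
def evaluate_prediction_sets(pred_sets, y_test):
    """pred_sets: one list of (lo, hi) intervals per test point.
    Returns (empirical coverage, average total volume), where the
    volume of a union of disjoint intervals is the sum of lengths."""
    covered, total_vol = 0, 0.0
    for intervals, y in zip(pred_sets, y_test):
        # a test point is covered if it falls in any of its intervals
        covered += any(lo <= y <= hi for lo, hi in intervals)
        total_vol += sum(hi - lo for lo, hi in intervals)
    n = len(y_test)
    return covered / n, total_vol / n
```

With this convention, a union of several short intervals (as for the multimodal WineQuality labels) can have a much smaller volume than one wide interval at the same coverage.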
Summary: Conformal prediction is a framework to construct label sets such that the marginal probability of coverage is guaranteed to be above a desired level. This paper studies the conformal label sets for unidimensional regression problems, where the conformal label sets are restricted to be a union of $k$ intervals. The motivation is to minimize the set sizes while maintaining the conformal coverage guarantee. The paper first considers the unsupervised setting. It defines the optimal size as the minimum achievable size for label sets restricted to be a union of $k$ intervals. It proposes using dynamic programming to empirically approximate label sets close to the optimal size. Together with the conformal framework, the paper proposes a non-conformity score that achieves close to optimal label set sizes with high probability (quantifying the distance from the optimal and the high probability). Next, the paper extends similar arguments to the supervised setting. It defines the conditional optimal sizes (conditioned on the covariate) with the same restriction. It proposes using dynamic programming (similar to before) to empirically approximate label sets close to the conditionally optimal sizes. Together with the conformal framework, the paper proposes a non-conformity score that achieves close to conditionally optimal label set sizes with high probability (quantifying the distance from the conditionally optimal and the high probability). It also achieves conditional coverage with high probability. Lastly, the paper includes experimental results to validate their proposed method. Claims And Evidence: The claims are supported for the most part. I have questions about the experiments and the results (see below). Methods And Evaluation Criteria: 1. It would help to see a similar analysis done on real-world data. 2. How does the proposed method compare to the baselines on computational cost? Theoretical Claims: I briefly checked the proofs. 
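As a point of reference for the split-conformal framework this summary describes, the generic calibration step (not the paper's DP-based conformity score, which is more involved) can be sketched as follows; `scores` is a hypothetical list of calibration non-conformity scores.

```python
import math

def conformal_threshold(scores, alpha):
    """Generic split conformal calibration: return the
    ceil((n+1)(1-alpha))-th smallest calibration score.
    Including every candidate label whose score is <= this
    threshold gives >= 1-alpha marginal coverage under
    exchangeability of calibration and test data."""
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha))  # order-statistic index
    if k > n:  # too few calibration points: trivial (infinite) set
        return math.inf
    return sorted(scores)[k - 1]
```

For example, with absolute-residual scores $|y - \hat y(x)|$ the resulting prediction set is the single interval $[\hat y(x) - q, \hat y(x) + q]$; the paper's contribution is a score whose level sets are unions of $k$ intervals with near-optimal volume.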
Experimental Designs Or Analyses: 1. The analysis does not include the standard deviations of the reported quantities. Since "outperforming" is a strong word (lines 407-408, column 2), one should consider the statistical significance of the results. 2. Figures 1, 2, and 3 are not easy to see in the main paper. 3. Since Fig. 1a depicts the best proposed model, Fig. 1b should illustrate the best baseline model (I believe $\rho=0.25$ is the optimal hyperparameter from the search space chosen). Supplementary Material: I briefly reviewed the Appendix. Relation To Broader Scientific Literature: This paper adds to the previous work on optimal conformal prediction sets. Specifically, this paper looks at the unidimensional setting and studies optimal prediction sets where the sets are a union of $k$ intervals. Essential References Not Discussed: There are works on the marginal size of the conformal label sets, termed inefficiency. Most show that conformal inefficiency asymptotically converges to that of an oracle under different settings: unsupervised learning [Lei et al., 2013, 2015], regression [Lei and Wasserman, 2013], binary classification [Lei, 2014], and multi-class classification [Sadinle et al., 2019]. Similarly, Vovk et al. [2014, 2016] and Sadinle et al. [2019] provide results under per-class/label coverage. Additionally, Dhillon et al. [2024] quantify conformal inefficiency in the finite-sample setting. The paper includes some but not all references. References G. S. Dhillon, G. Deligiannidis, and T. Rainforth. On the expected size of conformal prediction sets. In S. Dasgupta, S. Mandt, and Y. Li, editors, Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, volume 238 of Proceedings of Machine Learning Research, pages 1549–1557. PMLR, 02–04 May 2024. J. Lei. Classification with confidence. Biometrika, 101(4):755–769, 10 2014. J. Lei and L. Wasserman. Distribution-free prediction bands for non-parametric regression. 
Journal of the Royal Statistical Society Series B: Statistical Methodology, 76(1):71–96, 07 2013. J. Lei, J. Robins, and L. Wasserman. Distribution-free prediction sets. Journal of the American Statistical Association, 108(501):278–287, 2013. J. Lei, A. Rinaldo, and L. Wasserman. A conformal prediction approach to explore functional data. Annals of Mathematics and Artificial Intelligence, 74(1):29–43, Jun 2015. M. Sadinle, J. Lei, and L. Wasserman. Least ambiguous set-valued classifiers with bounded error levels. Journal of the American Statistical Association, 114(525):223–234, 2019. V. Vovk, I. Petej, and V. Fedorova. From conformal to probabilistic prediction. In L. Iliadis, I. Maglogiannis, H. Papadopoulos, S. Sioutas, and C. Makris, editors, Artificial Intelligence Applications and Innovations, pages 221–230, Berlin, Heidelberg, 2014. V. Vovk, V. Fedorova, I. Nouretdinov, and A. Gammerman. Criteria of efficiency for conformal prediction. In A. Gammerman, Z. Luo, J. Vega, and V. Vovk, editors, Conformal and Probabilistic Prediction with Applications, pages 23–39, Cham, 2016. Other Strengths And Weaknesses: Strengths: 1. The area of research is well-motivated for practical impact. 2. The theoretical and empirical results demonstrate the benefits of the proposed algorithm. Weaknesses: 1. The paper is not easy to parse. For instance, the dynamic programming algorithm, one of the key features of the proposed method, is not described well. 2. The experiments are missing real-world data and reporting of statistical significance. 3. Several important references are missing. Other Comments Or Suggestions: 1. Explain the proposed dynamic programming algorithm (Algorithm 1) in the main paper. It is not easy to follow without any explanations. 2. Mentioning that the paper deals with unidimensional regression tasks in the abstract and the introduction would help. 3. 
Using "structured prediction sets" in the title is misleading as the structure is limited to the union of $k$ intervals. 4. The coverage, as detailed in lines 193-198, column 2, depends on the exchangeability of the calibration and test data while being independent of the training data. 5. The threshold in Eq. 9 is not defined. 6. Similar to Eq. 6, it would help to write the definition of conditional restricted optimal volume with $\inf_{C \in \mathcal{C}_{k}}$ (in Section 3.1). 7. The paper denotes the constructed label set as a strict subset of the label space, $\hat{C}(X_{n + 1}) \subset \mathcal{Y}$. However, it is not a strict subset and $\hat{C}(X_{n + 1}) \subseteq \mathcal{Y}$. 8. The set value setting is not defined (lines 57-58, column 1). 9. \citet is for textual citations, and \citep is for parenthetical citations. Typos: 1. "...score based on dynamic programming, the proposed method..." $\rightarrow$ "...score based on dynamic programming. The proposed method..." (lines 102-103, column 1) 2. "Among all data-dependent set that satisfies (3), ..." $\rightarrow$ "Among all data-dependent sets that satisfy (3), ..." (line 83, column 2) 3. "...conformalized KDE is highly densitive to the choice..." $\rightarrow$ "...conformalized KDE is highly sensitive to the choice..." (caption of Fig. 1d) 4. The citation for Barber et al. [2021] is incorrectly written as Foygel Barber et al. [2021] Questions For Authors: 1. Theorem 2.5 and 3.3 both assume i.i.d. data. How does the statement change if the calibration and test data are exchangeable and not i.i.d.? 2. How is the index that satisfies the second requirement in Assumptions 2.4 and 3.1 chosen? Or, how is the index for step 1 in Section 4.1 chosen? 3. When is the bound in Eq. 8 satisfied? 4. Why is $P \ll \lambda$ required for the initial analysis in Section 2.1 (lines 76-93, column 2)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive suggestions and for providing comprehensive references. We will make sure to implement these changes and include all related works in the revision.

- It would help to see a similar analysis done on real-world data.

We evaluated our experiments on various real-world datasets as well for the rebuttal; please see our response to Reviewer UBed for details.

- How does the proposed method compare to the baselines on computational cost?

We will include the running time comparison. Due to the time complexity of dynamic programming, our method is naturally slower than the baselines. But as we have shown in additional real dataset experiments, the running time of our method, while larger, is still comparable with that of the baselines. Furthermore, the running time of our method is dominated by the method for producing the conditional CDF estimate.

- The analysis does not include the standard deviations of the reported quantities. Since "outperforming" is a strong word (lines 407-408, column 2), one should consider the statistical significance of the results.

You're absolutely right that statistical significance is crucial when claiming "outperformance," and we will incorporate the averages and standard deviations of repeated tests to validate the results. For the unsupervised experiments on real-world datasets, we include the average and standard deviations in the result. Please see our reply to Reviewer UBed for details. The standard deviation for volume and empirical coverage is at most 10% and 2.5% respectively. We will make sure to report the standard deviations in all numerical experiments in the revision.

- Since Fig. 1a depicts the best proposed model, Fig. 1b should illustrate the best baseline model (I believe $\rho=0.25$ is the optimal hyperparameter from the search space chosen).

We agree that it makes sense for Fig. 1b to show the KDE method with the best bandwidth. 
We implement the baseline KDE method with $\rho=0.25$, and we will update the figure accordingly. We want to point out that for $\rho=0.25$, KDE still uses a relatively large interval to cover the left Gaussian, resulting in a total volume of 3.7049, much larger than our method's. While the KDE method is very sensitive to the choice of bandwidth, our proposed DP is very stable with the choice of $k$ as long as it exceeds a certain threshold.

- Theorems 2.5 and 3.3 both assume i.i.d. data. How does the statement change if the calibration and test data are exchangeable and not i.i.d.?

If the data are only exchangeable, we still have the marginal coverage guarantee in Theorems 2.5 and 3.3. The volume optimality and the conditional coverage will not hold, since the volume optimality is not well-defined in this case and the conditional coverage is generally impossible without additional assumptions.

- How is the index that satisfies the second requirement in Assumptions 2.4 and 3.1 chosen? Or, how is the index for step 1 in Section 4.1 chosen?

In Step 1 of Section 4.1, the index $j^*$ depends on $m$, $\alpha$, $n$ and $\delta$. The choices of $n$ and $\alpha$ are clear. For the other two parameters, $m$ stands for the discretization of $[0,1]$ in the nested system, and we typically need $m>1/\alpha$. Larger $m$ provides finer discretization for the nested system, and $m \leq n$. The number $\delta$ stands for the statistical error of estimating the (conditional) CDF. In later numerical experiments, we chose $m=50$ and $\delta=\sqrt{(k+\log n)/n}$ with $k$ being the number of intervals.

- When is the bound in Eq. 8 satisfied?

Eq. 8 is always satisfied with high probability, since the VC dimension of $\mathcal{C}_k$ is $O(k)$.

- Why is $P \ll \lambda$ required for the initial analysis in Section 2.1 (lines 76-93, column 2)? 
Here, we intend to provide an example of the optimal-volume solution (the display after (3)) under the condition of $P \ll \lambda$, which means $P$ is absolutely continuous w.r.t. $\lambda$. When we introduce the notion of volume optimality in the next display, we emphasize that the condition $P \ll \lambda$ is no longer needed.
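To keep the thread self-contained, the restricted volume-optimality target being discussed in this exchange can be written schematically as follows. This is our paraphrase assembled from the reviews (unions of $k$ intervals, coverage level $1-\alpha$), not the paper's exact display:

```latex
% lambda = Lebesgue measure, C_k = family of unions of at most k intervals
\mathrm{OPT}_k(P) \;=\; \inf\bigl\{\, \lambda(C) \;:\; C \in \mathcal{C}_k,\;\; P(C) \ge 1 - \alpha \,\bigr\}.
```

The condition $P \ll \lambda$ matters only for exhibiting an explicit optimal-volume set; the infimum itself is well defined without it, which is the point made in the answer above.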
Summary: Conformal prediction is a technique that produces prediction sets with marginal $(1-\alpha)$-coverage guarantees; in general there are many subsets of the label space that may satisfy this coverage guarantee, and conformal methods do not necessarily produce the smallest such set (by measure) that satisfies this guarantee, or even a set close in measure to the optimal one. This paper studies the problem of obtaining prediction sets that achieve these coverage guarantees while also being the best in terms of size – formally, this is any prediction set whose volume equals the infimum of volumes over sets that satisfy the coverage guarantee – called volume-optimality. Firstly, the authors prove that there is no distribution-free method that can achieve fully volume-optimal prediction sets, that is, for any conformal predictor $\hat{C}$ that achieves the marginal coverage guarantee on all distributions, there is some distribution for which it does not produce a volume-optimal prediction set. They move to trying to achieve an approximate version of volume optimality, and restrict themselves to achieving this over the collection $\mathcal{C}_k$ of unions of $k$ intervals. The high-level idea is to achieve approximate volume-optimality over the empirical distribution defined by the calibration set (which converges in total variation to the true distribution since $\mathcal{C}_k$ has finite VC dimension). This can be done by using a dynamic programming approach to efficiently find a union of $k$ intervals that includes at least $(1-\alpha)$ of the empirical probability weight while satisfying the approximate volume-optimality property. The approach can be generalized in settings with context to achieve conditional coverage guarantees with approximate volume-optimality if you are able to get a good estimate of the conditional distribution over the label space. Experiments show that the paper’s approach achieves smaller prediction set size than existing approaches. 
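The dynamic-programming idea sketched in this summary can be illustrated on the empirical distribution: choose a union of at most $k$ intervals, with endpoints at the sorted calibration points, covering at least $m$ of the $n$ points with minimum total length. The sketch below is our own simplified (roughly cubic-time) illustration of that combinatorial core, not the paper's algorithm or its conformity score.

```python
import math

def min_volume_k_intervals(points, k, m):
    """Minimum total length of a union of at most k intervals (with
    endpoints at data points) covering at least m of the points.
    Requires 1 <= m <= len(points).  Simple DP sketch for small n."""
    xs = sorted(points)
    n = len(xs)
    INF = math.inf
    # dp[i][j][c]: min total length using the first i sorted points,
    # at most j intervals, with exactly c points covered.
    dp = [[[INF] * (n + 1) for _ in range(k + 1)] for _ in range(n + 1)]
    for j in range(k + 1):
        dp[0][j][0] = 0.0
    for i in range(1, n + 1):
        for j in range(k + 1):
            for c in range(n + 1):
                best = dp[i - 1][j][c]  # point i left uncovered
                if j >= 1:
                    # or point i ends an interval starting at point s,
                    # which covers all cov = i-s+1 points in between
                    for s in range(1, i + 1):
                        cov = i - s + 1
                        if cov <= c and dp[s - 1][j - 1][c - cov] < INF:
                            cand = dp[s - 1][j - 1][c - cov] + xs[i - 1] - xs[s - 1]
                            best = min(best, cand)
                dp[i][j][c] = best
    # any coverage level >= m is acceptable
    return min(dp[n][k][c] for c in range(m, n + 1))
```

For a bimodal sample such as `[0.0, 0.1, 0.2, 5.0, 5.1]`, allowing $k=2$ intervals covers 4 of 5 points with total length 0.2, while a single interval ($k=1$) needs length 5.0 – the volume gain from multimodality that the paper's experiments highlight.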
Claims And Evidence: The theoretical claims all make sense, and the experiments show improvement in terms of prediction set size over existing methods. Methods And Evaluation Criteria: Yes, it seems fine. Theoretical Claims: I did not check the proofs in the appendix, but the DP algorithm works and the arguments in the main paper look sound. Experimental Designs Or Analyses: The experiments looked good. Since the guarantees are in expectation over the calibration set and the data is being simulated anyway, it could be good to also report results averaged over multiple draws of the calibration data. Supplementary Material: I looked very briefly at some of the additional experiments. Relation To Broader Scientific Literature: This work is concerned with theoretical guarantees / optimality on the size of prediction sets generated by conformal prediction, a relatively less-explored part of the CP literature. It is related to work like Lei et al. 2013, Izbicki et al. 2020, Kiyani et al. 2024, which all similarly consider conformal prediction for regression, though it defines a slightly different kind of volume-optimality, and uses the unique dynamic programming approach to optimize for the prediction set’s volume and define a non-conformity score. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: Strengths: A good deal of work in the CP literature is aimed essentially at making incremental improvements in terms of prediction set size (through empirical testing) without providing theoretical guarantees, so I think the topic of this paper is very relevant and interesting. The technique seems general enough in the sense that if you can come up with an expressive enough collection of prediction sets $\mathcal{C}$ that you can also efficiently optimize over and use to construct a non-conformity score, one could perhaps construct other kinds of useful convex prediction sets in higher dimension. 
Weaknesses: Due to considering a more expressive collection of potential prediction sets and optimizing through the DP algorithm, there is probably a tradeoff in runtime during the calibration process compared with more lightweight procedures, and it’s not clear how this runtime performance compares against the other approaches with volume-optimality guarantees. Other Comments Or Suggestions: A couple of typos: Line 083: should be “sets” instead of “set”; Figure 1 (d): should be “sensitive”. Questions For Authors: 1. What is the issue (if any) generalizing this approach to prediction sets in $\mathbb{R}^n$? Does it blow up the DP space too much? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive suggestions and insightful questions.

- Since the guarantees are in expectation over the calibration set and the data is being simulated anyway, it could be good to get also the results averaged over multiple draws of the calibration data.

We appreciate this suggestion. You are right, it is important to get the results averaged over multiple draws. In the revision, we will ensure that all numerical experiments include both the average and standard deviations. In the additional experiments, we show the averages and standard deviations of repeated tests (see our response to Reviewer UBed for details).

- Question: What is the issue (if any) generalizing this approach to prediction sets in $\mathbb{R}^n$? Does it blow up the DP space too much?

This is an excellent question. The DP approach can potentially be extended to low-dimensional spaces, such as dimension $d=2$ or $d=3$, to construct prediction sets that are unions of disjoint rectangles. However, as the dimension $d$ increases, the time and space complexity of DP grow exponentially, making it impractical for high-dimensional settings. Furthermore, this is unavoidable; this problem is NP-hard in high dimensions even for simple generalizations of intervals like balls (even for one ball). However, we can use our framework with any algorithm that can find low-volume confidence sets (from a prescribed family) in high dimensions. More broadly, finding low-volume confidence sets in high-dimensional settings is an important and challenging task in high-dimensional statistics. Naturally, this is out of the scope of this work. In a follow-up project, we are currently working on computing approximately small-volume prediction sets in high dimensions like balls and ellipsoids, and unions of them.
Learning Safe Strategies for Value Maximizing Buyers in Uniform Price Auctions
Accept (poster)
Summary: The authors study repeated uniform price auctions with respect to bidder behavior. They consider a value-maximizing buyer with a return-on-investment (RoI) constraint. The paper proposes safe bidding strategies that allow the bidder to be sure that the RoI constraint will not be violated in future rounds. The main contribution is efficient learning algorithms (with sublinear regret in the number of rounds T in both full-information and bandit settings). The work is enriched by considering setups with a richer class of competing bidding strategies (non-safe) and a relaxation of the RoI constraint on each round. Finally, some claims are supported by synthetic experiments. Claims And Evidence: The submission is strongly supported by claims in the form of Theorems and Lemmas supplied with proofs. The main contribution of the work is its theoretical results, so the evidence is presented correctly. The work is supported by a large appendix, where all the statements are carefully proven. Methods And Evaluation Criteria: The main contribution of the work is its theoretical results. The main methods – the proofs – are carried out correctly and presented in a clear way. There are some synthetic experimental results to support the claims. They are adequate (I have a minor question – see the Questions section). Theoretical Claims: I have not checked all the proofs in a detailed way (checking each implication). However, I checked all the main blocks – they are OK (do not seem contradictory) – and carefully checked several proofs: for Lemma 4.1 and for Theorem 4.2. No issues found. Experimental Designs Or Analyses: I read the experimental design and analyses (both in the main part – Section 5.3 – and Appendix G). Line 2024: “We sample the values from the Unif[0,1] distribution”. This assumption sounds very theoretical. Is it true that in real life (the common case) the values are distributed uniformly? Supplementary Material: Yes. I reviewed parts of the Appendix: A, C.1, C.2, G, I. 
Relation To Broader Scientific Literature: The key contributions are related to the following works: - Brânzei, S., Derakhshan, M., Golrezaei, N., and Han, Y. Learning and collusion in multi-unit auctions. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. - Galgana, R. and Golrezaei, N. Learning in repeated multiunit pay-as-bid auctions. Manufacturing & Service Operations Management, 2024. - Potfer, M., Baudry, D., Richard, H., Perchet, V., and Wan, C. Improved learning rates in multi-unit uniform price auctions. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. The main difference: those works assume that the bidders are quasilinear utility maximizers, while this paper studies value-maximizing buyers with RoI constraints (a drastically different setup). Essential References Not Discussed: I have not found any. Other Strengths And Weaknesses: Strengths - Solid paper with clear theoretical contributions - Very nice storyline and claim support (decomposition into lemmas/theorems, proofs, overall structure) Weaknesses - Presentation: a) the story (and contributions) of the paper is repeated three times at different levels of detail (Abstract, Intro, main part, Sections 2-6), with few proof sketches; b) several terms/variables are used in the text and pseudocode before being introduced. Other Comments Or Suggestions: - I would suggest shortening the Abstract (by roughly a factor of 3), since most of its content is present in the Intro, and using the saved space to add more proof sketches. Questions For Authors: - Lines 124-125, right part: What is a “bid”? At this point of the paper the term is used without being defined. Is it 1 number, M numbers (one per good), or K numbers? Far from this place (page 4) there is a discussion of the bidding language, but it does not help define the initial intention in Sec. 2.1. - Lines 141-142, right part: What is “the j-th smallest winning bid”, formally? 
This question refers to the previous one, as the term bid is not defined earlier (is it 1 number per bidder, or M numbers, etc.?). E.g., if a bid is not a single number, then how is a comparison between bids organized? - Lines 144-145, right part: “the clearing price” is not defined. What is it formally? - Line 255, left part: “removing weakly dominated strategies”. I have not found a discussion of what happens to weakly dominant strategies if we keep them. Can it be clarified? I see an example in Lines 236-242 about infeasible strategies that motivates introducing “safe” strategies. Is it true that “unsafe strategy” = “weakly dominant”? If not, what is the difference? - Algorithm 2: Please specify its inputs and outputs. - Definition of w: Theorem 4.2 and Algorithm 1 use w (b_l = w_{z_l}) while it is not defined earlier (within the theorem and the algorithm). In Line 118, right side, I see a definition of w as a function of v (so, in fact, w = w(v)), but it is very far from the theorem statement and the pseudocode. I highly recommend stating how w is obtained in Theorem 4.2 and Algorithm 1. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging feedback and are glad the theory and structure were clear and well-received. We believe the changes suggested in response to your thoughtful comments can be incorporated into the camera-ready version using the extra page. **Re the use of Unif[0,1]:** We tested other distributions (e.g., Laplace, truncated half-normal) and found similar trends with minor differences. We chose Unif[0,1] for consistency with prior theoretical [1] and experimental work (Galgana & Golrezaei, 2024). [1] Ausubel, LM., et al. Demand reduction and inefficiency in multi-unit auctions. The Rev. of Econ. Stud., 2014. **Re the presentation:** Given the length and technical depth of the manuscript, we chose to present the main contributions comprehensively in the introduction and provide a clear roadmap. As suggested, we will shorten the abstract and use the space to add more proof sketches—especially for Theorem 4.4 (regret lower bound) and the tight instances in Theorems 5.3–5.5, which we believe are novel and of independent interest. **Re the comment on missing terms/variables:** Could the reviewer clarify which terms or variables are unclear? As noted in our response to Reviewer 9dTj, we plan to add a notation subsection at the start of Section 2 to define all terms more clearly. **Re the term “bid”:** The term “bid” refers to a single number here. We imply this in the next line (Line 126): *“… if bidder $i$ has $j$ bids in the top $K$ positions ...”* The bidder submits a vector of bids $\mathbf{b}\in\mathbb{R}^M$ using the $m$-uniform bidding format (the bidding language): $\mathbf{b}=\langle (b_1, q_1), \dots, (b_m, q_m)\rangle,$ where the bidder bids $b_1$ for the first $q_1$ units (i.e., the first $q_1$ entries of the vector are $b_1$), $b_2$ for the next $q_2$ units, and so on. **Re the j-th smallest winning bid:** As mentioned earlier, the word “bid” refers to a single number. 
In a multi-unit auction with $K$ units, each bidder submits multiple such bids and the $K$ highest bids across all bidders win. Thus, the $j$-th smallest winning bid refers to the $(K - j + 1)$-th highest bid overall. **Re the notion of Clearing Price:** The clearing price (or per-unit price) refers to the $K$-th highest bid, which each bidder pays per unit they receive (see Line 133, right). We use Example 1 (Lines 176–186) in the paper to illustrate each term: Consider an auction with $n=2$ bidders and $K=5$ identical units. There, $m=2$, and the submitted bids are: $\mathbf{b}_1 = \langle (5, 2), (3, 3) \rangle = [5, 5, 3, 3, 3]$, $\mathbf{b}_2 = \langle (4, 2), (2, 2) \rangle = [4, 4, 2, 2]$. Each entry in $\mathbf{b}_1$ and $\mathbf{b}_2$ corresponds to a single bid. The sorted list of all bids is: $[5, 5, 4, 4, 3, 3, 3, 2, 2]$. The top $K = 5$ bids (i.e., the winning bids) are $[5, 5, 4, 4, 3]$. The clearing price (i.e., the $5$th highest bid) is $3$. Here, the 2nd smallest winning bid is $4$. **Re “removing weakly dominated strategies”:** By the definition in Lines 256–257, if a safe strategy $\mathbf{b}$ is weakly dominated by another safe strategy $\mathbf{b}'$ (that is, $\mathbf{b}'$ yields at least as much value as $\mathbf{b}$ for **every** competing bid profile), then a RoI-constrained value-maximizing bidder can safely ignore $\mathbf{b}$. Therefore, we remove such strategies from the class of safe bidding strategies. Keeping them does not improve the bidder’s performance, but removing them yields a **finite** strategy class, which significantly simplifies the design of online learning algorithms. **Re the unsafe and weakly dominant strategies:** To clarify: **unsafe strategies and weakly dominant strategies are conceptually unrelated** as stated below. Per Def 2 (Lines 243-251), a safe strategy satisfies the RoI constraint for **every** possible competing bid profile; otherwise, it is not safe (or as the reviewer described, “unsafe”). 
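The walkthrough of Example 1 above can be checked with a short sketch (the helper names `expand` and `auction_outcome` are ours, not the paper's):

```python
def expand(m_uniform_bid):
    """Expand an m-uniform bid <(b_1, q_1), ..., (b_m, q_m)> into a flat bid vector."""
    return [b for b, q in m_uniform_bid for _ in range(q)]

def auction_outcome(bids_per_bidder, K):
    """Pool all single bids; the top K win, and the K-th highest sets the clearing price."""
    all_bids = sorted((b for bid in bids_per_bidder for b in expand(bid)), reverse=True)
    winning = all_bids[:K]
    return winning, winning[-1]  # (winning bids, clearing price)

# Example 1: n = 2 bidders, K = 5 identical units
b1 = [(5, 2), (3, 3)]  # expands to [5, 5, 3, 3, 3]
b2 = [(4, 2), (2, 2)]  # expands to [4, 4, 2, 2]
winning, price = auction_outcome([b1, b2], K=5)
print(winning)             # [5, 5, 4, 4, 3]
print(price)               # 3 (the 5th highest bid)
print(sorted(winning)[1])  # 4 (the 2nd smallest winning bid)
```

Here `sorted(winning)[1]` picks out the 2nd smallest winning bid, matching the value 4 in the example above.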
Thus, safe and unsafe strategy classes are **disjoint**. Within the **safe class**, we further refine the strategy space by removing weakly dominated strategies—those that are never better and sometimes worse than another safe strategy. The notion of weak dominance is therefore defined **only within the safe class**. Considering this, **an unsafe strategy cannot be weakly dominated or weakly dominant** based on our definition in Lines 253–255. **Re inputs and outputs of Alg. 2:** The output of Alg. 2 is the updated value $\varphi^t(e)$ for all edges $e$. The inputs are: the structure of the DAG and, for $t \geq 2$, the values $\Gamma^{t-1}(v)$, $\varphi^{t-1}(e)$, and $\mathsf{w}^{t-1}(e)$ for all nodes $v$ and edges $e$. For $t = 1$ (initialization), only the DAG structure is required. **Re $\mathbf{w}$ in Thm 4.2 and Alg. 1:** $\mathbf{w}\in\mathbb{R}^M$ is the average cumulative valuation vector where $w_j=\frac{1}{j}\sum_{k=1}^j v_k$, i.e., the $j$-th entry is the average of the first $j$ entries of the valuation vector, as defined earlier in Line 118.

--- Rebuttal Comment 1.1: Comment: Thanks for the answers. Considering

> Re the comment on missing terms/variables: Could the reviewer clarify which terms or variables are unclear?

In fact, I've listed them in the section titled "Questions For Authors", so you have clarified all of them. However, my main point was to highlight that the text in those places can be improved; I hope to see this in a revised version of the manuscript. All questions have been answered clearly for me.
Summary: This paper introduces the notion of safe bidding strategies for value-maximizing buyers in uniform price multi-unit auctions, ensuring return-on-investment (RoI) constraints are met. A value-maximizing buyer aims to maximize the received value, while the payment only factors into the RoI constraint. In a uniform price multi-unit auction, the auctioneer sells $K$ identical units of a single good to buyers who may demand multiple units at decreasing marginal values, with the per-unit price set at the $K$-th highest bid. The private type of each buyer is then a vector of values describing the decreasing marginal value of the $k$-th unit. To reduce the exponentially large bidding space, in practice, buyers adopt an $m$-uniform bidding format, where they submit $m$ bid-quantity pairs $(b_i, q_i)$ to demand $q_i$ units at bid $b_i$ instead of the vector of bids for each additional unit. This work characterizes a finite class of safe strategies and develops a polynomial-time algorithm to learn the optimal safe strategy with sublinear regret. The paper also evaluates the performance of safe strategies against a clairvoyant with a richer class of strategies, computing the optimal richness ratio $\alpha$. Notably, when the clairvoyant selects the optimal bidding from the class of strategies that are RoI-feasible (not necessarily safe) and have at most $m$ bid-quantity pairs, the richness ratio $\alpha$ is $1/2$, independent of $m$. Claims And Evidence: The analysis and proofs in this work are comprehensive and non-trivial. Methods And Evaluation Criteria: The main results of this work are i) characterization of the safe bidding strategies, ii) poly-time learning-to-bid algorithms, and iii) various bounds proved on regret and richness ratios. Although there are some complementary empirical sections, the most technical parts seem to be the theoretical ones. Theoretical Claims: I didn’t verify all the details, but the theoretical results look sound to me. 
The amount of work is really impressive (~25 pages of different proofs). Experimental Designs Or Analyses: I didn’t check carefully as this is mainly a theory paper. Supplementary Material: I didn’t check carefully. Relation To Broader Scientific Literature: The proofs of the bounds might be of broader interest. Essential References Not Discussed: I don’t know any. Other Strengths And Weaknesses: The theoretical contribution of this work is non-trivial and the amount is pretty impressive. One main shortcoming I see is that the paper seems to be too dense, and the presentation could be largely improved. For example, in the “Learning to Bid in Repeated Settings” paragraph in Section 2.1, the space of the bid is undefined and only gets further discussed in “Bidding Language” on the next page. For the repeated setting, it remains unclear to me whether those values are identical across rounds or re-drawn at the beginning of each round. It is also confusing whether the $K$ units of goods are sold within one round or spread across multiple rounds, and whether the RoI constraint is imposed per round or across multiple rounds. If I understand correctly, both the $K$ units of goods and the RoI constraints are well-defined within each round, and only the learning-to-bid algorithm needs to span multiple rounds. If that is the case, it might be better to first describe the complete auction, including RoI constraints, etc. Only after the auction is completely defined should the learning-to-bid problem be stated on top of the underlying repeated uniform price auctions. I would have given a higher score if this work were better organized. Other Comments Or Suggestions: See above. Questions For Authors: No further questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback. We're glad the theoretical depth and effort came through. The changes suggested by the reviewer are easily manageable with the additional page allowed for the camera-ready version, and we believe they will improve the organization of the manuscript. **Regarding the paper being dense:** The density of the presentation is partly due to the space constraints given the breadth of our results. We clarify the questions the reviewer has raised and present a plan to update the model section for better readability. **Regarding the notion of bids:** Each bidder submits a vector of bids $\mathbf{b} \in \mathbb{R}^M$ using the $m$-uniform bidding format: $$\mathbf{b} = \langle (b_1, q_1), \dots, (b_m, q_m) \rangle.$$ That is, the bidder bids $b_1$ for the first $q_1$ units (i.e., the first $q_1$ entries of the vector are $b_1$), $b_2$ for the next $q_2$ units (i.e., the next $q_2$ entries are $b_2$), and so on. This description is included in the introduction (Lines 50–52, right side), and as the reviewer suggested, we will add further clarification in Section 2.1. **Regarding valuations:** In the repeated setting, the bidder's valuation curve remains fixed across rounds, which is standard in the literature on learning in multi-unit auctions (e.g., [1, 2, 3]). The bidder is allowed to submit different bidding strategies in each round to minimize (approximate) regret over time, which is ensured to be sublinear. We will make this assumption explicit in the camera-ready version. [1] Brânzei, S., et al. Learning and Collusion in Multi-Unit Auctions. NeurIPS, 2023. [2] Galgana, R. and Golrezaei, N. Learning in Repeated Multiunit Pay-as-Bid Auctions. MSOM, 2024. [3] Potfer, M., et al. Improved Learning Rates in Multi-Unit Uniform Price Auctions. NeurIPS, 2024. **Regarding the number of sold units and RoI constraints:** The reviewer’s understanding is correct. 
In each round, $K$ units of the goods are sold independently (see Lines 143–147, right column), and the RoI constraint is imposed on a per-round basis (see Remark 2.1, Lines 165–187, left column, for a detailed discussion). **Regarding Section 2.1:** We thank the reviewer for the helpful suggestion. In the camera-ready version, we will restructure Section 2.1 into three subsections: (1) preliminaries, including notations and terminology; (2) the standalone multi-unit auction format, including allocation and payment rules, the $m$-uniform bidding language, and the per-round RoI constraints; and (3) the repeated setting, introducing the new notations and formally stating the learning-to-bid problem and performance metrics.
Summary: This paper focuses on one buyer’s bidding strategy in repeated uniform price multi-unit auctions. The buyer aims to maximize value under RoI constraints in each round. The authors restrict the buyers to adopt an $m$-uniform bidding format and introduce the notion of safe bidding strategies, which ensure that RoI constraints are satisfied regardless of other buyers’ bids. They characterize a class of safe strategies and show that computing the optimal $m$-uniform safe bidding strategy is equivalent to finding the maximum-weight path in a directed acyclic graph, simplifying the computational problem significantly. Building on this, the authors propose two online algorithms, one under full-information feedback and the other under bandit feedback. Claims And Evidence: I find all the claims clear and convincing. Methods And Evaluation Criteria: The proposed methods and evaluation criteria look reasonable to me. Theoretical Claims: I did not check the proofs. But the theoretical results align with my understanding of the techniques used, and many similar results appear in the literature. Experimental Designs Or Analyses: There is only one experiment, which looks convincing to me. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The paper may only interest a small group of researchers (the mechanism design community and the bandit algorithms community). Essential References Not Discussed: All the necessary references are properly discussed. Other Strengths And Weaknesses: Strengths: 1. The paper characterizes the minimal class of safe bidding strategies and shows that computing the optimal safe bidding strategy is equivalent to finding the maximum-weight path in a directed acyclic graph. This reduction simplifies the problem and provides a clear computational pathway for finding optimal strategies within this class. 2. 
The paper proposes two online algorithms achieving sub-linear regret under full-information feedback and bandit feedback, respectively. Weaknesses: While the focus on safe bidding strategies provides valuable insights, the performance of the proposed strategies diminishes when considering a larger class of bidding strategies. In particular, Thm 5.4 and Thm 5.5 indicate that when $m'$ is significantly larger than $m$, the term $m'/m-\sigma$ becomes very large. Thus, the performance of safe bidding strategies is worse compared to a larger class of bidding strategies. Consequently, the effectiveness of the proposed algorithms may be limited when extended beyond the specific class of safe bidding strategies. Other Comments Or Suggestions: I do not have other comments. Questions For Authors: Is it possible to solve the issue mentioned in the weaknesses above? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging feedback. We're glad the reduction to the maximum-weight path in a directed acyclic graph and the regret guarantees were clear and appreciated. **Regarding the comment that the paper might only interest a small group of researchers:** We believe that the paper appeals to both practitioners and theorists. The key motivation of this work is emission permit auctions, which are becoming popular as a way to curb industry emissions. We provide a simple class of strategies and a learning algorithm that is robust to much stronger benchmarks (Section 5) and adaptable to more nuanced bidder behaviors (Section 6). Moreover, several proof techniques, specifically in Section 5, are novel and might be of broader interest, as pointed out by Reviewer 9dTj. **Regarding the comment on the performance drop of safe strategies compared to a larger class of bidding strategies**, we would appreciate clarification on what is meant by the “larger class.” **Comparison to $m$-Uniform RoI-Feasible Non-Safe Strategies.** If the reviewer is referring to the class of $m$-uniform RoI-feasible bidding strategies that are not safe, the performance drop of safe strategies compared to this larger class of strategies (chosen by the clairvoyant) is natural and unavoidable: the bidder (learner) must ensure that RoI constraints are satisfied **without knowing the competing bids** and therefore resorts to safe strategies, while the clairvoyant can choose an optimal strategy with full knowledge of the competing bids. - **Performance Gap is Bounded.** We would like to highlight that the performance gap, as stated in Theorem 5.3, is **bounded and independent of m**. Moreover, this bound is tight (please see Appendices F.1 and F.2), implying that it cannot be theoretically improved. 
- **Empirical Performance Gap is Even Smaller.** Empirically, as illustrated in Figure 3 (left side), in practice, the optimal RoI-feasible strategy that is not safe achieves at most 1.25x the value obtained by the optimal safe strategy, significantly better than the theoretical bound of 2. **On the Gap Between $m$- and $m'$-Uniform Strategies.** We thank the reviewer for highlighting the potential performance gap when the bidder is restricted to $m$-uniform strategies, while the clairvoyant benchmark selects from a richer set of $m'$-uniform strategies with $m' > m$. **Below, we explain why this gap is expected, how our theoretical bounds characterize it tightly, and why it is unlikely to be a concern in practice.** - **The Choice of $m$ is Flexible.** We would like to clarify that the choice of $m$ (i.e., the bid granularity) is made by the bidder and is not fixed. In other words, the bidder is not constrained to use a small value of $m$. While a larger $m$ offers the potential to improve performance, it also increases the size of the action space and the time it takes to learn the optimal $m$-uniform bidding strategy (the space/time complexity of the learning algorithm is $O(mM^2)$ as stated in Appendix C.3.3). - **Our Theoretical Bounds are Tight.** The upper bounds in Theorems 5.4 and 5.5 are tight, e.g., for Theorem 5.4, for any $\delta \in (0, 1]$, we construct a problem instance for which the ratio of the value obtained by the optimal safe bidding strategy with at most $m'$ bid-quantity pairs to the value obtained by the optimal safe strategy chosen with at most $m$ bid-quantity pairs is at least $\frac{m'}{m} - \delta$. Please refer to Appendices D.5 and D.6 for the formal proofs and Appendices F.3 and F.4 for the construction of these instances. Thus, theoretically, it is not possible to improve this bound. In practice, bidders typically use small values of $m$, such as 4 or 5 (Lines 173–175, right column), which we believe are reasonable trade-offs. 
- **Theoretical Worst-Case is Rare in Practice.** As shown in Appendix F.3 and F.4, the problem instances that attain the worst-case upper bounds are highly nontrivial, and their constructions are quite intricate. For instance, the valuation vector must take the form $ \mathbf{v} = [1, 1 - \epsilon, \ldots, 1 - \epsilon]$ with $ \epsilon = O(\frac{m \delta}{m'}),$ and parameters such as $M = K = T = O(N^{m'})$ where $N = O(\frac{m'}{\delta}).$ The competing bids also need to be carefully designed across rounds. **This level of intricacy strongly suggests that such worst-case gaps are unlikely to arise in realistic settings.** - **Empirical Gaps are Much Smaller.** Our experimental results corroborate this insight. In particular, Figure 3 (right) illustrates the empirical gap when fixing $m = 1$ and varying $m'$. While Theorem 5.4 provides an upper bound that grows linearly with $m'$, in practice, the observed gains plateau quickly. For instance, increasing $m'$ to 10 results in only a $\sim$1.25x improvement—far below the 10x theoretical bound. This demonstrates that while the theoretical worst-case exists, it is rarely encountered in practical scenarios.
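The reduction highlighted at the start of this rebuttal, from computing the optimal safe strategy to finding a maximum-weight path in a DAG, can be illustrated with a generic sketch (the paper's actual node and edge construction is not reproduced here; the function and the toy graph below are our own illustration):

```python
from collections import defaultdict

def max_weight_path(edges, source):
    """Max-weight path from `source` in a DAG given as {(u, v): weight}.

    Assumes integer node labels that already follow a topological order,
    so processing nodes in increasing label order relaxes every edge once.
    """
    adj = defaultdict(list)
    for (u, v), w in edges.items():
        adj[u].append((v, w))
    nodes = sorted({n for e in edges for n in e})
    best = {n: float("-inf") for n in nodes}
    best[source], parent = 0.0, {}
    for u in nodes:  # dynamic program over the topological order
        for v, w in adj[u]:
            if best[u] + w > best[v]:
                best[v], parent[v] = best[u] + w, u
    end = max(nodes, key=lambda n: best[n])
    path = [end]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return best[end], path[::-1]

# Hypothetical toy DAG: the weight 5.0 path 0 -> 2 -> 3 beats 0 -> 1 -> 3.
edges = {(0, 1): 2.0, (0, 2): 1.0, (1, 3): 1.5, (2, 3): 4.0}
value, path = max_weight_path(edges, source=0)
print(value, path)  # 5.0 [0, 2, 3]
```

Since each edge is relaxed exactly once, the cost is linear in the graph size; this is the standard reason a DAG formulation makes the search for an optimal strategy computationally tractable.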
Epsilon-VAE: Denoising as Visual Decoding
Accept (poster)
Summary: - The paper proposes an alternative autoencoder that itself uses a (conditional) diffusion / rectified flow model as decoder, replacing the standard VAE for latent diffusion models (LDMs). - For that, the paper experimentally explores the design space in terms of decoder / denoiser architecture, the implementation of the conditioning, the training objective as a combination of different loss functions, the sampling of noise levels during training, and the time schedule for sampling at test time. Within this exploration, the authors propose - a specific combination of loss terms including a novel adversarial trajectory matching loss replacing the patch-wise GAN loss in the usual VAE training for LDMs, - a reversed logarithmic mapping for the timesteps during sampling at test time to have denser steps early in the inference process. - The experimental evaluation shows advantages in reconstruction over other VAE baselines, which also translate to advantages in generation FID when combined with a LDM. The approach enables a 4x stronger spatial compression with comparable FID to the well-established StableDiffusion VAE + LDM pipeline, which results in a 2.3x increase in throughput (images/sec). Claims And Evidence: Most claims are supported by convincing evidence except for: - The authors claim that the approach "enhances downstream generation quality by 22% and provides 2.3x inference speedup" (lines 32 ff., left column), which can be misunderstood, as the experimental evaluation only shows that either the first or the second can be achieved, but not both at the same time, by changing the downsampling factor of the autoencoder. It should be made clear that it achieves 22% better generation quality for the same downsampling factor or 2.3x inference speedup with comparable FID by increasing the downsampling factor. 
- The authors "anticipate two key improvements" with one being "more effective generation of latent representations, allowing the downstream latent diffusion model to learn more efficiently" (lines 142 ff., left column). The paper only provides evidence for improved generation quality, but it remains unclear whether this is because of a higher level of detail achieved by the denoising-based VAE decoder or whether the different latent space also enables more efficient training of the LDM, e.g., in terms of faster convergence. Methods And Evaluation Criteria: Both methods and evaluation criteria make sense for the problem at hand: - The paper extensively explores and ablates the design choices for their approach with all of them contributing to the final performance. - The evaluation uses ImageNet in different resolutions and COCO as well as (r)FID, PSNR, and SSIM, which are established benchmark datasets and metrics for image reconstruction and generation. Theoretical Claims: There are no theoretical claims that require proofs. Experimental Designs Or Analyses: All experimental designs and analyses seem to be valid. Supplementary Material: I reviewed the complete supplementary material. Relation To Broader Scientific Literature: The paper is using a lot of ideas from recent improvements of denoising generative models like the rectified flow schedule with velocity parameterization, the VAE encoder architecture from StableDiffusion / Latent Diffusion Models, a similar combination of loss functions for training with reconstruction, LPIPS, and adversarial loss terms (but adjusted for denoising), denoiser architectures from ADM and DiT, and a logit-normal distribution for timestep sampling during training from StableDiffusion 3, but it explores the best configuration for a different setting: lightweight generative image compression instead of training the second-stage LDM, e.g., for text-to-image generation. 
For this particular task, certain design questions have to be answered differently, as shown by the advantage of the UNet-based ADM architecture over a Transformer-based DiT, for example. There is a relevant related work [6] that proposes a similar two-stage approach (three stages, counting the VAE) with two diffusion models but at a higher level of (semantic) compression, still using the standard VAE for the first image compression. There has not been that much prior work focusing on improving the VAE / tokenizer part of latent diffusion or autoregressive approaches, but a lot of concurrent work [1, 2, 3, 4]. Prior works mostly used existing pre-trained VAEs from StableDiffusion or trained a similar architecture with different hyperparameters and downsampling factors, like MAR [5] for example. - [1] Deep Compression Autoencoder for Efficient High-Resolution Diffusion Models. ICLR 2025 - [2] Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models. arxiv 2025 - [3] Exploring Representation-Aligned Latent Space for Better Generation. arxiv 2025 - [4] FlexTok: Resampling Images into 1D Token Sequences of Flexible Length. arxiv 2025 - [5] Autoregressive Image Generation without Vector Quantization. NeurIPS 2024 - [6] Würstchen: An efficient architecture for large-scale text-to-image diffusion models. ICLR 2024 Essential References Not Discussed: I am not aware of any essential missing references. Many related works are discussed in the appendix, but not in the main paper. It might make sense to briefly describe the most relevant ones in the main paper, e.g., [1]. A later version of the paper could discuss the concurrent work mentioned in the above review section. [1] Würstchen: An efficient architecture for large-scale text-to-image diffusion models. ICLR 2024 Other Strengths And Weaknesses: Strengths: - The paper is well-written and easy to understand. - The experimental results are convincing. 
- The proposed approach achieves improvements in reconstruction, which are shown to translate to improvements in generation when combined with a LDM. - It enables a larger downsample factor than the standard VAE while providing comparable (even slightly better) generation quality. - The extensive ablation study (table 4) validates the effectiveness of all design choices. - The paper introduces additional technical contributions like the adversarial denoising trajectory matching and the reversed logarithm time spacing during inference. Weaknesses: - Because of the denoising procedure with 3 network forward evaluations used in the paper, the decoding is more expensive than with a standard VAE, which the paper does not openly discuss. This limits its application in cases with real-time visualization, e.g., during the generation process of images. - The approach seems to be limited in terms of downsampling factors and leveraging the full potential of the denoising generative paradigm: - The main paper does not explore even larger compression factors in combination with training a LDM, which could be interesting to see whether even more efficient training of the expensive diffusion model is possible. - For both uniform and logarithmic spacing of timesteps for sampling, the reconstruction FID degrades for more than 3 denoising steps, which is counterintuitive (see figure 3). - The paper is partially a bit repetitive: - Velocity prediction (lines 145 ff., right column) and eq. (9) are already discussed earlier together with rectified flows (lines 102 ff., right column) and eq. (6). - Noise scheduling (lines 199 ff., right column) is related work that has also been partially addressed already in the related work section. - Model configurations (lines 238 ff., right column) and decoder architecture (lines 312 ff., left column) overlap in content. 
- The qualitative comparisons in the paper and the appendix are not very convincing showing only very minor differences in high-frequency details. - The proposed reversed logarithm mapping for the timesteps during sampling lacks intuition and would benefit from a visualization. Other Comments Or Suggestions: No other comments or suggestions Questions For Authors: 1. How would the pipeline of eps-VAE and LDM benefit from even higher downsampling factors? - Could the efficiency of the diffusion model be improved (both in terms of training and sampling) without hurting the generation quality significantly? 2. Why does the reconstruction performance degrade when increasing the number of steps starting from 3? Convincing responses to these questions could alleviate my concerns regarding the limitations of the proposed approach. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive comments. We will rephrase our claims and reduce the redundancy as suggested, and include the suggested related work in the revision. Below we provide a point-by-point response to all of your questions. Please let us know if you have any further questions. **Q1: Because of the denoising procedure with 3 network forward evaluations used in the paper, the decoding is more expensive than with a standard VAE, which the paper does not openly discuss. This limits its application in cases with real-time visualization, e.g., during the generation process of images.** Thank you for pointing out this limitation, and we will include a detailed discussion of it in the revised version. However, we would like to note that our denoising process demonstrates promising results even with a single iteration (please refer to Figure 3, left and right, in the original paper). Consequently, this allows our model to be adapted for scenarios with latency-sensitive requirements, such as the real-time visualization during image generation mentioned by the reviewer, by reducing the decoding step to a single pass. **Q2: How would the pipeline of eps-VAE and LDM benefit from even higher downsampling factors? Could the efficiency of the diffusion model be improved (both in terms of training and sampling) without hurting the generation quality significantly?** To address the questions regarding higher downsampling factors, we present additional results for a 32 x 32 downsampling factor in the table below, comparing them to our 16 x 16 results. Notably, Epsilon-VAE achieves a 25% improvement in generation quality over SD-VAE at the 32 x 32 factor, alongside a 3.2x inference speedup over SD-VAE at the 16 x 16 factor with comparable FID (highlighted in bold). We observed similar training speedups for latent diffusion models utilizing Epsilon-VAE at this higher downsampling rate. 
These gains are more pronounced than those observed when increasing the downsampling factor from 8 x 8 to 16 x 16 (where we observe 22% quality improvement and 2.3x inference speedup, respectively, as shown in the original paper). These findings strongly suggest that the benefits of the Epsilon-VAE and LDM pipeline are amplified with higher downsampling factors. We will include detailed results in the revised paper. | Method | Downsample factor | Throughput | rFID | FID w/o CFG | | -------- | :-------: | :-------: | :-------: | :-------: | | **SD-VAE** | **16 x 16** | **1220** | **2.93** | **14.59** | | SD-VAE | 32 x 32 | 3991 | 9.33 | 21.31 | | Epsilon-VAE (M) | 16 x 16 | 1192 | 1.91 | 10.68 | | **Epsilon-VAE (M)** | **32 x 32** | **3865** | **3.80** | **15.98** | **Q3: Why does the reconstruction performance degrade when increasing the number of steps starting from 3?** To enable large step sizes for the reverse process during inference, we introduced the denoising trajectory matching loss to implicitly model the conditional distribution $p(x_0|x_t)$, shifting the denoising distributions from traditional Gaussian to non-Gaussian multimodal forms (please refer to [1*] for a detailed discussion on this). However, the assumptions underpinning this approach are most effective when the total number of denoising steps remains small. Consequently, there appears to be an optimal range or "sweet spot" for the number of total inference steps. We will elaborate on this phenomenon and provide further clarification in the revised version. [1*] Tackling the Generative Learning Trilemma with Denoising Diffusion GANs, ICLR 2022. **Q4: The qualitative comparisons in the paper and the appendix are not very convincing showing only very minor differences in high-frequency details.** This is because in the original paper, we show compressed high-resolution images with Epsilon-VAEs using low downsampling factors. 
We will include additional uncompressed visual results under more challenging settings (e.g., 128 x 128 images with 16 or 32 downsampling factors) in the revised version to highlight our advantages. **Q5: The proposed reversed logarithm mapping for the timesteps during sampling lacks intuition and would benefit from a visualization.** As detailed in our response to Q3, the denoising trajectory matching loss results in denoising distributions that deviate from the standard Gaussian. This deviation suggests that the conventional uniform sampling approach may no longer be optimal. Hence, we empirically investigated alternative sampling strategies and found the reversed logarithm mapping to yield the best performance. We will clarify the intuition and provide additional visualizations as suggested in the revision. --- Rebuttal Comment 1.1: Comment: I appreciate the rebuttal from the authors. Regarding related work, I found another similar paper [1] that does not impact the significance of the contributions but should be discussed in the final version. While following a different motivation, this paper essentially also proposes a VAE with a denoising decoder but uses the encoding as the "initial noise" instead of as conditioning for a standard diffusion model starting from a standard Gaussian distribution. This could also be an interesting idea for speeding up the proposed approach. [1] Minimizing Trajectory Curvature of ODE-based Generative Models. ICML 2023 Regarding the rebuttal, I would like to first inform the authors (in case they have not noticed) that it is possible to include anonymous links to additional figures in the rebuttal, which I think would be beneficial for addressing some of my concerns as well as those from other reviewers, e.g., regarding the provided qualitative results in the paper with unclear zoom-in boxes due to image compression (cf. reviewer NjT6). 
Furthermore, I would like to comment on some of the points regarding my review: > Q1 Multi-step denoising / decoding Thank you for pointing me to Fig.3. However, this figure does not contain a comparison with the baseline SD-VAE but only shows an ablation study. I still think this fair comparison in terms of NFEs would be important. > Q2 Higher compression rates Thank you very much for providing additional experimental results. I think these results are convincing and therefore this concern has been addressed satisfactorily. > Q3 Degraded reconstruction performance with more than 3 steps The response makes sense, thanks! > Q4 Unconvincing qualitative comparisons If you already have qualitative results for the revised version, it would be helpful to provide them for the rebuttal via *anonymous* links. Otherwise, I cannot evaluate whether these visual results are convincing or not. > Q5 Intuition and visualization of reversed logarithm mapping Again, a visualization would have been great to have for the rebuttal. The rebuttal addresses some but not all of my concerns. After reading all reviews, I am still leaning towards accepting this paper, but currently slightly more towards weak accept than accept because of the unresolved concerns. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the valuable feedback and for bringing reference [1] to our attention. We agree that this is indeed relevant and appreciate the suggestion. We will incorporate a discussion of [1] in the revised version. We also find the idea of potentially leveraging its approach to speed up our model interesting and will note this as a direction for future work. Below we provide point-by-point responses to your comments. We hope they could address your remaining concerns and help with finalizing the final rating. **Q1 Multi-step denoising / decoding.** We provide a direct comparison between SD-VAE and our *one-step* Epsilon-VAE in the table below. 
The table presents image reconstruction quality on ImageNet 256 x 256 with the 8 x 8 downsampling factor. We include two variants: Epsilon-VAE (B), which has a similar inference speed to SD-VAE, and Epsilon-VAE (M), which matches SD-VAE in the number of parameters. As shown, both Epsilon-VAE (B) and (M) outperform SD-VAE across all metrics. These results confirm the effectiveness and efficiency of our one-step models compared to SD-VAE. | Method | rFID | PSNR | SSIM | | -------- | :-------: | :-------: | :-------: | | SD-VAE | 0.74 | 25.68 | 0.820 | | Epsilon-VAE (B) | 0.57 | 25.91 | 0.826 | | Epsilon-VAE (M) | 0.51 | 26.45 | 0.830 | **Q4 Unconvincing qualitative comparisons.** Please find the qualitative results with different downsampling ratios (including 8 x 8, 16 x 16, and 32 x 32) on ImageNet 128 x 128 in https://anonymous.4open.science/r/ICML2025-4F8E/ (the file name is "3ACU-Q4_qualitative.png", best viewed when zoomed-in). We use these settings for visualization to highlight the differences since they are more challenging for reconstruction. Epsilon-VAE achieves higher fidelity and better perceptual quality, especially under extreme downsampling factors. **Q5 Intuition and visualization of reversed logarithm mapping.** Please find the visualization results of both uniform sampling and reversed logarithm sampling (including 1-step, 2-step, and 3-step models) on ImageNet 128 x 128 in https://anonymous.4open.science/r/ICML2025-4F8E/ (the file name is "3ACU-Q5_sampling.png", best viewed when zoomed-in). We can find that the reversed logarithm sampling method could lead to improved details in local regions with complex textures or structures, especially when the number of sampling steps is increased to three.
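The reversed-logarithm timestep mapping discussed in Q5 can be illustrated with a small sketch. This is a hypothetical form of the mapping (a uniform grid warped through a logarithm and reversed, so that sampled timesteps cluster near $t = 0$, where fine details are resolved); the paper's exact schedule may differ:

```python
import math

def uniform_schedule(n_steps, T=1000):
    # Baseline: evenly spaced timesteps from T down toward 0.
    return [round(T * (1 - i / n_steps)) for i in range(n_steps)]

def reversed_log_schedule(n_steps, T=1000):
    # Warp a uniform grid in (0, 1] through log(1 + u*(e - 1)), which maps
    # (0, 1] onto (0, 1], then reverse it: step sizes shrink as t -> 0,
    # spending relatively more denoising effort on late (low-noise) steps.
    ts = []
    for i in range(n_steps):
        u = (i + 1) / n_steps
        warped = math.log(1 + u * (math.e - 1))
        ts.append(round(T * (1 - warped)))
    return ts
```

For three steps and $T = 1000$, the uniform grid gives roughly equal gaps, while the reversed-log grid produces progressively smaller gaps toward $t = 0$.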
Summary: In this work, the authors propose using a denoising diffusion model as the decoder in an autoencoder for image reconstruction and generation. $\epsilon$-VAE develops a denoising decoder conditioned on the learnable latents. The work includes solid experiments validating the design choices for image reconstruction. Further, DiT-based latent diffusion models on $\epsilon$-VAE latents achieve comparable performance to the popular SD-VAE. Claims And Evidence: The authors claim superior performance on image reconstruction, which is well supported by the experimental results. But I find the generation performance of $\epsilon$-VAE latents not fully examined. In particular, in Table 3, the authors show better FID than DiT with SD-VAE under the diffusion setting reported in this work. However, the reported performance is worse than the numbers reported in the original DiT paper. Also, the authors only report results without CFG, whereas CFG is broadly applied to diffusion models and should be compared to fully validate the performance of the proposed $\epsilon$-VAE. Methods And Evaluation Criteria: Yes, evaluation and benchmark make sense. Theoretical Claims: Theoretical claims in the work are valid. Experimental Designs Or Analyses: Experimental designs and analysis are valid. Supplementary Material: Mainly sections C and D. Relation To Broader Scientific Literature: The paper is closely related to previous latent diffusion models, which rely on an autoencoder to acquire a latent space. The work follows this pipeline and aims at improving the autoencoder design. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: 1. My main question is the performance of $\epsilon$-VAE on image generation as mentioned in "Claims And Evidence". It would greatly help validate the contribution of the work by showing how $\epsilon$-VAE works with CFG. 2. 
In latent diffusion, DiT with $\epsilon$-VAE shows better performance than SD-VAE under the setting of this work. Is the setting optimized for $\epsilon$-VAE or is it also benefitting standard DiT with SD-VAE? 3. Can authors provide more details of how inference is conducted on higher resolutions with models trained at 128 x 128 images? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable comments. Below we provide a point-by-point response to all of your questions. Please let us know if you have any further questions. **Q1: The reported performance in Table 3 is worse than numbers reported in the original DiT paper.** We made the following modifications to the original DiT training recipe, as mentioned in Section C of the original paper, leading to a slight drop in performance: (1) we reimplement the pipeline with JAX for training models on TPUs; (2) for simplicity and training stability, we remove the variational lower bound loss term; (3) we reduce the number of training steps to 1M to conserve compute. **Q2: It would greatly help validate the contribution of the work by showing how ϵ-VAE works with CFG.** We provide results with CFG under the 8 x 8 downsample factor in the table below, where we find that Epsilon-VAE (M) achieves a relative 20% improvement over SD-VAE, and further improvements are obtained after we scale up our model to Epsilon-VAE (H). These results are consistent with the results without CFG, confirming the effectiveness of our model. We will provide more detailed results of other models and under different downsample factors in the revised version. | Method | FID w/ CFG | | -------- | :-------: | | SD-VAE | 3.51 | | Epsilon-VAE (M) | 2.83 | | Epsilon-VAE (H) | 2.69 | **Q3: In latent diffusion, is the DiT setting optimized for Epsilon-VAE or is it also benefitting standard DiT with SD-VAE?** Our training protocol for latent diffusion was consistently applied across all evaluated VAE models. The three minor modifications to the original DiT training recipe, as outlined in our response to Q1, were implemented solely to conserve computational resources. Our current setting does not favor either our Epsilon-VAE or the standard SD-VAE, but it ensures a fair basis for comparison. 
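For context, the with-CFG numbers in Q2 above rely on the standard classifier-free guidance rule, which mixes conditional and unconditional noise predictions at sampling time; a minimal sketch (the guidance scale behind the reported FIDs is not stated here, so `w` is a free parameter):

```python
def cfg_eps(eps_cond, eps_uncond, w):
    # Classifier-free guidance: eps = eps_uncond + w * (eps_cond - eps_uncond).
    # w = 1 recovers the purely conditional prediction; w > 1 amplifies
    # the conditioning signal, typically trading diversity for fidelity.
    return [eu + w * (ec - eu) for ec, eu in zip(eps_cond, eps_uncond)]
```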
**Q4: Can authors provide more details of how inference is conducted on higher resolutions with models trained at 128 x 128 images?** Inference on higher resolutions works out-of-the-box. Since our model is a fully convolutional UNet, it can directly process images with resolutions exceeding the 128 x 128 training size without any modifications to the input.
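The out-of-the-box resolution transfer described in Q4 follows from a general property of fully convolutional networks: their weights depend only on channel counts and kernel sizes, never on the input's spatial extent. A small illustrative check (generic convolution arithmetic, not the paper's actual UNet configuration):

```python
def conv2d_param_count(in_ch, out_ch, k):
    # One k x k filter per (input, output) channel pair, plus a bias per
    # output channel -- no dependence on the input's H x W.
    return in_ch * out_ch * k * k + out_ch

def conv2d_out_size(size, k, stride=1, padding=0):
    # Spatial output size; any input size large enough for the kernel works.
    return (size + 2 * padding - k) // stride + 1
```

A 3 x 3, 64-to-64-channel layer has the same 36,928 parameters whether it processes 128 x 128 or 512 x 512 inputs; only activation memory grows with resolution.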
Summary: The authors propose a new autoencoder paradigm in which the decoder takes the form of a diffusion model. The advantage of this is that it has better reconstruction quality compared with a standard VAE. The proposed architecture is straightforward: it directly upsamples encoded latents and then runs diffusion at the input resolution to get the original signal back. They introduce an adversarial loss to encourage small reconstruction error from the diffusion decoder. In their main experiment, they use existing architectures, such as the VQGAN encoder and discriminator, for this purpose. They demonstrate that epsilon-VAE achieves better reconstruction than other latent-space VAEs. Claims And Evidence: Yes. Methods And Evaluation Criteria: Method-wise, I am not convinced about the epsilon-VAE idea. At a high level, they are proposing a direction opposite to latent diffusion. Instead of doing diffusion in latent space (which was first proposed in Stable Diffusion for budget reduction), it does that in the original input space. It is not surprising that they can achieve better reconstruction performance (a similar argument was already demonstrated by Imagen and later work), but the computation is much higher. Besides, it is counter-intuitive to replace single-step decoding with an iterative process, which is more computationally heavy. Evaluation criteria make sense. Theoretical Claims: no theoretical claims. Experimental Designs Or Analyses: The experiment designs make sense. However, I would argue a benchmark on computation at various decoding resolutions is necessary. The memory could increase quadratically with the input size, which is not desirable. Supplementary Material: yes. Relation To Broader Scientific Literature: Many generative models (e.g., image, video, and other LMM) use the latent diffusion style to train. The proposed framework could speed up the training by combining the autoencoding and generation steps together. 
Essential References Not Discussed: They have good coverage. Other Strengths And Weaknesses: Strength: 1. Overall the authors did a great job on experiment design, with thorough coverage of their design choices and potential use cases, e.g., image-conditioned diffusion training. 2. The proposed framework is generic and works with existing off-the-shelf encoder networks and diffusion architectures. Weakness: 1. As stated above, my major concern is with its high-level insights. They are proposing an interesting direction opposite to latent diffusion. Instead of doing diffusion in latent space (which was first proposed in Stable Diffusion for budget reduction), it does that in the original input space. It is not surprising that they can achieve better reconstruction performance (a similar argument was already demonstrated by Imagen and later work), but the computation is much higher. In terms of experimental evidence, the paper does not justify why such a design choice is worth pursuing, in my opinion. Besides, it is counter-intuitive to replace single-step decoding with an iterative process, which is more computationally heavy. Other Comments Or Suggestions: none Questions For Authors: none. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Below we provide a point-by-point response to your questions. Please let us know if you have any further questions. **Q1: Clarification on high-level insights.** Thank you for raising this concern. Rather than opposing latent diffusion, we offer a complementary perspective by demonstrating that the diffusion process can also be applied to the outer structure of the latent diffusion model – the autoencoder part. Our goal is to revisit the standard autoencoder framework and integrate diffusion into the decoding step to enhance the entire model’s generative capacity. We empirically show that a latent diffusion DiT integrated with Epsilon-VAE significantly outperforms its SD-VAE counterpart, with the performance gap most pronounced in the high-compression regime. **Q2: Diffusion-based iterative decoding.** While replacing single-step decoding with an iterative process may seem counter-intuitive due to increased computational cost, the diffusion-based decoder addresses this concern in three key ways. First, it offers scalable inference, where even a single-step variant already outperforms a plain VAE decoder, and additional steps further enhance quality (please refer to Figure 3, left and right, and Table 7 in the original paper, where a single-step Epsilon-VAE-lite achieves 7.18 rFID and a VAE obtains 11.15 rFID in the same setting). Second, it provides controllable trade-offs between computation and visual fidelity, allowing the number of steps to be adjusted at inference time based on application needs. Third, as shown in Section 4.2 of the main paper and our response to Q3 below, it enables training under higher compression ratios, which helps offset the added cost of iterative decoding by reducing the size of latent representations. An additional advantage of scaling the autoencoder over the latent model lies in computational efficiency. 
Recent trends show latent diffusion models increasingly adopt Transformer architectures (e.g., DiT), where self-attention scales quadratically with input resolution. In contrast, our convolution-based UNet decoder offers more favorable linear scaling. *As models grow, shifting complexity to the autoencoder helps reduce the burden on the latent model, leading to a more efficient overall system.* **Q3: A benchmark on computation at various decoding resolutions.** In response to the reviewer's valuable point regarding computational cost at various decoding resolutions, we conducted experiments with Epsilon-VAE using a 16 x 16 downsampling factor. Our analysis reveals that: (1) the dominant factor in the overall memory footprint during the generation process is the LDM; (2) increasing the decoding resolution leads to a more substantial increase in the memory requirements of the LDM compared to the Epsilon-VAE decoder itself; (3) Epsilon-VAE outperforms SD-VAE under the same 8 x 8 downsampling factor at the 256 x 256 resolution, but with slightly worse throughput and memory efficiency; and (4) importantly, by employing a 16 x 16 downsampling factor instead of 8 x 8, Epsilon-VAE demonstrates a significant 2.3x inference speedup and a 3.3x reduction in total memory cost compared to SD-VAE, while maintaining comparable image generation performance (highlighted in bold). This demonstrates the efficiency gains achieved with our proposed approach in mitigating the potential for undesirable decoding memory scaling. 
| Method | Resolution | Downsample factor | Throughput | LDM memory (GB) | Decoder memory (GB) | FID w/o CFG | | -------- | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | | **SD-VAE** | **256 x 256** | **8 x 8** | **522** | **78.8** | **1.7** | **11.63** | | Epsilon-VAE (M) | 256 x 256 | 8 x 8 | 491 | 78.8 | 2.8 | 9.39 | | Epsilon-VAE (M) | 128 x 128 | 16 x 16 | 3910 | 5.9 | 1.6 | 11.14 | | **Epsilon-VAE (M)** | **256 x 256** | **16 x 16** | **1192** | **20.7** | **3.7** | **10.68** | | Epsilon-VAE (M) | 512 x 512 | 16 x 16 | 240 | 82.6 | 6.9 | 9.20 |
Summary: This paper proposes to use a diffusion decoder in autoencoder training for image generation. The autoencoder is trained with a diffusion loss, together with an LPIPS loss and a GAN loss defined on the one-step generation. The authors show that the proposed method outperforms prior state-of-the-art autoencoders for both reconstruction and image generation. Claims And Evidence: The key claim of this paper is that a diffusion-loss based autoencoder (with LPIPS and GAN losses) can outperform the prior autoencoders while being efficient. The authors evaluate rFID, PSNR, SSIM for reconstruction, and generation FID, and the results show that the proposed method outperforms prior works. Visualization is also consistent with the claim. Methods And Evaluation Criteria: Diffusion models are scalable probabilistic models. Using a diffusion loss is thus a promising way to better model the probabilistic decoding and to learn image tokens. Evaluation metrics include rFID, PSNR, SSIM, and generation FID, which are standard in this field. Ideally, human evaluation could be added, though it may not be necessary given the large number of visual samples. Theoretical Claims: Theoretical proofs are not the focus of this work. The formulations are correct. Experimental Designs Or Analyses: The experiments are solid. Authors evaluate rFID, PSNR, SSIM, gFID and show visual samples for comparisons with prior methods for both reconstruction and generation. The number of parameters is also listed in supplementary material (Table 7) and shows a fair comparison. In the ablation study, Table 4 shows the effect of different design choices, which helps in understanding the importance of different components of the method. Supplementary Material: I checked all supplementary material. Relation To Broader Scientific Literature: This work is one of the first works to show that using the diffusion loss for the decoder can help train autoencoders for reconstruction and generation on latents. 
The proposed method also serves as a novel autoencoder that achieves state-of-the-art quality while being efficient. Essential References Not Discussed: I did not find key missing related works. Other Strengths And Weaknesses: See prior sections for strengths. Using a diffusion loss for autoencoder training is an important and promising direction to explore, and this work is one of the first in this direction to show solid improvement on the common ImageNet benchmark, which consists of general images. Autoencoders are of key importance for generative models; this work proposes a novel autoencoder with state-of-the-art performance and is therefore an important contribution to the field. Minor weakness: The LPIPS and GAN losses are applied on the estimated one-step sample, which may not be accurate and may cause objective bias in theory. Although this can potentially be addressed by finetuning a diffusion decoder with frozen z. Other Comments Or Suggestions: The zoom-in boxes in Fig. 4 seem to be unclear even for the GT. Questions For Authors: Besides rFID, how much improvement does it get from the additional LPIPS and GAN losses in terms of actual visual quality in images? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your positive feedback. Below we provide a point-by-point response to your questions. Please let us know if you have any further questions. **Q1: The LPIPS and GAN loss are applied on the estimated one-step sample, which may not be accurate and may cause objective bias in theory. Although this can potentially be addressed with finetuning a diffusion decoder with frozen $z$.** We acknowledge the reviewer's point regarding potential objective bias due to applying the LPIPS and GAN loss on the estimated one-step sample. However, we would like to emphasize that our Epsilon-VAE differs significantly from traditional diffusion models in that its diffusion decoder is conditioned on encoded latents. This conditioning provides a strong prior about the input image to reconstruct, resulting in a more accurate estimated one-step sample than in typical diffusion scenarios. Therefore, we believe the potential for objective bias is considerably reduced. We agree that finetuning the diffusion decoder with frozen $z$ is a promising avenue for further improvement and will explore this in future work. **Q2: Besides rFID, how much improvement it gets from the additional LPIPS and GAN losses for the actual visual quality in images?** We find that the LPIPS loss enhances textural fidelity and structural coherence in generated images, while the GAN loss contributes to sharper high-frequency details. These observations align with their established roles in traditional VAEs. Detailed visual comparisons demonstrating these improvements will be included in the revised version. **Q3: The zoom-in boxes in Fig. 4 seem to be not very clear even for the GT.** Thanks for pointing this out. This is because compressed images were included in the original paper to ensure a reasonable file size. We will provide the original, uncompressed images in the revised version.
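The "estimated one-step sample" in Q1 is commonly obtained from the standard DDPM identity that recovers a clean-sample estimate from the noisy input and the predicted noise; a generic sketch of that relation (not code from the paper):

```python
import math

def estimate_x0(x_t, eps_pred, alpha_bar_t):
    # Invert the forward process x_t = sqrt(a_bar)*x_0 + sqrt(1 - a_bar)*eps
    # for x_0. Perceptual (LPIPS) and adversarial (GAN) losses can then be
    # applied to this one-step estimate instead of the fully denoised sample.
    s = math.sqrt(alpha_bar_t)
    n = math.sqrt(1.0 - alpha_bar_t)
    return [(xt - n * e) / s for xt, e in zip(x_t, eps_pred)]
```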
Stable Fair Graph Representation Learning with Lipschitz Constraint
Accept (poster)
Summary: This paper proposes a Stable Fair Graph Neural Network (SFG) to address training instability in fairness-aware graph representation learning by introducing a Lipschitz constraint for stability and employing a stochastic optimization algorithm. Extensive experiments demonstrate that SFG outperforms existing methods in both fairness and utility on real-world datasets. Claims And Evidence: NA Methods And Evaluation Criteria: NA Theoretical Claims: NA Experimental Designs Or Analyses: NA Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1. The paper is well-written with a clear and logical structure, making it easy to follow. 2. It is grounded in solid theoretical foundations, providing a rigorous basis for the proposed approach. Weaknesses: 1. The significant accuracy fluctuations observed during the optimization process in Figure 1 leave me puzzled, as they contradict prior research findings. While adversarial learning is known to be inherently unstable, the extent of this instability in the presented results is particularly surprising. Could you clarify the reasons behind this behavior? Other Comments Or Suggestions: NA Questions For Authors: NA Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q. The reason for significant accuracy fluctuations observed during the optimization process in Figure 1 Thank you for your insightful comment. The significant accuracy fluctuations (even though we used a small learning rate) observed in Figure 1 are caused by **the combined effect** of weight fluctuations **in the encoder and mask**, which result from the conflict in optimization objectives. First, adversarial learning leads to **significant weight fluctuations in the GNN encoder**, as in LECI [1]. Second, current adversarial-based graph models **generate a mask to shield sensitive attributes**, which conflicts with the optimization objective of $\parallel \textbf{m} - \textbf{1}_d \parallel_2^2$, as shown in Eq. (45), thereby causing significant fluctuations in the mask weight ($w_g$) during the optimization process. We also demonstrate this point through an experiment; please refer to the figure (https://anonymous.4open.science/r/SFG-559B/mask.pdf). The message-passing mechanism of GNNs is also a possible reason for the **amplification of weight changes**. To address the training instability of such fair graph models, we theoretically propose DRO and a tight Lipschitz constraint with mask weight to control weight fluctuations. **References** [1] Shurui Gui, Meng Liu, Xiner Li, Youzhi Luo, and Shuiwang Ji. 2023. Joint learning of label and environment causal independence for graph out-of-distribution generalization. In Proceedings of the 37th International Conference on Neural Information Processing Systems (NIPS '23). Curran Associates Inc., Red Hook, NY, USA, Article 174, 3945–3978.
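The tug-of-war described above can be made concrete with a one-dimensional toy objective (purely illustrative; not the paper's actual loss): an adversarial-style term pushing a mask entry toward 0 competes with the regularizer $\parallel \textbf{m} - \textbf{1}_d \parallel_2^2$ pulling it toward 1.

```python
def toy_mask_loss(m, lam):
    # m^2 models pressure to zero out a sensitive entry; the second term
    # is the pull toward the all-ones mask, weighted by lam.
    return m ** 2 + lam * (m - 1.0) ** 2

def toy_mask_optimum(lam):
    # d/dm [m^2 + lam*(m - 1)^2] = 2m + 2*lam*(m - 1) = 0
    # => m* = lam / (1 + lam): the balance point between the two pulls.
    return lam / (1.0 + lam)
```

Because the balance point sits strictly between the two targets, stochastic gradients of the competing terms keep pushing the mask back and forth around it, which is one intuition for the observed weight oscillation.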
Summary: The paper introduces a tight upper bound and distributionally robust optimization to address the challenges of training instability that have been ignored by most previous methods for fair graph representation learning. SFG’s novel upper bound is tight and considers the changes of masks, enhancing training stability while preserving model utility and fairness. The use of a projected stochastic subgradient algorithm makes the non-convex problem convert into a multi-convex problem, facilitating the optimization process. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Theoretical derivations are detailed and easy to follow. Experimental Designs Or Analyses: The manuscript describes the experimental settings and designs, and some hyperparameters are detailed. The experimental results and analysis are sufficient. The stability improvement is obvious, and preserving fairness and utility is also improved in most metrics. Supplementary Material: The derivation process in the supplementary material is detailed and clear. The ablation study and parameter sensitivity study are detailed and sufficient. Relation To Broader Scientific Literature: The paper analyzes the differences between proposed tight bound and current research. The paper describes the differences between SFG and other models. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: The paper proposes a novel tight bound that controls the weights of the model to achieve training stability and innovatively combines the generator with the graph model’s Lipschitz bound for adversarial-based fair GNN. This method addresses the stability-performance (utility and fairness) trade-off in learning fair graph representation, a challenge that has been ignored in prior works. Additionally, the paper proposed Distributionally Robust Optimization to avoid over-fitting when chasing fairness, and it’s a novel way to enhance stability and robustness. 
The paper clearly articulates the problem being addressed, the proposed solutions, and the derivation process. Weaknesses: The update of the weight in Eq (9) has not been explained, and the Multi-convex component in Figure 1 is unclear. Also, the analysis of the GNN complexity is unclear. Other Comments Or Suggestions: The relationship of $R_{max}$ Eq. (15) and $R_{sfg}$ in Eq. (16) has not been clearly analyzed. Questions For Authors: Please provide an explanation for the weight update in Eq (9) and the Multi-convex component in Figure 1. Please provide the relationship of $R_{max}$ in Eq. (15) and $R_{sfg}$ in Eq. (16). What is the computational method used to calculate the GNN complexity? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > W1/Q1 > The update of the weight in Eq (9) has not been explained, and the Multi-convex component in Figure 1 is unclear. Thank you for your valuable comment. We provide the following clarification of the weight update: we update the weights layer by layer, and $B_{new}^{(i,t)}$ always uses the updated weights at each step. As shown in the Multi-convex component of Figure 1, the non-convex optimization is transformed into a multi-convex optimization through a layer-by-layer update approach. The green background indicates that only the i-th layer is updated in the current step, while the light gray indicates that the layer is not updated in the current step. It can also be seen that this approach takes into account the presence of the mask weight ($w_g$). > W2/Q3 > > the analysis of the GNN complexity is unclear. Thank you for your careful review. We provide the following clarification of the GNN complexity analysis: we adopted the approach used by most GNNs [1,2] to calculate the GNN complexity. We analyze the complexity by decomposing the MPNN (i.e., Eq. (2)) into three high-level operations: (1) $Z^l = X^l W^l$ (feature transformation); (2) $X^{l+1} = A Z^l$ (neighborhood aggregation); (3) $\sigma(\cdot)$ (activation). We calculate the time and space complexity for each part, and adopt a sparse form based on the MPNN. As shown in Table 4, the complexity consists of multiple parts and is detailed. > Q3/OCS > > Please provide the relationship of $R_{max}$ in Eq. (15) and $R_{sfg}$ in Eq. (16). Thank you for your valuable suggestion. $R_{max}(f)$ denotes the worst-case risk, while $R_{sfg}(f)$ is an extension of $F(f, \eta)$ in the multi-view setting. Therefore, optimizing $R_{sfg}(f)$ is equivalent to optimizing the loss on the worst-case distribution $R_{max}(f)$, which improves the robustness and stability across different views. **References** [1] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S. Yu Philip. 2020. 
A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32, 1 (2020), 4–24. [2] Derrick Blakely, Jack Lanchantin, and Yanjun Qi. 2019. Time and Space Complexity of Graph Convolutional Networks. (2019). https://api.semanticscholar.org/CorpusID:269411067
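For concreteness, the three-operation decomposition of a message-passing layer used in the complexity analysis above can be sketched as follows. This is a generic dense layer for illustration, not the paper's implementation:

```python
import numpy as np

def mpnn_layer(A, X, W):
    """One message-passing layer, split into the three high-level
    operations used in the complexity analysis."""
    Z = X @ W                   # (1) feature transformation: O(n * d_in * d_out)
    H = A @ Z                   # (2) neighborhood aggregation: O(n^2 * d_out) dense,
                                #     O(|E| * d_out) when A is stored sparsely
    return np.maximum(H, 0.0)   # (3) activation sigma(.): O(n * d_out)

rng = np.random.default_rng(0)
n, d_in, d_out = 5, 4, 3
A = rng.random((n, n))                 # (normalized) adjacency, dense for illustration
X = rng.standard_normal((n, d_in))     # node features
W = rng.standard_normal((d_in, d_out)) # layer weights
X_next = mpnn_layer(A, X, W)
print(X_next.shape)  # (5, 3)
```

Storing $A$ in sparse form replaces the quadratic aggregation term with one proportional to the number of edges, which is the point of the sparse variant mentioned in the rebuttal.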
Summary: This paper focuses on addressing the challenge of training instability in adversarial-based fair GNN models. To mitigate this issue, it establishes a tight upper Lipschitz bound to regulate stability and leverages Distributionally Robust Optimization (DRO) to improve the encoder’s robustness across different fairness views. The key novelty lies in deriving a precise upper Lipschitz bound for a fair graph model with a generator and incorporating DRO to prevent fairness overfitting, ultimately ensuring stable training. Claims And Evidence: The claims presented in this paper are generally clear and straightforward. Additionally, the proof process is structured well and effectively communicated. Methods And Evaluation Criteria: I believe that enforcing a Lipschitz constraint is essential for ensuring the stability of fair GNNs, and deriving a tight upper bound in the presence of a generator is a novel contribution. Moreover, applying DRO offers an innovative approach to improving the stability of fair models. The datasets utilized in this paper are widely recognized within the fair graph research community. Regarding evaluation criteria, the paper assesses performance from the perspectives of stability, utility, and fairness, which I find to be a reasonable approach. Theoretical Claims: I have reviewed the paper’s key theoretical derivations, including Theorem 3.1 and Proposition 3.4, and did not find any issues with them. Experimental Designs Or Analyses: I reviewed the descriptions provided in the experimental settings, including the selection of the Lipschitz constant and the model’s key hyperparameters. The paper presents these details clearly, and the chosen hyperparameters are commonly used. In the experimental analysis, the paper examines the improvement margins in stability and utility and also analyzes a specific case, which I find to be a reasonable approach. 
Supplementary Material: I examined the derivation process of the key theorems in the appendix, along with the influence of key hyperparameters on the model's overall performance. This section is clearly presented, and I did not identify any issues. Relation To Broader Scientific Literature: This paper addresses the challenge of training stability in fair GNNs, a critical yet often overlooked issue in the fair graph research community. In the introduction and related work sections, the authors compare SFG with other fairness models, including those involving Lipschitz bounds, providing a well-reasoned discussion. Moreover, these concerns are relevant to a broader range of graph learning tasks. Essential References Not Discussed: none. Other Strengths And Weaknesses: Strengths: This paper is the first to recognize the instability issue in adversarial-based fair GNN models and introduces a tight Lipschitz bound to enhance stability. The derivation of the upper Lipschitz bound is both theoretically sound and practically justified, with a clear and well-structured explanation. SFG significantly improves stability while maintaining accuracy and fairness in fair graph models. Additionally, this work presents the first empirical study on applying DRO to graph fairness, strengthening the encoder’s robustness across different fair views. A thorough complexity comparison is also provided. Weaknesses: The training process could be explained more clearly. The meaning of the circles between Step 3 and Step 4 in Figure 1 is not sufficiently detailed. Other Comments Or Suggestions: The Lipschitz constant range should remain consistent throughout the paper. In Section 4.4, the constant range for the German dataset should be integrated into the experimental settings outlined in Section 4.1. Questions For Authors: Is Step 4 of SFG trained alternately or in a unified manner? 
Do the circles between Step 3 and Step 4 represent an expansion of the potential distribution range to better approximate the true distribution? Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: > W1/Q1 > The training process could be explained more clearly. Thank you for your valuable suggestion. We provide the following clear description of the training process: We train SFG in an alternating manner throughout the entire process. In each epoch, we first train the discriminator (i.e., $L_d$ in Eq. 42) to recognize the sensitive attributes from the generated mask, then we train the encoder and classifier using the BCE loss (i.e., $L_c$ in Eq. 43). Finally, we train the generator (i.e., $L_g$ in Eq. 45) and the encoder to achieve fairness. This process is repeated for all epochs. Therefore, SFG, which uses the Lipschitz constraint with mask weights together with DRO, can enhance stability. > W1/Q2 > > The meaning of the circles between Step 3 and Step 4 in Figure 1 is not sufficiently detailed. Thank you for your insightful comment. Our purpose in using DRO is to narrow the gap (i.e., Shrink Distribution Discrepancy) with the real data distribution by finding the worst-case distribution of the training data, thereby improving the robustness and stability with the mask weight. The circles between Step 3 and Step 4 in Figure 1 **indeed represent an expansion of the potential distribution range to better approximate the real data distribution**. > OCS > > The Lipschitz constant range should remain consistent throughout the paper Thank you for your careful review. We update the range of the constant $\tau$ to {1, 2, 4, 5, 6, 20, 50}.
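The alternating per-epoch schedule described in the rebuttal (discriminator, then encoder+classifier, then generator) can be sketched structurally. The three callables below are placeholders standing in for the $L_d$, $L_c$, and $L_g$ optimization steps, not the authors' code:

```python
def train_sfg(n_epochs, train_discriminator, train_encoder_classifier, train_generator):
    """Alternating schedule: in each epoch the three components are
    updated in sequence, as described in the rebuttal."""
    log = []
    for _ in range(n_epochs):
        train_discriminator()       # L_d: learn to recognize sensitive attributes
        log.append("D")
        train_encoder_classifier()  # L_c: BCE loss for utility
        log.append("EC")
        train_generator()           # L_g: fairness step (generator + encoder)
        log.append("G")
    return log

order = train_sfg(2, lambda: None, lambda: None, lambda: None)
print(order)  # ['D', 'EC', 'G', 'D', 'EC', 'G']
```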
Summary: This paper presents a novel approach, "Stable Fair Graph Neural Network (SFG)", that addresses the issue of instability in adversarial-based fair graph representation learning. The main contributions and findings are as follows: (1) The authors derive a tight, easy-to-compute upper Lipschitz bound for the composite model that includes both the sensitive attribute mask generator and the GNN encoder. (2) The paper introduces a stochastic projected subgradient algorithm that employs a block-coordinate update mechanism. Claims And Evidence: The tight upper Lipschitz bound, which ensures the stability of the framework, is well supported by the developed theorems (Proposition 3.4). However, the claim that SFG effectively constrains weight fluctuations is not convincingly demonstrated in Figure 5. It is difficult to determine which case exhibits lower fluctuation since subfigures (a) and (b) use different legend scales. Additionally, using the absolute change in weights as an indicator of fluctuation would be a more reasonable approach. Methods And Evaluation Criteria: The chosen datasets and metrics are appropriate for the evaluation. However, using raw weight changes in Figure 5 to reflect fluctuation may not be entirely suitable; the absolute value of the weight changes would be more appropriate. Theoretical Claims: I did not comprehensively evaluate the correctness of the proofs. Experimental Designs Or Analyses: This paper claims that the Lipschitz bound limits the range of weight changes and leads to the stability of the model. It might therefore be important to include an important baseline: training the model with a smaller learning rate, which could also lead to small weight changes. Supplementary Material: No supplementary material. Relation To Broader Scientific Literature: https://arxiv.org/pdf/2005.02929 Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: (1) This paper developed a theorem to justify the method.
(2) The paper is generally well written. Weaknesses: (1) The ablations in Figure 3 show that SFG did not consistently achieve the best performance in AUC, e.g., on the German and Credit datasets. (2) The main results in Table 1 show that SFG cannot consistently achieve the best performance across the datasets, and the improvements are not significant either. Other Comments Or Suggestions: N/A Questions For Authors: (1) What is the architecture of the GNN used for SFG? Can this method also work well for other graph neural networks, such as GraphSAGE or GAT? (2) In Table 1, the proposed SFG did not consistently achieve the best performance across the datasets; what is the reason it performs worse on the German dataset? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable suggestion and insightful comment. > CE. The indicator of weight fluctuation > the absolute of weight changes would be more suitable. We list **the absolute weight changes (FairVGNN: 0.079, SFG: 0.018)** between two epochs in the following table (both are uniformly rescaled to the range [0, 1]), and it will be supplemented in the revised version to facilitate readers' understanding. Figure 5 also illustrates from both macro and micro perspectives that SFG can constrain weight fluctuations. From a macro perspective, compared to the weight range of **[-1, 0.75]** (FairVGNN) in Figure 5(a), the range of **[0, 0.75]** (our SFG) in 5(b) exhibits **smaller overall fluctuations (43% of FairVGNN)**. From a micro perspective, the relative weight change (FairVGNN: **7.64%**, SFG: **1.08%**) in the red boxes is calculated as the weight difference between two epochs **divided by the total range** of variation. This average comparison also reflects that our SFG **exhibits smaller fluctuations (14% of FairVGNN) between two epochs**.

* The weight changes in red box 2

| Model | FairVGNN | SFG |
|---|---|---|
| epoch 119 | 0.663, 0.703, 0.863 | 0.320, 0.080, 0.293 |
| epoch 120 | 0.754, 0.777, 0.790 | 0.293, 0.093, 0.307 |
| absolute change | 0.091, 0.074, 0.073 | **0.027**, **0.013**, **0.014** |

> ED. Baseline with smaller learning rate

We added a baseline with a smaller learning rate (**1e-4**, FairVGNN); the three **nearly unchanged weights** in the figure (https://anonymous.4open.science/r/SFG-559B/small-lr.pdf) demonstrate that **a smaller learning rate can cause the optimization process to get stuck at saddle points**, preventing effective learning of the weights.
Moreover, in the gradient descent algorithm $w=w_0-\eta \Delta w$, **large gradients can still lead to significant weight changes** (please refer to the figure https://anonymous.4open.science/r/SFG-559B/small-lr-2.pdf). This is also the reason why we propose the Lipschitz constraint. Additionally, a smaller learning rate also reduces the utility performance of the baseline (e.g., by **4.17% on Credit**). Thank you for this suggestion; it enriched our discussion on training stability and will be incorporated into the revised version. > W1. The Ablations Performance > The ablations in Table 3 show that SFG did not consistently achieve the best performance in AUC. Compared to our ablation model **SFG w/o ct**, SFG is the result of a comprehensive consideration of stability, utility, and fairness. As shown in Figure 7, SFG significantly outperforms SFG w/o ct in terms of training stability, which can inspire further research on the training stability of adversarial-based fair models, leading to more reliable and trustworthy models. > Q1. The architecture of the GNN As elaborated in lines 180 and 1080, the architecture of the GNN is **GraphSAGE** in our implementation. The results using **GCN** as the backbone in the following table also prove the effectiveness of our method (SFG outperforms the baseline by **118%** on Bail and **90%** on Credit for $\Delta EO$). Our tight upper bound (Eq. 5) of SFG is derived based on **the general message-passing framework** (Eq. 1) and is therefore applicable to most GNN backbones, such as GCN and GraphSAGE. Additionally, since the calculation of the GAT coefficients involves node representations, it is not applicable to our bound.
| Dataset | Model | Acc | AUC | $\Delta_{DP}$ | $\Delta_{EO}$ |
|---|---|---|---|---|---|
| Bail | FairVGNN | 84.76 | 85.62 | 6.45 | 4.89 |
| Bail | SFG | 86.43 | 85.93 | 4.98 | 2.24 |
| Credit | FairVGNN | 78.06 | 71.36 | 6.21 | 4.67 |
| Credit | SFG | 79.74 | 72.06 | 4.54 | 2.45 |

> W2/Q2. The Consistent Performance > In Table 1, the proposed SFG did not consistently achieve the best performance across the datasets; what is the reason it performs worse on the German dataset?

The reason is that increasing the number of parameters (FairSAD's encoder has **four times** the parameters of SFG) helps improve performance on **small datasets**, but fails on **large datasets** such as Bail and Credit. When FairSAD uses the same number ($1\times$) of parameters as SFG on German, SFG outperforms FairSAD by 27% in AUC. The following table also demonstrates that SFG outperforms FairSAD ($2\times$) and FairSAD ($3\times$), which use two and three times the number of parameters, by 9.04% and 8.25%, respectively.

| Model | AUC |
|---|---|
| SFG ($1\times$) | 69.38 ± 4.77 |
| FairSAD ($1\times$) | 54.32 ± 2.55 |
| FairSAD ($2\times$) | 63.63 ± 6.99 |
| FairSAD ($3\times$) | 64.09 ± 3.19 |

--- Rebuttal Comment 1.1: Comment: I thank the authors for their comprehensive and detailed responses to my questions. After reading their rebuttal, I believe the authors have addressed all the questions I raised. Therefore, I raise the recommendation to 3. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your thoughtful consideration of our rebuttals.
Your valuable suggestions and insightful comments have deepened our understanding for training stability of adversarial-based fair graph models, which significantly enhances the necessity and importance of our theoretical framework. We are truly grateful for your engagement and for recognizing our work's contributions!
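The absolute-change indicator discussed in this thread can be reproduced from the "red box 2" values quoted in the rebuttal. A small sketch follows; since the normalization behind the quoted relative percentages is not fully specified, only the absolute change is computed here:

```python
import numpy as np

def mean_abs_change(w_prev, w_next):
    """Mean absolute weight change between two consecutive epochs."""
    return float(np.mean(np.abs(np.asarray(w_next) - np.asarray(w_prev))))

# SFG weights from "red box 2" of the rebuttal table (epochs 119 and 120).
w119 = [0.320, 0.080, 0.293]
w120 = [0.293, 0.093, 0.307]
abs_c = mean_abs_change(w119, w120)
print(round(abs_c, 3))  # 0.018, matching the SFG value quoted in the rebuttal
```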
Physics-Informed DeepONets for drift-diffusion on metric graphs: simulation and parameter identification
Accept (poster)
Summary: The paper introduces a Physics-Informed Deep Operator Network (DeepONet) approach for solving the drift-diffusion equation on metric graphs. The authors decompose the graph into different types of edge domains, each represented by a pre-trained DeepONet sub-model. These sub-models are assembled using physical coupling conditions at the nodes to form a global solution. The proposed model efficiently simulates forward propagation in complex networks and accurately identifies parameters for inverse problems. Experimental validation demonstrates that the method exhibits strong generalization and robustness across various graph structures. Claims And Evidence: The main claims of the paper are well-supported by evidence: 1. The proposed modular domain decomposition approach based on DeepONet is rigorously validated through theoretical insights and numerical experiments. 2. The model generalizes well to different metric graph structures, as demonstrated by extensive numerical results. The error analysis is well-documented and supports the stated claims. 3. The parameter identification experiments systematically analyze the impact of noise, confirming the robustness and accuracy of the proposed optimization framework. Methods And Evaluation Criteria: The methods and evaluation criteria in the paper are clear and appropriate: 1. The paper thoroughly describes the drift-diffusion equation modeling, the DeepONet training process with physical constraints, and the global optimization strategy based on node coupling conditions. 2. The finite volume method (FVM) is used as the reference solution, which is a well-established numerical approach, ensuring a rigorous evaluation. 3. The evaluation metrics, including spatiotemporal root mean square error (RMSE) and relative error, are widely accepted in PDE numerical approximation tasks.
Theoretical Claims: The paper does not present rigorous theoretical proofs but primarily focuses on methodological contributions and empirical validation. Experimental Designs Or Analyses: The experimental design is well-structured and methodologically sound: 1. The experiments cover multiple test graphs with varying topologies, ensuring strong generalization capabilities. 2. The initial and boundary conditions are randomly generated using Gaussian processes, providing a diverse range of scenarios. 3. Detailed error quantification is provided, demonstrating the model’s robustness under different noise levels. 4. The parameter identification experiment is designed realistically using simulated sensor data, effectively evaluating performance under noisy conditions. Supplementary Material: The paper does not explicitly provide additional supplementary materials. Relation To Broader Scientific Literature: The paper clearly situates its contributions within the broader literature: 1. It builds upon existing DeepONet research (e.g., Lu et al.) and extends it to physics-informed PDE learning. 2. The paper thoroughly discusses advancements in physics-informed neural networks (PINNs) and operator learning approaches. 3. The key limitations of standard PINNs (e.g., the need for retraining for each new problem) are well-articulated, highlighting the advantages of the proposed method in terms of reusability and generalization. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: • The paper proposes a modular, scalable, and highly generalizable computational framework for PDE solutions on complex networks. • It introduces a unified framework for solving both forward and inverse problems efficiently. • The experiments are extensive and well-documented, ensuring reproducibility and clarity. Weaknesses: • The paper does not provide a quantitative comparison of computational efficiency against traditional numerical methods.
• No code or software repository is provided at this stage (though the authors mention future open-source release). • The study focuses on a relatively simple nonlinear drift-diffusion model (logistic flux function), and more complex equations should be explored in future work. Other Comments Or Suggestions: To further strengthen the paper, I recommend including a quantitative analysis of computational costs, comparing the method’s efficiency against traditional solvers. Additionally, the authors should consider making their code and dataset publicly available to facilitate adoption and further research in this area. Finally, extending the approach to more complex nonlinear PDEs would enhance its applicability in real-world scenarios. Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Quantitative comparison of computational efficiency against traditional numerical methods** For the pure simulation task, the FVM solver is typically faster than our method. This is a caveat of most physics-informed neural network and operator network approaches. However, our methodology shines in the inverse problem setting, where dedicated approaches are needed for each problem type when using traditional numerical methods. To the best of our knowledge, unfortunately, such solvers do not exist for our specific setting and a direct comparison is infeasible. However, we estimate the complexity involved in both approaches for the inverse problem: FVM: $O(N_{GradientSteps} \cdot N_t \cdot (n_e \cdot N_{edges} + N_{vertices})^2)$; Ours: $O(N_{GradientSteps} \cdot (3 n_\beta + 1) \cdot N_{edges})$, and thus a reduction from quadratic to linear complexity. Here $N_{GradientSteps}$ is the number of gradient steps needed, $N_t$ the number of time steps for the time discretization, $n_e$ the number of grid points on an edge for the FVM scheme, $n_\beta$ the number of parameters for inflow and outflow edges, and $N_{vertices}$ and $N_{edges}$ the number of vertices and edges of the metric graph. **Release of code** We have prepared a repository that we will make public as soon as possible. We would also be happy to share this as an anonymous repository with the reviewers if permitted. However, according to the ICML rules, we are currently allowed to share only figures in an anonymous repository, which we did; see the reply to Reviewer 1. **Extension to more complex equations and further real-world applications** We are sorry for having been very brief on further applications of drift-diffusion equations on metric graphs. Indeed, beyond the toy example of traffic flow, at least two more applications come to mind: the first is the transport of cargo inside of biological cells, which is realized by molecular motors traveling along a network of one-dimensional filaments (i.e., the graph in our setting).
Previous work by different authors has demonstrated that, starting from an accurate microscopic model, a mean-field limit produces exactly the type of drift-diffusion equations that we consider. The second arises when studying transport in gas networks, where (non-linear variants of) drift-diffusion equations also appear as approximations to the (otherwise hyperbolic) governing equations. They go under the name ISO3 model for gas transport. We will add references to these applications in a possible revision of the manuscript. Future work for other types of equations could include hyperbolic equations for gas transport, with no diffusion, which is currently not discussed in the manuscript.
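The two complexity estimates quoted above can be compared numerically. The problem sizes below are illustrative assumptions (constants are dropped, so the result is an order-of-magnitude ratio only, not a measured speed-up):

```python
def fvm_cost(n_grad, n_t, n_e, n_edges, n_vertices):
    # O(N_GradientSteps * N_t * (n_e * N_edges + N_vertices)^2)
    return n_grad * n_t * (n_e * n_edges + n_vertices) ** 2

def surrogate_cost(n_grad, n_beta, n_edges):
    # O(N_GradientSteps * (3 * n_beta + 1) * N_edges)
    return n_grad * (3 * n_beta + 1) * n_edges

# Illustrative sizes: 306-edge graph, 150 vertices, 50 grid points per edge,
# 100 time steps, 10 coupling parameters per edge, 1000 gradient steps.
fvm = fvm_cost(1000, 100, 50, 306, 150)
ours = surrogate_cost(1000, 10, 306)
print(fvm // ours)  # ratio of a few million for these assumed sizes
```

The quadratic dependence on the total number of grid points is what makes the FVM-based inverse problem expensive as the graph grows.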
Summary: ## Summary of paper This paper discusses how functionals can be flowed on a metric graph by learning surrogates of drift-diffusion equations. The method uses a DeepONet backbone as the physics-informed dynamics surrogate model to learn how observations at inflow vertices can be pushed to the outflow vertices following a directed knowledge graph. Training of the drift-diffusion surrogate was done via small model graphs (c.f. Figure 1) to obtain inflow, outflow and inner operators. During test time, the learned models are extended to complete or complex graphs (c.f. Figure 4). To extend the learned surrogates over complex graphs, the authors propose to learn the unknown outflow condition via RBF interpolation, and hence a plug-and-play style additional loss is minimized, which the authors claim to be efficient and sufficient to generalize. Finally, the inverse problem over metric graphs can be learned efficiently by introducing a measurement loss given trained surrogate models. ## Score (ICML should be at scale 10) * Originality: 6/10 * Soundness: 4/10 * Presentation: 6/10 ## Pros * A novel operator learning idea over metric graphs. * Relatively time-efficient computation to generalize to larger graphs. ## Cons * Though self-consistent, the authors only did self-comparison. The only things varied are the parameters of the surrogates, the measurement noise, and the number of training samples. Different backbones, such as FNOs, are ruled out. Baseline graph-based neural networks, e.g. GNNs, are not compared either. * Transfer from simple graphs to complex graphs requires learning flow values at each vertex, implying $n_{\beta}$ or $2n_{\beta}$ parameters for each edge. This implies that as the number of edges grows, the complexity grows linearly. How can the extension be kept robust when the graph grows to large scale? The training data is also generated under a strong GP prior, so fitting an RBF interpolant implicitly uses that prior knowledge. How to reconcile this correlation?
## Questions * Theorem 2.2 stated in lines 116-140 lacks a proof. Consider explaining it further in the appendix rather than saying it combines the proofs of two papers. * Line 276: Figure 4 G2 is identical to Figure 1 G2. Are there more complex examples beyond graphs already seen at training time? Multi-inflow multi-outflow would be more persuasive when considering different graph structures. * Line 290, Figure 5: "Almost indistinguishable" is a very bold and incorrect claim. The left and right columns are visibly different. * What are the potential applications of the drift-diffusion model over a graph? The only example is a traffic network toy example. * What is the violation of the continuity and Kirchhoff-Neumann conditions, respectively, at test time? Claims And Evidence: see above Methods And Evaluation Criteria: See above Theoretical Claims: See above Experimental Designs Or Analyses: see above Supplementary Material: see above Relation To Broader Scientific Literature: see above Essential References Not Discussed: see above Other Strengths And Weaknesses: see above Other Comments Or Suggestions: see above Questions For Authors: see above Ethics Expertise Needed: ['Discrimination / Bias / Fairness Concerns', 'Inappropriate Potential Applications & Impact (e.g., human rights concerns)', 'Privacy and Security', 'Legal Compliance (e.g., GDPR, copyright, terms of use)'] Ethical Review Concerns: none Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Comparison to alternative operator learning frameworks** Thank you for this remark. In fact, the DeepONet shares a lot of similarity with the FNO approach, as pointed out in *Kovachki, Nikola, et al. "Neural operator: Learning maps between function spaces with applications to PDEs." Journal of Machine Learning Research 24.89 (2023): 1-97*. As a result, one could apply our surrogate coupling technique with a different choice of architecture to obtain an FNO setup and vice versa. Our methodology of coupling surrogate models based on the graph topology can be used with different operator learning methods, e.g., (physics-informed) DeepONets, FNO, etc. The case of general GNNs is a little more difficult to compare, as the main GNN architectures aim at learning hidden embeddings for the input data and not necessarily at solving PDEs on edges. Alternatively, there is the Graph Neural Operator technique that can act in the same way as the DeepONet or FNO for one edge operator, thus creating a surrogate operator; see *Li, Zongyi, et al. "Neural operator: Graph kernel network for partial differential equations." arXiv preprint arXiv:2003.03485 (2020)*. In this setup the graph for the GNN should not be confused with the graph that the PDE model is posed on, e.g. the possible street network. For GNO one uses a different (auxiliary) graph approximating the integral kernel function representing the PDE on ONE edge. As such it would certainly be possible to learn a Graph Neural Operator for one edge and then use the problem metric graph for coupling these GNO models. We will discuss these frameworks in a possible revision. **Robustness for larger graphs** This is a very interesting point. The local accuracy of our model is guaranteed by the accuracy of the surrogate model, in our case the physics-informed DeepONet, which makes sure that the PDE is resolved up to the precision of the training of this model.
The overall accuracy of the coupled surrogate models depends on the solution accuracy at the vertices, only, which we enforce for the least-squares solver. The scaling to large networks therefore depends on the robustness of the least squares solver and as we here rely on the JAX implementation of an ADAM SGD method this should scale well with increasing network size utilizing the computational power of the underlying GPU. In the reply to Reviewer 1, we show the applicability of our approach to larger networks with 1034 nonlinear coupled PDEs. **Loss of violation of continuity and Kirchhoff-Neumann condition** We will include a table illustrating the violation of the continuity and the Kirchhoff-Neumann condition averaged over multiple runs in a possible revision. Both terms are defined in lines 290-295 of the manuscript. For a better comparison between problems that differ in scale, we would like to change the definition slightly and average the values over the number of inner vertices (outer sum) and the number of adjacent edges (inner sum). The same applies to the squared measurement misfit terms defined in lines 356-361 of the manuscript, where a division by the number of measurements and by the number of edges seems appropriate. 
The following examples for inverse problems on graphs with 102 and 306 edges show this:

| Problem size | Total loss | Continuity | Flux condition | Measure val | Measure flux |
|---|---|---|---|---|---|
| 102 w/ averaging | 2.04E-02 | 2.70E-04 | 2.37E-04 | 9.95E-03 | 9.91E-03 |
| 306 w/ averaging | 2.01E-02 | 3.33E-04 | 2.01E-04 | 9.78E-03 | 9.79E-03 |
| 102 w/o averaging | 2.14E+00 | 4.44E-02 | 4.06E-02 | 1.03E+00 | 1.02E+00 |
| 306 w/o averaging | 6.36E+00 | 1.85E-01 | 1.02E-01 | 3.04E+00 | 3.04E+00 |

**Discussion of strong GP prior** While we use the Gaussian (RBF) kernel in several places, our main safeguard against an inverse crime was to choose different parameters for the generation of training data (inflow/outflow/initial: length scale $\ell=0.5$ and $512$ Gaussian centres) than for the random data generated for simulation (inflow/outflow/initial: length scale $\ell=0.4$ and $468$ Gaussian centres). In the inverse setting, we employ a length scale of $\ell=0.2$ and only $10$ Gaussian centres to learn the flow couplings and unknown initial conditions. **Further applications** Please see the reply to the third reviewer. **Proof of Theorem 2.2** Thank you for your remark; we agree, have formulated the proof, and will add it to the appendix (and would also be happy to provide it already now via an anonymous link). The proof follows these steps: 1) reformulation of the problem in entropy variables; 2) existence of iterates for a time-discrete regularized approximation; 3) a priori estimates via time-discrete entropy dissipation; 4) compactness and passing to the limit. We will adjust Figure 4 in a possible revision to reflect larger networks.
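The random-data construction mentioned in the rebuttal (a length scale $\ell$ and a number of Gaussian centres) can be sketched under the assumption that each random function is a weighted sum of RBF bumps with random weights; the paper's exact recipe may differ:

```python
import numpy as np

def random_rbf_function(x, n_centres, length_scale, rng):
    """Random smooth function on [0, 1]: a weighted sum of Gaussian (RBF)
    bumps with random centres and weights (an assumed construction)."""
    centres = rng.uniform(0.0, 1.0, n_centres)
    weights = rng.standard_normal(n_centres) / np.sqrt(n_centres)
    # (len(x), n_centres) matrix of RBF evaluations
    phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * length_scale**2))
    return phi @ weights

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 101)
# Parameter choices quoted in the rebuttal: training vs. inverse setting.
u_train = random_rbf_function(x, n_centres=512, length_scale=0.5, rng=rng)
u_inverse = random_rbf_function(x, n_centres=10, length_scale=0.2, rng=rng)
print(u_train.shape, u_inverse.shape)  # (101,) (101,)
```

Using a shorter length scale and far fewer centres in the inverse setting, as the rebuttal describes, keeps the test data from matching the training prior too closely.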
Summary: The paper builds a physics-informed DeepONet setup for solving drift-diffusion PDEs on metric graphs. They train separate models for inflow, inner, and outflow edges, then stitch them together using a domain decomposition trick. Once trained, these models can be reused on any graph—kind of like Lego blocks. It works well for both simulation and inverse problems, and you get all that without retraining or extra overhead. Clean, efficient, and scalable. ## update after rebuttal I have already given accept Claims And Evidence: Claims in the paper are the following: 1. Novel Lego-like domain decomposition approach to solve PDEs on graphs 2. Graph-agnostic training of the edge surrogate DEEPONET model based on inner, inflow, and outflow edges 3. Novel DEEPONET architecture enables robust model evaluation Evidence: 1. There is evidence 2. I'm not sure how the training is Graph-agnostic when experiments are only set on three particular types of graphs. 3. Not familiar with related work, so I can't comment on the novelty of the architecture. Methods And Evaluation Criteria: Authors propose L2 relative errors and plots to visualize predictions of the model. My only comments related to the evaluation would be about plots. For instance, in the Figures 5 and 8. It is not clear what each axis represents. A bit better formatting would make it easier to understand. Theoretical Claims: The theoretical proof seems to be correct. However, I am unsure about that since I am unfamiliar with the field. Experimental Designs Or Analyses: I checked. The experimental setup looks solid—they use physics-informed losses, train on FVM-generated data, and validate on unseen graphs. The inverse problem setup is clean, too—just adds data-fitting terms. Results hold up even with noise. No major issues. Supplementary Material: Reviewed supplementary materials: 1. Experimental part (Loss plots) 2. 
Theoretical part (Numerical solvers) Relation To Broader Scientific Literature: Authors claim that the DEEPONET approach should be helpful for physics-informed neural networks (PINNS), which are used in many application areas such as fluid dynamics, continuum mechanics and elastodynamics, inverse problems, fractional advection-diffusion equations, stochastic advection-diffusion-reaction equations, stochastic differential equations, and power systems. Given such a wide range of PINNS applications, authors' contributions must be pretty significant. Essential References Not Discussed: I cannot comment on this as I am not familiar with the field. Other Strengths And Weaknesses: ## Strengths: 1. Strong theoretical rigor—clear formulation of the PDE on metric graphs and sound use of DeepONets with physics-informed losses. 2. Solid experimental validation—shows generalization to unseen graphs and accurate inverse modeling, even with noisy data. ## Weaknesses: 1. Some plots (e.g., solution comparisons) lack clarity or aren’t very informative without context. 2. Missing ablation on model generalization to significantly different graph structures. 3. Could use more discussion on limitations—e.g., scalability to larger, real-world networks or non-drift-diffusion PDEs. Other Comments Or Suggestions: 1. Not sure how the time variable t was introduced right after formula (4)—it kind of just appears without explanation. A quick clarification or reminder that the PDE is time-dependent would help avoid confusion. 2. Minor typo: “traing” should be “training” in the conclusion Questions For Authors: 1. How well does the method generalize to very different graph topologies than those used in training? This would help assess scalability and robustness. Code Of Conduct: Affirmed. Overall Recommendation: 4
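As background for the review above: a DeepONet evaluates a branch net on samples of the input function and a trunk net on query coordinates, combining them via an inner product. A minimal sketch with random, untrained weights follows (an illustration of the general construction, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(widths):
    """Random-weight tanh MLP -- a stand-in for a trained branch/trunk net."""
    Ws = [rng.standard_normal((a, b)) / np.sqrt(a)
          for a, b in zip(widths[:-1], widths[1:])]
    def forward(x):
        for W in Ws[:-1]:
            x = np.tanh(x @ W)
        return x @ Ws[-1]
    return forward

m, p = 32, 16                     # sensor points of the input function, latent width
branch = make_mlp([m, 64, p])     # encodes u(x_1), ..., u(x_m), e.g. inflow data
trunk = make_mlp([2, 64, p])      # encodes a space-time query point (x, t)
u = rng.standard_normal((1, m))   # one sampled input function
y = rng.random((100, 2))          # 100 space-time query points
G_u_y = (branch(u) * trunk(y)).sum(axis=-1)   # DeepONet output G(u)(y)
print(G_u_y.shape)  # (100,)
```

The modular setup the review describes trains one such operator per edge type (inflow/inner/outflow) and couples their outputs at the vertices.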
Rebuttal 1: Rebuttal: **Experiments on larger graphs** We built larger networks with more edges using a directed network construction of varying sizes (102, 306, 1034 edges) and applied our methodology to them. This is easily done by providing the adjacency structure and the inflow and outflow nodes. We also included **multiple inflow and outflow nodes** in our network. We were thus able to solve simulation and inverse problems (noise 0.01) for examples with more than 1000 nonlinear coupled edge-PDE models, with the following errors:
- inference setting: L2 error of the solution below 8.80e-03 (abs) and 2.13e-02 (rel);
- inverse setting: L2 error of the solution below 1.48e-02 (abs) and 3.65e-02 (rel);
- inverse setting: L2 error of the initial condition below 4.53e-02 (abs) and 9.92e-02 (rel);
- inverse setting: L2 error of the edge velocity below 2.63e-02 (abs) and 2.45e-02 (rel).

In a possible revision we will include these results via tables of error measures and error plots for the larger PDE network, averaged over many runs, similar to Tables 4 and 5. Figures of the graphs and solutions can be found in the anonymous repository: https://github.com/anonymous-icml2025-234ailp/icml-2025-rev We will include the error plots indicated by *difference3d.png shown in this repository in the revised version of the manuscript. **Figures** We added an axis description to Figure 5 (see anonymous GitHub), where the graph is embedded into two-dimensional space for the $\xi_1$ and $\xi_2$ axes, which at this point are not associated with any physical space. The $z$ axis represents the function value of our model at different points in time. An updated figure can be found in the anonymous GitHub repository. Regarding Figure 8, the y-axis represents the loss values of the various terms and the x-axis corresponds to the training epoch. We will give a more detailed explanation in the caption in a possible revision.
**Discussion of limitations, different types of PDEs** Indeed, we consider directed networks, which is often natural when PDEs are posed on them. It would of course be possible to formulate certain PDEs on undirected networks, and our methodology should then also apply. For different PDEs the choice of the PDE operator is crucial: a drift-diffusion problem with a large diffusion coefficient is likely very easy to learn, while a purely hyperbolic equation (no diffusion) with strong transport and/or nonlinearities will require more tailoring of the surrogate architecture. We will emphasize this in a possible revision, but a detailed study of different PDE models will be a topic of future work. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the clarification; I have no further questions.
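The rebuttal above reports errors as absolute and relative L2 norms. As a hedged illustration (the authors' exact discretization and quadrature are not specified here), such metrics are conventionally computed from solution values sampled at the same points:

```python
import math

def l2_errors(pred, ref):
    """Absolute and relative discrete L2 errors between a surrogate
    prediction and a reference solution sampled at the same points."""
    abs_err = math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)))
    rel_err = abs_err / math.sqrt(sum(r ** 2 for r in ref))
    return abs_err, rel_err

# Hypothetical values, for illustration only.
abs_err, rel_err = l2_errors([1.0, 2.0, 2.0], [1.0, 2.0, 3.0])
print(abs_err, rel_err)  # abs = 1.0, rel = 1 / sqrt(14) ≈ 0.267
```

The reported pairs (e.g., 8.80e-03 abs vs. 2.13e-02 rel) are related in exactly this way: the relative error rescales the absolute error by the norm of the reference solution.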
Empirical Privacy Variance
Accept (poster)
Summary: For a variety of memorization-based privacy attacks on language modeling, this paper studies how the average attack success can vary across hyperparameters that are meant to give the same DP guarantees. This phenomenon is observed across model sizes, architectures, datasets, and DP guarantees. Several correlations are observed, leading to proposed heuristics for choosing hyperparameters which the authors show improve over previous design principles at mitigating the average attack success rate. Further evidence of varying privacy guarantees is shown by instantiating privacy auditing. ## Update after Rebuttal The authors performed experiments that showed the phenomenon was not a consequence of per-instance privacy variance, answered my other concerns, and provided discussion I found very helpful. I raised my score to an accept given this. I have further raised my score to a 5 as I believe the other reviewers' concerns have been addressed. I hope this reflects my belief that this paper has been conducted with significant rigour and provides valuable insights/motivation for the field. Claims And Evidence: There seems to be a subtle difference in what empirical privacy means in this paper (an average over a set), and what privacy typically means in the literature (a bound on maximum leakage). I believe being more precise will aid future reproducibility, help explain inconsistencies with audit attacks, and make the motivation for policy decisions more precise. I describe this in more detail below. Privacy is typically defined and empirically measured as a worst-case over datapoints (see the auditing literature), but “empirical privacy” in this paper is their average (across secrets) attack success rate. Admittedly the average is over a set of secrets (or author-attribute pairs for ToFU) that was filtered and so could have implicitly been pushed to worst-case points. But a more direct method would have been to report just the worst leakage over the secrets.
Moreover, at least in the case of ToFU, the filtered set still represents a quarter of the original set. Hence, it seems more accurate to say this paper is studying the average of the per-instance guarantees [1] over a relatively large subset of the training datapoints, which is known to be quite different from the typical DP guarantees when using DP-SGD [2]. I would recommend the authors make the difference between their use of empirical privacy (an average over a set of datapoints) and the worst-case literature clearer, so as to also be clearer about the claims. This difference may also explain why their empirical privacy did not correlate with privacy audit results: the privacy audits are themselves testing per-instance guarantees over a different set of datapoints (also using a different attack). This said, as argued by the authors' choice of attacks and cited literature, the behaviour of leakage for such average-case users may be relevant in designing systems to not be too bad in the worst case (DP guarantees) but also much better on average (as measured by the average in this paper). [1] Wang, Yu-Xiang. "Per-instance differential privacy." Journal of Privacy and Confidentiality 9.1 (2019). [2] Thudi, Anvith, et al. "Gradients look alike: Sensitivity is often overestimated in {DP-SGD}." 33rd USENIX Security Symposium (USENIX Security 24). 2024. Methods And Evaluation Criteria: The datasets used make sense for the evaluation of privacy leakage. The evaluation methods also make sense for studying average leakage over data points; privacy is typically defined as worst-case leakage over data points, so the distinction could be made clearer (see claims section). Theoretical Claims: No theory (and hence no proofs) is presented. Experimental Designs Or Analyses: I read Appendix E which described additional experimental details, alongside Appendix H.1 which described auditing details, and found no issues.
Supplementary Material: I read appendices A-E and H.1 and found no issues. Relation To Broader Scientific Literature: Other notions of variance in privacy leakage have been observed, but these were in the context of leakage for individual data points (see a memorization study here [3]). This paper studies variation from hyperparameters for a fixed set of datapoints, and observes predictable behaviours over the hyperparameter choices. This seems novel in the context of DP to the best of my knowledge. [3] Carlini, Nicholas, et al. "The privacy onion effect: Memorization is relative." Advances in Neural Information Processing Systems 35 (2022): 13263-13276. Essential References Not Discussed: None that I noticed. Other Strengths And Weaknesses: Strengths: 1) The experimental study is novel and thorough 2) The paper is mostly clear (see suggestions and previous comments) 3) The idea that average leakage can vary predictably with hyperparameters may have consequences for future decisions in standardizing DP Weakness: 1) General ambiguity in what is measured in this paper by “privacy” versus DP guarantees. Other Comments Or Suggestions: Below are suggestions to help clarify aspects of the paper. I look forward to hearing what the authors think, and am happy to consider raising my score given their response. 1) Consider adding discussion of the differences between worst-case DP analysis and the expected privacy leakage experienced by users studied in this paper. Ideally this would be before or at the beginning of the “empirical privacy measures” paragraph to make the notion of privacy studied (in the context of the literature) more well-defined. Terms like per-instance privacy express the leakage over an individual datapoint [1], and it's known these values behave quite differently from typical worst-case DP guarantees [2], posing limitations in using per-instance studies to understand worst-case guarantees (see references earlier in the review).
2) The above can also then be incorporated into the privacy standardization discussion in “why is this relevant”; I believe the argument being made in this paper is that standardization based only on worst-case leakage does not account for variation in expected leakage over datapoints, and this could now be made clearer/more precise in the text. 3) On the comparison to privacy auditing, there are two (clear) changes to the previous memorization methodology. Both methods are evaluating per-instance guarantees on different sets of points, and also using different attacks. Maybe a nice control is to run the auditing attack (or the proposed memorization attacks in the paper) on the other set of datapoints to disentangle what caused the lack of correlation (is it the attack or the set of points?). This would also shed light on the hypothesis: I believe the experiments so far have suggested the empirical privacy variations were attack-independent (Figure 2), so one would hope this is still true for the loss attack. Questions For Authors: 1) Could the authors elaborate on what the “one to many $\varepsilon$-to-risk relationship” for composition is describing? I believe for a given $\varepsilon$ and $\delta$ you can assume $\sigma$ (ideally) is chosen to be the minimal per-step noise? Unless you’re discussing changing $\sigma$ per-step, so many sequences of $\sigma$ could give the same final $\varepsilon$? 2) Could the authors also clarify what “$\varepsilon$ cannot be used for certification” means in the “why is this relevant” paragraph? In particular I found the phrasing saying a model calibrated to $\varepsilon$ cannot ensure compliance for another model confusing. I’m guessing this is a typo; does it make sense for a DP model to be used to certify another DP model? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for initiating the interesting discussions. Let us begin by explaining 1) what we mean by “empirical privacy”, 2) our empirical privacy measures, and 3) the choice of computing the average score. 1) As noted in the introduction, we take a practical perspective on empirical privacy through users’ perceptions of model behaviors. This is motivated by the gap between the increasing use of DP in LLMs, and the tangible privacy risks that arise in user interactions. We believe empirical privacy should hold richer meanings in this context, extending beyond what is usually referred to (privacy attacks). 2) Our considered empirical privacy measures largely follow this standpoint. They are based on memorization and are specific to the generative nature of LLMs, rather than being generic such as the AUC of membership inference which can be applied to any ML model. 3) Finally, we average the empirical privacy scores over a set of secrets because i) this is how these metrics are evaluated in the literature (e.g., the works that proposed ACR and VMR); ii) we aim to ensure that the conclusions we draw are robust. Circling back to the reviewer’s comments, **Average- vs. worst-case.** We thank the reviewer for the insightful comment. We agree that this adds another layer of subtlety which we overlooked. We thus re-examined whether switching from average- to worst-case measures would affect our conclusions. It turns out that they remain unchanged: empirical privacy variance persists; the regression study yields similar qualitative results; the correlation between empirical privacy scores and audited values remains low. These results suggest what’s critical is not the distinction between average- and worst-case empirical privacy measures. Instead, the fundamental gap lies between **what DP promises** (preventing reidentification/membership inference) vs. **how we measure privacy** (whether a model generates a specific secret given a prompt). 
This gap is different from the one raised by the reviewer (per-instance guarantees vs. worst-case guarantees). Furthermore, we believe it is not appropriate to view our empirical privacy measures as representing the "average of the per-instance guarantees" or “expected leakage over datapoints” as: 1) we focus on a small subset of secrets that exhibit strong signs of memorization (Appendix E.5); and 2) a secret is only a small substring of an instance. We will add the above discussions and the new results in the camera-ready. **Privacy auditing.** While we agree with the reviewer that the “nice controlled study” would be great to have, there are technical challenges that prevent this. 1) Empirical privacy measures on datapoints used for auditing: Auditing uses specially crafted canary samples; they are long and each member is inserted once in the training data. Applying empirical privacy measures on them will likely yield insignificant results. 2) Auditing on extracted secrets used for empirical privacy measures: The secrets are pieces within training samples but not whole samples; they could occur in more than one sample. These create mismatches with auditing's requirements on the data. Moreover, the size of the secret set is no more than 50, while auditing typically requires a much larger set. That said, since the dataset used by the auditing method represents a worst-case dataset to maximize the lower bound the method can achieve, an alternative fair comparison is to use the worst-case empirical privacy measures, as suggested by the reviewer. Since we still observe a low correlation between empirical privacy and auditing scores, this suggests the choice of dataset is likely not an influencing factor. We believe both hypotheses we made in the “Open questions” in Sec.
5.1, 1) The auditing method is not sufficiently powerful; 2) There is a fundamental gap between membership inference and memorization-based attacks (which we also bring up in the previous point), are plausible, probably even a mix of both. **Q1 - one-to-many relationship.** $\sigma$ is not chosen to be the minimal per-step noise but rather depends on $b$ and $T$. It is computed using privacy accountants to satisfy a target $(\varepsilon,\delta)$-DP guarantee (Sec 2). For Laplace or Gaussian mechanisms with a fixed $\delta$, the privacy budget $\varepsilon$ uniquely determines the privacy level via the single hyperparameter that controls the noise scale. In contrast, the compositional nature of DP-SGD allows infinitely many configurations to achieve the same $\varepsilon$, each yielding its own privacy level. We intend to highlight that empirical privacy variance is a unique characteristic of DP-SGD, and more broadly, of DP algorithms that involve composition. **Q2 - certification.** As illustrated in Fig 4, all models in the red region (with $\varepsilon < \varepsilon^\star$) will fail to pass the privacy tests. Please refer to our last response to Reviewer u6Yf for more details. --- Rebuttal Comment 1.1: Comment: Thanks for the response! I'm happy with the clarification on what was causing the mismatch (if it was a per-instance observation). I believe the paper can be quite strong with this added discussion, which provides more evidence for the claim (a mismatch between what we are measuring and what privacy is promising). I look forward to the additional figures and discussion in the revised paper, and have raised my score. Just a clarification: could the authors provide some details on what the worst-case measures they tested were? I'm guessing it's the datapoint/secret with the most leakage? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for acknowledging our response and raising the score.
We will definitely include these discussions and new results in our revision, and also reflect this in our acknowledgements. To reply to the reviewer’s question – yes, the worst-case measure evaluates the highest leakage over the set of secrets, basically switching the aggregation calculation from the mean over the secrets (used in the original average-case measure) to the maximum over the secrets. As an additional note, the set of secrets we curated already represents those with high leakage.
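The "one-to-many" relationship from Q1 can be made concrete with a toy RDP accountant. The sketch below is a simplification: it ignores subsampling amplification entirely (real DP-SGD accounting also depends on the sampling rate) and composes $T$ Gaussian mechanisms with sensitivity 1 and noise scale $\sigma$, so two configurations with the same $T/\sigma^2$ land on exactly the same $(\varepsilon, \delta)$ while training very differently:

```python
import math

def gaussian_composition_epsilon(sigma, steps, delta):
    """(eps, delta)-DP for T composed Gaussian mechanisms via RDP:
    eps_RDP(a) = T * a / (2 * sigma^2), converted with
    eps = eps_RDP(a) + log(1/delta) / (a - 1), minimized over a > 1.
    Subsampling amplification is NOT modeled (simplifying assumption)."""
    alphas = [1 + k / 10 for k in range(1, 1000)]
    return min(
        steps * a / (2 * sigma**2) + math.log(1 / delta) / (a - 1)
        for a in alphas
    )

delta = 1e-5
eps_a = gaussian_composition_epsilon(sigma=20.0, steps=1000, delta=delta)
eps_b = gaussian_composition_epsilon(sigma=40.0, steps=4000, delta=delta)
# Same T / sigma^2, hence identical eps for two very different training runs.
print(eps_a, eps_b)
```

Even under this simplification, many $(\sigma, T)$ pairs map to one target $\varepsilon$; with subsampling the space of equivalent $(\sigma, b, T)$ configurations grows further, which is exactly the room in which the paper's empirical privacy variance arises.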
Summary: This paper uses the concept of empirical privacy variance to show that models trained with DP-SGD under the same $(\epsilon, \delta)$ guarantee but using different hyperparameter settings can yield varying empirical privacy. Empirical privacy metrics are defined that quantify how much a model memorizes information. The experiments reveal varying scores under these metrics depending on the hyperparameter configuration, with generality across multiple dimensions of the problem (different empirical privacy metrics, secret subsets, model sizes...). Regression analysis is used to measure the relationship between DP-SGD hyperparameters (including joint effects) and the empirical privacy score. Practical hyperparameter selection heuristics are proposed to mitigate this phenomenon. Finally, hypotheses on the cause of empirical privacy variance are discussed. Claims And Evidence: The claims in the paper are supported by well-structured and clear evidence. I believe the experimental evidence in the main paper is sufficiently articulate and convincing. I did not review the supplementary results in the Appendices in detail. Methods And Evaluation Criteria: The chosen evaluation metrics, models and data sets are appropriate for answering the research question explored in the paper. Theoretical Claims: There are no theoretical proofs in the paper. The main claims are supported by empirical evidence. Experimental Designs Or Analyses: I checked the soundness and validity of all experiments presented in the main paper, which I find to be well-structured, relevant to the proposed question, and clearly presented. Supplementary Material: I skimmed through the supplementary materials to get a general idea of the type of additional content; however, I did not review them in detail.
Relation To Broader Scientific Literature: The paper proposes tools to examine the phenomenon of empirical privacy variance in DP-SGD, and analyses the case of private fine-tuning of language models. The paper sits within the landscape of the DP-SGD literature. DP-SGD was first introduced by Abadi et al. (2016). The impact of hyperparameter configuration on model utility has been explored, among others, by Ponomareva et al. (2023). This paper focuses on the effects of hyperparameter choice on empirical privacy instead. The works most closely related to this paper are mentioned in Appendix A: Hayes et al. (2023) show how different hyperparameters impact the success rate of reconstruction attacks; Kaissis et al. (2024) propose a metric to quantify the maximum excess vulnerability among mechanisms that share the same privacy budget. Those works are focused on theoretical analysis of such problems, while this paper is focused on a practical, real-world setting (privately fine-tuned language models) and proposes heuristics and intuitive hypotheses on potential causes of this phenomenon. Essential References Not Discussed: I'm not aware of essential references that aren't cited. However, I think the short section comparing the paper to related work in Appendix A should be discussed in the main paper, with a dedicated and more articulated section. That would make it easier for the reader to locate this paper in the literature landscape. I am aware that the authors chose to present that in the Appendix at this stage due to space constraints, and I encourage them to include a more extended presentation of the related work in future iterations. Other Strengths And Weaknesses: The main strength of the paper is that it highlights an interesting and general no-free-lunch problem arising from hyperparameter selection in DP-SGD. To the best of my knowledge, this is the first time empirical privacy variance is proposed and investigated on a practical level.
The experimental results provide compelling evidence that the problem is general, and likely to impact a variety of models using DP-SGD. For this reason, I do think that the implications of empirical privacy variance are potentially broad and of interest to the DP community. This paper lays the basis for interesting future exploration, while also providing practical heuristics to mitigate the problem. One limitation of the paper is that empirical privacy might need to be measured differently depending on the specific task at hand, and it is unclear to what extent the proposed heuristics apply and generalize. This might be better clarified by including a limitations section in the next iteration of the paper. Other Comments Or Suggestions: I don't have additional comments. Questions For Authors: I encourage the authors to address the minor points I brought up in the review, particularly the inclusion of a Limitations section and a more detailed Related Work section in the paper. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our claims and methods as clear, sound and relevant, and our contributions to be of broad interest to the community. We will take the reviewer’s feedback and incorporate a more detailed related work section into the main body, including the discussions from our response to Reviewer E9Wi. We will also include a limitation section discussing what has not been covered in our work but deserves attention: in particular, the subtle difference between shuffling and Poisson subsampling (which is currently mentioned in a footnote), as well as the privacy cost of hyperparameter tuning. Regarding the reviewer’s comment on the generality of the heuristics, we clarify that while they are developed based on regression analysis in two specific scenarios (fine-tuning GPT-2-S on Enron at $\varepsilon=4$ with the target variable as ACR, and Llama-2-7b on TOFU at $\varepsilon=16$ with the target variable as AIR), our experiments demonstrate that they generalize across various models, datasets, privacy budgets, and empirical privacy measures (VMR). The detailed results are presented in Figs. 7 and 10 in the main paper, as well as Figs. 18, 20, 21, and 22 in Appendix G. In total, we have spent over 10,000 H100 GPU hours to ensure that our findings are general and robust. We hope this convinces the reviewer of the broad applicability of these heuristics. Please let us know if you have further questions or suggestions. Thank you!
Summary: The paper empirically estimates the privacy loss of language models fine-tuned with DP-SGD in many different configurations, including different hyperparameters, model sizes, and dataset characteristics. The paper finds that models calibrated to the same DP guarantee can have very different empirical privacy losses depending on the full configuration, and calls this phenomenon "empirical privacy variance". The paper investigates the effect of hyperparameters further, and finds rules of thumb on how the hyperparameters change empirical privacy loss that are derived from experimental results, and generalise in additional experiments. The paper also investigates possible causes of empirical privacy variance, but does not find a clear cause. ## update after rebuttal I'm keeping my score due to the subsampling discrepancy in the paper. I have read the extensive final response from the authors. I agree with the authors that there are reasons to believe the conclusions regarding the subsampling rate in Section 4 would not change with correct privacy accounting, but if these were to be included in the paper, they would need a massive caveat that basically says "we don't really know if these conclusions are correct". I also agree that training any kind of large model with Poisson-subsampled DP-SGD is very difficult in practice, which is why I suggested recalculating the privacy bounds for the models that have already been trained. However, I disagree with the other arguments from the authors, which boil down to arguing that the paper should be accepted because previous papers with the same issue have also been accepted. This is not a good argument, as consistently applying it would mean that bad research practices cannot be eliminated if they are ever accepted. In summary, I think the paper could be accepted if the results are presented properly, which is why my score is a 2 instead of a 1.
However, this presentation would be very different from the current one, and include many limitations and caveats not in the submission, so I'm keeping my score at a 2 instead of a 3. Claims And Evidence: The paper's claims are supported by extensive experiments. However, the mismatch between shuffling-based subsampling and Poisson subsampling is a major issue with the experimental design; more details in the relevant section. Methods And Evaluation Criteria: The ACR metric seems to potentially underestimate the privacy loss for the secrets in the Enron dataset, since the metric only looks (as I understand it) for reproductions of the exact secret string, not other strings that would reveal the same secret. For example, it may be easier to get a model to produce the string "Richard B Sanders/HOU/ECT" than the full secret "Forwarded by Richard B Sanders/HOU/ECT". In this case, ACR would be based on the prompt that produced the latter string, though the former reveals almost as much. Theoretical Claims: The paper does not make theoretical claims. Experimental Designs Or Analyses: The paper does not discuss the implications of the mismatch between using shuffling-based subsampling in practice and Poisson subsampling in privacy accounting that the paper's experiments suffer from. As noted by Chua et al. (2024b;c), there is a difference between the subsampling amplification that the two subsampling schemes provide, leading to different privacy bounds if accounted for correctly. As a result, all $\epsilon$ values shown in the paper are incorrect. This does not substantially change many results of the paper. For example, the $\epsilon$ values in Figure 2 would be adjusted, but all other points would remain the same, and the conclusions drawn from the figure would not change. However, the results examining the effect of the batch size could be affected, since this effect could depend on the subsampling scheme.
As is, I can't recommend accepting a paper where all $\epsilon$ values are incorrect, and the only mention of this issue is a footnote that brushes this off as something outside the scope of the paper. However, given the otherwise high quality of the paper, I could recommend accepting if this issue were prominently discussed, for example by making it clear that the $\epsilon$ values are only estimates, and that the batch size may not behave the same way with a correctly computed $\epsilon$. Alternatively, you could do the correct privacy accounting manually, since doing that with parallel composition is fairly simple. See the start of Section 3 in Chua et al. (2024c) for the analysis with deterministic subsamples, and Theorem 4.1 in Chua et al. (2024b) for a proof that the same analysis gives a valid privacy bound for any shuffling. Supplementary Material: I read the Appendix, but did not check the additional experiments in Sections F and H in detail. Relation To Broader Scientific Literature: The relevant literature is discussed for all findings of the paper. Essential References Not Discussed: No essential references missing. Other Strengths And Weaknesses: The topic of the paper is important. The results follow a general pattern that has been found in existing work, but the empirical privacy perspective on this pattern is both novel and important. Other Comments Or Suggestions: - In Figure 3, the "increasing dataset size" is easy to interpret as the x-axis label for the top row. - Lines 267-270 (left): the sentence "a model calibrated to a given $\epsilon^*$, deemed to meet privacy requirements" is confusing. I initially interpreted this to mean that $\epsilon^*$ is deemed to meet privacy guarantees, but the argument only makes sense if the model is deemed to meet privacy guarantees independently of the $\epsilon^*$. - In Figure 5, the x-axis label is confusing, since higher utility should be better, but higher test loss is worse. 
- Figure 9 is too small to read easily. Questions For Authors: No questions. Code Of Conduct: Affirmed. Overall Recommendation: 2
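The subsampling mismatch this review centers on is easy to state in code. As a hedged sketch (batch construction only; neither scheme's privacy accounting is implemented here), Poisson subsampling includes each example in each batch independently with probability q, whereas the shuffling used in practice cuts a permuted dataset into fixed-size batches:

```python
import random

def poisson_batches(n, q, steps, rng):
    """Poisson subsampling: every example joins each batch
    independently with probability q, so batch sizes vary."""
    return [[i for i in range(n) if rng.random() < q] for _ in range(steps)]

def shuffled_batches(n, batch_size, rng):
    """Shuffling: one pass over a random permutation, cut into
    fixed-size batches; every example appears exactly once per epoch."""
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i:i + batch_size] for i in range(0, n, batch_size)]

rng = random.Random(0)
print(shuffled_batches(10, 3, rng))
print(poisson_batches(10, 0.3, 4, rng))
```

Privacy accountants for DP-SGD assume the first scheme, while implementations typically run the second; as Chua et al. (2024b;c) show, the two do not enjoy the same subsampling amplification, which is why the review treats the reported $\epsilon$ values as incorrect.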
Rebuttal 1: Rebuttal: We thank the reviewer for raising the subtlety between implementing DP-SGD with shuffled batches, but performing privacy accounting as if Poisson subsampling was used. In the camera-ready version, we are happy to expand the discussion and move it to the main paper under a “limitation” section. Additionally, we would like to clarify a few more points. While as pointed out the exact $\varepsilon$ values are incorrect, we believe that the qualitative observations we make are still relevant. For example, in the plots where we have $\varepsilon$ on the x-axis, e.g. Figures 2, 3 and 10, if the correct privacy accounting of shuffle-based DP-SGD were to be applied for the specific parameter settings we have, **it would only re-scale the x-axis in some (non-linear) manner**, but the empirical variance of the y-axes will remain the same, which is precisely the main take-away from this paper. That is, the x-axis, instead of being supported on $\varepsilon$ in $\{1, 2, 4, 8, 16\}$, would be supported on some other different values, though still monotonically increasing. Thus, while the observations made by Chua et al. (2024b;c) are valuable, we consider them **relevant primarily if some model is released after training on truly sensitive data, with a claimed privacy guarantee that is incorrect.** But in our case, we feel this observation is not that critical, as it just amounts to some rescaling of values which do not affect the qualitative conclusions from our study. But thank you for raising this; we will elaborate more on this in the revision. Additionally, we note that Chua et al. (2024c) was posted on arXiv on Nov 6, 2024, which is within three months of the ICML 2025 submission deadline (Jan 30, 2025). According to [the ICML 2025 reviewer instructions](https://icml.cc/Conferences/2025/ReviewerInstructions), this qualifies as a **“concurrent work”** that “Authors are not expected to discuss”.
In fact, while we are aware of the earlier work Chua et al. (2024b), which highlights the gap between these two sampling approaches, we were not aware of any efficient implementations of Poisson subsampling for DP fine-tuning LLMs: 1) Chua et al. (2024c) was not available online when we initiated our work; 2) They conducted experiments on MLPs only, and extending the implementation to transformer training is non-trivial and warrants further research. As a consequence, we adhere to the conventional DP fine-tuning framework for LLMs that employs shuffle-based DP-SGD. Since this was the main concern raised in the review, we hope the reviewer could reconsider the rating if the above response adequately addresses their concern. We are happy to answer any other questions during the discussion period if it helps to clarify this. > For other minor comments-- **ACR underestimates the privacy loss.** In our studies, the measured empirical privacy scores depend heavily on the chosen set of secrets. The reviewer is correct that using a shorter (sub)string as the secret or not requiring an exact match could result in a lower measured privacy loss. However, the magnitude of these scores is not our primary focus; rather, our arguments center on the **variance** of the empirical privacy scores. **Figures.** Thanks for the suggestions, we will incorporate them in the camera-ready version. **Clarifications on “deemed to meet privacy requirements”.** Here's what we intend to convey: even if a legislative body determines that a model with a privacy budget $\varepsilon^\star$ passes their privacy tests, this does not imply that models with a stricter privacy budget ($\varepsilon \le \varepsilon^\star$) will pass. For instance, as illustrated in Fig 4, all models in the red region will fail to pass the privacy tests. --- Rebuttal Comment 1.1: Comment: Thank you for the response. 
**Subsampling discrepancy** I agree that many of your conclusions would not change if you did privacy accounting correctly, and adding discussion of this to a limitations section is a good way to make this clear to readers. However, the conclusions concerning the batch size could change, so I don't think they should be included in the paper without correct privacy accounting. This would affect a lot of the results in Section 4, so I'm hesitant to recommend accepting the paper without any reviewers seeing such major changes.

**Clarifications on “deemed to meet privacy requirements”** I think your new version of explaining this still needs clarification. I interpreted this as:

- Legislative body has chosen empirical privacy tests that do not look at $\epsilon$ directly.
- Model A that is $\epsilon^*$-DP passes privacy tests.
- Model B with $\epsilon \leq \epsilon^*$ can still fail privacy tests.

If this is correct, I think you need to make it clear that the privacy tests do not look at $\epsilon$ directly. Even from your newer version, one can easily get the impression that the privacy test is just "is $\epsilon$ small enough".

--- Reply to Comment 1.1.1: Comment: Thanks for the suggestion on the second point; we will expand the discussion. In the paper, we interpret “privacy tests” as evaluations of empirical privacy, as this is the main focus of our study. Regarding the subtlety between shuffling and Poisson subsampling and its potential impact on the effect of sampling rate $q$, our intuition is that the conclusion is unlikely to change with a different sampling scheme:

1. Hayes et al. (2023) theoretically show that the success rate of a tight upper bound for reconstruction attacks increases with the sampling rate, reconciling with our findings.
2. Prior work shows that larger batch sizes improve utility. It is more plausible that this improvement comes at the cost of empirical privacy.
3. Chua et al.
(2024c) suggest that utility of models trained under DP-SGD with Shuffle and Poisson are basically the same, and we expect this to hold for empirical privacy.

Nevertheless, we acknowledge that we cannot **entirely** rule out the possibility that switching from shuffling to Poisson subsampling might alter the relationship between empirical privacy and $q$. That said, we wish to emphasize the following:

1. “Reporting DP guarantees under Poisson subsampling while training with shuffled batches” is a **common** (albeit inaccurate) practice in the community. This dates back to Abadi et al. (2016), and has since been adopted in most papers on large-scale DP training. Below is an incomplete list of references confirmed to use shuffling.
   - Yu et al. "Differentially Private Fine-tuning of Language Models." ICLR 2022. [dp-transformers](https://github.com/microsoft/dp-transformers)
   - Li et al. "Large Language Models Can Be Strong Differentially Private Learners." ICLR 2022. [code](https://github.com/lxuechen/private-transformers) with Opacus==0.13.0, which implemented a shuffling-based sampler
   - Yue et al. "Synthetic Text Generation with Differential Privacy: A Simple and Practical Recipe." ACL 2023. Implementation follows dp-transformers.
   - Yu et al. "Privacy-Preserving Instructions for Aligning Large Language Models." ICML 2024. [code](https://github.com/google-research/google-research/tree/master/dp_instructions)
   - Zhang et al. "DPZero: Private Fine-Tuning of Language Models without Backpropagation." ICML 2024. [code](https://github.com/Liang137/DPZero)
   - Panda et al. "Privacy auditing of large language models." ICLR 2025. Implementation based on [code](https://github.com/awslabs/fast-differential-privacy), which uses a shuffling-based sampler
   - McKenna et al. "Scaling Laws for Differentially Private Language Models." arXiv:2501.18914. [paper](https://arxiv.org/pdf/2501.18914) Appendix A, paragraph “Fixed Physical Batch Size”.
2.
We are not aware of efficient implementations of Poisson subsampling for transformers. The need to accommodate variable batch sizes forces memory allocation based on worst-case scenarios, significantly reducing efficiency and complicating load balancing. It further exacerbates the memory strain of DP fine-tuning for LLMs. Opacus lacks built-in support for the advanced memory management and distributed training strategies that transformers demand, making it unsuitable for direct use on LLMs without substantial modifications.
3. The reviewer suggested the privacy accountant for deterministic batch sampling in Chua et al. (2024c). While this provides an upper bound on the privacy loss, it is conservative as it neglects the amplification effect from shuffling. This is why people rarely rely on such accountants [link](https://arxiv.org/abs/2208.04591) in practice. We believe this is more appropriate in scenarios where the goal is to **release** a model with a formal privacy guarantee. However, our objective is to **calibrate** models to the same target guarantee for comparative analysis. Using a loose bound would undermine this calibration and distort the interpretation of our empirical findings.

We must point out that if the reviewer applies this standard uniformly, all the cited works could be rejected on the grounds that “all ε values are incorrect.” This would be an unreasonable stance. Also, the reviewer's critique would discredit major conclusions from prior work; e.g., Sec 3 of Li et al. (2022) studies hyperparameter tuning and makes recommendations to use large batch sizes. Disregarding findings due to nuances in sampling schemes while ignoring the broader insights and methodological value would make no progress possible. We recognize that DP-ML is a broad field and contains nuances that need to be taken seriously and addressed. The distinction between sampling schemes is important, and we appreciate the progress in Chua et al. (2024b;c).
However, the technical challenges involved (e.g., building a framework for LLMs) are not something that can be resolved overnight. We respectfully ask the reviewer to adopt a consistent and constructive standard—one that acknowledges the context, intent, and scientific value of each contribution.
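The sampling-scheme distinction debated above can be made concrete with a small sketch (illustrative only; the dataset size, sampling rate, and step count are made up for this example): Poisson subsampling produces batches whose sizes vary randomly from step to step, while the shuffle-based sampler used by the implementations listed above yields fixed-size batches.

```python
import random

def poisson_batches(n, q, steps, rng):
    """Poisson subsampling: each example joins each batch independently
    with probability q, so batch sizes fluctuate around n * q."""
    return [[i for i in range(n) if rng.random() < q] for _ in range(steps)]

def shuffled_batches(n, batch_size, rng):
    """Shuffle-based batching: one random permutation of the data,
    cut into fixed-size batches (one epoch)."""
    order = list(range(n))
    rng.shuffle(order)
    return [order[i:i + batch_size] for i in range(0, n, batch_size)]

rng = random.Random(0)
n, q = 1000, 0.05
pb = poisson_batches(n, q, steps=5, rng=rng)
sb = shuffled_batches(n, batch_size=int(n * q), rng=rng)

print([len(b) for b in pb])   # variable sizes, roughly n * q = 50
print({len(b) for b in sb})   # every batch has exactly 50 examples
```

The variable batch sizes under Poisson subsampling are exactly what forces worst-case memory allocation in transformer training, as the rebuttal argues.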
Summary: The paper investigates privacy implications when fine-tuning language models using DP-SGD. Importantly, when using DP-SGD, the same (ε, δ)-DP guarantee can be achieved through multiple hyperparameter configurations (batch size, number of iterations, learning rate, etc.). The authors' key finding is that despite having identical theoretical privacy guarantees, models trained with different hyperparameter configurations exhibit variations in empirical privacy (which they measure through memorization metrics). This "empirical privacy variance" means that the theoretical (ε, δ) parameters alone don't fully characterize the practical privacy leakage of the model, raising questions about the practical interpretation of DP guarantees in language model fine-tuning. The authors suggest new heuristics for tuning the hyperparameters of DP-SGD so that we get a better trade-off between utility and empirical privacy.

Claims And Evidence:
- The main claim of the paper is that there is a variance in "empirical privacy" when the "theoretical privacy" is fixed. This claim is supported by their extensive experiments.
- There are other claims about the role of dataset size and model size and their role in empirical privacy. These claims motivate their hyperparameter selection. I think they have moderately supported these claims, but for suggesting general heuristics for hyperparameter tuning I would expect more empirical validation.
- One of the claims of the paper which I did not find support for is that the choice of hyperparameters is the main factor explaining the variance. However, I think there could be other sources contributing to this variance. For example, the attacks might be more successful for smaller models. Or there could be variance stemming from the choice of dataset.

Methods And Evaluation Criteria: The method for training private models (DP-SGD) and also their empirical privacy evaluation methods (ACR and VMR) make sense.
Theoretical Claims: N/A

Experimental Designs Or Analyses: Their experimental design makes sense to me.

Supplementary Material: No

Relation To Broader Scientific Literature: As stated by the authors, the finding that the same privacy parameters could lead to different implications for empirical attacks is already studied in the literature. I think the authors need to clarify their contribution in comparison with these works a bit better.

Essential References Not Discussed: -

Other Strengths And Weaknesses:
- The first weakness with the paper is that there is an implicit assumption that the source of variance is the selection of hyperparameters. I think the methods used for privacy evaluation are actually more of a source of variance. For example, ACR itself has a very brittle optimization step that could fail in finding the optimal solution (it does most of the time).
- I argue that ACR and VMR should not be set as a goal. They are only metrics that could indicate privacy violations. Having a low ACR is not at all an indication of privacy. So I argue that performing hyperparameter tuning to lower these rates does not necessarily lead to more private models.
- The main contribution of the paper seems to be about showing the existing variance. As the authors argue, this is already discussed in the literature.
- The writing of the paper is confusing. I do not understand many sentences in the paper.
- As a strength, I find the empirical studies in the paper rigorous and impressive.

Other Comments Or Suggestions: See above

Questions For Authors:
- I did not understand this sentence: "A direct consequence is that, in DP-SGD, ε cannot be used for certification: a model calibrated to a given ε*, deemed to meet privacy requirements, cannot ensure compliance for models with stricter DP guarantees (ε ≤ ε∗)." Could you please elaborate?
- Is there a theory that suggests ACR and VMR should be small for differentially private mechanisms?
You seem to be assuming this but I have not seen a theoretical analysis on this.
- The two hypotheses in Section 5.1 are not clear. Can you please elaborate? The first hypothesis is somewhat clear: you are saying that there is a gap in our understanding of the true epsilon values. But I have a hard time understanding the second hypothesis.
- I don't understand this sentence, please elaborate: Why is this relevant? Consider classic DP mechanisms such as the Laplace and Gaussian mechanisms (Dwork et al., 2014). Their noise parameter (scale parameter b for Laplace and σ for Gaussian) inversely correlates with ε and uniquely determines privacy risk: increasing it lowers the signal-to-noise ratio, making it harder for adversaries to extract meaningful information. This establishes a one-to-one, monotonic ε-to-risk relationship. In contrast, the composition nature of DP-SGD results in a one-to-many ε-to-risk relationship, making ε insufficient to fully capture privacy risk.

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thanks for the review.

> Summary of our key points

- We measure empirical privacy in controlled settings
- We conduct extensive experiments (> 10,000 H100 GPU hours) for robust findings
- Our contributions go well beyond “showing the existing variance”
- We recommend empirical privacy as an additional dimension of privacy alongside DP, rather than targeting it in isolation

> Detailed responses

**Sources of variance.** The empirical privacy variance we observe is **conditioned on** fixing the dataset, model, training algorithm and empirical privacy measure (Sec 3.3 para 1). We also show variance from *training randomness* is small (Appendix F.3). The reviewer raised variance in *measuring empirical privacy scores*. We believe this is not the case for our experiments. For ACR, we run the optimization with 3 seeds and report the highest score for more accurate estimates; VMR and AIR average 10 stochastic decodings to reduce variance (Appendix E.6).

**Comparison with related works.** Appendix A discusses related work. Hayes et al. (2023) and Kaissis et al. (2024) show the attack success rate of a reconstruction attack (which relies on strong assumptions) can vary for mechanisms calibrated to the same DP guarantee. We focus on realistic privacy risks that emerge in language model interactions, leading to the concept of empirical privacy variance. Beyond this, we have more substantial contributions: 1) extensive experiments across various models, datasets, secrets, and empirical privacy measures; 2) in-depth qualitative analysis of the impact of hyperparameters and investigation of the cause; and 3) broader implications: for practitioners, we provide heuristics that enhance empirical privacy; for researchers, we expose the hidden cost of hyperparameter tuning; for policy makers, we discuss how this phenomenon could complicate standardization.
As Reviewer MX8d noted, “...this is the first time empirical privacy variance is proposed and investigated on a practical level. The experiment results provide compelling evidence that the problem is general, and likely to impact a variety of models using DP-SGD… the implications of empirical privacy variance are potentially broad and of interest to the DP community.” Given the above, we kindly ask the reviewer to reconsider the rating based on our contributions.

**More empirical validation.** Figs. 7 and 10 and Appendix G (Figs. 18, 20-22) confirm our heuristics’ generality across models, datasets, $\varepsilon$’s and empirical privacy measures. Could the reviewer kindly suggest additional evidence they would like to see?

**Issues with ACR/VMR.** While ACR has limitations, we believe a lower ACR/VMR score indicates lower memorization and thus better privacy **in general**. Importantly, we don’t advocate for targeting these **in isolation**. Our heuristics operate on configurations calibrated to a specified DP guarantee (Sec 4.3), adding a dimension not captured by DP.

**Q1 - certification.** We mean: even if a legislative body determines that a model with $\varepsilon^\star$ passes their privacy tests, this does not imply that models with a stricter privacy budget ($\varepsilon \le \varepsilon^\star$) will pass. As illustrated in Fig 4, all models in the red region will fail.

**Q2 - theory for ACR/VMR.** The statement “ACR and VMR are small for DP mechanisms” is incomplete: 1) “small” is not quantitatively defined; 2) the privacy parameters of the DP mechanism are not specified. We do not have a theory for it—as we point out in the caption of Fig. 4, bridging DP and empirical privacy measures is generally hard. The reviewer might be suggesting that without theoretical guarantees, a model with DP can have large ACR/VMR, so our findings are not surprising.
But we want to emphasize: what we believe makes our finding interesting is the **simultaneous existence** of both high and low ACR/VMR under the same theoretical privacy budget (see Fig. 2). This naturally motivates our study in Sec. 4 to target the low end by proper hyperparameter selection.

**Q3 - hypothesis in Sec 5.1.** The hypothesis here is two-fold. First, while all mechanisms are calibrated to the same $(\varepsilon, \delta)$-DP, the final LLMs (where we measure empirical privacy) might have different “real” $\varepsilon$’s, all upper bounded by $\varepsilon$. Second, these “real” $\varepsilon$’s reflect empirical privacy: a higher value corresponds to poorer empirical privacy.

**Q4 - why relevant.** For the Laplace or Gaussian mechanism, $\varepsilon$ uniquely determines the privacy level via the single hyperparameter that controls the noise scale. In contrast, the compositional nature of DP-SGD allows infinite configurations to achieve the same $\varepsilon$, each with unique privacy levels. Fig. 4 highlights that empirical privacy variance is a unique characteristic of DP-SGD, and more broadly, of DP algorithms that involve composition.

Please let us know if you have further questions. Thank you!
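The one-to-one ε-to-noise relationship invoked in Q4 can be illustrated with the classic Gaussian-mechanism calibration σ ≥ Δ·sqrt(2 ln(1.25/δ))/ε from Dwork & Roth (valid for ε in (0, 1)); the concrete ε and δ values below are purely illustrative:

```python
import math

def gaussian_sigma(eps, delta, sensitivity=1.0):
    """Noise scale for the classic Gaussian mechanism:
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / eps.
    A single knob maps eps monotonically to the noise level."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

# Smaller eps (stronger guarantee) -> strictly larger noise scale.
for eps in (0.1, 0.5, 0.9):
    print(eps, gaussian_sigma(eps, delta=1e-5))
```

In contrast, no such single-knob function exists for DP-SGD: many (batch size, steps, noise multiplier) triples compose to the same ε, which is the one-to-many relationship the rebuttal describes.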
Rethinking Point Cloud Data Augmentation: Topologically Consistent Deformation
Accept (poster)
Summary: This paper proposes a novel data augmentation method, SinPoint, for 3D point clouds, leveraging a topologically consistent deformation technique. It utilizes a sine-based mapping function for deformation under a Markov process. The approach demonstrates the effectiveness of the data augmentation through theoretical analysis and experiments.

Claims And Evidence: see strengths and weaknesses

Methods And Evaluation Criteria: see strengths and weaknesses

Theoretical Claims: see strengths and weaknesses

Experimental Designs Or Analyses: see strengths and weaknesses

Supplementary Material: see strengths and weaknesses

Relation To Broader Scientific Literature: see strengths and weaknesses

Essential References Not Discussed: No

Other Strengths And Weaknesses:
Strengths:
1. This paper introduces homeomorphism, i.e., topologically consistent structure, for data augmentation in point cloud analysis, which is a novel perspective.
2. Combining theoretical analysis and experiments, the paper demonstrates the importance of data expansion through data augmentation.
3. The paper proposes SinPoint, which integrates topologically consistent sine-based deformation functions and a Markov chain. The results on 3D classification and part segmentation illustrate the method's effectiveness.

Weaknesses:
1. The concept of homeomorphism needs further discussion. From a mathematical perspective, the 3D surfaces in the mentioned datasets (e.g., ModelNet40) are mostly homeomorphic across different classes. For example, the plane and the guitar are homeomorphic, while the plane and the cup are not. The proposed method mostly employs sine-based deformation functions and designs a Markov-chain data augmentation process, which is more likely a stable and continuous deformation process than a topology-consistent transformation.

Other Comments Or Suggestions: none

Questions For Authors: none

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer Agtr: **Thanks for your time and insightful reviews. We appreciate your recognition of our work. We respond in detail as follows:**

**Q1:** Concepts of homeomorphism need further discussion. From mathematical aspects, the 3D surfaces in the mentioned datasets (e.g., ModelNet40) are mostly homeomorphic across different classes. For example, the plane and guitar are homeomorphic, while the plane and cup are not.

**A1:** Thank you for your recognition of our work. Indeed, as you say, when we look at different samples, if their shapes and structures can be transformed into each other through a series of continuous deformations without involving holes or tears, then they are homeomorphic, such as a plane and a guitar. A cup with a handle has a hole, and the plane has no hole, so they are not homeomorphic. **This concerns the relationship between different samples.**

**However, our SinPoint considers a single sample.** Our aim is **to make a single sample produce continuous, smooth deformations through a homeomorphic mapping,** which **lets the deformed sample maintain semantic consistency while increasing the geometric diversity of a single sample, so that the model can learn more discriminative features.** Data augmentation based on homeomorphism can enhance the diversity of a single sample, enabling the model to learn more discriminative features and **improve model performance (see Tables 1, 2, 6, 9, and 10). The anti-interference ability of the model is further improved (see Table 3).**

**Since we are not specifically designing machine learning models to learn features of homeomorphisms between different samples, we do not consider homeomorphisms between different samples.**

**Q2:** The proposed method mostly employs the sine-based deform functions and designs the Markov chain data augmentation process, which is more likely to be a stable and continuous deformation process than a topology-consistent transformation.
**A2:** Thanks for your professional questions. Our data augmentation can be regarded as two processes, as shown in Figure 2. **First, a topology-consistent deformation sample is generated through sine transformation based on homeomorphism, and the original sample and the augmented sample are input into the Markov augmentation process based on the basic transformation to further increase the diversity of samples.** **From Table 4, we can find that the independent Markov gain is 1.0 and the independent SinPoint gain is 1.2. In this case, only augmented samples are used, and when further mixed samples are used to expand the variance of the data, the performance is improved again. In this case, the mixed Markov gain is 1.5, and the mixed SinPoint gain is 1.7. This also shows that SinPoint's performance improvement is always better than the Markov process. When the three work together, the performance gain is 2.0. This also shows that SinPoint and Markov can promote each other. And SinPoint played a crucial role.** **We thank you again for your careful review, which helped us a lot. We have addressed all your concerns in detail. If you have other questions, we can discuss them again, and we look forward to your feedback.** --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. My major concerns have been addressed well. Thus, I keep my rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Agtr, Thanks again for recognizing our work. We appreciate the discussion and your positive feedback. Many thanks for your valuable time and effort. Best and sincere wishes, The authors
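To make the kind of homeomorphic, sine-based displacement discussed in this thread concrete, here is a rough sketch (our own illustrative construction, with made-up amplitude and frequency values; it is not the paper's exact SinPoint formulation):

```python
import numpy as np

def sine_deform(points, amp=0.2, freq=2.0, phase=0.0, axis=0):
    """Shift one coordinate by a sine of another coordinate.
    The map (x, y, z) -> (x + amp*sin(freq*y + phase), y, z) is
    invertible for any amplitude (subtract the same sine to undo it),
    so the point cloud deforms smoothly without tearing."""
    deformed = points.copy()
    driver = points[:, (axis + 1) % 3]   # coordinate driving the wave
    deformed[:, axis] += amp * np.sin(freq * driver + phase)
    return deformed

pts = np.random.default_rng(0).uniform(-1, 1, size=(1024, 3))
out = sine_deform(pts)
print(out.shape)  # same point count; only the chosen axis is displaced
```

Summing several such sine terms with different frequencies would give an MSF-style variant of the same idea; again, the exact parameterization in the paper may differ.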
Summary: The paper proposes to use sine functions to augment point clouds for point cloud classification and segmentation tasks, with a Markov chain augmentation process to further improve performance. The method achieves SOTA on different tasks with various backbones.

Claims And Evidence: Although the method achieves SOTA performance, some of the claims are not well justified:
1. The paper's first main contribution is analyzing data augmentation from a statistical perspective, which is a strong claim. While I am not an expert in the theory of data augmentation, it's easy to find prior works related to this topic, for example [1]. Despite this claim, the paper’s analysis of data variance appears trivial, and it lacks an in-depth discussion on how increasing variance—specifically through the proposed augmentation—enhances model generalization to unseen data.
2. The second main contribution is proving that both a single sine function and a sum of multiple sine functions are homeomorphisms. However, this proof seems too trivial to be considered a significant contribution, i.e., given the definition, it's obvious that the functions are homeomorphisms.
3. While I acknowledge the novelty of the sine-based point cloud augmentation method, the contribution of the Markov chain augmentation process is not well justified. The process consists of four base transformations, none of which are novel. A better way to demonstrate its effectiveness would be to compare it against using each transformation individually (i.e., setting transition probabilities to 0) to isolate the impact of the Markov chain process itself.

[1] Dao et al., 2019. A Kernel Theory of Modern Data Augmentation.

Methods And Evaluation Criteria: The proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense.

Theoretical Claims: The proofs are correct.

Experimental Designs Or Analyses:
1. The authors use SinPoint-SSF for classification and SinPoint-MSF for segmentation, which intuitively makes sense. However, it would be beneficial to show results where MSF is applied to classification and SSF to segmentation, as one could argue that MSF provides more diversity (higher variance), which might improve generalizability, as claimed in Section 3.
2. The Markov chain augmentation process and mix sampling were attached to SinPoint, leading to improved performance over baselines. In Table 4, mixing training samples appears to contribute significantly to the results. However, the authors did not specify the percentage of original samples used or explain how the mixup rate was chosen, which is important for reproducibility and understanding its impact.
3. The effectiveness of the Markov chain augmentation process still needs further justification, as I previously mentioned under Claims and Evidence.

Supplementary Material: I read all the parts in the appendix.

Relation To Broader Scientific Literature: This method could be used in broader tasks in point-cloud understanding. It's also related to the theory of data augmentation.

Essential References Not Discussed: To the best of my knowledge all the essential related works are cited.

Other Strengths And Weaknesses: -

Other Comments Or Suggestions:
1. Definition 1 is not used anywhere in the paper, as it measures the variance of the prediction but not the data input.
2. Algorithm 1 doesn't include the Markov chain augmentation process and sample mixing; this is confusing as it's named SinPoint.

Questions For Authors: I found the sine function-based augmentation interesting, and the results demonstrate its effectiveness. I would be happy to raise my score if the authors could address my concerns.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 8z5w: **Thanks for your time and insightful reviews. We respond in detail as follows:**

**Q1:** ...for example [1]. Despite this claim, the paper’s analysis ... appears trivial, and it lacks an in-depth discussion on how increasing variance—specifically through the proposed augmentation—enhances model generalization to unseen data.

**A1:** Thank you for your valuable opinions, especially reference [1], which is very valuable to us. This is a work that studies the basic theory of data augmentation, and it provides deeper theoretical support for our work. In particular, in the process of constructing the Markov augmentation, our method is further shown to be correct and effective theoretically. Compared to [1], which mainly studies the theory of basic transformations, we propose a novel SinPoint to construct augmented samples. Meanwhile, this construction can steadily expand the variance of the data. As shown in Figure 2 (in https://github.com/Anonymous-code-share/ICML2025), the distribution of the augmented samples generated by our method is closer to the real distribution, which enables better coverage of the data distribution of the test set. This increased variance helps to reduce the bias of the model, thereby improving its ability to generalize to unseen data. Other methods can lead to large biases that are detrimental to model learning. Due to word constraints, we will provide an in-depth analysis in the latest version.

**Q2:** The contribution … sine functions are homeomorphisms. However, this proof seems too trivial to be …, … it's obvious ….

**A2:** Thank you for your valuable advice. For a professional scholar like you, understanding is relatively simple. However, in the field of point cloud augmentation, most of the methods in Table 1 lack a theoretical basis.
Our method is the first based on homeomorphism theory for point cloud augmentation, and we believe it is indeed a significant contribution to the field of point cloud data augmentation. We hope that more detailed definitions and proofs will help different readers quickly understand the relevant theory behind our SinPoint.

**Q3:** A better way to demonstrate its effectiveness would be to compare it against using each transformation individually to isolate the impact of the Markov chain process itself.

**A3:** Thank you for your constructive comments. We add ablation experiments to analyze the influence of different transformations on the model. It can be seen from the experiments that the Markov augmentation process has a better, stable gain. The results are as follows:

**Table R1: Ablation analysis for a single transformation with Markov.**

| DGCNN | OA |
|:---|:---:|
| Markov | 93.7 |
| scaling | 92.6 |
| shifting | 92.1 |
| rotation | 92.9 |
| jittering | 92.7 |

**Q4:** However, it would be beneficial to show results where MSF is applied to classification and SSF to segmentation. (Justification for the two methods (SSF and MSF))

**A4:** Thank you and Reviewer SdQc for your constructive comments. This experimental analysis was also considered in an earlier manuscript but was removed due to formatting issues in the submission version. Following your suggestion, we will re-add this experiment and analysis in the latest version; some results are compared as follows:

**Table R2: SSF and MSF on classification and segmentation.**

| DGCNN | Classification (OA) | Segmentation (mIoU) |
|:---|:---:|:---:|
| SinPoint-SSF | **90.2** | 85.3 |
| SinPoint-MSF | 89.8 | **85.5** |

**Q5:** However, the authors did not specify the percentage of original samples used or explain how the mixup rate was chosen, which is important for reproducibility and understanding its impact.

**A5:** Thanks for your careful review.
We show in Figure 5 that the original sample and the augmented sample are input together for training, without setting a mixing rate. As can be seen from Table 4, SinPoint alone can improve performance by 1.2, and the improvement after mixing samples is only 0.5. Mixed samples do not play a dominant role in the performance improvement; SinPoint and the Markov augmentation process are the decisive factors.

**Q6:** Definition 1 is not used anywhere in the paper, as it measures the variance of the prediction but not the data input.

**A6:** We put Definition 1 in the text to facilitate logical coherence and reader understanding, because Definition 1 and Theorem 1 are related. Definition 1 is also used in the proof of Theorem 1 in the supplementary material.

**Q7:** Algorithm 1 is confusing as it's named SinPoint.

**A7:** Thanks for reminding us. We will modify the title of Algorithm 1 to "SinPoint without Markov". The pseudo-code of the Markov process is added in the latest version.

**We thank you again for your careful review, which helped us a lot. We have addressed all your concerns in detail. If you have other questions, we can discuss them again, and we look forward to your feedback.**

--- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. I still believe the theoretical analysis in this paper is relatively weak and would benefit from a more in-depth discussion. However, considering the novelty of the sine-based point cloud augmentation method, the SOTA performance, and the additional ablation studies demonstrating the method’s effectiveness, I would raise my score to weak accept.

--- Reply to Comment 1.1.1: Comment: Dear Reviewer 8z5w, Thanks again for recognizing our work. We appreciate the discussion and your positive feedback. Many thanks for your valuable time and effort. We promise to add more in-depth discussion and theoretical analysis in the latest version. Best and sincere wishes, The authors
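For intuition, a minimal sketch of a Markov-style augmentation walk over the four base transformations named in the ablation above (the transition matrix, parameter ranges, and step count here are hypothetical placeholders, not the paper's settings):

```python
import numpy as np

# Hypothetical base transforms; parameter ranges are illustrative only.
def scale(p, rng):  return p * rng.uniform(0.8, 1.2)
def shift(p, rng):  return p + rng.uniform(-0.1, 0.1, size=3)
def rotate(p, rng):
    a = rng.uniform(0, 2 * np.pi)            # rotation about the z-axis
    R = np.array([[np.cos(a), -np.sin(a), 0],
                  [np.sin(a),  np.cos(a), 0],
                  [0.0, 0.0, 1.0]])
    return p @ R.T
def jitter(p, rng): return p + rng.normal(0, 0.01, size=p.shape)

TRANSFORMS = [scale, shift, rotate, jitter]
P = np.full((4, 4), 0.25)                    # placeholder transition matrix

def markov_augment(points, steps, rng):
    """Walk a Markov chain over the base transforms, applying one per step."""
    state = rng.integers(4)
    for _ in range(steps):
        points = TRANSFORMS[state](points, rng)
        state = rng.choice(4, p=P[state])
    return points

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(256, 3))
aug = markov_augment(pts, steps=3, rng=rng)
print(aug.shape)
```

Setting a row of `P` to a one-hot vector degenerates the chain to a single repeated transformation, which is exactly the ablation the reviewer requested.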
Summary: This paper presents "SinPoint," a new data augmentation method for point clouds. The main idea is to deform point clouds in a way that's supposed to preserve their overall structure, using sine functions to create the deformations. The authors argue, using the concept of homeomorphism, that these deformations don't change the underlying topology of the point cloud. They offer two versions of their method: SinPoint-SSF, which uses a single sine wave and is suggested for classification, and SinPoint-MSF, which combines multiple sine waves and is proposed for segmentation. They also describe a way to combine SinPoint with other common transformations like rotation and scaling, using a Markov Chain process to create more varied augmented data.

The paper's main results show that SinPoint generally performs better than other point cloud augmentation methods on several datasets (ModelNet40, ReducedMN40, ScanObjectNN, and ShapeNetPart) and with a few different network architectures (PointNet, PointNet++, DGCNN). They report improvements in accuracy for classification and IoU for segmentation. The paper also claims SinPoint makes models more robust to things like noise and rotation. The contributions seem to be the sine-based deformation, the homeomorphism argument, the Markov Chain process, and the experimental results. The authors support these claims with experiments, ablation studies, and some visualizations.

## Update after rebuttal

I have carefully read the authors' detailed rebuttals (both the initial response and the reply to my comment) and the comments from other reviewers. I appreciate the authors' engagement and clarifications. The authors pointed out that results on complex scene datasets (S3DIS and SemanticKITTI) were included in Table 10 of the supplementary material, addressing one of my primary concerns about the lack of such evaluation.
They also provided further rationale for the choice of the sine function and addressed concerns about computational cost by referencing Table 11. However, my core assessment remains largely unchanged, and I will keep my current rating. My main concern is not the absence of these results, but their limited prominence and analysis within the main paper. Evaluating performance on complex, realistic data is crucial for convincingly demonstrating the significance and generalizability of a data augmentation method for point clouds. The main paper's experimental narrative heavily relies on the simpler datasets, which limits the strength of the evidence presented for broad applicability. Therefore, while acknowledging the authors' responses and the additional data provided, the way the experimental validation is presented in the main body of the work still leaves questions about the method's practical impact in more challenging, real-world scenarios. Claims And Evidence: The paper makes several key claims regarding SinPoint's effectiveness. The first is that SinPoint generates topologically consistent deformations. This is supported by a theoretical proof (Theorem 2 and Appendix C) demonstrating that the sine-based transformation is, under specific conditions, a homeomorphism. While the mathematical reasoning is sound, the practical implications might be a bit overstated. Real-world point clouds are discrete and can be quite complex. It's possible that large deformations, even if theoretically homeomorphic, could still distort the shape in a way that effectively changes the topology from a machine learning perspective. Another central claim is that SinPoint outperforms existing point cloud data augmentation methods. The paper provides quantitative results in Tables 1 and 2, showing improvements in accuracy and mIoU across several datasets and network architectures. While the results generally favor SinPoint, the improvements are sometimes relatively modest. 
A much bigger concern is that the experiments are limited to relatively simple point cloud datasets. Without results on complex object data (e.g., PartObjaverse-Tiny) or scene-level data (like S3DIS, ScanNet, or SemanticKITTI), it's hard to know how well the proposed method truly generalizes. The paper also claims that SinPoint makes models more robust to corruptions, supported by the results in Table 3. While this is a positive finding, it shares the same limitation as the performance claim: robustness on simpler datasets doesn't guarantee robustness in complex, real-world scenarios. The contribution of the Markov Chain Augmentation Process is demonstrated in the ablation studies (Table 4), but the benefit seems incremental. One might question whether the added complexity is worth the relatively small gains. The justification for selecting the sine function lacks compelling evidence. Overall, while the theoretical claim of topological consistency is mathematically strong, the core claim of superior performance is weakened by experimental limitations. The robustness claim has similar limitations. The benefit of the Markov Chain is supported, but its impact seems modest. Methods And Evaluation Criteria: The proposed method, SinPoint, has some conceptually appealing aspects, but there are also some questions about whether it fully makes sense for the broader problem of point cloud data augmentation, particularly in real-world applications. The use of a sine function to generate deformations is, on the surface, a reasonable idea. It allows for smooth, controlled deformations, and the connection to homeomorphism provides a nice theoretical grounding. However, the paper doesn't convincingly justify why a sine function is the best choice, compared to other possible deformation methods. There's also a lack of comparison to other techniques from computer graphics, like thin-plate splines or free-form deformation.
The two variants, SinPoint-SSF and SinPoint-MSF, are presented as being suitable for classification and segmentation, respectively. This makes intuitive sense, as segmentation often requires more localized deformations. The Markov Chain Augmentation Process, while adding flexibility, also adds complexity, and it's not entirely clear if the benefits outweigh the costs. The choice of evaluation criteria, specifically the datasets, is a significant area of concern. While ModelNet40, ReducedMN40, ScanObjectNN, and ShapeNetPart are standard benchmarks, they represent relatively simple point clouds. They don't capture the complexity of many real-world applications (occlusions, clutter, noise, varying point density). In summary, while the core idea of SinPoint has merit, the lack of comparison to alternative deformation methods and, crucially, the limited evaluation on complex, scene-level datasets, raise questions about whether the proposed methods and evaluation criteria are fully adequate. Theoretical Claims: The paper presents two main theoretical claims, both related to homeomorphism: - Theorem 1 (Data augmentation increases the variance of the dataset): This theorem seems correct and its proof in Appendix C is sound. The derivation is straightforward. - Theorem 2 (Homeomorphism Based on Sine Function): Its proof is in Appendix C, and the reasoning also seems correct. Experimental Designs Or Analyses: **Strengths:** - The inclusion of robustness experiments (Table 3) is a good point. - The experiments use a variety of datasets and backbone networks. - The ablation studies (Table 4) seem carefully designed and help to isolate the contribution of different components of the proposed method (SinPoint, Markov Chain, mixed training). **Weaknesses:** - The experiments are primarily conducted on relatively simple datasets. ModelNet40 consists of clean CAD models, ScanObjectNN features isolated objects, and ShapeNetPart deals with individual object parts.
This limits the conclusions that can be drawn about the proposed method's effectiveness in real-world, complex scenarios. - More qualitative analysis, especially showing examples of failure cases or situations where the method struggles, would provide a more balanced view. Supplementary Material: The appendix provides useful extra information. It does include the full proofs for the theorems, which are important for checking the math, and it gives details on the experimental setup, which helps with reproducibility. There are also some additional experiments, like showing means and standard deviations, doing more ablation studies, and testing on different network architectures and datasets, as well as some visualizations. Relation To Broader Scientific Literature: The paper cites relevant work in point cloud processing and data augmentation. It also acknowledges the general success of data augmentation in computer vision. However, the paper could do a better job of connecting to the broader scientific literature, particularly in computer graphics. While the paper uses the concept of homeomorphism, citing a relevant topology textbook, it misses a key connection to the extensive research on non-rigid deformation in computer graphics. Essential References Not Discussed: I did not find essential references that were missing from the paper's discussion. Other Strengths And Weaknesses: **Other Strengths:** - The paper is generally well-written and easy to follow. The core concepts are explained clearly. The use of formal definitions and theorems adds to the technical clarity. **Other Weaknesses:** - Justification for two methods (SSF and MSF): It would be useful if the authors could provide more evidence on this. Other Comments Or Suggestions: The authors could consider making the code publicly available upon publication. Questions For Authors: Please refer to the previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer SdQc: **Thanks for your time and insightful reviews. We responded in detail as follows:** **Q1:** It's possible that large deformations, even if theoretically homeomorphic, could still distort the shape in a way that effectively changes the topology. **A1:** Thank you for agreeing with our theory. Homeomorphism is strictly maintained in theory and has been proved by theorem 2. **If the topology is inconsistent, holes or tears will be created after deformation, but our continuous deformation based on sine transformations does not create holes or tears and therefore does not change the topology.** However, in machine learning, **excessive deformation can lead to ambiguity,** which we need to avoid. As we have verified in Table 3, too large deformation parameters will be detrimental to model learning. Therefore, we choose the appropriate parameters to generate the enhanced sample. **Q2:** A much bigger concern is … limited to … simple … datasets. Without … on complex … datasets. **A2:** Your question is very professional. In terms of experiments, **we also tested on complex datasets, and Table 10 in the supplementary material shows our experimental results on S3DIS and SemanticKITTI.** **Q3:** The paper doesn't justify why a sine function is the best choice, compared to other possible deformation methods. a lack of comparison to …, like thin-plate splines or free-form deformation. **A3:** Your questions are very professional and very helpful to us. **We clarify that we are not saying that the sine function is the best choice.** We discussed why the sine function was chosen, **and we have a brief introduction in Section 4.3.** The reasons are as follows: **The sine function is a periodic function, and by controlling the amplitude, the maximum deformation can be easily adjusted. 
Other functions, such as monotonic functions, parabolic functions, or other types of functions, cause excessive deformation as the diameter of the object increases, which leads to augmented samples deviating from the real ones.** In contrast, thin-plate splines are computationally more complex and may not be suitable for current applications. Free-form deformation (FFD) can provide flexible deformation control, but the process is complicated. Thus, it is simpler and more efficient to use the sine function directly. This can be our future exploration direction because FFD can generate more controlled deformation samples. **We would like to thank the reviewers for their valuable comments, which also guide us to a new exploration direction.** **Q4:** The Markov Chain Augmentation Process, while adding flexibility, also adds complexity, and it's not entirely clear if the benefits outweigh the costs. **A4:** Thank you for affirming our method. **We have given a quantitative analysis of your concerns in Table 11 (in supplementary material). Compared to existing methods, such as PointAugment, our method reduces time costs by up to 10 times and improves performance by 11%. We not only save the calculation cost but also improve the accuracy of the model.** **Q5:** The choice of evaluation criteria, specifically the datasets. **A5:** Thanks for your suggestion. To make a fair comparison with other methods, we had to conduct primary experiments on these datasets. **However, we have also carried out additional experiments, including S3DIS and SemanticKITTI, in Table 10 of supplementary materials.** Moreover, in Table 3, we further investigate the robustness of SinPoint by simulating complex noise in the real world. We plan to explore SinPoint's performance on more realistic and complex datasets in future work. **Q6:** More qualitative analysis, ... showing examples of failure cases **A6:** Thank you for pointing this out.
**We added it in the latest version, and you can see more qualitative analysis examples of failure cases in Figure 1** (in https://github.com/Anonymous-code-share/ICML2025). Meanwhile, we also conducted ablation experiments as shown in Table 7. When excessive deformation occurs, the performance will decline. **Q7:** It misses a key connection to the extensive research on non-rigid deformation in computer graphics. **A7:** Your question is good, **especially your proposed FFD method for computer graphics, which encourages us to develop controlled deformation augmentation in the future. We plan to discuss the feasibility of these methods (like TPS or FFD) as part of our related work.** **Q8:** Justification for two methods (SSF and MSF). **A8:** Due to the word limit, we jointly replied to the same questions in **Q4 of Reviewer 8z5w.** **Q9:** code publicly available upon publication. **A9:** Yes, we will release the code upon publication. **We thank you again for your careful review, which helped us a lot. We have addressed all your concerns in detail. If you have other questions, we can discuss them again, and we look forward to your feedback.** --- Rebuttal Comment 1.1: Comment: Thank you to the authors for providing a detailed rebuttal and addressing the questions raised. I have read the rebuttal carefully, along with the comments from the other reviewers. While I appreciate the clarifications and the additional results pointed out in the supplementary material, my core concerns about the extent and prominence of the evaluation on complex, real-world scenarios remain. Therefore, I will maintain my current rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer SdQc: **We appreciate your response. We are glad to have addressed almost all of your concerns.
We have once again summarized our paper, and we hope that the following reply will truly help you.** First, **we would like to clarify that our SinPoint is applied to object point cloud tasks, not scene point clouds (in line 95).** However, the applicability of our SinPoint to complex scene point cloud tasks has already been tested **in Table 10. Our method shows a performance improvement of 0.6 on S3DIS and up to 7.6 on SemanticKITTI, which is sufficient to demonstrate the effectiveness and scalability of our method. In fact, we have also addressed this in Q2 of the rebuttal.** **Object and scene point cloud tasks are two different research directions.** Object tasks help computers understand objects in the 3D world more accurately. The main goal of scene tasks is to understand and analyze the entire 3D scene, including the objects in the scene, their structures, and the spatial relationships between them. **The paper, "Advancements in Point Cloud Data Augmentation for Deep Learning: A Survey, PR2024," is a review of point cloud augmentation, which classifies and summarizes the tasks and applications of point clouds and introduces the scope of application of different point cloud augmentation methods.** **Our main research is object point cloud augmentation to improve the model's understanding of objects.** The existing point cloud augmentation methods may improve data diversity to some extent, but they often overlook the point cloud's intrinsic structure and semantic details, resulting in a loss of topological consistency in the augmented point cloud. For instance, PointMixup, PointCutMix, and SageMix all use different strategies to mix samples, but they do not consider the local structure of each sample. PointAugment relies on a learnable transformation matrix, making the outcome unpredictable. Similarly, PointWOLF transforms local point clouds using a combination of strategies, which can lead to data distortion and significant semantic deviation, as shown in Figure 1.
**Thus, we mainly solve the problem of topological inconsistencies in the current object point cloud augmentation methods.** **Our contributions are:** We analyze the data augmentation from a statistical perspective. This expands the distribution boundary of the dataset and increases its variance. We prove that the proposed sine-based mapping function is a homeomorphism. In theory, it increases the diversity of point clouds without destroying the topological structure. We propose a new Markov chain augmentation framework that increases sample diversity by randomly combining different foundational transformations to expand the distribution space of the dataset. **We demonstrate the effectiveness of our framework by showing consistent improvements over SOTA augmentation methods on both synthetic and real-world datasets in 3D shape classification and part segmentation tasks.** **Datasets used by different-level tasks:** The main datasets used in object-level tasks include 1) Synthetic datasets for classification tasks: ModelNet10 (MN10), ModelNet40 (MN40), ReducedMN40 (RMN40). 2) Real-world scanned datasets for classification tasks: ScanObjectNN (SONN). 3) Synthetic datasets for part segmentation tasks: ShapeNetPart (SNP). The main datasets used in scene-level tasks include 1) Indoor segmentation task dataset: S3DIS. 2) Outdoor segmentation task dataset: SemanticKITTI. **We focus on object point cloud tasks, and the methods we compare are also focused on object-level tasks. We summarize our object-level method as follows:** Table R: The datasets used in the object-level point cloud augmentation.
|Method|Object-level|Scene-level| |:---|:---:|:---:| | PointAugment(CVPR2020)|MN10, MN40|NA| | PointMixup(ECCV2020)|MN40, SONN|NA| | PointWOLF(ICCV2021)|MN40, RMN40, SONN, SNP|NA| | RSMix(CVPR2021)|MN10, MN40|NA| | PatchAugment(ICCVW2021)|MN10, MN40, SONN|NA| | SageMix(NeurIPS2022)|MN40, SONN|NA| | WOLFMix(PMLR 2022)|MN40|NA| | PCSalMix(ICASSP 2023)|MN10, MN40|NA| | PointPatchMix(AAAI2024)|MN40, SONN|NA| | SinPoint(Ours)|MN40, RMN40, SONN, SNP|S3DIS, SemanticKITTI| As shown in the table above, **in object-level tasks, other methods are tested only on object-level datasets, and we use the most object-level datasets to verify the performance of our methods.** However, **to further expand the application scope of our method, we also conducted additional validation on complex scene point clouds, such as S3DIS and SemanticKITTI. Compared to object-level methods, we are already SOTA. And on scene-level tasks, we are also able to further expand and enhance performance.** In the future, **our method needs to add new contributions to better apply to scene tasks, which is our future work.** **We hope that our sincere response will help you and other reviewers understand our research area. We look forward to your feedback.**
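For context on the Markov chain augmentation framework discussed in this thread (randomly chaining basic transforms such as rotation, scaling, and the sine deformation), here is a minimal illustrative sketch. The transform set, parameter ranges, and uniform transition probabilities are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate_z(P):
    """Random rotation about the z-axis (a topology-preserving transform)."""
    t = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(t), np.sin(t)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return P @ R.T

def scale(P):
    """Random per-axis scaling."""
    return P * rng.uniform(0.8, 1.2, size=3)

def sine_deform(P):
    """Sine-based deformation, amplitude sampled from a small range [-A, A]."""
    A = rng.uniform(-0.2, 0.2, size=3)
    return P + A * np.sin(4.0 * P)

def markov_augment(P, transforms, n_steps=3):
    """Chain n_steps transforms; each step depends only on the current state
    and a sampled transform (here with uniform transition probabilities)."""
    for _ in range(n_steps):
        P = transforms[rng.integers(len(transforms))](P)
    return P

P = rng.uniform(-1.0, 1.0, size=(1024, 3))  # toy point cloud in the unit cube
P_aug = markov_augment(P, [rotate_z, scale, sine_deform])
print(P_aug.shape)  # (1024, 3)
```

Because every step applies a smooth, invertible-by-construction transform with small amplitudes, the chained result stays topologically consistent while its distribution broadens with the number of steps.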
Summary: The paper introduces SinPoint, a novel data augmentation technique for point clouds that employs homeomorphism-based sine transformations to increase geometric diversity while preserving topological consistency. SinPoint has two variants: SinPoint-SSF, which uses a single sine function anchored at the origin, and SinPoint-MSF, which combines multiple sine functions anchored at different points to produce richer deformations. Experiments demonstrate that SinPoint consistently outperforms state-of-the-art augmentation methods, enhancing the generalization and robustness of models (PointNet, PointNet++, DGCNN) on synthetic and real-world datasets across 3D shape classification and part segmentation tasks. ## update after rebuttal Because only one round of questions and answers between the reviewer and the authors is allowed, I still have some minor concerns below. However, these concerns will not strongly affect my final rating of **Weak Accept** (leaning towards accept). Q1: Can we use the same parameters for all datasets, or do we need to choose different values for each dataset? Are there any special requirements for our own collected data? Q2 and Q3: To truly validate effectiveness, data augmentation methods should perform well across both old and new, simple and complex models. Relying on simpler models, which are often less representative of current state-of-the-art systems, may fail to capture real-world challenges like scalability, robustness, or compatibility with modern architectures. A stronger approach would test a broad spectrum of simple to advanced models to ensure versatility, rather than tuning methods to succeed only on legacy frameworks. While starting with smaller models is practical, it's flawed to assume their performance predicts success on large models, which can exhibit unique behaviors and sensitivities not seen in simpler systems. Q5: Could the authors provide more details on the training efficiency of the proposed method?
For example, the OA of the original method and then when applying SinPoint, and likewise the training time of the original method and then when applying SinPoint. Some of the above methods can be used: PointNet, PointMetaBase, SPoTr, and Point Transformer v3. Claims And Evidence: The claims are clear and supported by evidence. Methods And Evaluation Criteria: - While homeomorphism theoretically offers diverse continuous transformations for data augmentation, it can distort critical geometric features such as distances, angles, and curvature. As evident in Figure 2 in the main paper and Figures 8 and 9 in the supplementary, such transformations may generate unrealistic data. - In addition, when dealing with a more complicated task (the part segmentation task, as we can see in Table 2 in the main paper or Table 9 in the supplementary), the proposed method's gains drop significantly (to less than 1%; for some methods just 0.1 or 0.2%, compared to 2.6-7.3% on the classification task), due to confusing part boundaries (mixed-up point-wise labels), which leads to incorrect training signals. Even though the authors tried to show in Table 10 that their method can work well in scene segmentation (with SemanticKITTI, a sparse LiDAR dataset, which can benefit from almost any data augmentation method), the chosen method, MinkNet, is pretty old (2019). - Moreover, the experimental evaluation has limitations: the proposed method shows effectiveness with older augmentation techniques (Table 1) but is less effective when combined with recent methods (Table 6). Theoretical Claims: The proofs are correct for theoretical claims. Experimental Designs Or Analyses: Please check the Methods And Evaluation Criteria above. Supplementary Material: I have read the supplementary material. Relation To Broader Scientific Literature: It has some impact on the scientific literature in a specific domain, namely Data Augmentation for Point Cloud Understanding.
Essential References Not Discussed: None Other Strengths And Weaknesses: The paper is well-structured, making it accessible and understandable. Other Comments Or Suggestions: None. Questions For Authors: 1. Could the authors provide a detailed analysis of how the proposed method specifically impacts part boundaries and label consistency? 2. The authors should report additional details on model size, number of parameters, and throughput when integrating the proposed augmentation method with different backbone models. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer MuAY: **Thanks for your time and insightful reviews. We responded in detail as follows:** **Q1:** ...such transformations may generate unrealistic data. **A1:** Your opinion is very professional. Indeed, when the deformation parameter is too large, it will produce unrealistic data. Therefore, **we chose smaller parameters to avoid such results, and ablation experiments were also conducted in Table 7.** When the deformation is too large, it will also reduce the performance of the model. **The aim of our SinPoint is to increase the geometric diversity of the augmented samples while maintaining topological consistency to understand the nature of the data from more viewpoints and conditions, which can effectively avoid the overfitting of the model to the training data.** **Q2:** ...it is due to confusing part boundaries ... which leads to incorrect training signals. Even though the authors tried to show in Table 10 that their method can work well in scene segmentation ..., the chosen method, MinkNet, is pretty old (2019). **A2:** Your observations were sharp and your reviews were professional. **To avoid boundary confusion, we introduce more conservative parameters to ensure that the transformed part boundaries remain as accurate as possible and avoid extreme distortions. In addition, as can be seen from Table 7, the appropriate deformation can maximize model performance.** In the experiment, we also found that the performance improvement in the part segmentation was not significant and conducted a series of experimental verifications, as shown in Table 9 and Table 10. **We find that most of the data augmentation methods in Table 1 are not suitable for segmentation tasks. As shown in Table 2, only four works reported the result of part segmentation.
But our SinPoint performs best.** However, to confirm whether the method has limitations, we further verify it on multiple backbones, as shown in Table 9, and it can be found that our method still achieves a consistent performance improvement. **Further, we test on scene segmentation; in Table 10, the performance improvement is still limited when performing semantic segmentation of S3DIS, while the performance improvement is significant when performing instance segmentation on SemanticKITTI.** We analyzed that ShapeNetPart and S3DIS are fine-grained semantic segmentation datasets, which are more challenging, while SemanticKITTI is a coarse-grained instance segmentation dataset. As shown in Table 1, our method is better suited for object tasks, which explains its superior performance on SemanticKITTI. Our main goal was to **use MinkNet to test the limitations of our approach to segmentation tasks, which has been demonstrated in experiments. So we did not add more scene-level backbones to test.** **Q3:** the proposed method shows effectiveness with older ... (Table 1) but is less effective ... with recent methods (Table 6). **A3:** Your question is valuable. **This issue is largely influenced by the dataset and model capabilities. The older backbone design is simple and prone to overfitting due to insufficient data, and data augmentation avoids this problem by increasing the diversity of the data.** Therefore, data augmentation can greatly improve its performance. **However, in the recent backbone, due to its more powerful network design and strong representation ability, the backbone is close to the theoretical limit, and the marginal gains are reduced.** **Therefore, all current point cloud augmentation efforts are based on the old backbone to test the performance of the method, which can highlight the advantages of these methods.** **Q4:** Detailed analysis of the impacts on part boundaries and label consistency **A4:** Your question is very professional and valuable.
**We have carried out relevant analysis in the first draft, but this part has been deleted from the submission version due to the structural adjustment.** For the boundary analysis, **we deeply discuss the relationship between the determinant of the Jacobian and the parameter of sine transformation.** Regarding label consistency, **we analyze it from the property of one-to-one mapping of homeomorphism.** Due to the rebuttal **word limit**, **we will re-add these in the latest version.** **Q5:** The authors should report additional details on model size, number of parameters, and throughput. **A5:** Our SinPoint is **a plug-and-play, parameter-free method that is only used during the training phase. It is removed during the testing phase.** Therefore, **our SinPoint does not change the size, number of parameters, and throughput of the baseline.** However, **to show them more clearly to readers in the future, we plan to report these details and add them to the latest version.** **We thank you again for your careful review, which helped us a lot. We have addressed all your concerns in detail. If you have other questions, we can discuss them again, and we look forward to your feedback.** --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response and efforts to address my concerns. While I appreciate the attempt, several points remain inadequately resolved, though the authors have acknowledged some of my comments. Q1: The decision to select smaller parameters to avoid unrealistic data, which results in a performance drop, does not constitute a meaningful scientific contribution. This approach resembles parameter fine-tuning for optimal performance rather than a novel advancement. - Q2 and Q4: The responses regarding the impact of part boundaries and label consistency are incomplete. However, the authors have committed to addressing these in the final version, which I appreciate. - Q2 and Q3: The rationale for relying on older methods lacks robustness.
A critical perspective might question, "Why propose a new method to enhance outdated techniques instead of leveraging recent approaches with advanced network designs and superior representation capabilities?" - Q5: Similar concerns apply here, though the authors have again promised to incorporate improvements in the final version. Given these considerations, I maintain my rating of "Weak Accept". I encourage the authors to fulfill their commitment by including the promised updates in the final manuscript. --- Reply to Comment 1.1.1: Comment: Dear Reviewer MuAY: **Thanks for your time and reply. We hope the following responses truly address your concerns.** **Q1:** The decision to select smaller parameters to avoid unrealistic data, which results in a performance drop, does not constitute a meaningful scientific contribution. This approach resembles parameter fine-tuning for optimal performance rather than a novel advancement. **A1:** Thank you for your reply and continued attention to this issue. Our detailed reply is as follows: First, **we are not fine-tuning the sine function parameters. As shown in Algorithm 1 (line 220), we generate diverse deformation functions by uniformly sampling from different parameter ranges. Specifically, in Table 7, parameter $A$ refers to a parameter range of $[-A, A]$, and the amplitude is uniformly sampled from this range, not fixed.** Second, **the ablation of the parameter range is used to validate the upper and lower bounds of the method, not for fine-tuning the sine function parameters.** In Table 7, **we can observe that our method remains SOTA across different parameter ranges, which further illustrates that the robustness of our SinPoint comes from SinPoint itself.** Finally, **we are the first to propose a novel method based on homeomorphism and a Markov augmentation process, which provides theoretical guarantees for data augmentation.
This further demonstrates the novelty and effectiveness of our method.** **Q2 and Q4:** The responses regarding the impact of part boundaries and label consistency are incomplete. However, the authors have committed to addressing these in the final version, which I appreciate. **A2 and A4:** Thanks for your responses. **A detailed analysis of label consistency is given in line 706 of our paper.** Due to the word limit, we commit to providing a detailed analysis of part boundaries in the final version. Part of the analysis about part boundaries is as follows: **The Jacobian determinant is a quantity that describes the 'stretching' or 'shrinking' properties of the mapping locally. In practical applications, when the sign of the Jacobian determinant changes, a transition from 'stretching' to 'shrinking' may occur, leading to a folding phenomenon.** Given the mapping $P' = P + A\sin(\omega P + \phi)$, we can view it as a transformation from $P$ to $P'$, where $P = (x,y,z)$ and $P' = (x',y',z')$ denote points in 3D space. The determinant of its Jacobian is: $\det(J_{P'}(P)) = (1+A_{x}\omega_{x}\cos(\omega_{x}x + \phi_{x})) \cdot (1+A_{y}\omega_{y}\cos(\omega_{y}y + \phi_{y})) \cdot (1+A_{z}\omega_{z}\cos(\omega_{z}z + \phi_{z})).$ **A larger $A$ increases the probability that the Jacobian determinant becomes zero, making the transformation more intense and causing stronger folding of the shape and structure. This is consistent with our visualization, as shown in Figure 1 (in https://github.com/Anonymous-code-share/ICML2025), where folding only occurs when $A$ is large.** **Q2 and Q3:** The rationale for relying on older methods lacks robustness. A critical perspective might question, "Why propose a new method to enhance outdated techniques instead of leveraging recent approaches with advanced network designs and superior representation capabilities?" **A2 and A3:** Thanks for your further discussion. Your question is very critical.
Firstly, **we are not enhancing outdated technologies, but rather using these technologies to validate the effectiveness of our data augmentation methods.** Data augmentation methods are generally validated for effectiveness on simple models, such as PointAugment (CVPR 2020), PointWOLF (ICCV 2022), and PointPatchMix (AAAI 2024). Secondly, data augmentation is typically applied in contrastive learning or pretraining large models to construct positive samples. Due to the difficulty of training large models, we cannot directly validate the performance of new data augmentation methods on large models. Therefore, after validating the effectiveness of the methods on smaller models, they are directly applied to large model training. **Specifically, data augmentation plays a crucial role in contrastive learning models like CLIP (Contrastive Language-Image Pretraining) and BLIP.** **Q5:** Similar concerns apply here, though the authors have again promised to incorporate improvements in the final version. **A5:** We promise to add these model parameters in the final version. We have updated the table as follows:

Table R2: Additional details on backbones.

|Backbone|OA|Params. (M)|FLOPs (G)|Throughput (ins./sec.)|
|:---|:---:|:---:|:---:|:---:|
|PointNet|70.8|3.5|0.9|4212|
|PointNet++|84.5|1.5|1.7|1872|
|DGCNN|84.6|1.8|4.8|402|
|PointMLP|87.5|13.2|31.3|191|
|PointNeXt-S|88.9|1.4|1.6|2040|
|PointMetaBase-S|89.3|1.4|0.6|2674|
|SPoTr|89.5|1.7|10.8|281|

**We thank you again for your careful review. We have addressed all your concerns in detail. If you have other questions, we can discuss them again, and we look forward to your feedback.**
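The folding argument in this rebuttal can be checked numerically. The following is a minimal sketch with illustrative parameter values, not the SinPoint implementation: since the deformation $P' = P + A\sin(\omega P + \phi)$ acts per axis, its Jacobian determinant factorizes into per-axis factors, which we evaluate on a grid of points for a small and a large amplitude $A$:

```python
import numpy as np

def jacobian_det(x, A, omega, phi):
    # Each axis of x' = x + A*sin(omega*x + phi) deforms independently,
    # so the Jacobian is diagonal and its determinant factorizes into
    # prod_i (1 + A_i * omega_i * cos(omega_i * x_i + phi_i)).
    return np.prod(1.0 + A * omega * np.cos(omega * x + phi), axis=-1)

# Dense grid of 3D points in [-pi, pi]^3 (illustrative range).
g = np.linspace(-np.pi, np.pi, 25)
pts = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)

omega, phi = np.ones(3), np.zeros(3)

# Small amplitude: every factor 1 + 0.3*cos(.) stays positive -> no folding.
det_small = jacobian_det(pts, np.full(3, 0.3), omega, phi)
# Large amplitude: factors 1 + 2*cos(.) can cross zero -> the map folds.
det_large = jacobian_det(pts, np.full(3, 2.0), omega, phi)

print(det_small.min() > 0)  # determinant keeps one sign for small A
print(det_large.min() < 0)  # sign changes for large A, i.e. folding
```

On this grid the small-amplitude determinant stays strictly positive while the large-amplitude one changes sign, matching the claim that folding only appears when $A$ is large.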
Minimalist Concept Erasure in Generative Models
Accept (poster)
Summary: This paper introduces "Minimalist Concept Erasure," a framework designed to remove unwanted concepts from generative models with minimal performance degradation. The core algorithmic idea involves learning a binary mask that selectively prunes neuron connections, guided by an end-to-end optimization process aimed at minimizing the distributional distance between outputs of the original and erased models. The authors validate their method primarily using the FLUX model, demonstrating improved effectiveness in concept erasure and enhanced robustness against specific adversarial attacks compared to several baseline approaches, while claiming to preserve image quality. Claims And Evidence: While the paper presents quantitative and qualitative evidence, several claims could benefit from more rigorous and nuanced validation. - Claim 1: "Minimal Performance Degradation" The claim that minimalist concept erasure robustly erases concepts with minimal overall model performance degradation is not fully supported by the presented evidence. While metrics such as FID and SSIM scores are informative, they do not comprehensively capture all facets of model performance. Essential dimensions like diversity, novelty, and controllability remain insufficiently evaluated. For instance, it remains unclear whether the method preserves stylistic variations or fine-grained details. Additional metrics and analyses would help substantiate a more holistic performance assessment. - Claim 2: "Robustness Against Adversarial Attacks" Although robustness against adversarial attacks is demonstrated, the scope of validation is limited. The paper tests only a few adversarial prompts (e.g., Ring-A-Bell, MMA-Diffusion, P4D, I2P), leaving uncertainty about performance under more diverse or sophisticated adversarial scenarios. 
Additionally, the mechanism behind increased robustness through neuron masking is not thoroughly explained—is it due to increased sparsity, modified decision boundaries, or another factor? A broader and deeper adversarial analysis, including white-box and transfer attacks, would strengthen this claim. - Claim 3: "Model-Agnosticism" The claim of model-agnostic applicability appears overly broad given that experiments focus primarily on rectified flow models (FLUX and SD-XL). The paper lacks evidence demonstrating effectiveness across fundamentally different generative architectures, such as GANs or VAEs. Either this claim should be qualified or supported by additional experiments involving diverse generative architectures to validate broader applicability. Methods And Evaluation Criteria: The proposed minimalist concept erasure method has notable strengths but also clear limitations in both method design and evaluation. - Strength: Minimalist Objective The "minimalist" objective—focusing on minimal modifications guided by output distribution—is conceptually appealing, directly addressing concerns about excessive modifications in existing concept erasure methods. - Weakness: Insufficient Algorithmic Detail The current description of the algorithm lacks clarity on key details. How is the binary mask initialized and optimized? Are additional loss functions beyond distributional distance utilized? How is the trade-off between concept erasure effectiveness and performance preservation explicitly managed during optimization? Providing more detailed algorithmic specifications would facilitate reproducibility and deeper understanding. - Weakness: Evaluation Metrics Limitations While the metrics (ACC, CLIP, FID, SSIM) are standard, they do not completely capture perceptual quality, diversity, controllability, or potential biases. A comprehensive evaluation including human assessments and metrics targeting diversity, novelty, and bias would enrich the analysis. 
For instance, low FID does not necessarily guarantee perceptually satisfying or diverse image outputs. - Weakness: Limited Baseline Comparison Although the paper compares against methods such as ESD, CA, SLD, EAP, and FlowEdit, it lacks sufficient justification regarding baseline selection and deeper qualitative comparisons. A more detailed and nuanced analysis of these baselines, emphasizing their relative strengths and limitations across varied scenarios, would contextualize the paper's contributions more effectively. Theoretical Claims: The paper currently lacks explicit theoretical claims and formal proofs. While the minimalist objective is intuitively appealing, rigorous theoretical justification would significantly enhance the paper. A theoretical analysis—possibly leveraging optimization theory, information theory, or network pruning theory—could clarify why minimalist concept erasure is effective and efficient. For instance, examining the relationship between network sparsity and robustness or exploring convergence properties of the proposed optimization method would substantially strengthen the paper. Experimental Designs Or Analyses: The experimental approach demonstrates several limitations impacting robustness and generalizability: - Weakness: Limited Granularity in Ablation Studies The presented ablation studies lack sufficient depth, primarily considering coarse-level masking strategies (Attn, FFN, Norm). A more fine-grained ablation study evaluating variations such as masking ratios, specific layers, or strategies would provide valuable insights into method sensitivity and optimal configuration. - Weakness: Insufficient Qualitative and Error Analysis The paper emphasizes quantitative metrics with limited qualitative visual examples. 
Conducting more thorough error analyses, examining failure cases, common artifacts, or method limitations, alongside detailed qualitative assessments (e.g., human evaluations or user preference tests), would deliver richer and more comprehensive insights into the method’s practical effectiveness and limitations. Supplementary Material: The supplementary material provides the source code implementation, which is valuable for practical replication. However, due to the absence of theoretical derivations or mathematical proofs and since the provided code was not practically executed during this review, the clarity and reliability of the algorithmic details remain uncertain. Relation To Broader Scientific Literature: The paper adequately references relevant scientific literature on concept erasure in generative models but would benefit from: - Connecting the method more explicitly to broader research themes such as model compression, adversarial robustness, and fairness. - Discussing relevant theoretical frameworks such as information bottleneck theory or rate-distortion theory to enrich theoretical understanding. Supplementary references to key literature on sparsity, information bottleneck theory, and adversarial robustness would further contextualize and enrich the paper. Essential References Not Discussed: As mentioned previously, essential references that could enrich the paper's context include: - Sparsity and Network Pruning: "Learning both Weights and Connections for Efficient Neural Networks" (Han et al., 2015), "The Lottery Ticket Hypothesis: Training Pruned Neural Networks" (Frankle & Carbin, 2018). - Information Theory and Model Compression: Relevant works on information bottleneck principle and rate-distortion theory. Other Strengths And Weaknesses: Strengths: - Conceptually Appealing Approach: The minimalist objective intuitively addresses key concerns in concept erasure. 
- Practical and Scalable Framework: Combining neuron masking and end-to-end optimization offers a potentially practical and scalable solution. - Promising Initial Results: Despite limitations, experimental outcomes indicate potential effectiveness worthy of further exploration. Weaknesses: - Lack of Theoretical Depth: The paper currently lacks rigorous theoretical grounding for algorithmic decisions. - Experimental Evaluation Limitations: Evaluations demonstrate weaknesses in statistical rigor, evaluation scope, and depth, limiting robustness and generalizability. - Overstatement of Claims: The novelty, effectiveness, robustness, and general applicability are occasionally overstated without sufficient supporting evidence or nuanced limitation discussions. Other Comments Or Suggestions: - Reconsider the framing of "minimalism." While conceptually appealing, the term might be misleading without a more rigorous definition and justification. Consider focusing on "efficient" or "targeted" concept erasure instead. - Emphasize the limitations and potential trade-offs of the proposed method more explicitly throughout the paper, providing a more balanced and realistic perspective. Questions For Authors: - Rigorous Definition of "Minimalism": Could you provide a more rigorous definition and quantification of "minimalism" in the context of your algorithm? How do you measure and compare the "minimal modification" achieved by your method against alternative approaches? - Mechanism of Robustness Enhancement: Can you elaborate on the specific mechanisms through which neuron masking enhances robustness against adversarial attacks? Is there a theoretical or empirical analysis that supports your claim that neuron masking provides inherent robustness advantages over weight-tuning methods? 
- Comprehensive Performance Evaluation: To address the "minimal performance degradation" claim more convincingly, could you include a more comprehensive evaluation of model performance beyond FID and SSIM, assessing diversity, novelty, controllability, and potential biases? Human evaluation studies or user preference tests could also provide valuable insights. - Broader Model and Concept Validation: To strengthen the generalizability claim, could you present experimental results on a wider range of generative model architectures (e.g., GANs, VAEs) and concept categories, including more complex and abstract concepts? - Statistical Significance and Practical Significance: Could you include statistical significance testing for all quantitative comparisons and provide a more in-depth discussion of the practical significance of the observed performance differences, considering the computational cost and complexity of your method? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for recognizing the strengths of our work. We’re glad you found the minimalist objective conceptually appealing, and we appreciate your acknowledgement of the practicality and scalability of our framework. We reply to your concerns below:

> “While metrics such as FID and SSIM scores are informative, they do not comprehensively capture all facets of model performance.”

The use of ACC, CLIP, FID, and SSIM is well justified, as these metrics are commonly used for evaluation in prior unlearning works. Please refer to our Related Works section for supporting literature.

> “Essential dimensions like diversity, novelty, and controllability remain insufficiently evaluated. For instance”

Can you elaborate on evaluation metrics for diversity, novelty, and controllability and provide concrete metrics?

> “The paper tests only a few adversarial prompts (e.g., Ring-A-Bell, MMA-Diffusion, P4D, I2P), leaving uncertainty about performance under more diverse or sophisticated adversarial scenarios”

Adversarial attacks on unlearning are a newly emerged research area, and only a few related works are available. In this work, we selected SOTA adversarial attack methods (Ring-A-Bell from ICLR 2024, MMA-Diffusion from CVPR 2024, P4D from ICML 2024, and I2P from CVPR 2023). We believe our choices are representative. We welcome any specific suggestions you may have.

> “The paper lacks evidence demonstrating effectiveness across fundamentally different generative architectures, such as GANs or VAEs”

Our method can be easily extended to GANs and VAEs, since they both perform single-step generation. However, studies on GANs and VAEs are of limited importance due to their limited generative ability. We chose to evaluate our method on SOTA models. This is also acknowledged by reviewers B5EZ and BukH.

> “The current description of the algorithm lacks clarity on key details. How is the binary mask initialized and optimized?
Are additional loss functions beyond distributional distance utilized? How is the trade-off between concept erasure effectiveness and performance preservation explicitly managed during optimization?”

Our final loss is precisely described in L172, i.e., no other loss is utilized. In addition, Appendix H provides details and the training configuration. The trade-off between concept erasure effectiveness and performance preservation is discussed in L370-375 and Figure 5 in the section Ablating $\beta$.

> “The paper currently lacks explicit theoretical claims and formal proofs.”

Our loss is rigorously derived from the KL divergence between the model's generative distributions. The problem formulation and core derivation are all presented in Section 3. Appendix A shows the full derivation of the preservation loss. Appendix B shows the full derivation of the erasure loss. Appendix C shows the full loss derivation for diffusion models. Appendix D shows the connection to the step-wise loss by analyzing a different KL divergence setup.

> “The presented ablation studies lack sufficient depth”

We present 4 other ablation studies besides the module ablation. They are closely related to the algorithmic choices made in this work.

> “However, due to the absence of theoretical derivations or mathematical proofs and since the provided code was not practically executed during this review, the clarity and reliability of the algorithmic details remain uncertain.”

We want to emphasize that Appendices A, B, C, and D provide sufficient mathematical proofs. All these proofs are appended to the main text rather than placed separately in the supplementary materials. Regarding the issue with executing our code, we would like to know more details so we can help you reproduce our results.

> “The paper emphasizes quantitative metrics with limited qualitative visual examples.
Conducting more thorough error analyses, examining failure cases … ”

Besides Figures 3-6 in the main text, we present Figures 7-14 in the Appendix, which include failure cases arising from incorrect setups or hyperparameter changes.

> “Is there a theoretical or empirical analysis that supports your claim that neuron masking provides inherent robustness advantages over weight-tuning methods?”

As stated in L201, we build on the finding that masking leads to improved robustness performance [1]. We hope our response has addressed your concerns, and we’re happy to answer any further questions you may have. If there are no additional concerns, we would be grateful if you would consider further supporting this work by raising your rating.

[1]: Pruning for robust concept erasing in diffusion models

---

Rebuttal Comment 1.1: Comment: Thank you for your response and clarification. However, after carefully revisiting your paper, I realized the core reason for my confusion was the term **"Minimalist"** itself. Although you clearly define "minimal" in Section 3 as referring specifically to minimizing the distributional difference in the final outputs (minimal changes at the output level), other parts of the paper still seem ambiguous. For example, certain statements regarding neuron masking in Section 3.6 could easily mislead readers—especially those familiar with network pruning—to interpret minimalism as referring to minimal parameter changes, weight-space distances (such as minimizing norms like ∥θ - θ'∥), or pruning fractions. To prevent similar misunderstandings by other readers, I suggest explicitly reiterating and emphasizing within your main text—especially when first introducing the term or describing practical mechanisms—that your definition of minimalism strictly targets minimal changes in output distributions, not necessarily minimality in network parameters or pruning metrics.
If you clearly include these suggested clarifications within the main text, my concerns will be fully addressed, and I will gladly raise my evaluation score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for your comment and for reconsidering our work. We appreciate your acknowledgement that the term “minimalist” is clearly defined in Section 3. To further clarify this term and prevent potential misunderstandings, we will make the following revisions in the main text: 1. Add a label in Figure 2 to highlight the minimal output difference. 2. Revise Section 1 to include a more descriptive explanation when the term “minimalist” is first introduced, and add a reference to Section 3 and Equation 5 for readers seeking a more rigorous definition. 3. Revise Section 3.6 to emphasize that we mask neurons for robust concept erasure, and that our definition of “minimalist” does not refer to minimal parameter change. Thank you again for your thoughtful feedback and for helping us improve our work.
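For readers trying to picture the neuron-masking mechanism discussed in this thread, here is a hypothetical NumPy sketch (an illustration only, not the paper's implementation): a hard binary mask is thresholded from learnable logits and gates an FFN block's hidden neurons, leaving the weights themselves untouched; during optimization the gradient would be taken through the sigmoid relaxation, straight-through style:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def masked_ffn(x, W1, W2, mask_logits, hard=True):
    # Toy FFN block whose hidden neurons are gated by a learnable mask.
    # hard=True applies the thresholded binary mask (the erased model);
    # hard=False uses the sigmoid relaxation a gradient could flow through.
    m = sigmoid(mask_logits)
    if hard:
        m = (m > 0.5).astype(x.dtype)
    h = np.maximum(x @ W1, 0.0) * m  # zero out pruned neurons; weights untouched
    return h @ W2

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 8))

keep_all = np.full(16, 10.0)   # logits >> 0  -> mask of ones
drop_all = np.full(16, -10.0)  # logits << 0  -> mask of zeros

assert np.allclose(masked_ffn(x, W1, W2, keep_all), np.maximum(x @ W1, 0.0) @ W2)
assert np.allclose(masked_ffn(x, W1, W2, drop_all), 0.0)
```

With all-positive logits the block reproduces the unmasked FFN exactly, which is why masking can leave the original weights, and hence the preserved behavior, intact.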
Summary: The paper introduces a concept erasure method that is minimal in design, requiring only the final output of the diffusion model rather than access to intermediate timesteps. In addition, the authors propose a neuron masking technique as an efficient alternative to traditional fine-tuning. Both approaches demonstrate strong erasure performance, particularly when applied to flow-matching models such as FLUX. Claims And Evidence: Yes, I believe that the claims in this submission are supported by extensive evaluation. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for concept erasure. Most concept erasure works evaluate on object erasure, artist style erasure, object removal, and some copyrighted characters. This work follows this evaluation setup, along with evaluation on adversarial attacks. Theoretical Claims: I have checked the correctness of the preservation loss (Appendix A), the erasure loss (Appendix B), the full loss derivation (Appendix C), and the connection with the step-wise concept erasure loss. While I may not have understood it completely, I can say that it is sound. Experimental Designs Or Analyses: 1. My main concern is that several recent concept erasure works have been ignored. Can you please add them so that we can compare the proposed method against similar baselines? [1] One-dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications [2] MACE: Mass Concept Erasure in Diffusion Models [3] Receler: Reliable Concept Erasing of Text-to-Image Diffusion Models via Lightweight Erasers [4] Selective Amnesia: A continual learning approach to forgetting in deep generative models, Heng et al. [5] Forget-me-not: Learning to forget in text-to-image diffusion models, Zhang et al. [6] Scissorhands: Scrub data influence via connection sensitivity in networks, Wu et al. 2.
It is unclear to me whether the authors trained a different model for every concept or separate models with inappropriate objects, IP characters, nudity, and art styles removed. If not, I would suggest that the authors train for a multi-concept erasure method, similar to UCE. 3. Can you provide more details on how you trained ESD, UCE, and other ablations with FLUX? 4. Many concept erasure methods like ESD, UCE, and Forget-Me-Not have shown that cross-attention plays a major role in learning undesired concepts. However, most of these methods consider SDv1.5, while this paper considers a transformer-based FLUX model. Do the authors have any insight into which attention blocks (joint attention/self-attention) in FLUX play a role in generating these concepts? Does it help if we can selectively tune these attention blocks? This is not a weakness of the paper, so please don't feel pressured to do this analysis. Supplementary Material: I checked the proofs and the results in the appendix. Relation To Broader Scientific Literature: The key contributions of this paper are towards concept erasure in flow-based models like FLUX. I think this is the first work to consider newer models. Their results seem to be better than those of the other concept erasure methods considered in the paper. However, I still feel that the authors ignore many recent concept erasure works. Essential References Not Discussed: Please see the references in the Experimental Designs Or Analyses section that I think the paper has not considered. Other Strengths And Weaknesses: Please see the Experimental Designs Or Analyses section. Other Comments Or Suggestions: Minor typo in the introduction: line 68 needs a full stop before 'In practice'. Questions For Authors: Please see the Experimental Designs Or Analyses section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful and thorough review. We greatly appreciate your acknowledgment that our claim is well supported by the evaluation in conventional settings as well as adversarial settings. We also appreciate your review of our theoretical derivations and acknowledgement of their soundness. We address your concerns below:

#### 1. **Additional concept erasure works**:

We appreciate your comment regarding additional baselines. From your references, Receler edits features after cross-attention modules, MACE fine-tunes $W_v$ and $W_k$ in cross-attention modules, and Forget-Me-Not performs attention re-steering on cross-attention modules. Since all three methods rely on cross-attention, they are not directly applicable to FLUX, which uses MM-attention instead. As for One-dimensional Adapter, SA, and Scissorhands, including them as baselines poses significant challenges, as they do not provide official implementations for FLUX, and adapting them to FLUX requires substantial effort due to their complexity. We will include these baselines and discuss them in our related works.

#### 2. **Training on separate models or a monolithic model**

In our evaluation, we train separate models for each category that contains multiple concepts—an evaluation setting commonly adopted in prior unlearning works. Additionally, we present a result where a single monolithic model is trained to unlearn concepts across multiple categories. As shown in the table, unlearning multiple concepts across different categories is more challenging than unlearning concepts in a single category.

| Number of Unlearned Concepts | CLIP ↑ | FID ↓ | SSIM ↑ |
|----------------------------------|--------|--------|--------|
| FLUX (Original) | 0.31 | 40.4 | - |
| 10 (IP + Styles) | 0.29 | 44.4 | 0.55 |
| 20 (IP + Styles + Nudity) | 0.23 | 49.3 | 0.53 |
| 50 | 0.20 | 58.3 | 0.48 |

**Table:** COCO performance as the number of unlearned concepts increases.
Arrows indicate preferred direction.

#### **3. Details on how baselines are compared:**

Since many unlearning works are implemented on SD1.4 and lack official support for FLUX, we reimplemented all baselines used in this work. For a fair comparison (the hyperparameters were originally proposed for SD1.4), we performed a hyperparameter search for each baseline to find the best-performing hyperparameters, which we used in our experiments. Details of the baseline implementations are shown in Appendix H.2. One qualitative example of ESD hyperparameter performance on FLUX is shown in Figure 12. We will update the appendix to include the full set of baseline hyperparameters. Additionally, we plan to release our FLUX implementations of these baselines upon acceptance to support reproducibility.

#### **4. Do the authors have any insight into which attention blocks (joint attention/self-attention) in FLUX play a role in generating these concepts?**

On FLUX, we observe behavior that differs significantly from U-Net-based SD models (e.g., SD1.4). Unlike SD models, which generate concepts in cross-attention layers, we observe that normalization layers and FFN layers play a critical role in generating concepts. This is supported by our ablation study in Table 4. We also present additional qualitative results in Figure 11 in Appendix K.2. These findings motivate our decision to exclude attention modules from FLUX unlearning, as noted in Line 382. We hope our response has addressed your concerns, and we’re happy to answer any further questions you may have. We sincerely appreciate your recognition of our work as a meaningful contribution to current unlearning research with the SOTA FLUX model. We would be grateful if you would consider further supporting this work by raising your rating.

---

Rebuttal Comment 1.1: Comment: Thank you for your comprehensive rebuttal.
I understand that the Receler and Forget-me-not methods may be hard to apply to the FLUX model based on joint attention, and that training the other baselines I mentioned may be difficult. I have a few questions in this regard: 1. Have the authors tried to extend their method to other models like SD3.5, etc.? I would like to point out that proposing a very model-specific erasure method is of very little use to researchers; however, proposing a more general concept erasure method (perhaps only for MM-attention models) would be more useful. Many of the recent works that I proposed exhibit excellent performance, and it would be useful for us to understand the efficacy of minimalist concept erasure if we can compare it with other recent works. Thank you for providing some results for multi-concept erasure. Please include them in the paper.

---

Reply to Comment 1.1.1: Comment: Thank you for acknowledging our rebuttal. Regarding the extension to SD3.5: SD3.5 shares the same architecture as FLUX (MM-DiT), and both are rectified flow models [1,2,3]. Therefore, we chose to evaluate our method on the better-performing FLUX model to show its effectiveness on a SOTA model. Nevertheless, we tried our method on SD3.5 prior to submission, and it worked. Regarding generalization to other models: Our method is model-agnostic by design. We also included additional results on SD-XL in our response to reviewer ab8s. This demonstrates that our method is effective across different architectures (U-Net and MM-DiT) as well as various training paradigms (diffusion and flow matching). Thank you for your suggestion. We’ll include the multi-concept erasure results in the paper and discuss the additional recent works you recommended.

\[1\]: https://huggingface.co/blog/sd3-5 \[2\]: https://arxiv.org/pdf/2403.03206 \[3\]: https://huggingface.co/blog/sd3
Summary: This paper studies minimalist concept erasure, which aims to remove inappropriate content from a generative model with minimal modification to the original model, specifically in diffusion/flow matching models. Unlike previous approaches that operate on each step of the sampling chain, the training objective in this paper applies to the entire trajectory, prioritizing its effect on the final generated images. Additionally, the authors introduce a KL regularization term to ensure minimal changes to the model’s behavior. Across multiple benchmarks, this method demonstrates advantages over other compared approaches. Claims And Evidence: Extensive empirical analysis supporting the theoretical findings, detailed in Appendices A–D. Effective content erasure while preserving the model’s functionality, as evidenced in Tables 2–5. Methods And Evaluation Criteria: The final objective is given in Equation 16, where the first term ensures that the model’s output is not correlated with the concept c, while the second term encourages the model’s output to align with a reference model. Theoretical Claims: The new loss is related to the step-wise loss function, as in Sec. 3.4 and Appendix D, and to the alignment objective, as in Sec. 3.5. Experimental Designs Or Analyses: The authors run experiments on state-of-the-art flow-based models and inspect whether the model can still generate images with erased concepts. In the meantime, the authors also benchmark image quality to ensure the model’s performance does not degrade. The authors compared multiple methods and showed clear improvement. Supplementary Material: The authors provide supplementary material with further loss derivations and more examples.
Relation To Broader Scientific Literature: This work is related to other step-wise concept-erasing methods and to alignment approaches. Essential References Not Discussed: This paper provides sufficient references. Other Strengths And Weaknesses: Strength: The primary strength of this paper is that the proposed objectives are well motivated and presented in a very concise form. The performance is also significantly better than that of other methods. Weakness: This method needs to generate the entire chain, which is less efficient than step-wise methods. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful and thorough review. We’re glad you found our approach to minimalist concept erasure well motivated and effective across benchmarks. We appreciate your feedback and the recognition of our theoretical and empirical contributions. Regarding your concern, we acknowledge the efficiency trade-off introduced by complete gradient trajectories across all generation steps. Nevertheless, since unlearning requires only one training run, trading some runtime for improved performance is acceptable. Besides, we show that concept removal requires only a few GPU hours. We’re happy to answer any further questions. If you don’t have any additional concerns and find that this work complements the current unlearning research, we would greatly appreciate it if you could raise your score to further support it.
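The runtime trade-off discussed here comes from differentiating through the whole sampling chain. A one-parameter toy (an illustration only, not the paper's FLUX-scale model) makes the mechanics concrete: the loss touches only the final Euler-integrated sample, yet its gradient with respect to the model parameter accumulates a contribution from every generation step:

```python
import numpy as np

def generate(a, x0, steps=20, dt=0.05):
    # Toy one-parameter "flow model": Euler-integrate dx/dt = a * x.
    x = x0
    for _ in range(steps):
        x = x + dt * a * x
    return x

def final_loss(a, x0, target, steps=20, dt=0.05):
    # The loss is placed on the final sample only; its gradient w.r.t. a
    # nevertheless flows back through every Euler step of the chain.
    return 0.5 * (generate(a, x0, steps, dt) - target) ** 2

a, x0, target, T, dt = 0.5, 1.0, 0.0, 20, 0.05

# Closed form: x_T = x0*(1 + a*dt)**T, so dx_T/da = x0*T*dt*(1 + a*dt)**(T-1);
# the factor T is the accumulated contribution of all T steps.
xT = x0 * (1 + a * dt) ** T
analytic = (xT - target) * x0 * T * dt * (1 + a * dt) ** (T - 1)

eps = 1e-6
numeric = (final_loss(a + eps, x0, target) - final_loss(a - eps, x0, target)) / (2 * eps)
print(abs(analytic - numeric) < 1e-4)  # True: finite difference matches
```

The closed-form derivative carries a factor of $T$, which is exactly why backpropagating this loss is roughly $T$ times more expensive in memory and compute than a single step-wise update.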
Summary: The paper presents a technique to unlearn concepts from generative models. Unlike existing concept erasure techniques, the proposed technique unlearns concept based only on distributional distances of the final generation outcomes. Claims And Evidence: - The authors claim that “our method adopts a connectionist perspective, treating concepts as being stored in the interconnected structure of neurons.” However, this requires further analysis and discussion to fully understand the approach. Generative models inherently learn highly correlated concepts with complex interdependencies. It remains unclear how neuron masking is applied to concepts that exhibit strong correlations, such as “nudity” and “revealing clothing.” A more detailed explanation and experimental validation are necessary to clarify the handling of such cases. - Figure 2 suggests that “the model learns an optimal trajectory as the gradient propagates through all generation steps.” However, this claim is not rigorously proven in the paper, and further justification is required to substantiate it. - Additionally, the title “Minimalist Concept Erasure” appears misleading, as the paper does not quantitatively demonstrate that minimal changes are made to images or features. To justify this claim, the authors should either introduce an evaluation metric or provide a detailed analysis of feature-level modifications to verify the extent of erasure. Methods And Evaluation Criteria: - The paper overlooks a crucial evaluation metric—assessing how well the unlearned model retains neutral concepts. Effective unlearning should remove specific concepts without inducing catastrophic forgetting, ensuring the model can still generate neutral, unrelated content. A dedicated analysis on this aspect is necessary. - The proposed unlearning loss function closely resembles existing approaches, such as the Concept Ablation (CA) method. The formulation does not introduce significant innovations beyond prior work. 
- The paper does not sufficiently address the scalability of neuron masking when unlearning multiple concepts. As the number of masked neurons increases, training stability may degrade. Additional justification and experimental validation are necessary to demonstrate the method’s robustness under varying numbers of masked neurons. - For models with a large number of denoising steps, the backpropagation of gradients from the final step to the initial step accumulates all previous gradients, which could lead to instability or unintended interference at early denoising steps. The impact of long-range gradient dependencies on early denoising dynamics should be analyzed. - The current loss function is applied only at the final denoising steps, leaving open the possibility that the target concept remains preserved in earlier steps. To rigorously validate concept erasure, the authors should empirically and mathematically demonstrate how the concept is removed across intermediate steps ( X_1, X_2, … ). This could be achieved by discarding stochasticity in denoising and analyzing the generative trajectory. - For the last point, one way to verify this is to examine how the feature space is affected for objects at different denoising steps. If the concepts are removed properly, the features should be highly discriminative from the original concept features. Theoretical Claims: The authors theoretically claim that they formulate the loss to preserve neutral concepts; however, no empirical proof is provided. Experimental Designs Or Analyses: - The results on the SDXL model are not provided, despite the authors claiming that their approach is suitable for diffusion models. Additionally, they state that these results are included in the appendix, but they are either not referenced or were inadvertently omitted. In either case, the absence of quantitative results raises concerns.
- The evaluation of neutral prompt concept preservation is missing, which is crucial for understanding how the model generalizes after unlearning specific concepts. - In Equation 5, the authors mention that a neutral prompt set is created and sampled to compute the loss. However, there is no explanation regarding which neutral prompts the model is trained on or how these concepts are selected. Furthermore, details on the number of neutral concepts used for a given target concept are not provided, making it difficult to assess the robustness of the approach. Supplementary Material: Discussed already. Relation To Broader Scientific Literature: Contributions are related to existing literature. However, all the papers discuss the retention accuracy of the model on non-target concepts. This is missing from the paper. Essential References Not Discussed: Some papers are missing, and comparisons with them are also missing: - Gong, C., Chen, K., Wei, Z., Chen, J., and Jiang, Y.-G. Reliable and efficient concept erasure of text-to-image diffusion models. In European Conference on Computer Vision, pp. 73–88. Springer, 2025. - Hong, S., Lee, J., and Woo, S. S. All but one: Surgical concept erasing with model preservation in text-to-image diffusion models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 21143–21151, 2024. - Kim, C., Min, K., and Yang, Y. Race: Robust adversarial concept erasure for secure text-to-image diffusion model. In European Conference on Computer Vision, pp. 461–478. Springer, 2025. - Huang, Chi-Pin, et al. "Receler: Reliable concept erasing of text-to-image diffusion models via lightweight erasers." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024. - Zhang, Gong, et al. "Forget-me-not: Learning to forget in text-to-image diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. - Lu, Shilin, et al.
"Mace: Mass concept erasure in diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Some of these are discussed in the paper, but comparisons are missing. Other Strengths And Weaknesses: Already discussed Other Comments Or Suggestions: Writing could be improved. Some equations use notation that is not defined. Check all equations carefully, Equation 22 for example. The statement "During backward propagation, we recompute the forward before gradient calculation." does not seem to be complete or correct. Appendix M: "Robustness study of unlearned models with neural prompts" — I think the authors meant neutral prompts. Questions For Authors: - What was the training time? How many neutral concepts are selected? - What happens if these neutral concepts are reduced? Does the model behave unpredictably for neutral concepts? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the thoughtful and thorough review. We address your concerns below:

#### **Claims**:

1. As stated in L201, prior masking-based unlearning methods achieve accurate and robust results, which we build on due to their strong performance. Regarding neuron masking for strongly correlated concepts, Table 2 shows that model performance degrades minimally on neutral prompts (see LAION dataset). Visual examples in L228 further illustrate that our method does not unlearn a concept by pushing it toward a predefined anchor or a contrastive concept (our result is neither nude nor heavily dressed). Further understanding how the model functions internally falls under the domain of mechanistic interpretability research and is beyond the scope of this work. 2. Our derivation in Appendix A shows how we derive our preservation loss from the KL divergence between the original and unlearned models. L263 explains how the mask gradient is composed of the gradients of all intermediate masks, following the chain rule. 3. We conduct extensive evaluations of model performance post-unlearning. For target concepts, we report CLIP (text alignment), FID (image quality), and SSIM (structural similarity between original and unlearned outputs), with results in Table 1 showing our method outperforms baselines. For neutral prompts, we evaluate 5,000 samples from the LAION dataset (Table 2), demonstrating that our unlearned model maintains strong performance.

#### **Methods and Evaluation Criteria**:

1. We respond in Claims 3. 2. Our method fundamentally differs from CA: while CA applies per-step losses on intermediate variables, our loss operates only on the final output. Additionally, CA requires an anchor concept paired with each target concept, whereas our approach does not rely on any anchor. 3. In our evaluation, we train separate models for each category that contains multiple concepts. This evaluation setting is adopted in almost all unlearning works.
We present an additional result unlearning one monolithic model across multiple categories.

| Number of Unlearned Concepts | CLIP ↑ | FID ↓ | SSIM ↑ |
|----------------------------------|--------|--------|--------|
| **FLUX (Original)** | 0.31 | 40.4 | - |
| 10 (IP + Styles) | 0.29 | 44.4 | 0.55 |
| 20 (IP + Styles + Nudity) | 0.23 | 49.3 | 0.53 |
| 50 | 0.20 | 58.3 | 0.48 |

**Table:** COCO performance as the number of unlearned concepts increases.

4. We do not observe any significant gradient instability issues, as modern model architectures utilize residual connections that help stabilize gradients during training. 5. One cannot conclude that an early step still preserves a concept simply because it is not altered, as it may no longer resemble the concept in the final outcome. After all, our concern is the presence of the concept at the final step. 6. In response to your feedback, we performed UMAP visualization on the final denoised latent. The results show clear separation between unlearned and original FLUX features using the same prompts. We will include this plot in the revision to further support our claims, alongside detection accuracy results.

#### **Experimental Designs Or Analyses**:

1. We apologize for the misunderstanding caused. We include the result on SD-XL below and will add it to the Appendix. In addition, Appendix C shows the derivation and final loss for diffusion models; we welcome you to check its correctness.

| Category | Method | ACC ↓ | CLIP ↑ | FID ↓ | SSIM ↑ |
|----------------|--------|--------|--------|--------|---------|
| IP | Ours | 3% | 0.30 | 33.4 | 0.63 |
| IP | SDXL | 100% | 0.31 | 35.5 | – |
| Art Styles | Ours | 7% | 0.29 | 37.5 | 0.53 |
| Art Styles | SDXL | 89% | 0.31 | 35.5 | – |

**Table**: Concept erasure results (IP Characters and Art Styles) on SDXL.

2. We respond in Claims 3. 3. In the training details in Appendix E (Line 887), we state that neutral prompts are randomly selected from the GCC3M dataset.
In addition, we use 100 neutral prompts during unlearning. We appreciate your suggestion on the detailed settings of neutral prompts and will include these details in the main text.

#### **Questions**:

1. In Appendix H.1, we state that we train for 400 steps on an H100 GPU. As replied above, we randomly select 100 GCC3M prompts as neutral prompts. 2. We include one additional ablation study on the number of neutral prompts. Our method does not require a large number of neutral prompts.

| # Prompts | ACC ↓ | CLIP ↑ | FID ↓ |
|----------|--------|---------|--------|
| 100 | 4% | 0.31 | 35.4 |
| 50 | 4% | 0.31 | 39.7 |
| 10 | 12% | 0.27 | 48.2 |

**Table:** Ablation on the number of neutral prompts.

We would like to thank you again for your thorough review. We are happy to answer any further questions.
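As a generic illustration of the neuron-masking idea discussed in this thread, masking concept-relevant neurons amounts to zeroing their outgoing weights. This is a minimal sketch with made-up shapes, scores, and threshold, not the authors' actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 4))           # toy weight matrix: 6 neurons -> 4 outputs
scores = rng.uniform(size=6)          # hypothetical per-neuron relevance to the target concept

mask = (scores < 0.5).astype(float)   # keep only neurons below the relevance threshold
W_masked = mask[:, None] * W          # zero out the outgoing weights of masked-off neurons
```

In practice the relevance scores and the mask would be learned jointly with the unlearning objective rather than thresholded from random numbers as here.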
Holistic Physics Solver: Learning PDEs in a Unified Spectral-Physical Space
Accept (poster)
Summary: The paper introduces Holistic Physics Mixer (HPM), a neural operator framework that integrates spectral-based and attention-based PDE solvers in a unified space. The authors claim that HPM inherits the generalization ability of spectral methods while maintaining the local adaptability of attention mechanisms, overcoming their respective limitations. The key contribution is a holistic spectral space that simultaneously encodes spectral domain structure and point-wise physical states, allowing adaptive modulation of spectral components. They provide a universal approximation proof, extensive empirical evaluation across structured and unstructured mesh problems, and claim superior performance over existing baselines such as FNO, Transolver, and SpecSolver. The authors also present a study on learned spectral modulation patterns, suggesting that insights from HPM can guide the design of fixed spectral methods. Claims And Evidence: **Claim**: HPM consistently outperforms state-of-the-art neural operators. **Issue**: The performance gains, while present, are marginal in several cases, and no statistical significance analysis is provided. Given the noise in PDE datasets, these improvements may not be meaningful. **Issue**: The comparison against spectral-based methods is unfair because HPM explicitly introduces additional learnable parameters and adaptive mechanisms, while the baselines operate with fixed frequency bases. **Claim**: HPM balances spectral priors with local adaptability better than previous methods. **Issue**: The effectiveness of HPM’s coupling function H(x,Φ) is only demonstrated through limited ablation studies. There is no evidence that this particular formulation is optimal beyond empirical validation. A more rigorous mathematical analysis of the inductive biases introduced by HPM is missing. **Claim**: HPM generalizes better under limited training data.
**Issue**: The claim is supported by results on only a few PDE benchmarks, all of which follow the same general format. The paper does not test on significantly different physical domains (e.g., highly nonlinear PDEs, turbulent flow beyond the studied Pipe problem). Without such cases, the claim of broad generalization is overstated. **Claim**: The universal approximation theorem guarantees that HPM can learn any PDE operator. **Issue**: The theoretical proof is standard and does not differentiate HPM from prior neural operators, all of which can be formulated as integral neural operators. This claim does not justify practical superiority. Methods And Evaluation Criteria: **Benchmark Selection**: The paper primarily evaluates standard PDE problems (Darcy Flow, Navier-Stokes, Airfoil, etc.), which many previous neural operators have already tackled. The results do not convincingly demonstrate that HPM generalizes beyond these well-studied cases. **Baselines**: The selection of baselines is reasonable, but some comparisons are misleading. For instance: (1) Fixed spectral neural operators are inherently at a disadvantage since they do not have additional learnable modulation mechanisms like HPM. (2) Transolver is an attention-based baseline but is not optimized for spectral constraints, making the comparison against HPM’s hybrid design somewhat artificial. **Evaluation Metrics**: The paper solely relies on Relative L2 Error, which does not fully capture solution stability, robustness, or interpretability. A more comprehensive evaluation should include energy conservation properties, long-term stability, and sensitivity to initial conditions. Theoretical Claims: The universal approximation theorem (Theorem A.4) is a minor extension of existing results. The proof follows directly from prior work on integral neural operators and does not provide deeper insights into the spectral-physical coupling mechanism. 
Lack of theoretical justification for H(x,Φ): While the authors empirically compare different formulations of the coupling function H(x,Φ), there is no theoretical analysis of why Softmax-based modulation is optimal. Stability of learned spectral representations is not analyzed. Given that the spectral basis dynamically adapts, it is unclear whether this process induces instability in long-term PDE integration. Experimental Designs Or Analyses: The comparisons with baselines appear selective. For example, some state-of-the-art methods (e.g., advanced Transformer-based PDE solvers) are omitted from the comparison tables. The zero-shot generalization experiments are promising but fail to account for potential biases in the training data that may favor HPM’s architecture. The computational efficiency claims are weakly supported. While inference times are reported, there is no detailed analysis of memory consumption or training efficiency. Supplementary Material: No. Relation To Broader Scientific Literature: The paper adequately discusses related work in neural operators but overlooks some recent advances in hybrid spectral-attention architectures. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strength**: (1) The idea of unifying spectral and physical spaces is novel and aligns with recent trends in physics-informed machine learning, (2) The proposed method is conceptually simple and easy to integrate into existing architectures, (3) The experiments cover a diverse range of PDE problems, demonstrating the method’s versatility. **Weakness**: (1) The lack of rigorous theoretical justification weakens the impact of the proposed approach, (2) The empirical results, while promising, do not convincingly demonstrate superiority over all existing methods, (3) The clarity of the paper is suboptimal, with several technical details buried in the appendix and figures that are difficult to interpret without extensive cross-referencing.
Other Comments Or Suggestions: The authors should provide statistical significance testing for their experimental results. A more detailed discussion of computational complexity would improve the credibility of the efficiency claims. Additional ablation studies are needed to isolate the contributions of different architectural components. Questions For Authors: (1) How does HPM compare to PINNs and other physics-informed architectures for PDE solving? (2) Can you provide a formal theoretical justification for why the proposed coupling function improves generalization? (3) Have you tested HPM on real-world PDE applications beyond the benchmark datasets? (4) How does HPM scale with increasing problem complexity (e.g., higher-dimensional PDEs)? (5) Could you clarify how the spectral basis functions are initialized and whether different choices impact performance? Code Of Conduct: Affirmed. Overall Recommendation: 3
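One lightweight way to carry out the significance testing requested above is a paired t-test over per-seed errors of two models. The sketch below uses synthetic numbers (not results from the paper) and only the Python standard library:

```python
import math
import statistics

# relative L2 errors of two models over the same five seeds (synthetic numbers)
model_a = [0.0110, 0.0108, 0.0112, 0.0109, 0.0111]   # e.g. a baseline
model_b = [0.0101, 0.0100, 0.0105, 0.0102, 0.0103]   # e.g. the proposed method

diffs = [a - b for a, b in zip(model_a, model_b)]     # paired differences per seed
n = len(diffs)
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# compare |t| against the two-sided critical value for n-1 = 4 dof (2.776 at the 95% level)
significant = abs(t) > 2.776
```

With SciPy available, `scipy.stats.ttest_rel(model_a, model_b)` computes the same statistic together with a p-value.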
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and insightful questions. They have significantly improved this work. We respond to them below. `"#1-#4"` provide supplementary responses to several important concerns. `"#5-#12"` provide point-by-point responses exactly aligned with your review.

**#1 Statistical Significance Analysis**

To confirm statistical significance, we perform paired t-tests across multiple runs, confirming confidence levels above 95% for all improvements.

|Problem|Statistical Confidence (%)|
|---|---|
|Darcy|98.37|
|Airfoil|96.96|
|Navier-Stokes|99.96|
|Plasticity|99.39|
|Irregular Darcy|98.95|
|Pipe Turbulence|96.95|
|Heat Transfer|96.50|
|Composite|95.39|
|Blood Flow|96.35|

**#2 Evaluation Metrics**

We test additional metrics on Navier-Stokes in the table below, demonstrating HPM's improvement across all evaluation criteria.

|Model|Max Error|Boundary Error|LowFreq Error|MiddleFreq Error|HighFreq Error|
|---|---|---|---|---|---|
|Transolver|1.02e+0|1.17e-1|1.42e-2|1.47e-2|9.14e-3|
|SpecSolver|1.01e+0|1.15e-1|1.12e-2|1.32e-2|8.65e-3|
|HPM|8.24e-1|8.99e-2|9.20e-3|1.04e-2|7.12e-3|

**#3 Computational Efficiency**

We provide further computational measurements, showing that HPM does not increase the computational burden.

|Model|Memory|Training Time / Epoch|
|---|---|---|
|Transolver|2.12GB|50.49s|
|SpecSolver|1.83GB|36.48s|
|HPM|2.13GB|46.11s|

**#4 Design of $H(x,\Phi)$**

Softmax-based modulation is a favorable approach with several practical advantages. It ensures balanced scales and lets each point adaptively focus on the proper spectral components, providing an effective balance of stability and flexibility. It is typically challenging to establish theoretical optimality for neural architecture design. Nevertheless, this does not diminish our core contribution: a framework that bridges spectral priors with point-wise states.
We have provided two principles for $H(x,\Phi)$ in Section 4.5: point-wise processing for local adaptivity, and normalization for balanced information across physical points. They will provide guidance for future explorations.

**#5 Claims And Evidence**

(1) To statistically validate the improvements, we report stds across multiple runs per dataset, and add statistical significance in "#1". This fully demonstrates HPM's consistent superiority and notable gains on some problems (e.g., Navier-Stokes: 18.4%, Plasticity: 35.0%). (2) In Table 6, HPM with 4 layers outperforms both baselines, despite having fewer parameters. This confirms our gains come from architectural design, not parameter count. (3) Please see "#4". (4) We agree that including complex PDEs is important for comprehensive evaluation. We clarify that our benchmarks already include a significantly nonlinear PDE, Navier-Stokes with very low viscosity (1e-5), which exhibits turbulent features. (5) This work goes further by explicitly formulating the physical-spectral structure. This theoretical foundation validates that HPM properly extends previous operator frameworks, complementing the empirical results.

**#6 Methods And Evaluation**

(1) Benchmarks: Please see our response "#2" to `Reviewer bPLw`, which addresses the same concern. (2) Baselines: To avoid misleading comparisons, we compared all baselines following the exact protocol of previous works. (3) Metrics: Please see "#2".

**#7 Theoretical Claims**

(1) Please see "(5)" in "#5". (2) Please see "#4". (3) Time-series results on Navier-Stokes confirm HPM's spectral stability with reduced error. Section 4.4 demonstrates consistent spectral patterns across resolutions, validating stability beyond training resolutions.

**#8 Experimental Designs Or Analyses**

(1) We have included state-of-the-art comparisons with advanced Transformer-based solvers (Transolver, GNOT, etc.).
(2) The datasets are standard benchmarks used across the field, ensuring no bias favoring HPM over baselines. (3) Please see "#3".

**#9 Broader Scientific Literature**

(1) We will add more related work.

**#10 Other Strengths And Weaknesses**

(1) Please see "#4". (2) Please see "(1)" in "#5". (3) This may be due to the small fonts in Figure 2 and a lack of details. We will fix it.

**#11 Other Comments Or Suggestions**

(1) Please see "#1". (2) Please see "#3". (3) Yes, to isolate component contributions, we have included ablations on the $k$ value (Table 9), head count (Table 10), and model depth (Table 11).

**#12 Questions For Authors**

Q1: HPM and PINNs solve different problems - HPM learns operator mappings across multiple PDE solutions, while PINNs encode PDEs in losses for single-instance solutions. Q2: Please see our response "#1" to `Reviewer bPLw`, which addresses the same concern. Q3: Please see our response "#2" to `Reviewer bPLw`. Q4: HPM is promising for higher-dimensional PDEs due to its linear scaling with physical points and high accuracy with fewer frequencies. Q5: The basis functions are derived directly from geometry and do not undergo training, so initialization does not affect performance.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed rebuttal. I will increase my score.

---

Reply to Comment 1.1.1: Comment: We sincerely thank you for your valuable feedback and thoughtful acknowledgment of our work. Your suggestions have been instrumental in helping us enhance this project significantly. We deeply appreciate the time you've taken to provide these helpful comments. If you have other questions or concerns, please feel free to raise them. We will be more than willing to respond to them.
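To make the point-wise softmax modulation discussed in #4 concrete, here is a generic sketch. The array shapes, variable names, and the random projection are our own assumptions for illustration, not the actual HPM implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, k = 32, 8, 4                    # mesh points, feature dim, spectral modes

x = rng.normal(size=(N, d))           # point-wise physical features
phi = np.linalg.qr(rng.normal(size=(N, k)))[0]  # orthonormal basis, a stand-in for LBO eigenfunctions
u = rng.normal(size=N)                # input field sampled at the N points
W = rng.normal(size=(d, k))           # a learnable projection (random here)

c = phi.T @ u                         # global spectral coefficients, shape (k,)
logits = x @ W                        # per-point scores over the k modes
g = np.exp(logits - logits.max(axis=1, keepdims=True))
g /= g.sum(axis=1, keepdims=True)     # softmax: each point's modulation weights sum to 1

u_mod = (g * c * phi).sum(axis=1)     # point-adaptive recombination of spectral components
```

The softmax normalization is what keeps the per-point modulation weights on a balanced scale, matching the two principles (point-wise processing and normalization) stated in the rebuttal.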
Summary: Holistic Physics Mixer (HPM) unifies attention-based and spectral methods for PDE solving, combining point-level adaptability with spectral continuity constraints. This integration enables strong generalization and flexibility, surpassing existing methods in accuracy, efficiency, and zero-shot performance across diverse PDEs. Claims And Evidence: Yes Methods And Evaluation Criteria: yes Theoretical Claims: No such Experimental Designs Or Analyses: Yes, all of them. Supplementary Material: Yes, all of them. Relation To Broader Scientific Literature: I think it brings a new view that blends grid-based and spectral methods for AI models. Essential References Not Discussed: All good. Other Strengths And Weaknesses: Overall, I like this paper for bringing a new way of representing PDE-based solutions. I do have one question to ask: is the method applicable to triangular mesh data? Since FNO is only applicable to structured grids, I am not sure if the method is the same. Other Comments Or Suggestions: No Questions For Authors: No such Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and insightful questions. They have significantly improved this work. We respond to them below.

**#1 Handling triangular mesh data**

> I do have one question to ask, is the method applicable to triangular mesh data? Since FNO is only applicable to structured grid, I am not sure if the method is the same.

Yes, HPM is fully applicable to triangular mesh data. Unlike FNO, which is limited to structured grids, our method works effectively on both structured and unstructured meshes. A detailed explanation follows: (a) We use Laplace-Beltrami Operator (LBO) eigenfunctions [1] as our spectral basis, which can be computed on arbitrary mesh structures, including triangular meshes. This is a key advantage over traditional Fourier-based methods that require structured grids. (b) Our experiments explicitly demonstrate this capability in Section 4.3, where we evaluate HPM on five unstructured mesh problems: Irregular Darcy, Pipe Turbulence, Heat Transfer, Composite, and Blood Flow. These problems use triangular meshes with node counts ranging from 1,656 to 8,232.

**Reference**

- [1] A Laplacian for Nonmanifold Triangle Meshes
Summary: The paper introduces the Holistic Physics Mixer (HPM), a unified framework that leverages a holistic spectral feature space to integrate domain-level structures with point-wise physical states. The author claims that HPM achieves strong performance in scarce-data scenarios, offers resolution generalizability, and maintains computational efficiency. To validate these claims, the author conducts experiments across multiple PDE problems, comparing HPM with existing spectral and attention-based neural operators. Claims And Evidence: The author makes three primary claims: (1) Strong performance and generalizability in data-scarce scenarios, attributed to the spectral priors; (2) Improved training efficiency and the ability to capture fine-scale variations due to point-wise adaptivity; (3) HPM’s learned spectral processing patterns provide insights for designing fixed spectral neural operators. Table 1 and Table 3 compare performance across structured and unstructured meshes, while data scarcity is partially explored in the Darcy Flow and Navier-Stokes experiments. Resolution generalizability is evaluated in zero-shot airfoil cases, though further discussion on generalization under diverse PDE conditions would further strengthen the claim. Methods And Evaluation Criteria: The benchmark datasets include nine PDE problems, covering both structured and unstructured meshes. The author ensures a comprehensive comparison by including a wide range of existing spectral and attention-based methods. The evaluation metric used is relative L2 error, which is a standard and appropriate measurement in PDE learning, allowing for fair comparisons across different resolutions and problem settings. Theoretical Claims: The theoretical claims in the paper are concisely formulated, mainly presenting short representations of key properties rather than extensive derivations. 
Specifically, the theoretical claims related to the coupling functions are presented in Equations 11-15, defining how spectral features interact with point-wise physical states. Experimental Designs Or Analyses: Experimental design has thorough considerations that include structured and unstructured mesh PDEs, generalizability assessments in zero-shot resolution, and the impact of training data. In addition, the author considers ablation studies in spectral coupling functions. One improvement could be the inclusion of some real-world data, such as climate data or smoke plume data, to further demonstrate the robustness of HPM. Supplementary Material: The supplementary material spans theoretical background, training specifications, numerical results, and dataset introductions, covering a variety of information that increases the reproducibility of the paper. Relation To Broader Scientific Literature: The broader scientific literature relevant to HPM includes spectral-based and attention-based PDE learning methods. HPM contributes a new perspective by bridging these two categories, integrating spectral priors with point-wise adaptivity to enhance performance. Essential References Not Discussed: The author discusses the majority of the essential references. Other Strengths And Weaknesses: Strengths: The limitations and future work are well-discussed, and the paper flows smoothly, making it easy for readers to follow. Other Comments Or Suggestions: Comments and suggestions are addressed in the earlier sections. Questions For Authors: Question 1: Beyond resolution generalization, would the design of HPM still be beneficial in other generalization scenarios, such as PDEs with different physical parameter settings? Code Of Conduct: Affirmed. Overall Recommendation: 3
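For reference, the relative L2 error used as the evaluation metric here is conventionally computed as follows (a minimal sketch; the function name and sample values are ours, not from the paper):

```python
import numpy as np

def relative_l2(pred: np.ndarray, true: np.ndarray) -> float:
    """Relative L2 error: ||pred - true||_2 / ||true||_2."""
    return float(np.linalg.norm(pred - true) / np.linalg.norm(true))

true = np.array([1.0, 2.0, 3.0])
pred = np.array([1.1, 1.9, 3.0])
err = relative_l2(pred, true)   # about 0.038 for this toy example
```

Because the error is normalized by the magnitude of the ground-truth field, it stays comparable across resolutions and problem settings, which is why it is the standard choice in this literature.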
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and insightful questions. They have significantly improved this work. We respond to them below.

**#1 Discussion about other generalization scenarios**

> Question 1: Beyond resolution generalization, would the design of HPM still be beneficial in other generalization scenarios, such as PDEs with different physical parameter settings?

Thank you for this insightful question. Below we provide further discussion of the generalization capability of HPM. (a) First, we wish to clarify that the strong resolution generalization and limited-data performance naturally stem from the preservation of spectral bias. As derived in Section 3.1, HPM inherits the established inductive bias of fixed neural operators (like FNO) for learning continuous operator mappings. Beyond this, the coupling mechanism allows adaptive spectral modulation based on local physical features, improving flexibility while maintaining this inductive bias. While not specifically optimized for other generalization scenarios, this unified approach creates fundamental advantages applicable to various generalization tasks. (b) For specific generalization scenarios, as a versatile neural module, HPM could potentially be combined with dedicated techniques (such as hypernetworks or meta-learning approaches [1,2]) to enhance generalization across different physical parameters. This integration would be a promising direction for future work, leveraging HPM's unified representation alongside specialized parameter generalization methods.

**#2 More real-world scenarios beyond standard benchmarks**

> Experimental design has thorough considerations that include structured and unstructured mesh PDEs, generalizability assessments in zero-shot resolution, and the impact of training data. In addition, the author considers ablation studies in spectral coupling functions.
One improvement could be the inclusion of some real-world data, such as climate data or smoke plume data, to further demonstrate the robustness of HPM.

Thank you for recognizing our comprehensive experiments. (a) First, we'd like to highlight that the standard benchmarks encompass diverse physical scenarios and have been widely adopted by the community [4,5]. They effectively validate HPM's capabilities across various physical domains, from fluid dynamics to elasticity problems, demonstrating its general applicability and superior performance compared to existing methods. (b) Additionally, our current experiments already include problems derived from real-world industry scenarios. For example: (1) The Composite problem simulates deformation fields of Carbon Fiber Reinforced Polymer under high-temperature conditions, directly relevant to aerospace manufacturing of jet air intake components. (2) The Blood Flow problem models hemodynamics in the human thoracic aorta, which has significant clinical applications. These real-world examples demonstrate HPM's practical utility beyond synthetic benchmarks. We agree that expanding to additional domains like climate data or smoke plume simulations would further validate HPM's robustness. We plan to explore more real-world applications in future work to further demonstrate the practical benefits of our unified spectral-physical approach.

**Reference**

- [1] Meta-Auto-Decoder for Solving Parametric Partial Differential Equations
- [2] HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork
- [3] Learning Neural Operators on Riemannian Manifolds
- [4] Fourier Neural Operator for Parametric Partial Differential Equations
- [5] Transolver: A Fast Transformer Solver for PDEs on General Geometries
Summary: The paper introduces Holistic Physics Mixer (HPM), a framework that integrates spectral transformation and data-dependent modulation (i.e., attention). HPM employs a learnable coupling mechanism that enables adaptive modulation of spectral components while preserving the advantages of spectral transformation. The numerical benchmark demonstrates that the proposed model outperforms other baselines on both uniform and unstructured grids. Claims And Evidence: The majority of the claims are numerically tested and validated. Methods And Evaluation Criteria: The paper benchmarks on problems that are widely adopted by the community, and the selected baseline models are also comprehensive. Theoretical Claims: The theoretical result is based on the universal approximation theorem of neural operators in Kovachki et al., 2023. Experimental Designs Or Analyses: While the experiments cover quite a lot of baselines and the ablation studies are fairly comprehensive, the following aspects are not entirely clear to me. 1. What is the computational cost of precomputing the LBO? And how is the number of bases being chosen? 2. As HPM can be viewed as an extension of linear attention, its computational complexity should also be similar to linear-attention-based models like Galerkin Transformer/OFormer/GNOT. Yet, it is shown in the paper that it is faster than Transolver (which computes attention on only a few slices) and has only a small overhead over SpecSolver, which does not have any linear attention part in it. Supplementary Material: I checked the model implementation details and the visualization of learned modulation. Relation To Broader Scientific Literature: The key contribution can be extended to other applications that involve non-Euclidean geometries like spheres, or tasks beyond PDE modelling such as image modelling.
Essential References Not Discussed: Despite the authors citing AFNO in the paper, I think it is worth a more in-depth discussion and, if possible, adding an empirical comparison, since AFNO is also a model combining spectral transformation and data-dependent modulation. Other Strengths And Weaknesses: I think the overall idea is straightforward but interesting and definitely a meaningful addition to the community. The strong performance of SpecSolver itself also hints at the potential of this direction. Other Comments Or Suggestions: See Experimental Designs Or Analyses. Questions For Authors: See Experimental Designs Or Analyses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and insightful questions. They have significantly improved this work. We respond to them below.

**#1 Computational cost of LBO**

> What is the computational cost of precomputing the LBO?

The LBO eigenfunctions are computed efficiently using the robust-laplacian library, with `near-linear time scaling relative to mesh nodes` for both structured and unstructured meshes. Our empirical evaluation on a single Intel Xeon CPU demonstrates the favorable scaling:

|Nodes Number|Computation Time (s)|
|---|---|
|2.5K|$0.008\pm 6.0\times10^{-6}$|
|10K|$0.037\pm 1.3\times10^{-4}$|
|40K|$0.177\pm 4.3\times10^{-4}$|
|160K|$0.894\pm 4.4\times10^{-3}$|
|640K|$4.581\pm 9.4\times10^{-2}$|
|2.56M|$19.86\pm 9.0\times10^{-2}$|

Importantly, this computation is a one-time cost per physical domain, with eigenfunctions reusable across simulations in that domain. The cost is negligible for applications with fixed geometry but varying initial conditions. Future work could explore further methods (e.g., learning-based approaches, and FFT-like approaches for uniform grids) to further enhance the computational efficiency.

**#2 Choice of frequency number**

> And how is the number of bases being chosen?

The number of frequency bases $k$ in HPM is chosen to balance representational capacity and fair comparison. For most structured-mesh problems we use $k=128$, while unstructured-mesh problems use $k=64$. This choice reflects: (a) We align $k$ with the hidden dimension to ensure fair comparison with linear attention, where $k$ corresponds to the $Q$, $K$ dimensions. (b) We maintain consistency with spectral methods [1,2], which typically use comparable total frequency numbers. (c) Following [2,3], we make problem-specific adjustments: we set $k=128$ for Navier-Stokes despite its larger hidden dimension (256) to control the parameter count, and $k=32$ for Blood Flow due to its limited number of mesh nodes.
Table 9 confirms HPM performs well across different $k$ values, demonstrating the effectiveness of HPM regardless of basis count. While more sophisticated basis selection strategies (such as AFNO [4]) could be explored in future work, the current results effectively demonstrate the benefits of spectral-physical integration.

**#3 Computational complexity comparison with linear attention**

> As HPM can be viewed as an extension of linear attention, its computational complexity should also be similar to linear attention ...

Below we provide a concise analysis comparing HPM with linear attention. (a) HPM and linear attention share similar computational complexity. Linear attention operates at $O(Nd^2)$ for features $\mathbf{x} \in \mathbb{R}^{N \times d}$, while HPM requires $O(Ndk)$ operations, where $k$ is the number of frequency bases. The main computations are the coupling function at $O(Nk)$ cost and the forward/inverse transforms at $O(Ndk)$. Since $k$ is typically much smaller than $N$ and comparable to $d$ (e.g., $k=d=128$, $N=11271$ for Airfoil), HPM maintains efficiency comparable to linear attention. (b) Our measurements below confirm this analysis. On Airfoil, Galerkin attention requires 16.5 ms for inference compared to HPM's 17.4 ms. This minor overhead enables HPM to capture both domain-level structure and point-wise adaptivity, delivering significant advantages.

|Model|Inference Time|Training Time / Epoch|
|---|---|---|
|Galerkin Attention|16.5 ms|45.25 s|
|HPM|17.4 ms|46.11 s|

**#4 In-depth discussion about AFNO**

> Despite the author cited AFNO in the paper, I think it is worth more ...

Thanks for this insightful suggestion. We'll include a comprehensive discussion of AFNO in the methodology section of the final version. While AFNO [4] also incorporates data-dependent modulation, HPM differs in several aspects: (a) Objective: AFNO aims to improve the "efficiency and robustness" of spectral features by sparsifying the frequency modes.
In contrast, HPM focuses on "enhancing the flexibility of preset spectral features" via point-wise modulation. (b) Methodology: AFNO selects and modifies modes in the spectral domain post-transformation, while HPM introduces spatial domain modulation pre-transformation, enabling point-wise flexibility while preserving spectral structure. (c) Application: AFNO targets computer vision tasks, whereas HPM addresses PDE challenges requiring both global spectral coherence and local adaptivity. The spectral processing technique from AFNO could complement HPM in future work, potentially combining both benefits (AFNO's efficiency, HPM's flexibility) for diverse applications. **Reference** - [1] Fourier Neural Operator for Parametric Partial Differential Equations - [2] Learning Neural Operators on Riemannian Manifolds - [3] Transolver: A Fast Transformer Solver for PDEs on General Geometries - [4] Adaptive Fourier Neural Operators: Efficient Token Mixers for Transformers
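The complexity argument in #3 above can be illustrated with a back-of-the-envelope operation count. The sketch below is our own illustration, not code from the paper; the function names and the exact constant factors are our assumptions:

```python
def linear_attention_macs(N, d):
    # Linear attention: form K^T V as a (d x N)(N x d) product, then apply it
    # as (N x d)(d x d) -- roughly 2*N*d^2 multiply-accumulates in total.
    return 2 * N * d * d

def hpm_macs(N, d, k):
    # HPM-style spectral mixing (our reading of the rebuttal's analysis):
    # a forward transform (k x N)(N x d), an inverse transform (N x k)(k x d),
    # plus an O(N*k) point-wise coupling function.
    return 2 * N * d * k + N * k

# Airfoil-like sizes quoted in the rebuttal: N = 11271 mesh points, k = d = 128.
N, d, k = 11271, 128, 128
ratio = hpm_macs(N, d, k) / linear_attention_macs(N, d)
```

With $k=d$ the ratio works out to $1 + 1/(2d) \approx 1.004$, i.e. essentially the same cost, which is consistent with the measured 16.5 ms vs. 17.4 ms inference times.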
Wasserstein Policy Optimization
Accept (poster)
Summary: The paper applies Wasserstein gradient flows to reduce the parameter space, and in this way obtains a closed-form update rule. It lifts the necessity of using the reparameterization trick in stochastic policy learning. The method merges the strengths of policy gradient approaches, which work on sampled evaluations of the action value, and deterministic policy gradients, which work on a single steepest-ascent direction of the action value. It can generate a vector field towards a direction of increasing action value. Claims And Evidence: I am having a hard time concluding from Figures 3 and 4 that the proposed method improves the state of the art. The alternative approaches consistently outperform the proposed WPO approach. This is rather strong evidence against the central claim of the paper: training with a Wasserstein gradient flow facilitates learning. Methods And Evaluation Criteria: The methodology adopted to build a deep actor-critic algorithm, the experimental pipeline, and the performance scores used make sense. There are issues with the choice of baselines as well as the implementation details of the proposed algorithm. I give more details under Experimental Designs or Analyses. Theoretical Claims: I checked especially the functional derivative proof in Appendix A.1 in detail and it looks in order. Experimental Designs Or Analyses: The chosen baselines make sense. However, the DMC results do not match the ones I know from the literature and from my own experience. For instance, in Humanoid Walk, I am able to reach much higher performance with vanilla SAC in far fewer actor steps than what is reported. I suspect that this is because the authors train their networks with a single critic. There are also some additional hyperparameter choices which are uncommon, such as using 5-step temporal differences and using hard target network updates at intervals instead of Polyak averaging.
All in all, I am not able to conclude that the reported results are informative about how well the claims match the evidence. Supplementary Material: I took a close look at the reported results, the presented pseudocode, and the theoretical derivations. I haven't read the detailed descriptions of the experiment protocol. Relation To Broader Scientific Literature: The paper takes Maximum A-Posteriori Policy Optimization (MPO) and DDPG as the prior work on which it brings an improvement. While I agree that MPO is an appropriate representative of stochastic policy gradient approaches, I do not think DDPG represents the state of the art. Its TD3 variant works significantly better. I would expect a discussion of how the posed problem and the proposed method relate to the established state of the art that uses DPG on an ensemble of critics, which essentially addresses quite a few of the issues targeted by the WPO solution. For example, the REDQ method is known to bring a significant reduction in the gradient estimator variance while also correcting the overestimation bias. I would definitely expect a comparison against it to draw a conclusion about the value of the solution. The authors may consult the following detailed benchmarking study for an impression of what is possible with existing approaches: Nauman et al., Overestimation, Overfitting, and Plasticity in Actor-Critic: the Bitter Lesson of Reinforcement Learning, ICML, 2024. Essential References Not Discussed: The paper does cite the most important papers with respect to the chosen problem formulation. In this regard, I do not have concerns. However, I will reiterate that the problem formulation has issues.
Other Strengths And Weaknesses: While the idea of using Wasserstein gradient flows to create a sweet spot between PPO-like and DDPG-like algorithms is interesting, the proposed solution is straightforward in the sense that it does not yield downstream scientific challenges that would contribute to the advancement of the field. The implementation of the idea also has the severe flaws I pointed out above. Other Comments Or Suggestions: Post-rebuttal update: Having completed the rebuttal and discussions, I am still missing the concrete problem for which a solution is sought. The paper describes the problem as: "Adding in stochasticity can be difficult to tune, and extensions that learn the variance (Heess et al., 2015; Haarnoja et al., 2018) rely on the reparameterization trick, which limits the class of policy distributions that can be used." My grade stems from serious doubt about the significance of this problem and the claim that the solution addresses it. The proposed solution does work; however, the experimental setup does not represent the state of the art, in the sense that the decade-old techniques to mitigate the harmful effects of the Bellman target are not used. The outcome is then a set of half-working models, one new, the other two baselines. The new one works a tiny bit better than the others. The question is whether such a performance difference would be visible if all the commonplace training stabilization techniques were used. The authors clarify in their response that the results are shown in the proprioceptive setup. I have first-hand experience achieving much better results in many fewer environment interactions using multiple methods, all of which are well known in the literature. Under these circumstances I cannot conclude that the paper makes a concrete scientific contribution. Hence I maintain my score. Questions For Authors: Why is $\nabla_\theta \pi(s_t)$ a Jacobian but not the gradient?
Are the DMC results obtained on the visual (pixel-based) or proprioceptive control setting? I interpret them as the latter in my evaluation above. I can update it based on the answer here. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and hope to convince them of the merits of our approach. 1. With regards to the performance of WPO relative to other baseline methods: while Figs. 3 and 6 show that WPO is competitive over the DM control suite, our main performance claims are highlighted in Figs. 4 and 5 that show that, as conjectured, WPO especially shines in domains with relatively high-dimensional action spaces. We would also dispute the characterization that “the alternative approaches consistently outperform the proposed WPO approach”. As we state in the paper, no single approach consistently outperforms the others across all tasks. Even on DeepMind Control Suite, there are tasks for which WPO performs the best, such as Cartpole - Three Poles, and many others where it matches state of the art. We also think the reviewer may misunderstand the main claim of the paper. We are *not* claiming that “training with a Wasserstein gradient flow facilitates learning” in all cases. We are claiming that WPO is novel, fills in a much-needed gap in the space of actor-critic algorithms (how to generalize DPG-like updates to arbitrary stochastic policies without reparameterization) and, with only minimal effort, can be adapted into a performant deep RL algorithm. 2. In response to the issues raised in “Experimental Designs and Analyses”: we wanted to show that the WPO gradient (Eq. 6) can be dropped into existing deep RL methods with minimal modification, and to do a fair comparison we wanted the baselines to be trained in as similar a fashion as possible. Our results use the same hyperparameters per algorithm across all tasks. Many deep RL algorithms are highly sensitive to small details in how they are trained. Thus we may not exactly match the absolute best performance of each baseline method.
However, where possible, we made sure to compare against numbers in the literature to make sure our comparison is fair - for instance, we both ran the open-source Dopamine implementation of SAC ourselves and added baseline numbers from the literature. Since submission, we implemented SAC in the same framework that was used for the other experiments, and will include those results in the final camera-ready paper. We are also running DDPG with 1-step updates instead of 5-step updates for comparison and can include those results in the appendix, although they do not look appreciably better than what we have already included. In general, while it may be possible to tune the hyperparameters to get better baseline performance, it may also be possible to tune WPO hyperparameters to get better performance, so we believe the comparison provided in the paper is quite fair. 3. Re:TD3 and other possible baseline methods, we are primarily interested in the WPO update itself (Eq. 6), and compare like-for-like with algorithms like MPO. The algorithm we arrive at in this paper which we call “WPO” is one of an enormous number of possible algorithms that use the update in Eq 6. TD3 is an extension of DDPG that has some additional algorithmic ideas like target policy smoothing that can also be combined with other algorithms. It would be lovely to see future work decomposing these separable ideas and combining them in novel ways, to understand the individual contributions of all these ideas separately, rather than just comparing complete agents with somewhat-arbitrary combinations. Maybe WPO with TD3-like tricks will outperform TD3, maybe not, but that is out of scope of this particular paper. 4. When you say “the proposed solution is straightforward in the sense that it does not yield downstream scientific challenges that would contribute to the advancement of the field” - we do not understand if this is a criticism or a compliment. 
A solution which is straightforward and does not create future challenges is a good thing, is it not? 5. “Why is ∇_θπ(s_t) a Jacobian but not the gradient?” - ∇_θπ(s_t) is only a Jacobian in the case of DPG, because in this case π(s_t) is a deterministic vector-valued mapping from state vector to action vector. In the case of WPO or standard policy gradient, π(a_t|s_t) is a scalar-valued probability density and thus ∇_alogπ(a_t|s_t) is a vector of the same shape as π(s_t) in DPG. 6. The DMC results are based on proprioceptive control, not pixel-based control. We hope that this resolves any lingering issues with the paper. We want to emphasize again that our goal was not to show that we have the right combination of tricks and hyperparameter settings to achieve optimal performance across all possible tasks. We wanted to show that an open question in continuous control - how to train arbitrary stochastic policies with updates that exploit the action-value gradient - has a clear and elegant answer using approximations to Wasserstein gradient flows, and make a first foray into translating that into a practical deep RL algorithm. We hope you will agree that that is a significant contribution worth publishing at ICML. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. Overall I am not able say I am convinced. 1. and 2.) The replies verify my interpretation that the proposed method is not demonstrated to improve performance and it is not sufficiently challenged. I am completely fine by this if any other theoretical or empirical benefit is demonstrated (e.g. a tighter convergence proof, a generalization performance guarantee, a new algorithmic design that follow unconventional steps of deductive reasoning, a steeper learning curve, or a reduced computation time). The issue is not about whether hyperparameters are tuned. It is about a mismatch of the given answer and the posed scientific question. 3.) 
If DDPG qualifies as a baseline, TD3 (its truly working variant) definitely does. The rebuttal answer to this point confirms the issues I raised in my original review about the adopted scientific method. The WPO update itself is a worthwhile scientific problem if a pain point of the state of the art can be demonstrated. I do not see any effort to do this, nor any improvement in the results that addresses one. 4.) For example, showing a performance improvement by increasing model capacity or access to more information is straightforward. There are many ways to put together different ideas. Some are avoided as being straightforward, meaning that the consequences are predictable. Hence, it does not raise new scientific questions or attract interest in different aspects of the same problem. This is what I meant, and I am missing a core scientific problem and a solution that shows a clear practical or conceptual benefit. I read the paper in good enough detail, so I don't think another summarization will help there.
Summary: This paper naturally and clearly derives a natural-gradient version of the stochastic policy extension of the deterministic policy gradient from the perspective of Wasserstein gradient flow, proposes a practical implementation, and conducts detailed tests on the DeepMind Control Suite and scientific control tasks, achieving comparable results. Claims And Evidence: yes. Methods And Evaluation Criteria: yes. Theoretical Claims: I have checked most of the proofs but cannot ensure that they are entirely accurate. Experimental Designs Or Analyses: The authors conduct extensive experiments on the widely used DMC and show that the proposed method is comparable to commonly used methods. Supplementary Material: all Relation To Broader Scientific Literature: This paper proposes a new reinforcement learning policy update method, and the paper explains the motivation well from the perspective of the Wasserstein gradient. Essential References Not Discussed: Enough Other Strengths And Weaknesses: The overall writing of the paper is clear, the overall motivation is well supported, and the relationship with previous work is fully discussed. My main concern about the paper is that the experimental results are only comparable with the baseline methods, with the comparison mainly focusing on the MPO algorithm, and the selected comparison methods lack popular algorithms such as PPO and TD3. The paper omits the $d^\pi(s)$ term when deriving the formula, and I think it is better to add it. In line 358 of the left column, the authors say that off-policy state data can be used, but this will cause $d^\pi(s)$ to be outdated, making the gradient direction not the direction in which $\mathcal{J}$ increases the fastest. Other Comments Or Suggestions: In line 685, there is a missing right bracket before $\nabla_a Q^\pi(s,a)$. I really appreciate that the authors conducted comparative experiments on the magnetic control problem, which is extremely valuable.
Since the simulator is closed-source and few groups have used it, it is difficult for other RL researchers to reproduce the experimental results. If the authors have plans to make the experimental code open source, it would yield more benefits. Questions For Authors: Many current works like Zhang et al. (2018) choose the reparameterization trick because it is convenient for sampling actions. The authors emphasize in the paper that the main difference between WPO and other methods is whether reparameterization is used. Can the authors give some more practical examples or scenarios to illustrate the necessity of using difficult-to-reparameterize policies? In line 229 of the right column, the authors mentioned that for the sampled case, the variance of WPO is 0 when Q is locally linear in the actions. I am a little confused about this because, for the method using reparameterization, this property is still enjoyed in this case due to $\frac{\partial Q}{\partial \theta}=\frac{\partial Q}{\partial a}\frac{\partial a}{\partial \theta}=w(s)\frac{\partial a}{\partial \theta}$. Have the authors considered applying this method to a diffusion model as a policy representation? The diffusion model can be considered to directly model $\nabla_a \log \pi$. I think this may be interesting. Code Of Conduct: Affirmed. Overall Recommendation: 4
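As background for the $d^\pi(s)$ point raised above, recall the standard on-policy policy gradient theorem (a textbook identity, not a formula from the paper under review):

$$\nabla_\theta \mathcal{J}(\theta) = \mathbb{E}_{s \sim d^\pi,\; a \sim \pi_\theta(\cdot \mid s)}\big[\nabla_\theta \log \pi_\theta(a \mid s)\, Q^\pi(s,a)\big].$$

When states are sampled from a replay buffer instead of $d^\pi$, the outer expectation over $d^\pi(s)$ is only approximated, which is the staleness concern the review raises.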
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and helpful suggestions. 1. With regards to the state-occupancy term $d^\pi(s)$ in the definition of the gradient of the value function, we agree that including a correction in the off-policy case could potentially improve performance, but want to emphasize that virtually no methods in off-policy RL actually use a correction of this form in practice. Therefore, our algorithm conforms to standard practice and we are confident that the experimental comparisons in the paper are fair. One reason that this term is almost always excluded is that estimating it in continuous state spaces is extremely challenging. It’s also worth highlighting that it has been argued in previous work that excluding this term actually can improve the efficiency of the updates, and it has been shown to be theoretically a sound thing to do (see, e.g., https://arxiv.org/pdf/2302.11381). We will discuss this explicitly in the paper. 2. We have fixed the typo in the equation on line 685 3. Unfortunately, we do not control the rights to the fusion simulator used in our experiments, and so have no control over open-sourcing it. However, the organization which does own the rights intends to release an open-source version of the simulation code in the long term. We will do our best to encourage them to release this as soon as they can. 4. The use of the reparameterization trick becomes much more complicated with distributions with discrete latent variables, like mixtures of Gaussians. The exact SVG(0) update, and the efficiency of this, will be sensitive to the choice of reparameterization. It is difficult to find a practical problem for which mixture-of-Gaussian action distributions are necessary, since Gaussian action distributions work well in practice for most environments investigated here. 
However, we will extend Section 4 to include a toy example of a mixture-of-Gaussians policy on a complex loss landscape showing the qualitative difference in dynamics between WPO and standard policy gradient, and Gaussian vs. mixture-of-Gaussian action distributions. 5. The discussion of the variance of the linear action-value function case in Section 4 can be improved. First, we will clarify that the WPO gradient is only constant when the variance of the policy is fixed (e.g. only the mean is updated). Second, we will clarify that it is only standard policy gradient for which the variance is finite, while SVG(0) with the appropriate parameterization is identical to WPO, as you noted. Third, we will highlight that the agreement between WPO and SVG(0) is only true for specific choices of parameterization for SVG(0), and provide a concrete example where this identity is broken. For instance, replacing a Gaussian action distribution with an exponential distribution π(a) = exp(-a/β)/β (with the natural reparameterization a = β * η) leads to different updates for WPO and SVG(0), even in expectation: the expected natural WPO update with respect to the scale parameters β will then simply be E[ ∇_a Q(a) ], while the SVG(0) update will be E[ ∇_a Q(a) diag(a/β) ], with η a standard exponential with p(η = x) = exp(-x). (If we consider the gradient with respect to the network parameters, both these updates just get multiplied with the Jacobian ∇_θ β, as per standard backprop.) 6. We agree that using WPO in combination with diffusion models is a promising future avenue to investigate. Thank you for the suggestion.
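The exponential-policy example above can be checked numerically. Below is a small Monte Carlo sketch of our own (not from the paper): the choice $Q(a)=a^2$ is our assumption of a simple nonlinear action value, for which the expected WPO-style update $\mathbb{E}[\nabla_a Q(a)] = 2\beta$ differs from the SVG(0)-style update $\mathbb{E}[\nabla_a Q(a)\, a/\beta] = 4\beta$:

```python
import random

random.seed(0)
beta = 1.5          # scale of the exponential policy pi(a) = exp(-a/beta)/beta
n = 200_000

# Q(a) = a^2 (our choice of a nonlinear action value), so grad_a Q(a) = 2a.
wpo_update = 0.0    # estimates E[ grad_a Q(a) ]          -> analytically 2*beta
svg_update = 0.0    # estimates E[ grad_a Q(a) * a/beta ] -> analytically 4*beta
for _ in range(n):
    a = random.expovariate(1.0 / beta)   # a = beta * eta, with eta ~ Exp(1)
    g = 2.0 * a
    wpo_update += g
    svg_update += g * a / beta
wpo_update /= n
svg_update /= n
```

The two estimates concentrate around $2\beta = 3$ and $4\beta = 6$ respectively, confirming that the updates disagree under this parameterization (for $Q$ linear in $a$ they coincide, since then $\mathbb{E}[a/\beta] = 1$).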
Summary: This paper introduces Wasserstein Policy Optimization (WPO) for continuous-action reinforcement learning. By viewing policy optimization as a Wasserstein gradient flow in the space of distributions, the authors derive a closed-form update that: i) Uses gradients of action values w.r.t. actions (like deterministic policy gradients). ii) Works for any stochastic policy (no reparameterization needed, unlike SVG or SAC). iii) Can be turned into a simple actor-critic algorithm with minimal additional structure. The paper provides theoretical derivations, then extends WPO with practical features (e.g., KL regularization, a Fisher information rescaling) for deep RL. Experiments on the DeepMind Control Suite (including high-dimensional tasks like Humanoid) and a real-world-inspired fusion-control task show that WPO is robust, sometimes outperforming or matching strong baselines (MPO, SAC, DDPG). The authors highlight WPO’s favorable scaling in large action spaces and its ability to converge stably to high-performing solutions. ## update after rebuttal I thank the authors for their detailed response and additional experimental results. I continue to find the theoretical formulation of Wasserstein Policy Optimization novel and compelling. I also appreciate the effort made to re-run experiments with more seeds and to clarify the challenges involved in evaluating on larger benchmarks. That said, some of my original concerns remain. The updated results indicate noticeable variance across seeds in several tasks, and some performance drops (e.g., Humanoid CMU Walk, Manipulator: Bring Ball) raise questions about the method’s robustness. Additionally, while I understand the limitations during the rebuttal phase, the lack of evaluation on more realistic high-dimensional environments makes it difficult to fully assess the claimed scalability of WPO. 
Overall, I believe the paper presents an interesting direction, and I hope to see future versions strengthen the empirical foundation. I am maintaining my score. Claims And Evidence: The paper’s main claims—i.e., that WPO (1) combines properties of deterministic and standard policy gradients, (2) applies to arbitrary continuous action distributions, and (3) matches or outperforms strong baselines in large action spaces—are backed by both theoretical derivations and empirical results on benchmark (DeepMind Control Suite) and real-world-like (fusion) tasks. The paper does not appear to make unsupported or overstated claims. Methods And Evaluation Criteria: The paper evaluates WPO in three ways. (1) the DeepMind Control Suite, a widely used benchmark for continuous control, enabling clear comparisons with established methods. (2) a Magnetic Confinement Fusion domain, which tests real-world relevance and benchmarks WPO against strong baselines like MPO. (3) Combined Tasks: the paper combines multiple copies of a Control Suite environment to form high-dimensional action spaces, revealing WPO’s potential for scaling to large numbers of actions. These three perspectives offer a well-rounded assessment of WPO’s capabilities. Theoretical Claims: There are no apparent flaws in the theoretical arguments as presented. Experimental Designs Or Analyses: **Experiments in DeepMind Control Suite.** From Figure 3, we can see that there are high-variance environments, e.g. Cartpole, Dog Run, Manipulator Bring Ball, and Humanoid Stand. More seeds should be run to reduce the variance. **Experiments in Combined Tasks.** By replicating a single environment (e.g., Humanoid - Stand) multiple times to expand the action space, the authors leverage SmoothMin to aggregate rewards. This approach effectively tests scalability to higher-dimensional controls, but it still falls short of simulating the diversity and complexity of real-world scenarios.
It would be good to test WPO on Bi-DexHands tasks with high-dimensional control. Supplementary Material: I concentrated on the pseudocode and detailed experimental setup, including hyperparameter configurations and training routines, as well as the ablation studies. Relation To Broader Scientific Literature: The authors propose Wasserstein Policy Optimization (WPO), presenting it as an extension of classic gradient-based methods (e.g., REINFORCE, actor-critic) by exploiting action-value gradients similarly to DPG, yet without restricting the policy class to deterministic forms. Grounded in optimal transport, WPO’s update reflects a continuous steepest descent in policy space, differing from approaches like MPO that exponentiate Q-values rather than relying on a direct gradient. This formulation aligns stochastic and deterministic policy gradients, offering a unified viewpoint and potentially stronger scalability in high-dimensional action spaces. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths:** - Principled Theoretical Foundation - Combines Advantages of Deterministic and Stochastic Policy Gradients: This allows WPO to leverage value information for efficient updates without the limitations of deterministic policies or the reparameterization trick. **Weaknesses:** - High variance in training: From Figure 4, WPO has higher variance compared with MPO and DDPG. - Not Universally Superior: While WPO performs competitively on many tasks, it does not consistently outperform state-of-the-art algorithms like SAC and MPO across all environments in the DeepMind Control Suite. Other Comments Or Suggestions: Do more experiments to show WPO’s validity. Questions For Authors: WPO combines the advantages of deterministic and stochastic policy gradients, but why is it not universally superior to the baselines? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful comments and favorable review. The reviewer made a few suggestions for improving the paper which we will address: 1. We have re-run experiments on the high-variance environments with more seeds to smooth out the learning curves and will add the new results to the paper. The new results look qualitatively similar to those in the original submission, with some amount of noise. 2. We appreciate the suggestion of more challenging high-dimensional tasks to investigate, such as Bi-DexHands. The state-of-the-art RL methods for bidextrous manipulation often involve many additional steps beyond blank-slate actor-critic methods, such as imitation learning from human data (e.g. ALOHA Unleashed, https://arxiv.org/pdf/2410.13126). This adds an additional layer of complexity for comparison against baselines that goes beyond the scope of what we have the space and time to investigate in this paper; however, we will be sure to investigate these sorts of tasks going forward. With regard to the lack of uniformly superior performance that the reviewer points out, we refer to the rebuttal to reviewer A8Ro for a longer discussion. We hope that this addresses all of the issues the reviewer has and thank them again. --- Rebuttal Comment 1.1: Comment: The rebuttal does not sufficiently address key concerns. - The authors claim to have re-run high-variance experiments with more seeds, but no updated figures or metrics are shown. Without evidence, this claim lacks credibility and weakens the empirical foundation. - The justification that dexterous tasks like Bi-DexHands are “beyond the scope” due to imitation learning is inaccurate. Standardized RL benchmarks for Bi-DexHands without imitation do exist (e.g., [DexterousHands](https://github.com/PKU-MARL/DexterousHands)).
Given that WPO emphasizes scalability, and the paper already ventures beyond standard tasks (e.g., fusion control), omitting such realistic, high-dimensional environments undermines the generalization claims. I still find the theoretical framework interesting, but the empirical validation is incomplete. I am lowering my score to 3: Weak Accept. --- Reply to Comment 1.1.1: Comment: We apologize for insufficiently addressing the reviewer's concerns, and will do our best to fix this. First of all, we want to emphasize that there may have been a misunderstanding. Given the initial high score for the paper, we understood the reviewer's comments to be helpful suggestions for further improving the paper, not necessary conditions for maintaining the high score. To the two points raised by the reviewer: 1. We did in fact do *exactly* what the reviewer requested here - we re-ran high variance experiments with more seeds. We apologize for not sharing more details of the results of those experiments, but want to point out that there is in fact no mechanism for us to share new figures during the rebuttal period - the paper draft cannot be updated and images cannot be included in the rebuttal text. 
Instead, we can share some subset of the results here as a table, which we hope is enough to convince the reviewer that the results remain qualitatively similar:

Humanoid CMU: Walk

$$
\begin{array}{ccc}
\hline
\text{Total Steps (1e6)} & \text{Old Reward} & \text{New Reward} \\
\hline
2 & 2.06 \pm 1.64 & 2.08 \pm 1.71 \\
10 & 717.23 \pm 58.66 & 297.25 \pm 324.97 \\
18 & 716.35 \pm 57.28 & 406.47 \pm 350.39 \\
\hline
\end{array}
$$

Quadruped: Run

$$
\begin{array}{ccc}
\hline
\text{Total Steps (1e6)} & \text{Old Reward} & \text{New Reward} \\
\hline
2 & 821.62 \pm 54.88 & 783.21 \pm 77.03 \\
10 & 965.25 \pm 15.98 & 951.15 \pm 28.91 \\
18 & 951.59 \pm 13.02 & 955.61 \pm 25.86 \\
\hline
\end{array}
$$

Point Mass: Easy

$$
\begin{array}{ccc}
\hline
\text{Total Steps (1e6)} & \text{Old Reward} & \text{New Reward} \\
\hline
2 & 913.94 \pm 62.21 & 885.28 \pm 26.44 \\
10 & 929.75 \pm 32.82 & 902.26 \pm 41.81 \\
18 & 905.96 \pm 25.66 & 924.83 \pm 44.67 \\
\hline
\end{array}
$$

Manipulator: Bring Ball

$$
\begin{array}{ccc}
\hline
\text{Total Steps (1e6)} & \text{Old Reward} & \text{New Reward} \\
\hline
2 & 0.00 \pm 0.00 & 0.00 \pm 0.00 \\
10 & 189.31 \pm 378.41 & 1.62 \pm 5.13 \\
18 & 610.85 \pm 295.04 & 143.30 \pm 302.60 \\
\hline
\end{array}
$$

Cartpole: Three Poles

$$
\begin{array}{ccc}
\hline
\text{Total Steps (1e6)} & \text{Old Reward} & \text{New Reward} \\
\hline
2 & 219.64 \pm 6.07 & 240.66 \pm 42.45 \\
10 & 499.05 \pm 109.37 & 415.99 \pm 157.16 \\
18 & 532.93 \pm 227.68 & 558.91 \pm 102.08 \\
\hline
\end{array}
$$

Dog: Run

$$
\begin{array}{ccc}
\hline
\text{Total Steps (1e6)} & \text{Old Reward} & \text{New Reward} \\
\hline
2 & 27.58 \pm 28.74 & 35.84 \pm 67.82 \\
10 & 138.16 \pm 205.54 & 216.62 \pm 270.20 \\
18 & 446.78 \pm 416.41 & 486.13 \pm 283.23 \\
\hline
\end{array}
$$

Humanoid: Run

$$
\begin{array}{ccc}
\hline
\text{Total Steps (1e6)} & \text{Old Reward} & \text{New Reward} \\
\hline
2 & 1.46 \pm 0.68 & 25.56 \pm 51.51 \\
10 & 253.92 \pm 77.59 & 284.51 \pm 89.45 \\
18 & 418.07 \pm 28.90 & 459.54 \pm 111.50 \\
\hline
\end{array}
$$

Humanoid: Stand

$$
\begin{array}{ccc}
\hline
\text{Total Steps (1e6)} & \text{Old Reward} & \text{New Reward} \\
\hline
2 & 71.63 \pm 110.61 & 204.18 \pm 278.80 \\
10 & 724.10 \pm 85.79 & 764.26 \pm 13.00 \\
18 & 749.74 \pm 44.65 & 790.56 \pm 25.43 \\
\hline
\end{array}
$$

On tasks where the final reward dropped, this was primarily due to some fraction of seeds where the reward did not take off. The learning curve was roughly the same for seeds where the reward did take off. 2. The reviewer is correct that imitation learning is not necessary to get standard RL methods to make progress on some two-handed manipulation tasks. Our point was that the most recent state-of-the-art results, such as on ALOHA, do tend to use imitation learning for the most challenging tasks. But even if we wanted to compare against results on Bi-DexHands which don’t use imitation learning, it is extremely challenging to implement learning in an entirely new environment in the time allotted to rebuttals. The training setup in the Bi-DexHands repository is substantially different from the setup used in the paper - the environments are based on Isaac Gym rather than MuJoCo, the learning algorithms are implemented in PyTorch rather than JAX, etc. The hyperparameters used for training are also substantially different (number of actors, number of backup steps, trajectory lengths, etc) making comparison across experiments difficult. Setting up and running these experiments properly on our infrastructure would take at least several weeks, beyond the timeframe available for rebuttals. Nevertheless, we made a good-faith attempt to get WPO running on Bi-DexHands as quickly as possible, but quickly ran into serious issues building and running Isaac Gym, as it has been deprecated in favor of Isaac Lab.
While we agree that results on Bi-DexHands would improve the paper and have no doubt that we could get experiments running given a reasonable amount of time, there is simply no way that we will be able to provide results in time for the end of the rebuttal period. We hope that you understand.
Summary: The paper introduces a novel policy gradient update using Wasserstein Gradient Flows called Wasserstein Policy Optimization. While the classical policy gradient update works with stochastic policies, it does not take gradients through the action-value space, whereas deterministic policy gradients are able to take gradients through the action-value function but are designed for deterministic policies. Using the notion of vector flows derived from the Wasserstein-2 distance and a second-order Fisher approximation of the KL divergence, the authors derive a policy gradient update that is the best of both worlds. They derive a tractable practical algorithm for the special case of Gaussian policies. Claims And Evidence: The authors claim to use Wasserstein gradient flows to derive policy gradient updates that offer the best of both worlds: they use stochastic policies and take gradients through the action-value function. They derive the gradient update that has this property. But two things are concerning: (1) It is not always simple to derive a practical algorithm for the gradient update. For a special case of policy distribution (normal), the authors derived a tractable approximate algorithm for these updates, but how generalizable is this? (2) While WPO performs robustly well, there are environments where WPO fails. It would be interesting to see why and in what situations WPO fails. Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense. However, the authors didn't compare against prior Wasserstein-based policy gradient methods like [1]. Comparisons against these methods would justify whether their gradient updates are the ideal way of using Wasserstein Gradient Flows. [1]: Zhang, Ruiyi, et al. "Policy optimization as wasserstein gradient flows." International Conference on Machine Learning. PMLR, 2018. Theoretical Claims: The theoretical claims made are justified and proven.
Experimental Designs Or Analyses: (1) Some baselines like the one mentioned above (prior attempts to use Wasserstein Gradient Flows) should be compared against. Supplementary Material: I have read the proofs and experiment setup/hyperparams in the supplementary material. Relation To Broader Scientific Literature: Policy gradient updates are used in a majority of RL algorithms. There have been two commonly used updates: classical and DPG. This work proposes a novel policy gradient update rule using Wasserstein gradient flows to get the best of both worlds. But it is not clear if this method scales to other policy distributions (i.e., whether practical algorithms can be designed for them). Essential References Not Discussed: To my knowledge most of the related work has been referred to. Other Strengths And Weaknesses: Nil Other Comments Or Suggestions: Nil Questions For Authors: Please address the concerns raised. Code Of Conduct: Affirmed. Overall Recommendation: 4
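The contrast the summary draws between the classical (score-function) update and updates that take gradients through the action-value function can be made concrete with a toy 1-D Gaussian policy. The sketch below is illustrative only (the quadratic Q and all constants are made up, not from the paper); it checks that both estimators target the same gradient of E[Q] with respect to the policy mean, while the action-value-gradient (pathwise/DPG-style) estimator has far lower per-sample variance:

```python
import numpy as np

# Toy setup: Gaussian policy a ~ N(mu, sigma^2), quadratic action value
# Q(a) = -(a - 2)^2.  Both estimators below target d/dmu E[Q(a)].
rng = np.random.default_rng(0)
mu, sigma, n = 0.0, 1.0, 200_000
a = rng.normal(mu, sigma, size=n)

q = -(a - 2.0) ** 2                    # Q evaluated at sampled actions
dq = -2.0 * (a - 2.0)                  # dQ/da at sampled actions

# Classical (REINFORCE-style) per-sample terms: Q(a) * d log pi(a) / d mu
score_terms = q * (a - mu) / sigma**2
# DPG-style per-sample terms via the action-value gradient (a = mu + sigma*eps)
pathwise_terms = dq

analytic = -2.0 * (mu - 2.0)           # exact gradient, equals 4.0 here

# Same target, very different variance per sample:
assert abs(score_terms.mean() - analytic) < 0.3
assert abs(pathwise_terms.mean() - analytic) < 0.05
assert score_terms.var() > pathwise_terms.var()
```

For a Gaussian policy the identity E[Q(a) ∇_μ log π(a)] = E[∇_a Q(a)] holds exactly, which is what makes the two estimates agree.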
Rebuttal 1: Rebuttal: Thank you for your helpful comments. We appreciate the overall positive evaluation of the paper, and wanted to address the two points raised in “Claims and Evidence”: 1. To go from the idealized WPO update to the practical algorithm presented in the paper, two approximations were made: the Fisher information matrix was approximated by the FIM for a diagonal Gaussian distribution, and an entropy regularization term was added. First, there is no reason that the “bare” WPO update (i.e. without the Fisher term) couldn’t be used for general distributions over actions. As noted in the paper, the variance in the update would grow as the policy becomes more deterministic, but with gradient clipping this might be manageable. Beyond that, there are many classes of distributions for which the exact Fisher information matrix is known (e.g. exponential families) and, going even beyond that, numerous practical approximations to natural gradient descent exist which could be applied here, e.g. KFAC (Grosse and Martens 2015). Ultimately, there are going to be a spectrum of approaches with the exact update at one end and the “bare” update at the other which can be explored in future work. Second, regularization of the policy update through penalties or trust region methods is a standard technique, and the approach used here could be extended to more complex distributions over action spaces so long as an appropriate way to estimate the KL divergence between action distributions exists. For instance, a sampling-based approximation could be used for generic distributions, or an upper bound or other approximation to the KL could be used in cases where a closed form solution does not exist. In practice, the exact measure of divergence between action distributions is probably not that important, as long as it is simple to compute and is well behaved. 2. Understanding the precise cause of WPO underperforming other algorithms in certain environments is challenging. 
Note that we only tried tuning a small number of parameters, primarily the weights of the KL regularization, while other methods have benefited from years of extensive hyperparameter tuning on many of these environments to achieve optimal performance. Note also that, while WPO is not always the top performing algorithm, it does not fail dramatically on any environments in the way that, say, DDPG does on Cartpole - Swingup Sparse, which illustrates the overall robustness of WPO. We agree that it would be quite valuable to investigate the cases where WPO underperforms in more detail, but given the large number of moving parts and possible causes, we feel this is best left to future work. Furthermore, while Figs. 3 and 6 show that WPO is competitive over the DM control suite, our main performance claims are highlighted in Figs. 4 and 5 that show that, as conjectured, WPO especially shines in domains with relatively high-dimensional action spaces. We also appreciate the suggestion to compare against Zhang et al (2018). The algorithm implemented in Zhang (2018) is a combination of a DDPG-like gradient with the reparameterization trick, and thus closely resembles SVG(0) and SAC, albeit with a slightly different regularization. The significance of WPO lies mainly in its generality, going beyond methods like SVG(0), SAC and Zhang (2018) that rely on reparameterization. Additionally, we believe that by including SAC and DDPG in the experiments already, we have good coverage of methods with the same basic form of the policy gradient update. If we investigate the effect of regularization in more detail in future work we will be sure to include an analysis of Zhang (2018). Once again, we are pleased that you liked the paper, and appreciate all of your helpful comments.
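The "bare" WPO update discussed above (transporting the policy along the action-value gradient, without the Fisher term) can be caricatured in one dimension as moving action samples along ∇_a Q and refitting the Gaussian policy. This is only an illustrative sketch of a Wasserstein-gradient-flow-style step under made-up constants, not the paper's algorithm; the sample-and-refit step stands in for the actual parametric update:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_q(a):
    # Hypothetical action-value gradient: Q(a) = -(a - 1.5)^2, maximized at a = 1.5
    return -2.0 * (a - 1.5)

mu, sigma, eta = 0.0, 1.0, 0.1
for _ in range(200):
    a = rng.normal(mu, sigma, size=1024)     # sample from the current policy
    moved = a + eta * grad_q(a)              # transport samples along grad_a Q
    mu, sigma = moved.mean(), moved.std()    # refit the Gaussian policy

# The mean converges to the maximizer of Q; without an entropy term the
# policy also collapses toward a deterministic one, which mirrors the role
# of the KL/entropy regularization discussed in the rebuttal.
assert abs(mu - 1.5) < 0.05
assert sigma < 1e-3
```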
LoRA Training Provably Converges to a Low-Rank Global Minimum Or It Fails Loudly (But it Probably Won't Fail)
Accept (oral)
Summary: The authors investigate the landscape of LoRA fine-tuning under assumptions of restricted strong convexity and smoothness. In particular, the authors prove a characterization of second-order stationary points for problems with regularization, showing that spurious high-rank local minima are bounded away from the global minimizers. Claims And Evidence: The claims of the authors sound reasonable and the proofs look correct under the authors' assumptions. However, I fail to recognize the quantitative part of the result; in fact, Theorem 1 does not exclude the case of numerically low-rank spurious local minima. See "questions" for more details. Methods And Evaluation Criteria: I believe the experimental evaluation lacks a controlled setting that would show the authors' claims quantitatively. Theoretical Claims: The theoretical claims and the proofs provided by the authors are correct as far as I am concerned. Experimental Designs Or Analyses: The experimental evaluation is not satisfying in relation to the theoretical claims. I would have appreciated a more controlled numerical setting (e.g., one where we know the global minima) to show convergence and the possible absence of spurious local minima nearby. Supplementary Material: Supplementary material was not included by the authors. Relation To Broader Scientific Literature: LoRA variants are abundant in the literature, and their popularity has exploded in the last years given their effectiveness. The authors' work focuses on showing that the landscape of these problems is in some way "well-behaved", with low-rank local minima and higher-rank spurious minima which are bounded away from the global one. Essential References Not Discussed: As far as I am concerned, all the relevant literature was discussed. Other Strengths And Weaknesses: The results in the paper are sound and shed light on why zero initialization works well in practice for LoRA fine-tuning. Other Comments Or Suggestions: See "Experimental Designs Or Analyses".
Questions For Authors: While qualitatively the results in Theorem 1 and Corollary 1 state that there might be spurious local minimizers of higher rank far from the global one, they don't eliminate the possibility of spurious local minima with numerically low rank (and therefore potentially close to $X_*$). Can the authors comment on this? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### **Common Response (Repeated in all responses)** First of all, we thank the reviewers for their positive and constructive feedback. We are excited to see that the reviewers are appreciative of our theoretical contributions. Below, we address each of the reviewers' comments individually. ### **Individual Response** We are happy to hear that the reviewer found our results sound and illuminating. Below, we address the reviewer’s main concern regarding the scenario with near-low-rank updates. **Q) Theorem 1 and Corollary 1 … spurious local minimizers of higher rank far from the global one, it doesn't eliminate possibility of spurious local minima with numerical low-rank** We thank the reviewer for the thoughtful question. We clarify the claims of Theorem 1 and Corollary 1 as follows. Corollary 1 states that if $X_\square$ is a spurious local minimum, then $\sigma_r (X_\square) \ge \frac{2\alpha}{\beta} \sigma_{r*} (X_\square)$. This provides a substantive lower bound on the $r$th singular value, making $X_\square$ not numerically low rank. (Recall that $r> r_\star$ in Corollary 1 and “low-rank” refers to rank $r_\star$.) Moreover, this lower bound on $\sigma_r (X_\square)$ means that $X_\square$ and the global minimum $X_\star$ cannot be close, since $\sigma_r(X_\star)=0$. Therefore, Corollary 1 does eliminate the possibility that a spurious local minimum is low-rank or is close to $X_\star$. --- Rebuttal Comment 1.1: Comment: First of all, I would like to thank the authors for the rebuttal. I agree with your comment, but in practical problems it is possible to be in the ill-conditioned setting in which $\frac{\alpha}{\beta} \approx 0$. In particular, if $\frac{\alpha}{\beta} \approx 0$ were of the order of machine precision, it could be possible to find spurious local minima $X_\Box$ close to $X_*$ (as shown also in Table 2).
I believe for this reason it would be beneficial to include in the revised version an experiment in a controlled scenario, in which $\alpha$, $\beta$, and the global minima are controlled a priori (it could even be a simple quadratic problem). In this setting, it should be possible to showcase the distance to the minima over time and your predictions concerning the stationary point. Moreover, in an experimental setting similar to this one, one could also showcase the dependency of $||X_\Box -X_* ||$ on the condition number $\kappa = \beta/\alpha$, which is what I am personally interested in because it could be crucial for applications. In any case, I am satisfied by the authors' rebuttal and I am keeping my score.
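The step in the rebuttal above from a lower bound on $\sigma_r(X_\square)$ to a lower bound on the distance $\|X_\square - X_\star\|$ is an instance of Weyl's inequality for singular values, $|\sigma_k(A) - \sigma_k(B)| \le \|A-B\|_2$: since $\sigma_r(X_\star)=0$ for $r > r_\star$, any matrix with a large $r$th singular value must sit far from $X_\star$. A quick numerical check on toy matrices (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r_star = 8, 6, 3

# A rank-3 "global minimum", so its 4th singular value is exactly 0
X_star = rng.normal(size=(m, r_star)) @ rng.normal(size=(r_star, n))
# A hypothetical higher-rank stationary point some distance away
X_box = X_star + rng.normal(size=(m, n))

sigma4_box = np.linalg.svd(X_box, compute_uv=False)[3]
dist = np.linalg.norm(X_box - X_star, ord=2)

# Weyl: sigma_4(X_box) <= sigma_4(X_star) + ||X_box - X_star||_2 = dist,
# so any lower bound on sigma_4(X_box) forces X_box away from X_star.
assert sigma4_box <= dist + 1e-9
```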
Summary: This paper provides a theoretical analysis of the Low-Rank Adaptation (LoRA) loss landscape (near the global or local min). The main contributions are as follows: (1) The authors identify two regimes—special (well conditioned) and generic (more realistic). In the generic regime, second-order stationary points (SOSPs) are either (a) low-rank, small-magnitude global minima or (b) high-rank, large-magnitude spurious minima. (2) The authors show that zero-initialization and weight decay are biased toward low-rank, small-magnitude solutions, which partially explains LoRA's empirical success despite non-convexity. (3) The paper provides some experimental results confirming the existence of low-rank global minima under nuclear norm regularization and validates the restricted strong convexity/smoothness assumptions. It also illustrates that the failure cases (high-rank solutions) may result from large initialization. Claims And Evidence: The claims are supported by the theoretical proofs and experimental validation. Methods And Evaluation Criteria: The methods make sense. Theoretical Claims: The main theoretical results (Theorem 1 and Cor. 1) are interesting. I checked the proofs (not every detail) and most parts look correct to me. I have not identified any serious flaws, but some claims and notations require clarification. (1) Why is the restricted smoothness defined in this form (it is somewhat different from the standard smoothness assumption)? In particular, why do U and V appear here? More explanations are needed. (2) In Theorem 1, 2(ii), why can X_box be full rank? (since X_box is defined to be AB^T) (3) Proof 1. In the proof, the authors used kappa to distinguish several cases of the theorem. However, it is unclear to me how the condition on kappa corresponds to the condition on alpha and beta. In particular, which case is the special regime and which case is the generic regime (this is not mentioned in the proof at all)?
Experimental Designs Or Analyses: The main contribution of the paper is theoretical and the experimental section is relatively small. I checked the experimental setting and the results, which look reasonable to me. Supplementary Material: Yes. Relation To Broader Scientific Literature: LoRA is part of a broader trend in machine learning towards developing parameter-efficient methods for adapting large pre-trained models. The contribution of the paper is useful in understanding the loss landscape (near the global/local min) of LoRA optimization. The paper also discusses the implicit bias of zero-initialization and weight decay. This connects to a growing body of work on the implicit regularization effects of optimization algorithms like SGD in deep learning models. Essential References Not Discussed: There is a flurry of recent works on improving the vanilla LoRA (by tuning the learning rate or changing the initialization, etc.), such as LoRA+, rsLoRA, LoRA-GA, and PiSSA. Do these works change the conclusion of the paper? Some papers propose to change the initialization of LoRA (e.g., LoRA-GA, PiSSA). Do these results contradict the result on zero-initialization? Other Strengths And Weaknesses: Strengths: (1) Theoretical Contributions: The paper provides a rigorous theoretical analysis of LoRA's loss landscape. This helps explain the success of LoRA in practice. (2) The new theoretical analysis doesn't rely on linearization arguments. Weaknesses: (1) Limited Experimental Section: The experimental validation, while sufficient to support the theoretical claims, is relatively small in scope. It focuses on fairly small-scale specific tasks (SST-2 and CIFAR-100). (2) Some assumptions and proofs require clarification. (3) Several recent important improvements on LoRA are not mentioned at all. Other Comments Or Suggestions: 1. page 3, section 1.3: ||X||_\star, is this the nuclear norm? But the sentence before this line says "adding l_2 regularization", which is confusing. 2.
page 3, section 1.3, line 129: the equivalence between the unconstrained minimization problem and the constrained minimization problem. Are they always equivalent? Is there any condition or assumption? 3. Informal theorem, line 153: The algorithm used in Ge et al. 2015 is somewhat different from standard SGD. Moreover, the standard algorithm for training LoRA is Adam. So there is a discrepancy between the theorem cited and the conclusion (line 160) drawn from it. Typos: (1) page 5, line 251; (2) page 11, line 571. Questions For Authors: I have asked questions in previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### **Common Response (Repeated in all responses)** First of all, we thank the reviewers for their positive and constructive feedback. We are excited to see that the reviewers are appreciative of our theoretical contributions. Below, we address each of the reviewers' comments individually. ### **Individual Response** We are excited to hear that the reviewer found our main theoretical results interesting. We are also grateful for the detailed feedback provided by the reviewer, and we respond to each point individually in the following. **Regarding the relation with other LoRA variants** Yes, as the reviewer points out, our theory is indeed applicable to the more modern enhanced LoRA variants. We discuss a few popular LoRA variants, including the ones pointed out by the reviewer. LoRA+ (Hayou et al., 2024) and rsLoRA (Kalajdzievski, 2023) share LoRA's objective and initialization while modifying hyperparameters like the learning rate and scaling factors to accelerate and stabilize training. Therefore, our theory assures these variants can also successfully find the global minimum, hopefully faster and more stably. PiSSA (Meng et al., 2024) and MiLoRA (Wang et al., 2024) modify the training objective by decomposing the pretrained weight matrix into the sum of a principal ($A_pB_p$) and minor ($A_mB_m$) component. PiSSA optimizes the objective $f(A_{lora}B_{lora} +A_mB_m)$, initializing $A_{lora}B_{lora}$ with $A_pB_p$. In contrast, MiLoRA optimizes $f(A_pB_p+A_{lora}B_{lora})$, initializing with $A_mB_m$. While appearing very similar, our theory introduces a contrasting view of the two methods. PiSSA initializes the fine-tuning weights with $A_pB_p$, which inherently contains 'principal' pretrained model information.
In this setting, applying weight decay to $A_p$ and $B_p$ is counterintuitive; rather than regularizing the fine-tuned model to remain close to the pretrained model, weight decay here would bias the model toward losing essential pretrained characteristics. Thus a low-rank solution isn't naturally expected, and the model would be required to be very well-conditioned (i.e., within the special regime) to guarantee convergence to the global minimum under our framework. Conversely, the use of weight decay in MiLoRA is more sensible as $A_mB_m$ inherently represents a minor component. Therefore, we can more naturally apply our results to MiLoRA to guarantee convergence to a global minimum, in contrast with PiSSA. Investigating the practical implications of this theoretical insight is an interesting direction of future work. We note that variants like LoRA-GA (Wang et al., 2024), which significantly alter training dynamics, do fall outside our current scope and offer promising directions for future work. **Q) Why the restricted smoothness …** Our definitions of restricted convexity and smoothness are motivated by the proofs requiring only a "directional form" of strong convexity and smoothness. In contrast, the conventional definitions require strong convexity and smoothness for all possible directions. While our definitions may feel more complicated, they are weaker and more realistic assumptions compared to standard strong convexity and smoothness. In particular, instead of requiring the standard smoothness $\nabla^2 f(X) [A, A] \le \beta ||A||_F^2 $ for all $A \in \mathbb{R}^{m\times n}$, we only require it for the directions $A$ in the space $\{UX+XV \mid \mathrm{rank}(U)=\mathrm{rank}(V)=1 \}$, a much smaller, restricted space. **Q) Why X_box can be full rank** We apologize for this confusion. By “full rank” we meant rank $r$, since the low-rank update $X_{box}=AB^T$ can have rank at most $r$.
However, we now see that this was not the correct language, so we will update it to say rank $r$ instead of full rank. **Q) In Proof 1, the authors used kappa …** As in line 253, $\kappa$ is defined as $\frac{\sigma_{r*}}{\beta \sigma_r}$. Thus, the case $2\kappa \alpha >1$ in the last line of the proof corresponds to the special regime and case (i) of the generic regime, while the converse corresponds to case (ii) of the generic regime. Thank you for pointing this out; we will make this distinction clearer in our revision. **Q) ||X||_\star, is this the nuclear norm? but … Q) equivalence between the unconstrained…** We apologize for the confusion. The L2 regularization applies to the LoRA fine-tuning objective, while the nuclear norm regularization applies to the rank-constrained full fine-tuning objective. As we note in Section 1.3, these two objectives are equivalent, as shown in Recht et al., 2010, Lemma 5.1. **Q) Ge et al. 2015 is somewhat different from the standard SGD … Adam.** We appreciate the reviewer’s precision on this point. While we believe that the qualitative point we derive through this reference remains valid, we agree with the reviewer that, strictly speaking, Ge et al.’s result does not imply global convergence for all optimizers. We will clarify this point explicitly in our revised paper.
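The Recht et al. (2010, Lemma 5.1) equivalence invoked above states that the minimum of $\frac{1}{2}(\|A\|_F^2 + \|B\|_F^2)$ over all factorizations $AB^\intercal = X$ equals the nuclear norm $\|X\|_\star$, which is attained by the balanced SVD factorization. This can be checked numerically with a standalone sketch (not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2)) @ rng.normal(size=(2, 4))   # rank-2 target matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)
nuclear = s.sum()                                       # nuclear norm ||X||_*

# Balanced factorization A = U sqrt(S), B = V sqrt(S) attains the minimum
A = U * np.sqrt(s)
B = Vt.T * np.sqrt(s)
assert np.allclose(A @ B.T, X)
balanced = 0.5 * (np.linalg.norm(A) ** 2 + np.linalg.norm(B) ** 2)
assert np.allclose(balanced, nuclear)

# Any unbalanced factorization of the same X pays a strictly larger L2 penalty
unbalanced = 0.5 * (np.linalg.norm(10 * A) ** 2 + np.linalg.norm(0.1 * B) ** 2)
assert unbalanced > nuclear
```

This is why the L2 (weight-decay) penalty on the LoRA factors acts like a nuclear-norm penalty on the product, connecting the two objectives discussed in the rebuttal.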
Summary: This paper provides a theoretical understanding of the training dynamics of LoRA (i.e., Low-Rank Adaptation) of transformers. The authors first establish the equivalence between the low-rank form of the loss and the rank-constrained optimization problem. Then the authors state their main results: the LoRA outcomes can be split into two regimes, and in the second regime ("generic regime") LoRA can lead to "failed" results (i.e., it does not converge to a global minimum and gets stuck in a spurious local minimum). The authors further demonstrate that standard LoRA practices, such as zero initialization and weight decay, bias training toward the benign low-rank regime, and they extend the analysis to fine-tuning multiple matrices. Claims And Evidence: I think the claims made in the submission are well supported by both theory and experiments. Methods And Evaluation Criteria: The authors consider a few cases (both NLP and CV tasks) and conduct experiments to verify their main theorems. More specifically, the authors use two different initialization methods, one of which leads to the global minimum while the other converges to a spurious local minimum. However, it would be helpful in my opinion to add another experiment to show that multiple experiments can converge to the same global minimum. Theoretical Claims: Although I do not have a solid theoretical background, I tried my best to check the theoretical claims. I did not identify issues. Experimental Designs Or Analyses: Yes. In my opinion the experiments only account for a very small part of this paper, and mainly serve as a validation of the theory results. The authors come up with two settings and demonstrate that the results fit the expected outcomes. Supplementary Material: Yes. Briefly went through the proof to understand the workflow. Relation To Broader Scientific Literature: I am not familiar with the relevant literature, but I checked the referenced works and I believe the authors' work has distinct aspects compared to those existing ones.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: This paper could be useful for identifying / designing better low-rank adaptation algorithms since it provides a principled way to identify whether fine-tuning could work or not. It can also be used as a monitor to tell whether training / fine-tuning is healthy. Other Comments Or Suggestions: There is some occasional misuse of \cite and \citep. It would be great if the authors could fix them. Example: "following the prescription of (Hu et al.,2022)." at line 378. Questions For Authors: - It would be helpful in my opinion to add another experiment to show that multiple experiments can converge to the same global minimum. Correct me if I am wrong, but if the claims hold, should the solutions of the low-rank matrices be similar? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### **Common Response (Repeated in all responses)** First of all, we thank the reviewers for their positive and constructive feedback. We are excited to see that the reviewers are appreciative of our theoretical contributions. Below, we address each of the reviewers' comments individually. ### **Individual Response** We are happy to hear that the reviewer found our claims to be well supported by theory and experiments and the contribution distinct from prior works. We agree with the reviewer's view that “this paper could be useful to identify/design better low-rank adaptation algorithms,” and we outline how our theory can be applied to several LoRA variants in our response to Reviewer 9M1X. **Regarding the question of whether multiple experiments share the same global minimum** We thank the reviewer for the insightful observation and suggestion. Yes, our theorem indeed implies that multiple experiments should converge to the same global minimum (the same product $X=AB^\intercal$, but the low-rank factors $A$ and $B$ are determined only up to rotations). To demonstrate this empirically, we extended the experiments originally presented in Figure 5 of the appendix to directly test whether training trajectories with multiple random seeds converge to the same limit. See the figure at the link below: https://drive.google.com/file/d/1vU2oZT2qAHxcY2gnIjA5CIBUwevxNJhs/view?usp=sharing In the figure, we plot the 'total variation' of the training trajectories with multiple random seeds, defined as: $\sum_{1\le i<j\le N} \| \Theta_i^{(t)} - \Theta_j^{(t)} \|$ where $\Theta_i^{(t)}$ is the parameters of the $i$th model at the end of the $t$th epoch. We see that the total variation converges to 0, indicating that multiple experiments share the same global minimum.
(The total variation starts at 0 because the $B$ matrix is always initialized to be 0, so the product $X=AB^\intercal$ starts out at the same value of 0, even though $A$ is initialized differently.)
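The pairwise-dispersion metric defined in the rebuttal above can be computed directly; a minimal sketch (function and variable names are ours, not the authors'):

```python
import numpy as np
from itertools import combinations

def total_variation(params):
    """Sum of pairwise distances between N parameter snapshots at one epoch."""
    return sum(np.linalg.norm(params[i] - params[j])
               for i, j in combinations(range(len(params)), 2))

rng = np.random.default_rng(0)
runs = [rng.normal(size=(4, 4)) for _ in range(3)]   # stand-ins for Theta_i^(t)

assert total_variation(runs) > 0.0              # distinct seeds: positive
assert total_variation([runs[0]] * 3) == 0.0    # identical runs: exactly 0
```

The second assertion mirrors the remark that the metric starts at 0: with $B=0$ initialization, every run's product $X=AB^\intercal$ is identical at epoch 0.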
Summary: The authors have shown that a low-rank, low-magnitude initialisation in LoRA models results in convergence towards a global minimum. Conversely, larger-rank models with larger initialisation variance converge towards spurious local minima with high probability. Whilst this result has been observed experimentally in previous literature, the theoretical confirmation is largely novel. # update after rebuttal The rebuttal discussion was informative and helped to understand some aspects of this paper better. I will maintain my original score Claims And Evidence: As mentioned in the summary, there is a theoretical claim that low-rank, low-magnitude initialisation results in convergence towards global minima. This is supported by a mathematical proof. Some experimental evidence is present. The results in Table 2 may even be understated – an interesting insight which both highlights and limits the application of this theory to LoRA models (namely, that the convexity assumption $\alpha$>0 does not hold at larger rank). The contrast with over-parameterisation is very interesting; it could be worth discussing more prominently. Methods And Evaluation Criteria: CIFAR-100 is a reasonably complex dataset for showing differences between initialisations during low-rank adapted training. The paper presents results for a sufficient number of steps and makes good use of metrics like rank (which was particularly novel for readers/reviewers) for showing convergence differences. Theoretical Claims: I have attempted to review the theoretical correctness of the proofs. I have validated Proof A.4 and the lemma proofs in Appendix B. I was not able to comprehensively verify the results of the main theorem as it is quite dense and complex. The assumptions of restricted convexity and restricted smoothness appear reasonable and are empirically backed. However, they do break down at larger rank in Table 2.
Experimental Designs Or Analyses: The experimental design does seem a bit 'rushed' but does validate and support the mathematical proofs. The Table 2 results are of interest and may be worth discussing more. Supplementary Material: I did. See Theoretical Claims for more details. Relation To Broader Scientific Literature: This work builds on many existing works and confirms existing empirical results in the literature as well as 'common knowledge' of LoRA convergence. Essential References Not Discussed: None noted Other Strengths And Weaknesses: Discussed in other sections. None Other Comments Or Suggestions: * Page 2, line 76, column 2: “for any any matrix” -- “for any matrix” * Page 3, line 152, column 2: “Collectively, these prior results make the assumption that L admits a low-rank minimizer more natural.” -- “more naturally”? * Page 6, line 296, column 1: Missing subscript *, currently reading as a dot product where one of the components is 0. * Page 11, line 561, Property 3: ∇f(X)UV⊺ -- ∇f(X),UV⊺ {needs comma} * Page 11, line 571: “regarless” -- “regardless” * Page 12, line 658: “then, which requires independent reasoning, and then ..” {doesn’t need both then’s} * Page 17, line 926: Additional epsilon ϵ used in notation that isn’t referenced anywhere else. Currently assuming this is a typo of ε considering the context of the proof. Questions For Authors: With the experimentation, is it not true that the use of large variance in initialisation can cause poor performance for alternative reasons? (e.g., initialising with Kaiming gives you better results than having a large initialisation in simple feedforward neural networks) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### **Common Response (Repeated in all responses)** First of all, we thank the reviewers for their positive and constructive feedback. We are excited to see that the reviewers are appreciative of our theoretical contributions. Below, we address each of the reviewers' comments individually. ### **Individual Response** We thank the reviewer for their meticulous and constructive comments, and we are pleased to hear that the reviewer recognizes the novelty of our theoretical contributions. **Discussion with Table 2** We appreciate the reviewer’s encouraging remarks on the results of Table 2 and its contrast with over-parameterization. We agree this point is interesting enough to merit further emphasis, so, following the reviewer’s suggestion, we will highlight this discussion more prominently in our revised paper. **Correction of typos** We have corrected all typos pointed out by the reviewer. Once again, we sincerely thank the reviewer for their thorough reading of our manuscript and their detailed feedback. **Question on alternative reasons for the failure case** We thank the reviewer for raising a thoughtful point. Indeed, bad (large) initializations can lead to poor performance due to issues such as activation function saturation (for tanh activation functions) or exploding gradients. In our setup, this phenomenon indeed occurs if the initialization variance is further increased to $\mathcal{N}(0, \frac{1}{2})$, as demonstrated in the figure in the link below. https://drive.google.com/file/d/14ui2E-hmg7E7LvaUp2X0hBPx9Zb6AQZp/view?usp=sharing However, it is unlikely that the behavior observed in Figure 2 of our main text is caused by these issues, as the gradient norm remains stable and nonzero throughout training, as demonstrated in the figure in the link below. https://drive.google.com/file/d/1w7k0dGcjFvBebFtw8OCqaKZb3BP-1180/view?usp=sharing
HyperIV: Real-time Implied Volatility Smoothing
Accept (poster)
Summary: The paper studies the problem of fitting the implied volatility surface. The authors consider a challenging setting where the time interval is reduced to one minute. This means the sample size is much smaller, thereby making the problem challenging. Toward this goal, they use a hypernetwork (the practice of using one neural network to generate the weights of another network), separating training data from test input. Claims And Evidence: The key claim is that in the setting the authors care about, the proposed method gives the smallest MAE loss on the test split. This claim is supported by Table 4 and Table 5. Methods And Evaluation Criteria: I have very little experience in the option pricing literature and am unable to confidently judge if MAE is enough. Theoretical Claims: There are no theoretical claims. There are some mathematical results that cite existing literature. Experimental Designs Or Analyses: The experiments use data from 6 index funds, with 2 funds for the 1-minute interval and 6 for the 1-day interval. The experimental design is a bit strange, as the authors claim to study the setting where the interval is 1 minute. This is the main setting they would like to focus on, but more data is devoted to the setting where the interval is 1 day. In the 1-minute setting, the proposed method only beats the other baselines in one of the examples. Supplementary Material: No. Relation To Broader Scientific Literature: The discussion of related literature is extensive. Essential References Not Discussed: No. Other Strengths And Weaknesses: The experimental results are a bit weak for the reasons discussed above. Other Comments Or Suggestions: The authors should discuss extensions of their algorithm where more features from the market can be used to better predict the IV surface. Questions For Authors: What is the density from equation (11)? Why does the second derivative w.r.t. k yield a density function? Code Of Conduct: Affirmed. 
Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your insightful comments. * Data allocation between 1-day and 1-minute intervals To clarify the data usage: while the 1-day dataset covers more *assets* (8 vs 2), the 1-minute data constitutes the majority (about 87%) of surfaces analyzed (~130,000 out of ~150,000 total, see Table 2). We included the diverse 1-day assets partly because end-of-day data is more accessible, aiding reproducibility, and also to provide a testbed for the cross-asset generalization study. * Performance comparison on 1-minute SPX data It is true that GNO achieved a slightly lower MAE on 1-minute SPX data (0.0140 vs 0.0167, Table 4). However, this should be considered alongside computational costs (Table 3): HyperIV uses about 1/4 the memory and is nearly 4x faster at inference. Furthermore, HyperIV demonstrates superior *robustness*, performing consistently well across all assets, whereas GNO's performance degrades significantly on VIX and MXEF. For context, if we scaled up HyperIV to use half of GNO’s resources (memory/runtime), its MAE could be reduced to 0.0128, easily surpassing GNO. We presented the lightweight version as this marginal accuracy difference did not outweigh the significant efficiency advantage. * Using more market features Yes, it is possible to extend HyperIV (and potentially the baselines) to include additional features beyond moneyness and maturity. This could be an interesting direction for future work. * Density function in Equation (11) and the second derivative The term $p(k,t)$ in Eq. (11) represents the **risk-neutral probability density function** of the terminal log-moneyness, $\log(S_t/F)$, for maturity $t$. 
The second derivative of the *undiscounted* call price $C(K)$ with respect to the strike price $K$ yields the risk-neutral probability density function $Q(K)$ of the terminal asset price $S_T$, evaluated at $K$: The undiscounted call price is: $$ C(K) = \mathbb{E}^Q[\max(S_T - K, 0)] = \int_{K}^{\infty} (S_T - K) Q(S_T) dS_T $$ Taking the first derivative w.r.t. $K$: $$ \frac{\partial C(K)}{\partial K} = - \int_{K}^{\infty} Q(S_T) dS_T $$ Taking the second derivative w.r.t. $K$: $$ \frac{\partial^2 C(K)}{\partial K^2} = Q(K) $$ Therefore, the second derivative yields the density of the underlying asset price under the risk-neutral measure $Q$. This derivation is model-agnostic. Our $p(k,t)$ is directly related to this density $Q(K)$ through a change of variable from strike price $K$ to log-moneyness $k=\log(K/F)$.
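The identity derived above can also be checked numerically: twice finite-differencing Black call prices under a flat volatility should recover the lognormal density. The sketch below illustrates the derivation only and is not code from the paper; all parameter values are made up.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(F, K, T, sigma):
    """Undiscounted Black (forward) call price with flat volatility."""
    d1 = (math.log(F / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return F * norm_cdf(d1) - K * norm_cdf(d2)

def implied_density(F, K, T, sigma, h=1e-2):
    """Risk-neutral density via the second finite difference in strike."""
    return (bs_call(F, K - h, T, sigma) - 2.0 * bs_call(F, K, T, sigma)
            + bs_call(F, K + h, T, sigma)) / h**2

def lognormal_density(F, K, T, sigma):
    """Closed-form density of S_T under a flat Black volatility."""
    v = sigma * math.sqrt(T)
    z = (math.log(K / F) + 0.5 * v**2) / v
    return math.exp(-0.5 * z**2) / (K * v * math.sqrt(2.0 * math.pi))
```

Evaluating both functions at, say, F=100, K=95, T=0.5, sigma=0.2 gives matching values up to finite-difference error.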
Summary: This paper introduces a new method called HyperIV, designed to quickly construct accurate and arbitrage-free implied volatility surfaces using minimal market data. Main findings and results include: 1. HyperIV generates high-quality implied volatility surfaces in real time, in approximately 2 milliseconds, using only 9 observed market option prices, making it highly suitable for fast-paced trading environments. 2. It outperforms other well-known approaches such as the SSVI model, Variational Autoencoders (VAE), and Graph Neural Operators (GNO), both in terms of computational speed and predictive accuracy. 3. It uses one neural network (a hypernetwork) to instantly generate parameters for a smaller, compact neural network that builds the implied volatility surface. The model also incorporates built-in mechanisms that prevent common arbitrage issues, like calendar spread and butterfly arbitrage, by applying specialized auxiliary loss functions during training. One notable feature of HyperIV is its ability to generalize well across different markets, requiring very few data points (only nine contracts) at high-frequency intervals (every minute). This capability addresses practical challenges encountered in real-world trading, where only limited data is reliably available at high frequencies. Claims And Evidence: The major claims in this paper include real-time performance and computational speed, accuracy of implied volatility surfaces, absence of arbitrage, and generalization across markets. They are clearly supported by the testing results claimed by the authors. Methods And Evaluation Criteria: The proposed method and evaluation criteria are standard for evaluating the goodness of an implied vol surface. Another criterion that could be added is the ratio of fitted implied vols lying within the best bid/ask; this is also very important. 
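For concreteness, the suggested criterion can be computed as below; the function name and inputs are hypothetical, with bid/ask implied vols assumed to be already backed out from quotes:

```python
def within_spread_ratio(fitted_iv, bid_iv, ask_iv):
    """Fraction of fitted implied vols lying inside the bid/ask IV band."""
    inside = sum(b <= f <= a for f, b, a in zip(fitted_iv, bid_iv, ask_iv))
    return inside / len(fitted_iv)

# Example: the first fitted vol lies inside its band, the second does not
ratio = within_spread_ratio([0.20, 0.30], [0.19, 0.31], [0.21, 0.35])  # -> 0.5
```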
One of the baselines is the SSVI model, which is known to suffer from several issues and is not sufficient for today's financial market; it is OK to use it as a representative of the parametric fitting methods. However, if possible, the authors could also compare their method with more advanced parametric methods using more parameters and constraints, such as those used for vol-dynamics products. The authors may also want to study the stability of the vol surfaces across the day, i.e., the vol surface should not change too much, in particular on the wings, if no significant market news happens. This can be done by calculating the variance swap price using the fitted implied vol surfaces. Theoretical Claims: There are essentially no new theoretical results; the whole paper is more a fine application of hypernetworks to learning the shape of implied vol smiles; the trick of adding penalty functions to reduce the arbitrage possibility (formulas 14-17) is standard. One claim, which is not essential here, is that the authors only assume proportional dividends on Page 3; this may be OK in this paper as the examples are all indices. However, since the assets discussed in the paper are not constrained to indices, the authors may also want to mention the general affine dividend modeling in the literature. Experimental Designs Or Analyses: The proposed experiments look good in general; as mentioned above, the authors could check the values of the variance swap from their fitted implied vol surfaces. Supplementary Material: There is no extra supplementary material; there are some appendices which give more details on some of the results in the paper, and they are clear and good. Relation To Broader Scientific Literature: This paper discusses how to apply hypernetworks to implied vol surface fitting; this helps enrich the literature on applications of deep learning to implied vol surface fitting. 
Moreover, the authors achieve a high speed, which is rarely discussed in the previous literature, while keeping good fitting quality, making it more feasible to apply such deep-learning-based methods in real-life trading. This is a very interesting key contribution. Essential References Not Discussed: The authors discussed how to fit a vol surface given only a few data points and the no-arbitrage conditions. The following papers also discuss how to fit implied vol surfaces for illiquid names and conditions to guarantee the non-existence of calendar arbitrage, and should be included in the literature review and discussion: The Longitude: Managing Implied Volatility of Illiquid Assets (it discusses how to fit illiquid names) Volatility Transformers: an optimal transport-inspired approach to arbitrage-free shaping of implied volatility surfaces (it discusses how to transfer implied vols/densities from one maturity to another) One-X Property Conjecture, Stochastic Orders and Implied Volatility Surface Construction (it discusses sufficient and necessary conditions to eliminate calendar arbitrage for implied densities over different expiries, and provides a deep theoretical discussion of conditions to eliminate arbitrage.) Other Strengths And Weaknesses: The main strength of this paper, based on its claimed testing results, is the introduction of a fast, robust, deep-learning-based vol fitting method, making it more feasible to apply in real-life trading. However, there are also some potential weaknesses: 1. the tested underlyings are all indices, which are known to be liquid and easy to fit in general; it would be better to test some other, really illiquid names such as EEM. 2. The authors mention the W-shape in the introduction but do not really dig into it. This is an important topic and appears a lot in single stocks; the authors may want to study the performance of their method on single stocks around earnings dates (notice that the options are American options). 3. 
the testing period is relatively short, only covering half a year of 2023; if possible, the authors should test their method on 2024 data as well, as there were many macro events making the market volatile. Other Comments Or Suggestions: There seem to be no essential typo or grammar issues in this paper; in general, it is written nicely and clearly. Questions For Authors: Besides the comments and suggestions above, some of the main questions are: 1. if possible, could you also test against data from 2024, especially the data in the 2nd half of 2024? Also, add the calculation of the variance swap, if possible. 2. if possible, could you try the algorithm on single stocks such as AAPL, around earnings dates, and also on very illiquid ETF names? (minor) 3. Enrich the literature review as mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 5
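The variance-swap check suggested in this review can be sketched with the standard log-contract replication; under a flat Black volatility the replicated strike should recover sigma^2. This is an illustrative sketch (zero rates, made-up parameters), not an evaluation of the paper's surfaces:

```python
import math
import numpy as np

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_otm(F, K, T, sigma):
    """Undiscounted out-of-the-money Black price (put below F, call above)."""
    d1 = (math.log(F / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    call = F * norm_cdf(d1) - K * norm_cdf(d2)
    return call if K >= F else call - (F - K)   # put via put-call parity

def variance_swap_strike(F, T, sigma, k_lo=1.0, k_hi=2000.0, n=50_000):
    """K_var = (2/T) * integral over strikes of OTM(K) / K^2 dK."""
    K = np.linspace(k_lo, k_hi, n)
    f = np.array([black_otm(F, k, T, sigma) for k in K]) / K**2
    integral = 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(K))  # trapezoid rule
    return 2.0 / T * integral

k_var = variance_swap_strike(100.0, 0.5, 0.2)   # close to 0.2**2 = 0.04
```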
Rebuttal 1: Rebuttal: Thank you for your insightful comments. * Dividend modelling The method itself does not rely on specific dividend assumptions like proportional dividends. It uses log forward moneyness ($k = \log(K/F)$), where the forward price $F$ (taken from data vendors in our study) already incorporates the impact of rates and dividends. We will clarify this in the revised manuscript's Preliminaries section. * Literature on illiquid options Thank you for recommending these papers. We agree they are relevant and will add them to the literature review in the revised version. * More experiments on 2024 data The original work used data available up to late 2023, as the 2024 data snapshot was not yet released by the vendor at the time of experimentation. We have run preliminary experiments on available 2024 futures options data. The results support our original findings: Average MAE on 2024 Data (%) | Asset | HyperIV | SSVI | |---------|---------|-------| | TNOT10Y | 0.52% | 0.74% | | BONDS | 0.78% | 1.18% | | CRUDE | 0.68% | 1.18% | | JYEN | 0.41% | 0.57% | | EURO | 0.34% | 0.40% | | GOLD | 0.49% | 0.74% | | SPX | 0.47% | 0.97% | 90th Percentile MAE on 2024 Data (%) (representing the tail/worst cases) | Asset | HyperIV | SSVI | |---------|---------|-------| | TNOT10Y | 1.16% | 1.90% | | BONDS | 1.69% | 2.52% | | CRUDE | 1.56% | 3.19% | | JYEN | 0.79% | 1.21% | | EURO | 0.85% | 0.94% | | GOLD | 1.59% | 2.04% | | SPX | 1.09% | 1.87% | We will add these results to the revised paper. --- Rebuttal Comment 1.1: Comment: Thank you for the the reply! Please include the three recommended reference papers and the above experiment data in the final version. I would increase the score.
Summary: This paper presents a hypernetwork-based framework to perform implied volatility smoothing with very few reference samples and small computational cost. The robustness and reliability of the proposed approach are evaluated under a special circumstance, where the smoothing needs to be completed within milliseconds with only a limited number of reference samples. ## update after rebuttal The authors have resolved most of my questions. However, the original contributions to the ML community are not very strong, and the use cases of this method are limited to a special condition, i.e., a small data size with limited computation time allowed. I have increased my score from 2 to 3. Claims And Evidence: The claim that the proposed HyperIV is "particularly valuable for real-time trading applications" is not clearly justified in the current manuscript. The authors should provide more specific examples or discussions about how to make use of this fast estimation of the implied volatility surface in financial applications or quantitative trading. Methods And Evaluation Criteria: The evaluations are restricted to a special setup, where only a small number of reference samples are provided with limited computational resources. The authors need to clarify how common and important this setup is in real-world applications. Theoretical Claims: No theoretical proofs involved. Experimental Designs Or Analyses: The experimental setup is restricted to one special condition. See my comments in "Methods And Evaluation Criteria". Supplementary Material: Yes, Sections A and D. Relation To Broader Scientific Literature: The key contributions of this work are limited to one specific finance problem, i.e., implied volatility smoothing. Essential References Not Discussed: The method proposed in this work is very similar to that in a previous paper [1], which also utilized a hypernetwork to build a financial model. 
This work is only briefly mentioned in the literature review. More detailed discussion of the key differences between these two works should be added. [1] Yang, Y. and Hospedales, T. M. On calibration of mathematical finance models by hypernetworks. In ECML PKDD, 2023. Other Strengths And Weaknesses: Strengths: - the problem and method are clearly explained - the structure of the paper is well organized - experimental results seem promising Weaknesses: - The value of this work in real-world finance applications/trading is not clearly justified. - The advantage of the proposed framework is not clear in other, more general cases, e.g., with enough data points and computation power - The difference from previous similar works is not clearly discussed Other Comments Or Suggestions: N/A Questions For Authors: 1. How is the performance of the proposed method compared to other SOTA methods if we have more data points and allow for more computational time? Does the performance advantage of HyperIV still hold? 2. How can this technique be used to create value in real-world financial applications/trading? More discussion and examples should be given. 3. What is the key difference between HyperIV and HyperCalibration [1], which is not sufficiently discussed in the current manuscript? [1] Yang, Y. and Hospedales, T. M. On calibration of mathematical finance models by hypernetworks. In ECML PKDD, 2023. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful comments. * Justification of real use cases. The implied volatility surface is a starting point for option trading and hedging. HyperIV's ability to generate an arbitrage-free surface in ~2 ms from sparse data (9 contracts) is useful for intra-day option traders. Specifically, it enables: 1. Updating the market view based on the latest transitions (e.g., the past minute). 2. Providing timely option quotes. 3. Calculating real-time Option Greeks for dynamic hedging (e.g., delta hedging). 4. Using the latest surface for anomaly detection in subsequent quotes, potentially identifying trading opportunities. * Connection to Yang & Hospedales (2023) [ECML PKDD] Both papers use HyperNetworks, but their work focuses on accelerating calibration for models like rough Bergomi, still requiring iterative optimization (~5 seconds). Our method is calibration-free at inference time, constructing a surface in ~2 ms via a single forward pass. This fundamental difference in approach and speed is why their method wasn't selected as a direct baseline for our specific calibration-free, real-time, sparse-data setting. * Performance with more data points and computation power. HyperIV is specifically designed for the challenging scenario of sparse data and high-frequency updates. For general cases like fitting end-of-day surfaces with thousands of options, where time sensitivity is low (once per day), directly training a standard network or calibrating a model might suffice, possibly without needing a hypernetwork. However, the sparse-data setting is a realistic reflection of high-frequency trading conditions where only a few contracts have reliable quotes at any instant, making HyperIV's speed and data efficiency valuable. --- Rebuttal Comment 1.1: Comment: Thanks for the response to my questions. Please include the above discussions into the revised version of the paper. I will increase my score from 2 to 3.
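To make the single-forward-pass idea in this rebuttal concrete, here is a toy sketch of the hypernetwork wiring (made-up shapes and untrained weights; the real HyperIV architecture and training differ): a hypernetwork maps 9 observed (moneyness, maturity, iv) triples to the weights of a small network that then evaluates the surface at arbitrary (k, t), with no calibration loop at inference time.

```python
import numpy as np

rng = np.random.default_rng(0)

H = 16                              # hidden width of the generated target net
n_w = 2 * H + H + H + 1             # W1 (2xH), b1 (H), W2 (Hx1), b2 (1), flattened

def hypernet(obs, W, b):
    """One forward pass: 9 observed (k, t, iv) triples -> target-net weights."""
    return np.tanh(obs.reshape(-1) @ W + b)      # 27 inputs -> n_w weights

def target_net(theta, k, t):
    """Evaluate the generated small network on a batch of (k, t) points."""
    W1 = theta[:2 * H].reshape(2, H)
    b1 = theta[2 * H:3 * H]
    W2 = theta[3 * H:4 * H].reshape(H, 1)
    b2 = theta[4 * H:]
    x = np.stack([k, t], axis=-1)
    h = np.tanh(x @ W1 + b1)
    return (h @ W2 + b2).squeeze(-1)

# Untrained, illustrative parameters; in practice these would be learned offline
W_hyp = rng.normal(size=(27, n_w)) * 0.1
b_hyp = np.zeros(n_w)
obs = rng.normal(size=(9, 3))                    # 9 market observations
theta = hypernet(obs, W_hyp, b_hyp)              # single pass, no calibration loop
surface = target_net(theta, np.linspace(-0.2, 0.2, 5), np.full(5, 0.1))
```

The design point is that all iterative fitting cost is moved into offline training of the hypernetwork, so inference is one matrix pass per surface.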
Can Biologically Plausible Temporal Credit Assignment Rules Match BPTT for Neural Similarity? E-prop as an Example
Accept (poster)
Summary: The authors use the Procrustes method to compare how well recurrent neural networks trained with eligibility propagation and networks trained with backpropagation through time match data from neural recordings in monkeys. As secondary results, the authors add a theoretical point suggesting that eligibility propagation can find the same solutions as backpropagation through time, and they show that other features such as initialization can be more relevant than learning rules for matching neural dynamics. Claims And Evidence: There are two main problems with the claims: **Focus on one specific case while the question is general** I think that the question they formulate is too broad for a short article. The paper simply shows that e-prop converges to a similar solution to BPTT, and even that is only for certain parameters. This is not the claim of the title, and only narrowly answers their main motivating question (lines 70 to 73). Two issues: - The single theoretical argument is about e-prop resembling BPTT, not about any other rule. I would not claim to be answering the more general question from this. - It examines only e-prop, but the claim is much more general. If the authors' pipeline is very flexible, they should test some other rules to validate their method (even if they perform worse). I don't fully see why they did not include ModProp in the main text. **Learning rules and architectures** The authors argue that they evaluate bio-plausible learning rules. However, e-prop is a combination of architecture and learning rules, and so are other methods that they cite (equilibrium propagation, for example, requires a specific architecture). The learning rule has a role to play, but only in combination with a specific architecture. **Suggestion** The results seem correct and interesting, but the title and some of the keywords are misleading. 
I find it hard to conclude anything about bioplausible learning rules, other than that there is one model including one specific rule that seems to converge to a similar solution to BPTT. I would change the title to "e-prop converges to BPTT" or something along those lines. In the discussion the authors can make the point that this is an example showing that there are bioplausible learning rules that can do as well as BPTT. Methods And Evaluation Criteria: It's not clear, given the question, why they focus only on tasks with dynamics, rather than the very common set-up including vision, where most recent bioplausible models have been formulated. I mention this because the original source for using BPTT (Mante and Sussillo, 2014) has limitations that are noted by Valerio Mante himself (Pagan, M., Tang, V.D., Aoi, M.C. et al. Individual variability of neural computations underlying flexible decisions. Nature (2024)). Also, nothing is wrong with Procrustes analysis, but it seems to be one choice among many, and it is unclear if the results would be different with other methods. None of these points mean that the methods are wrong, just that the claims need to be clarified. Theoretical Claims: Theorem 1 is correct. However, as I understand it, the result seems trivial, and I would like to see what I am missing. If the point is to argue that e-prop converges to BPTT, it should suffice to state that both have the same fixed points (without error there won't be gradients). Also, it is clear that in a problem with two different solutions, it is perfectly possible that BPTT/e-prop converges to one or the other, so it is clear that they could converge to different ones. Experimental Designs Or Analyses: Seems correct. More examples of learning rules would be necessary. Supplementary Material: The theorem and ModProp both seem correct. 
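For reference, the Procrustes comparison mentioned here reduces to an SVD; a minimal sketch of the orthogonal-alignment distance between two activity matrices of the same shape (centering and normalization conventions vary across papers and are omitted here):

```python
import numpy as np

def procrustes_distance(A, B):
    """min over orthogonal Q of ||A - B Q||_F, via singular values of B^T A."""
    s = np.linalg.svd(B.T @ A, compute_uv=False)
    d2 = (A**2).sum() + (B**2).sum() - 2.0 * s.sum()
    return np.sqrt(max(d2, 0.0))

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))                    # e.g. time points x neurons
Q, _ = np.linalg.qr(rng.normal(size=(10, 10)))   # a random orthogonal matrix

# A rotated copy of A is at distance ~0; an unrelated matrix is not
d_same = procrustes_distance(A, A @ Q)
d_diff = procrustes_distance(A, rng.normal(size=(50, 10)))
```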
Relation To Broader Scientific Literature: Since the authors argue that learning rules should match recordings, I would point out that there is a very wide literature on neural recordings investigating learning rules. Any test of whether a learning rule makes sense should at the very least mention that such rules should be compatible with the experimental evidence aimed at investigating precisely which learning rules are found in biological neurons (not only theoretical properties such as locality). Just a few examples: STDP (Bi and Poo, 1998) or differential Hebbian learning can be used in supervised learning (Xie & Seung, Neurips 1999), and to approximate backprop in some settings (Aceituno et al. Front. Comp. Neur. 2023). Burst-induced plasticity (Remy & Spruston, PNAS 2007) is definitely observed in the brain and can also approximate BP (Payeur et al Nat. Neuro. 2021). Also, the authors should at least mention that there is a debate about whether the brain uses backpropagation at all, which seems to be the main benchmark used. For example, (Song et al., Nat. Neuro. 2024). Essential References Not Discussed: Pagan, M., Tang, V.D., Aoi, M.C. et al. Individual variability of neural computations underlying flexible decisions. Nature (2024) There are architectures similar to equilibrium propagation or deep feedback control that do not make the equilibrium assumption (assuming by that they mean static inputs and responses): Aditya Gilra, Wulfram Gerstner (2017) Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network. eLife Kaleb, Klara, et al. "Feedback control guides credit assignment in recurrent neural networks." The Thirty-eighth Annual Conference on Neural Information Processing Systems. Other Strengths And Weaknesses: I think it's a good point to show that the hyperparameters matter more than the learning rules. 
However, it seems to indicate that the question is ill-posed, as it seems to be the learning capacity that matters, rather than whether the rule is really biological or not. The authors could comment on that point. Other Comments Or Suggestions: N/A. It is well written with good graphics. Also, no need to be overly polite in the responses; I would simply like clarifications or corrections (also if I am wrong). Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **A more focused title/claim and why mainly one bio-plausible rule**. We thank the reviewer for this thoughtful and detailed feedback. We agree that our study does not aim to characterize all biologically plausible learning rules; rather, our goal is to show that at least one such rule—e-prop—can match BPTT in neural similarity when trained to similar performance, thereby **demonstrating existence rather than universality**; as the reviewer noted, universality would be beyond the scope of a short paper. To clarify our focus, we will: - Update the title to: “*Can biologically plausible temporal credit assignment rules match BPTT for neural similarity? E-prop as an example*” - Modify lines 70–73 to end with: “*— or can some such rules, under the right conditions, yield representations as brain-like as those learned by BPTT?*” - Revise the first contribution point to state explicitly that this is an existence result - Emphasize in the Discussion that e-prop serves as a concrete example, not a general case With this reframing, we believe that providing a single positive example is sufficient for the contribution we aim to make, though we do explore additional rules in appendices for curiosity. **Focusing on tasks with dynamics**. While vision tasks are important, our study focuses on temporal credit assignment—a critical challenge in biological learning. This focus, aligned with prior work on dynamic tasks (e.g., Bellec et al., 2020), allows us to evaluate how well models capture evolving neural activity over time rather than static averages, making it ideal for representational similarity analyses. Our updated title now reflects this focus, and we will note that extension to vision tasks is an interesting direction for future work. **Coupling of learning rule and architecture**. We agree that learning rules are often developed with specific architectures in mind, and we acknowledge this broader issue. 
In our study, we intentionally fix the task, architecture, and initialization to isolate the effect of the learning rule on neural similarity, following prior work (e.g., Liu et al., 2022). The architecture used is a standard RNN widely adopted in computational neuroscience (e.g., Yang & Wang, 2020). We will add a discussion point noting the importance of studying rule–architecture interactions more systematically in future work. **Additional similarity measures**. Please see our response under “Alternative Metrics” to Reviewer AQPQ, including new results using additional metrics in **Table 3**. **Theoretical claim**. We agree the result may appear intuitive, but BPTT and e-prop do not generally share the same fixed points (e.g., \(\hat{y}'(W) = 0\) yields a fixed point for BPTT but not generally for e-prop); even when fixed points overlap, the dynamics of convergence may differ due to distinct basins of attraction. Also, a more precise statement than saying they sometimes converge to different solutions is the sign condition on the initialization for e-prop, which determines whether it converges to \(W^*\). That said, we recognize the theoretical result is limited in scope and may distract from our main contributions. As noted in our response to Reviewer xVGX, we will revise the manuscript to de-emphasize the theoretical component and clarify that the theorem serves only as a simple illustrative example, not a general convergence result. Our aim is to highlight, in a concrete toy setting, e-prop’s sensitivity to initialization as a motivation for future theoretical work. **BPTT limitations**. We agree. While BPTT-trained RNNs are widely used in brain modeling, recent studies (e.g., Pagan et al., 2024) show that BPTT tends to favor a limited subset of solutions. Zahorodnii et al. (bioRxiv, 2025) further demonstrate that handcrafted solutions can yield more brain-like dynamics. We will also cite Song et al. 
(2024) and others who question the usage of BPTT by the brain (see also significance of the e-prop–BPTT comparison in response to Reviewer 23UQ). Together, these points highlight the limitations of BPTT as a benchmark and motivate future work on alternative rules and rule–architecture co-design; we will update the manuscript accordingly.

**Additional references**. In the updated manuscript, we will discuss and cite **all** references the reviewer mentioned.

**Is the question ill-posed?** This is an excellent point. In under-constrained systems with unknown architectural variables, multiple learning rules may converge to similar solutions. Constraining the architecture using experimental data—or incorporating data during learning—could help narrow the search space, making this a promising direction for future work. We will update the manuscript with this point. That said, our goal is not to pinpoint the brain’s actual learning rule, but to test whether any existing biologically plausible rule (e.g., e-prop) can match BPTT in neural similarity, the de facto benchmark for brain-like models.

---

Rebuttal Comment 1.1:

Comment: The rebuttal addresses my points and I will update my score.

---

Reply to Comment 1.1.1:

Comment: We thank this reviewer for their insightful and actionable feedback.
Summary: This paper examines the properties of recurrent neural networks trained on experimentally motivated tasks using both Backpropagation Through Time (BPTT) and biologically plausible learning rules, in particular e-prop, a truncated approximation of BPTT which uses only local information for weight updates. They demonstrate that e-prop (as well as mod-prop) can achieve competitive similarity with neural recordings when matched for task performance with BPTT. They also investigate the effects of learning rate and weight initialization scale on performance-matched similarity scores, finding a profound impact of initialization, and a small effect of learning rate. Comparing neural distances to a baseline constructed by splitting neural recordings into multiple populations, they show that RNNs can reach this baseline level of similarity for a simpler, but not a more complex, task. To explain the similarity between e-prop and BPTT, they show that e-prop will converge to the same solution as BPTT in a linear RNN with a single recurrent unit solving an effective root-finding task.

Claims And Evidence: Empirical claims are well supported.

Methods And Evaluation Criteria: The evaluation that is missing is the amount of data / training time required to achieve equal performance across learning rules.

Theoretical Claims: I found the main text to be very misleading about the extent of the theoretical claims proven in the SI. You can't say in the abstract that you "provide theoretical results demonstrating that e-prop can converge to the same solution as BPTT, depending on the initialization," give only an informal statement of the theorem in the main text which glosses over the fact that the dimensionality of the RNN considered in the theorem is only one-dimensional, and finally get to that limitation in the SI. There's no argument given for higher-dimensional systems, and this should be made clear in the main text.
Furthermore, the assumptions of Theorem 4.1 are so restrictive that they actually bypass any meaningful consideration of the BPTT algorithm. The same exact theorem could be applied by replacing BPTT with "algorithm Z" and assuming that "algorithm Z" converges to some fixed point $w^*$ which solves the task; then so will e-prop if initialized within a small neighborhood of $w^*$. This is more a statement about the task considered than about the e-prop algorithm, which only converges in this case when $\hat{y}'(W(0))$ and $x_{T-1}$ have the same sign, which essentially puts in as a hypothesis that the e-prop updates will move in the right direction. I think these limitations in the theoretical claims need to be mentioned more transparently in the text, and I would urge the authors to extend their theorem to the multivariate linear case, where similar tasks have been explored theoretically (see e.g. https://arxiv.org/abs/2006.11036).

Experimental Designs Or Analyses: I didn't check beyond reading what's in the paper.

Supplementary Material: Reading the proof of Theorem 4.1, I don't see any mistakes in the math.

Relation To Broader Scientific Literature: This paper relates to a large body of literature comparing the representations learned by neural networks to neural activity measured in behaving animals, and also to a large body of work seeking and analyzing biologically plausible learning rules that can achieve competitive performance compared to non-plausible algorithms, namely backpropagation.

Essential References Not Discussed: None that are obvious to me.

Other Strengths And Weaknesses: Strengths: The introduction is well-written with a good set of references to related work. The main point demonstrated here -- that bio-plausible learning rules can achieve competitive neural data similarity to BPTT -- is a useful fact that will interest many people in the field.

Weaknesses: The captions of the figures are a bit short, and including more detail could enhance readability.
Theoretical contribution is not consistent with the message presented in the abstract or main text. Some details are missing or not elaborated clearly in the main text (see questions).

Other Comments Or Suggestions: I think there's an indexing error in Equation 6 in the SI. As it currently reads, you get $\frac{\partial h_{l, t}}{\partial W_{h, ij}} = \frac{\partial h_{l, t}}{\partial W_{h, ij}} + \cdots$, implying the rest of the expression is zero.

Questions For Authors: 1) What parameter is varied to change task performance along the curves in Figures 1B, 2AB, and 3A? Is sample size or training time changing? How does this compare for the different training algorithms?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
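For orientation in the theorem discussion above, the BPTT-versus-e-prop contrast can be written out schematically in the scalar setting. This follows the standard e-prop factorization of Bellec et al. (2020) as we read it; it is a generic illustration, not the paper's exact construction:

```latex
% 1D linear RNN: h_t = W h_{t-1} + x_t, scalar recurrent weight W, loss E.
% BPTT pairs the *backpropagated* total credit dE/dh_t (which accumulates
% all future loss terms through the recurrence) with the immediate
% sensitivity h_{t-1}:
\frac{dE}{dW} = \sum_{t=1}^{T} \frac{dE}{dh_t}\, h_{t-1}.
% e-prop instead pairs a forward-computed eligibility trace
%   e_t = \frac{\partial h_t}{\partial W} = h_{t-1} + W\, e_{t-1}
% with a *local, instantaneous* learning signal L_t that omits the
% indirect future paths BPTT would include:
\Delta W \propto -\sum_{t=1}^{T} L_t\, e_t.
```

The reviewer's "sign condition" point corresponds to whether the first e-prop updates move $W$ toward or away from the fixed point $w^*$.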
Rebuttal 1:

Rebuttal: **Theoretical claims**. We appreciate the reviewer’s constructive feedback and acknowledge that the main text did not clearly convey the limited scope of our theoretical result. We agree that the theorem is restrictive in two key ways: (1) it applies only to a 1D setting, and (2) W* can be the solution of any algorithm Z, not specifically BPTT. We recognize that our presentation may have unintentionally suggested a broader theoretical contribution than intended, potentially distracting from our main message. **In response, we will make the following updates to our manuscript to (A) clarify the theorem’s assumptions and scope in the main text, and (B) re-center the reader’s attention on our core empirical focus and existence-based framing**:

- Remove all phrases referring to the theoretical results in abstract and Contributions, including “*to support this further, we provide theoretical results …*”, to avoid implying that we seek to establish a general theoretical framework for e-prop convergence, which we do not aim to do.
- Clarify the restrictive assumptions in the main text’s informal theorem statement by replacing “*linear RNN*” with “*1D linear RNN*”, and “*found via BPTT*” with “*found via an arbitrary algorithm (e.g., BPTT)*”, to acknowledge that our result does not specifically hinge on BPTT alone.
- Update Appendix B heading to “*Convergence and divergence of e-prop in a toy setting*”
- Explain the theorem’s purpose in the surrounding text, clarifying that it serves as an existence demonstration in a toy setting to motivate future theoretical work: “*We provide a 1D linear RNN example showing that e-prop can match BPTT under certain initializations, but diverge under others.
While this illustrates key ideas—such as e-prop’s sensitivity to initialization—extending the analysis to higher-dimensional RNNs is a substantial effort (e.g., Schuessler et al., NeurIPS’20) and beyond the scope of this paper.*”

To offer more information about the last point, our rationale for focusing on a toy (1D) setup is twofold. First, to our knowledge, there is no existing theoretical framework for e-prop convergence that we could readily build upon, and developing such a framework (e.g., in the spirit of Schuessler et al.) would be a major effort in its own right and beyond the scope of this work. Second, the toy setting lets us illustrate the key **existence story** we wish to highlight: there are indeed cases where e-prop converges to the same solution as BPTT, and there are cases where it fails. This demonstration is **not** meant to be comprehensive, but rather to motivate future investigations (e.g., broader architectures, higher-dimensional RNNs, etc.).

By making these changes, we hope our updated manuscript can better convey that our theorem is meant primarily to provide an example—“e-prop can match BPTT under certain initial conditions, but not always”—rather than to establish a general theoretical foundation for e-prop, and to motivate future theoretical work examining the toy model’s insights in more realistic settings. We hope these revisions clarify that the theoretical result plays a limited, illustrative role and is not essential to our main aims.

**Amount of data/training required to achieve equal performance**. We agree this is an important point. While we did not include these comparisons originally, they have been addressed in prior work evaluating the “(1) good neuroscience task performance” criterion (see Fig. 3, Bellec et al., 2020), which shows that e-prop typically requires more training iterations than BPTT to reach the same accuracy.
Our findings are consistent with this, and we will include this information and citation in the revised manuscript.

**Additional questions**. Indeed, task performance is varied by changing the number of training iterations. At each iteration, we compute both task accuracy and neural similarity, and plot one against the other. As training progresses, models move from low accuracy/high distance (upper left) to high accuracy/low distance (bottom right) in Figures 1-3. The same procedure is applied to all learning rules. We will add this explanation to the caption for clarity.

**Longer captions**. In addition to the above explanation, we will ensure all figure captions include complete axis label explanations and refer to the relevant appendix sections. We acknowledge that short captions are more common in the appendix, and will revise those to match the clarity of the main text, including color legend descriptions (e.g., Fig. 8B).

**Other comments**. We thank the reviewer for catching the indexing error and will correct it in the updated version.

---

Rebuttal Comment 1.1:

Comment: Thank you for addressing my concerns. I think these changes will improve the paper substantially. I am raising my score to a 3.

---

Reply to Comment 1.1.1:

Comment: Thank you so much again for your valuable and constructive feedback.
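The matched-performance readout discussed in this thread (comparing neural distances where training curves reach the same accuracy) can be sketched as follows. `distance_at_accuracy` is a hypothetical helper name and the logged curves are toy values, not the paper's data:

```python
import numpy as np

def distance_at_accuracy(accuracies, distances, target=0.8):
    """Interpolate the neural-similarity distance at a matched accuracy level.

    accuracies, distances: per-training-iteration logs of task accuracy and
    model-data distance (same length). np.interp requires increasing x
    values, so we sort the log by accuracy first.
    """
    acc = np.asarray(accuracies, dtype=float)
    dist = np.asarray(distances, dtype=float)
    order = np.argsort(acc)
    return float(np.interp(target, acc[order], dist[order]))

# Toy logs: as training progresses, accuracy rises and distance falls,
# tracing the upper-left -> lower-right curves described in the rebuttal.
accs = [0.1, 0.3, 0.5, 0.7, 0.9]
dists = [0.60, 0.55, 0.50, 0.45, 0.40]
print(distance_at_accuracy(accs, dists))  # -> 0.425 (halfway between 0.45 and 0.40)
```

Applying the same readout to every learning rule at the same target accuracy is one way to realize the "matched performance" comparison without assuming equal training budgets.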
Summary: This paper investigates whether biologically plausible learning rules (e-prop) in RNNs can achieve neural activity similarity to biological data comparable to models trained with BPTT. Using primate datasets and Procrustes distance as a similarity metric, the authors demonstrate that e-prop-trained models match BPTT in neural data similarity at equivalent task accuracies. Additionally, they show that architecture and initialization significantly influence similarity, often surpassing differences caused by learning rules.

Claims And Evidence: The evidence is robust for the claims overall, except that reliance on Procrustes distance alone may miss nuances captured by other metrics. The study lacks experimental comparisons to demonstrate the superiority of Procrustes distance over alternative similarity metrics.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Theorem 4.1 is proven in Appendix B.

Experimental Designs Or Analyses: Baseline comparisons (Fig. 5) and ablation studies (Fig. 7) strengthen validity. But tasks are limited to two datasets; generalization to other species or tasks is unexplored.

Supplementary Material: I checked Theorem 4.1 in Appendix B.

Relation To Broader Scientific Literature: The work extends prior studies on bio-plausible learning (e.g., Lillicrap et al., 2020; Richards et al., 2019) by systematically evaluating neural similarity—a less-explored aspect. It aligns with Yamins et al. (2014) on performance-similarity correlations but adds insights into initialization’s role.

Essential References Not Discussed: The article omits some of the latest biologically plausible algorithms, such as BrainScale: Enabling scalable online learning in spiking neural networks.

Other Strengths And Weaknesses: **Strengths**: The article is fluent and easy to understand overall and provides insights into initialization.

**Weaknesses**: 1.
Using existing similarity evaluation metrics to assess the algorithm's performance on neural datasets, the article lacks significant contributions and novelty. 2. Limited discussion of alternative similarity metrics (e.g., CKA, RSA). 3. The analysis is restricted to comparing only one biologically plausible algorithm (e-prop) with BPTT, leading to limited validity and generalizability of the evaluation.

Other Comments Or Suggestions: “We used retanh to mimic type-1 neuronal firing” — should it be *Tanh*?

Questions For Authors:

1. Does the dominance of initialization over learning rules hold across tasks with varying temporal dependencies (e.g., memory-intensive tasks)?
2. How would results change if combining Procrustes with dynamical metrics (e.g., DSA) into a composite score?
3. Could the conclusions extend to SNNs or non-primate datasets?

Ethical Review Concerns: None.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: **Alternative metrics**. We thank this reviewer for their constructive comments and now add additional metrics to the manuscript:

**Table 3:** Additional measures.

| Rule | CCA | CKA | Duong et al., 2023 |
|--------|----------------|----------------|---------------------|
| BPTT | 0.283 ± 0.011 | 0.160 ± 0.018 | 0.349 ± 0.038 |
| e-prop | 0.281 ± 0.012 | 0.155 ± 0.009 | 0.394 ± 0.072 |

These additional results again support the insignificant differences across the rules (p=0.834, 0.683, 0.373, for the respective measures). While we provided initial justification for using Procrustes in the Discussion, we will expand on it in the revised manuscript. In addition to its interpretability and geometric grounding, a recent study (Cloos et al., ICLR 2025) found that optimizing Procrustes may better preserve task-relevant features than other metrics. This suggests it can capture more meaningful neural structure. They also observed that Procrustes is stricter than CKA: high Procrustes implies high CKA, but not vice versa (see also Harvey et al., UniReps 2024). We will add this discussion.

**Novelty & contribution**. We thank the reviewer and respectfully clarify that while we use existing tools, our contribution lies in addressing a fundamental, underexplored question: How do biologically plausible gradient approximations affect neural activity similarity? Using e-prop as an example, we show that despite gradient truncation, it is possible to match the similarity of BPTT (before the truncation). This is not obvious a priori—e-prop < BPTT or > BPTT were both plausible. The finding of matched similarity is thus nontrivial, and we further analyze why this occurs (Figs. 3 & Appx Fig. 8), as well as how confounds like architecture/initialization dominate (Fig. 2).
These insights also help refine the precise future questions that can be systematically explored across learning rules, architectures, datasets, and metrics — all of which our flexible pipeline is designed to support. We will revise the Discussion to reflect these points and believe our insights meaningfully advance this emerging area, warranting consideration for ICML.

**Why focus mainly on one bio-plausible rule**. We thank the reviewer for raising this point. Our goal is not to claim universality — that all biologically plausible rules match BPTT in neural similarity — but to demonstrate existence, by showing that e-prop can do so under certain conditions. To support this existence claim, showing a single such example should suffice, which we believe is both non-trivial and novel, especially given how underexplored representational similarity is in this context. That said, we sympathize with the reviewer and acknowledge that our current title may overstate the scope. To avoid this confusion, we plan to revise the title to better reflect our focus on a single illustrative case — e.g., “*Can biologically plausible temporal credit assignment rules match BPTT for neural similarity? E-prop as an example*”. Additional planned updates are outlined in our response to Reviewer buDs.

**Questions**. We thank the reviewer for those meaningful questions and will add them to our manuscript:

1. This is a nuanced question that touches on how task timescale and complexity modulate the roles of both initialization and learning rules. Due to rebuttal word limits, we focus here on timescale. Prior work shows that performance gaps between BPTT and e-prop widen with longer task timescales (Liu et al., 2021). If neural similarity correlates with task performance—as supported by Yamins et al. (2014) and our results—we would expect a growing similarity gap as well (although we'd be violating the assumption of matched accuracy for comparison).
Testing this directly would require datasets with varying temporal dependencies, which we view as a promising direction for future work.

2. Intuitively, if both metrics show similar results for e-prop and BPTT, a composite score (based on a weighted or normalized average) would reflect the same conclusion. However, we prefer to report each metric separately to avoid arbitrary weighting and allow for clearer interpretation. We’ll add composite scores to the manuscript for completeness.

3. This is a profound question that touches on translational relevance: to what extent do findings from one species (e.g., primates) generalize to others (e.g., rodents or humans)? The widespread use of model organisms in neuroscience is based on shared circuit motifs and learning mechanisms. While we expect our conclusions may extend to other species or even SNNs with similar dynamics, systematic cross-species and model-type studies will be essential. We view this as an exciting direction for future work.

**Other comments**. We will cite and discuss BrainScale and others in Related Works. We didn’t use Tanh for type-1 firing because the firing rate must be nonnegative; we will clarify this point in the manuscript.
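To make the metric comparison in this thread concrete, here is a minimal numpy sketch of two of the measures discussed: linear CKA and the orthogonal-Procrustes shape distance (in the sense of Williams et al., 2021). These are generic textbook implementations, not the authors' evaluation pipeline:

```python
import numpy as np

def _center(X):
    # Subtract the per-feature mean (rows = samples, columns = features).
    return X - X.mean(axis=0, keepdims=True)

def linear_cka(X, Y):
    """Linear CKA between two (samples x features) representation matrices."""
    X, Y = _center(X), _center(Y)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

def procrustes_distance(X, Y):
    """Orthogonal-Procrustes shape distance: min over rotations R of
    ||X/||X||_F - (Y/||Y||_F) R||_F, which equals sqrt(2 - 2 * nuclear norm
    of the normalized cross-covariance)."""
    X, Y = _center(X), _center(Y)
    X = X / np.linalg.norm(X, "fro")
    Y = Y / np.linalg.norm(Y, "fro")
    nuc = np.linalg.svd(X.T @ Y, compute_uv=False).sum()
    return np.sqrt(max(0.0, 2.0 - 2.0 * nuc))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
R, _ = np.linalg.qr(rng.standard_normal((20, 20)))  # random orthogonal matrix
Y = X @ R
# Both measures are invariant to orthogonal transforms of the feature axes:
print(linear_cka(X, Y), procrustes_distance(X, Y))  # ~1.0 and ~0.0
```

The "Procrustes is stricter than CKA" observation cited above can be probed directly with these two functions on the same pair of matrices.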
Summary: This paper compares the representations learned by BackProp Through Time (BPTT), truncated BPTT (tBPTT), and e-prop–a “biologically plausible” learning rule designed as a model of neural plasticity–to those learned in the brains of monkeys performing several tasks. Specifically, the distance between neural representations and learned model representations are calculated via Procrustes analysis. The authors find that the representational distance between e-prop and neural data is not so much larger than that of BPTT, or tBPTT, and neural data. To further bolster this favourable comparison of e-prop and BPTT, a theorem is proven showing that, in linear models, the solution found by e-prop can match that of BPTT if certain initial conditions are met. Lastly, the authors demonstrate that initial conditions can actually have a larger effect on representational similarity than the learning rule itself.

## Update after rebuttal

Thank you to the authors for the many clarifications, the additional experiments and the statistical significance analysis! I still feel that the theoretical results are exceedingly limited but I believe that, with the rebuttal updates, the experiments alone make the paper worthy of consideration. For this reason I have updated my score to a 3.

Claims And Evidence: The claims seem reasonably well-supported, but it is at times difficult to parse noise from signal. For this reason the reviewer suggests statistical tests and tabular display of data (see Weaknesses).

Methods And Evaluation Criteria: The datasets used seem reasonable.

Theoretical Claims: The proof was skimmed but not checked in detail.

Experimental Designs Or Analyses: Experimental design and analysis seems sound.

Supplementary Material: Portions of Appendix B and C.
Relation To Broader Scientific Literature: As discussed in the paper, past work in computational neuroscience has focused on proposing “biologically plausible” alternatives to BPTT, as candidate models for learning in the brain–many of which were discussed in the paper (e.g. e-prop, RFLO). Most of this literature has evaluated these models based on (1) whether they are local in space and time (not requiring information about spatially distant synapses, or temporally distant timesteps, to update the given synapse at the given time), and (2) whether they can learn reasonably difficult tasks. As of yet, there has not been a significant amount of work comparing the representations learned by these bio-plausible algorithms. This is the gap that the paper intends to address.

Essential References Not Discussed: Recent work has similarly begun comparing representations learned by different types of algorithms–for example Brain-like neural dynamics for behavioral control develop through reinforcement learning by Codol et al. (2024 – bioRxiv)–which could be a useful reference. Also, the term “biologically plausible” is somewhat underdetermined, which could be worth discussing. One potential reference for this is Formalizing locality for normative synaptic plasticity models by Bredenberg et al. (2023 – NeurIPS). A potential issue related to this paper is the recent finding in certain deep learning architectures that, under sufficient conditions, many models exhibit similar internal representations. An example reference here would be The Platonic Representation Hypothesis, Huh et al., 2024.

Other Strengths And Weaknesses:

## Strengths

The paper seems to be addressing a meaningful problem that has been understudied in the computational neuroscience community, and has selected datasets that are well-known and well-studied with which to address the problem. The paper seems to be well written, and the flow of ideas is logical and well-presented.
By and large, the authors have done a solid job situating their work, with a respectable background section and citations.

## Weaknesses

The primary weaknesses that the reviewer sees with the paper are issues of (1) clarity when it comes to displaying and analyzing the data, (2) significance of differences in data, and (3) depth of content provided in the paper. To elaborate:

1. The data that is reported in the main body of the paper is relatively noisy and the effects are not huge. It is at times also difficult to compare across plots–for example, if one wishes to compare the effect of initialization using BPTT to e-prop (Fig. 2A to Fig. 2B). The reviewer would thus suggest (i) statistical testing, and (ii) reports of numerical results, e.g. in a table, to better support the claims.

2. Critically, it is also difficult to get an idea of what kinds of differences between representational similarity are large and which are small (in part due to the issues mentioned in the above bullet point). Figure 5 takes a good first step towards this by comparing to a noise floor and a kind of ‘(relevant) noise ceiling’. However, it exhibits the issues of displaying data and statistical testing mentioned above, and doesn’t investigate the 2nd main dataset used in the study (Mante). Including a kind of ‘noise ceiling’ used in Figure 5–a comparison with a matched randomly initialized network (ideally with the best performing initialization scaling)–in the main plots (and proposed table) would be helpful. One might also be able to normalize somehow by the difference between this ‘ceiling’ and the ‘noise floor’ provided by a neural data-to-neural data comparison or the best performing model for the given plot.

3. This point is somewhat more subjective, but the reviewer wonders whether the paper provides enough content to warrant publication at a venue like ICML.
The reviewer thinks this perception could be a function of the fact that only the Procrustes results are included in the main body of the paper. If the figures were made more concise so that some alternative tests (like the DSA results and UMAP examples) could be moved into the main text this should help. Also, comparing with another metric (e.g. CCA, or trying to come up with a paradigm where representation similarity analysis could be used) could also be interesting. If the authors satisfactorily address these weaknesses and the below points the reviewer would likely raise their score.

Other Comments Or Suggestions:

- Line 13 LHS: first sentence seems a little unnecessarily wordy
- Line 17 LHS: “approximate” => “approximating”
- Line 23 RHS: suggestion: “approach” used twice very close together; perhaps use a different word
- Fig 1B: do the authors know why there is such higher noise on the Mante dataset?
- Fig 1: could be useful to state truncation length here.
- Fig 2: it would be useful to have some numerical quantification of how the variance due to initialization exceeds that of the learning rule
- Fig 3: could be useful to include color in this figure; the greyscale makes it very difficult to tell differences between lines and points
- Fig 4: could be good to explain what the “condition component 1” represents in this context
- Fig 5: why use Hatsopoulos dataset and not Mante?
- Fig 5: if the reviewer understands correctly, the noise floor was calculated by comparing the similarity of different sets of recorded neurons. Is this the case? If so, why choose to compute similarity between different sets of neurons rather than between different sample points (matched for subject and test condition)?
- Line 420 RHS: final sentence seems rather verbose and doesn’t contribute much insight.
- The reviewer noted that the theorem is for 1D linear RNNs, and it’s not mentioned that only the 1D case is studied in the main body of the paper.
Perhaps this should be mentioned?

Questions For Authors:

1. The reviewer wonders if the authors could motivate a little more the significance of seeing similar representations between e-prop and BPTT. Given that e-prop tries to approximate BPTT this might not seem that surprising. Moreover, from a neuroscience perspective–given the biological implausibility of BPTT–is it such a good benchmark to compare with?

2. Why use the Hatsopoulos dataset in place of Mante in Fig. 5?

3. The reviewer noted that the objective in the theorem’s proof is to optimize the 1D linear RNN so that its hidden state is zero after $T$ time steps. Of course, given any parameterization with $|W| < 1$ the hidden state will converge to zero for large $T$. Do the authors not think that, for this reason, the studied case might be too simple?

4. Would it be possible to plot appendix Fig. 7 and Fig. 1 on the same footing (i.e. noise or no noise in both cases)?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: **Tabular results and statistical test for Fig 2**. We appreciate this reviewer's concrete feedback and tabulate the results:

**Table 1**: Results from Fig 2. *Noise ceiling corresponds to untrained models.

| rule | gain=0.0 | gain=0.5 | gain=1.0 | gain=1.5 |
|--------------|---------------|---------------|---------------|---------------|
| *BPTT* | 0.461 ± 0.006 | 0.423 ± 0.005 | 0.398 ± 0.007 | 0.428 ± 0.010 |
| *e-prop* | 0.461 ± 0.007 | 0.437 ± 0.009 | 0.407 ± 0.008 | 0.432 ± 0.008 |
| Noise ceil.* | 0.565 ± 0.002 | 0.529 ± 0.005 | 0.467 ± 0.005 | 0.508 ± 0.017 |

Non-overlapping error bars across rows suggest differences across gains (fixed rule), while overlapping bars within columns suggest similarity across rules (fixed gain). Values are mean ± standard deviation across seeds. Noise ceiling refers to untrained models. Gain=2.0 is excluded due to poor e-prop performance. Distances were measured at normalized accuracy ≈ 0.8 to ensure matched performance.

To further support these trends, we find BPTT at gain=1.0 vs 0.0 differs significantly (p=2.07e-5), as does e-prop (p=1.3e-4). In contrast, BPTT vs e-prop yields no significant difference at fixed gain=1.0 (p=0.173) or gain=0.0 (p=0.985). We report these two gains for brevity (due to rebuttal char limits), but similar trends hold at other values and are available upon request.

**Fig 5 updates**. As suggested, we add tabulated bar values and statistical tests below; we apologize for initially omitting Mante13 (due to an earlier code we wrote for Hats07), but now include it and observe similar trends as Suss15 — trained BPTT/e-prop outperform the noise ceiling but still show a gap from the noise floor:

**Table 2:** Neural similarity scores (Fig 5).
| Dataset | Noise floor | BPTT (trained) | e-prop (trained) | BPTT (init) | e-prop (init) |
|----------|---------------|----------------|------------------|---------------|----------------|
| Hats07 | 0.401 ± 0.024 | 0.395 ± 0.020 | 0.442 ± 0.030 | 0.538 ± 0.041 | 0.540 ± 0.040 |
| Suss15 | 0.373 ± 0.009 | 0.406 ± 0.012 | 0.411 ± 0.012 | 0.498 ± 0.015 | 0.494 ± 0.014 |
| Mante13 | 0.704 ± 0.004 | 0.721 ± 0.008 | 0.727 ± 0.010 | 0.800 ± 0.008 | 0.801 ± 0.008 |

As noted, distances here use fewer neurons (to compute the noise floor), so values may differ slightly from earlier plots. Consistent with the bar plots, the gap from the noise floor is insignificant for Hats07 (p=0.311, BPTT-trained), but significant for Mante13 (p=1.29e-5) and Suss15 (p=5.02e-6). We will append these results to the updated manuscript. We will also add a noise ceiling to the main plots and a visualization using normalized values to the manuscript.

**Additional content in the main text**. As suggested, we will add DSA and UMAP results to the main text. We’ve clarified the choice of Procrustes and added new results using CCA and CKA (**see Table 3 for Reviewer AQPQ**).

**Other suggestions and questions**. We will incorporate all these helpful suggestions to improve clarity. Fig 2 & Fig 5 comments are addressed above. For Fig 5, the reviewer is correct: the noise floor was computed by comparing different neuron subsets matched across time and condition. Our baseline compares time-varying firing rates across conditions (not trial-to-trial variability), aligning with common practice in systems neuroscience. The variance in Fig. 1B (Mante) is largely due to the absence of learning rate decay; adding decay stabilizes training without altering the main trend — we will include this plot. We will also revise the main text to clarify the toy theorem’s assumptions, limited scope, and illustrative purpose, motivating comprehensive future work (see also Reviewer xVGX).
We will cite and discuss **all** references the reviewer mentioned. Finally, we will update Fig 1 & Fig 7 to be on the same footing.

**The significance of the e-prop–BPTT comparison**. This is an important question. Beyond the motivation in lines 70–73, BPTT remains a widely used benchmark in brain modeling, especially in seminal works (Yamins, DiCarlo, Mante, Sussillo, …). Comparing it to a biologically plausible alternative like e-prop offers broadly relevant insight into neural similarity. While BPTT has known limitations (**see response to Reviewer buDs**), our pipeline aims to evaluate — and eventually surpass — it as a benchmark in future studies. More broadly, our study addresses a fundamental and underexplored question: how do bio-plausible gradient approximations affect neural similarity? Using e-prop as a case study (demonstrating existence, not universality), we show that despite gradient truncation, it can match BPTT (before the truncation). This was not obvious a priori; both “e-prop < BPTT” and “e-prop > BPTT” were plausible outcomes (**see further discussions on novelty & contribution with AQPQ**). Our framework enables detecting future models that outperform BPTT. We will add these discussion points to the manuscript.

---

Rebuttal Comment 1.1:

Comment: Thank you to the authors for the detailed response! I'm excited to see the addition of tables and statistical tests, and to hear about the adjustments that will be made to the text. I have a couple extra questions:

1. Would it be possible to add p-values for each relevant comparison in the table, and mention the method used to correct for multiple comparisons?
2. I'm still a little confused as to why the noise floor was computed by comparing different subsets of neurons. If the dataset contains multiple samples for each task condition, would it be possible to show an extra control comparing across samples instead?

Thanks again!
--- Reply to Comment 1.1.1: Comment: We thank you for your meticulous review and insightful comments. Apologies for the delay, as we took a few days to make sure we addressed your points carefully. **Question 1**. As suggested, we will add p-values to the table to make them more visible. **Modified Table 1** with p-values in the table (similar trend as before):

| rule | gain=0.0 | gain=0.5 | gain=1.0 | gain=1.5 | p-val (gain=1.0 vs 0.0) | p-val (gain=1.0 vs 0.5) | p-val (gain=1.0 vs 1.5) |
|------|----------|----------|----------|----------|-------------------------|-------------------------|-------------------------|
| **BPTT** | 0.461 ± 0.006 | 0.423 ± 0.005 | 0.398 ± 0.007 | 0.428 ± 0.010 | 2.07e-5 | 1.94e-3 | 5.61e-3 |
| **e-prop** | 0.461 ± 0.007 | 0.437 ± 0.009 | 0.407 ± 0.008 | 0.432 ± 0.008 | 1.30e-4 | 5.71e-3 | 7.61e-3 |
| **p-val (e-prop vs BPTT)** | 0.985 | 0.078 | 0.173 | 0.587 | - | - | - |
| Noise ceil. | 0.565 ± 0.002 | 0.529 ± 0.005 | 0.467 ± 0.005 | 0.508 ± 0.017 | - | - | - |

We report p-values for all gain comparisons (fixing the learning rule) relative to gain=1.0, as it showed optimal performance and neural similarity in Fig2. Initially, we used uncorrected p-values from independent two-sample t-tests; since only a small number of planned comparisons were performed and the p-values were well below standard thresholds, we did not apply correction at first. For completeness, we now include Bonferroni-corrected p-vals: for e-prop, gain=1.0 vs {0.0, 0.5, 1.5} yields p = {7.80e-4, 3.43e-2, 4.57e-2}; for BPTT, the values are {1.24e-4, 1.16e-2, 3.37e-2} — all remaining significant post-correction, reinforcing the trend that gain influences neural similarity. In contrast, rule comparisons (fixing gain) yielded uncorrected p > 0.05, so no correction was needed. We will clarify this in the revised manuscript. We will also add the p-values, given in our initial rebuttal, directly into **Table 2** (a new column with p-vals for trained BPTT vs. 
noise floor) and **Table 3** (a new row with p-vals for BPTT vs. e-prop across the additional metrics). We apologize that we cannot copy Tables 2&3 here again due to the character limit. **Question 2.** This is an insightful suggestion, and we apologize for not elaborating more initially due to the 5000-character limit. The main reason we did not compute the noise floor via trial-based subsampling is due to the substantial trial-to-trial variability in single-trial neuronal firing estimates (Cunningham et al., 2009; Kay et al., 2024), which can dominate the distance and obscure meaningful similarities, even between similar neural responses. As shown in Table 4 (Hats07 as an example), data-data distance is much larger when computed from single trials and decreases with trial averaging. This reflects how averaging helps recover the underlying condition-specific firing rates, while single-trial estimates remain noisy. Although the dataset includes multiple trials per condition, reliable single-trial estimation—e.g., via LFADS (Pandarinath et al., 2018)—remains an open challenge. Due to this, and following common practice in systems neuroscience, we focused on trial-averaged neural activity to assess model- and data-data similarity; we will clarify this in the revised manuscript. **Table 4**: illustrating greater data-data distance when computed from single trials; note that these numbers are not comparable to those in Fig 5 due to different ways of subsampling.

| Number of trials averaged | 1 | 5 | 10 | 15 | 20 |
|---------------------------|---|---|----|----|----|
| Distance | 0.513 ± 0.008 | 0.358 ± 0.005 | 0.280 ± 0.004 | 0.238 ± 0.003 | 0.212 ± 0.003 |

Thanks again for your thoughtful questions. We’d also like to clarify why the data-data baseline shown in Figure 5 is informative in this context and will update the appendix to include this explanation. 
As the reviewer noted, it is computed by comparing similarity between different subsets of recorded neurons, matched across timepoints and conditions. While one could compute alternative baselines (e.g., comparing trial-to-trial fluctuations across the population using simultaneously recorded data), this would require reliable single-trial firing rate estimates, which remain challenging (as explained above). Since our study, like most RNN modeling work in systems neuroscience, focuses on trial-averaged neural activity, we opted for a baseline that asks: if more neurons had been recorded from the same region, would we be able to distinguish them from our model units? That is the question our baseline is designed to assess. Overall, we agree this is an important question and will add a discussion in the updated manuscript reflecting all points above. We thank the reviewer again for raising this point, which touches on a deeper methodological question for future research.
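The Bonferroni correction applied in the reply above amounts to a one-line multiplication; the factor of 6 below (three gain contrasts for each of the two learning rules) is consistent with the corrected values reported. This is a sketch of the arithmetic, not the authors' analysis code:

```python
# Sketch of the Bonferroni correction from the reply above -- not the
# authors' code. Six planned comparisons: gain=1.0 vs {0.0, 0.5, 1.5},
# for each of the two learning rules (BPTT and e-prop).
def bonferroni(pvals, n_comparisons):
    """Scale each p-value by the number of planned comparisons, capped at 1."""
    return [min(p * n_comparisons, 1.0) for p in pvals]

eprop_uncorrected = [1.30e-4, 5.71e-3, 7.61e-3]  # from Modified Table 1
bptt_uncorrected = [2.07e-5, 1.94e-3, 5.61e-3]

eprop_corrected = bonferroni(eprop_uncorrected, 6)
bptt_corrected = bonferroni(bptt_uncorrected, 6)

# All corrected p-values remain below the 0.05 threshold, as stated.
assert all(p < 0.05 for p in eprop_corrected + bptt_corrected)
assert abs(eprop_corrected[0] - 7.80e-4) < 1e-9
```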
Dimension-Independent Rates for Structured Neural Density Estimation
Accept (poster)
Summary: The authors study density estimation for the setting of Markov Random Field (MRF) data where we only have local correlations (cliques). Under these assumptions they study the convergence rate of neural networks, with bounds that depend only on the range of the correlations. Morally, the dimension in convergence bounds can be substituted with the effective dimension of the correlation range. Claims And Evidence: The claims are supported by evidence. Methods And Evaluation Criteria: Yes, given that this is a theory paper. Theoretical Claims: Checked them high level (i.e. skimmed the proof), did not find any major issue. Experimental Designs Or Analyses: Experiments seem ok. Supplementary Material: Skimmed most of the appendix Relation To Broader Scientific Literature: These results are relevant to the broader scientific literature. Essential References Not Discussed: None that I know of. Other Strengths And Weaknesses: Strengths: - Well motivated - Clearly written - From the theory side this is interesting research that can be built on. Weaknesses: - The main result (Theorem 4.2) is quite abstract in the sense that it only makes a qualitative statement about the existence of neural networks achieving the bound. This, in combination with the lack of sufficient practical experiments, makes it difficult to judge how impactful this result is. - In terms of technical novelty it is not too much. The proofs use mostly standard techniques, with the main results leaning on ideas from Schmidt-Hieber '17. (This is a critique solely of the *technical* novelty; considering known techniques in a different setting and carrying out all the computations and proofs does provide value.) - What seems missing to me a bit is how one would go about estimating *r* for a given dataset. Obviously for any real data there is nonzero but potentially negligible correlation based on distance. However, it would be good if there was a rigorous way to determine what negligible means and how it is determined. 
Other Comments Or Suggestions: - In the abstract, what is $d$? It does not appear in the formulas. Questions For Authors: - How would one determine $r$ for a real dataset (efficiently)? What if the correlation is not zero far out but rather very small (think of $\epsilon \ll 1$)? I am not an expert in that literature, but in general most graph problems can be computationally quite expensive. - One thing I am missing a bit, motivation-wise, is what the practical implications of this theorem would be. To me it seems that the theorem mostly states that local (i.e. simpler) structure leads to a better bound, which in itself is not surprising. What takeaway is there beyond that? In particular, do you believe that the rate would be a reasonable baseline for practical settings? Code Of Conduct: Affirmed. Overall Recommendation: 3
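One illustrative (and entirely hypothetical, not from the paper) way to operationalize the reviewer's "negligible correlation" question: pick a tolerance eps and take r as the first lag where the empirical correlation drops below it. A toy sketch on synthetic AR(1) data:

```python
# Hypothetical procedure (not from the paper): declare correlations below a
# tolerance eps "negligible" and take r as the first lag where the empirical
# autocorrelation drops below eps. Demonstrated on a synthetic AR(1) series
# whose true autocorrelation at lag k is 0.5**k.
import random

random.seed(0)
x, phi = [0.0], 0.5
for _ in range(200_000):
    x.append(phi * x[-1] + random.gauss(0, 1))

def autocorr(xs, lag):
    """Empirical autocorrelation of xs at the given lag."""
    n = len(xs) - lag
    mean = sum(xs) / len(xs)
    var = sum((v - mean) ** 2 for v in xs) / len(xs)
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean) for i in range(n)) / n
    return cov / var

eps = 0.05
r_hat = next(lag for lag in range(1, 50) if abs(autocorr(x, lag)) < eps)
# True autocorrelation first dips below eps at lag 5 (0.5**5 ~= 0.031).
assert 4 <= r_hat <= 6
```

For real data one would face exactly the issue the reviewer raises: the estimate depends on the chosen eps, which is why the rebuttal below appeals to domain structure instead.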
Rebuttal 1: Rebuttal: __“The main result (Theorem 4.2) is quite abstract in the sense, that it only makes a qualitative statement about the existence of neural networks achieving the bound. This in combination without sufficient practical experiments makes it difficult to judge how impactful this result is.”__ We appreciate the reviewer’s concern about the limited empirical validation. While we acknowledge this limitation, we note that: * The paper's primary contribution is theoretical: Proving rigorously that neural networks can achieve dimension-independent rates in density estimation under commonly accepted assumptions. This provides yet another compelling justification for using DNNs in practice. * The local dependency structure we identify is already implicitly leveraged in successful practical methods, particularly in convolutional architectures and patch-based approaches to anomaly detection such as SoftPatch (NeurIPS 2022). Convolutional structures inherently exploit local dependencies through limited receptive fields, closely aligning with the Hammersley-Clifford theorem, which we use to prove our results. * Due to strict space constraints (we are already at the page limit), we focused on developing the theoretical and intuitive foundations thoroughly. A comprehensive empirical study would require significant additional space to properly evaluate different architectures, datasets, and parameter settings. __“On thing I am missing a bit motivation wise is what the practical implications would be from this theorem. Too me it seems that the theorem mostly states that local (i.e. simpler) structure leads to better bound, which in itself is not surprising. What takeaway is there beyond that? 
In particular do you believe that the rate would be reasonable baseline for practical settings?”__ The fact that real-world densities can be estimated with (a) dimension-independent rates through (b) neural density estimation is a surprising result, and indeed suggests this presents a reasonable baseline in practical settings. For the reasons outlined above, we leave it to future work to evaluate practical aspects more thoroughly, and indeed, we believe this is an important direction to pursue. Moreover, the significant attention devoted simply to validating the manifold hypothesis [1,2,3] illustrates the community's interest in identifying structural assumptions that explain learnability in real-world data. From this viewpoint, the MRF framework we propose provides another natural way of suggesting structured function spaces—especially given that the Hammersley-Clifford theorem explicitly characterizes functions reflecting local conditional independence. __"What seems missing to me a bit is how one would go about estimating $r$ for a given dataset. Obviously for any real data there is nonzero but potentially negligible correlation based on distance. However it would be good if there was a rigorous way to determine what negligible means and how it is determined."__ __“How would one determine $r$ for a real dataset (efficiently)? What if the correlation is not zero far out but rather very small (think of $\epsilon \ll1$)? I am not an expert in that literature but in general most graph problems can be computationally quite expensive.__ A significant advantage of the settings we consider is that they naturally suggest reasonable MRF structures (along with their $r$) that can be verified empirically, as we have shown. Indeed, datatypes for which DNNs have historically shown exceptional success—such as images, audio, and text—naturally exhibit conditional dependence structures closely related to the MRF framework we propose. 
This connection is well-established, as evidenced by comprehensive textbooks dedicated to applying Markov Random Fields to image processing tasks [4,5] . Outside of these settings, we acknowledge that identifying the graph or $r$ from data can be challenging. With the extra space allotted for the camera ready we will include related work on the estimation of MRF graphs and $r$. __“In the abstract who is $d$, it does not appear in the formulas?”__ This was an oversight on our part: $d$ is the ambient dimension of our data, i.e., the dimension of the data whose pdf we are estimating. We will make this clear in the camera ready. [1] Pope et al. The intrinsic dimension of images and its impact on learning. ICLR, 2021(in paper) [2] Carlsson et al. On the Local Behavior of Spaces of Natural Images. International Journal of Computer Vision, 76(1):1–12, January 2008. [3] Brown et al. The Union of Manifolds Hypothesis, NeurIPS, 2022 (in paper) [4] Li, S. Z. Markov Random Field Modeling in Image Analysis, 2009 (3000+ citations) [5] Blake, A., Kohli, P., and Rother, C. Markov random fields for vision and image processing. 2011 --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my concerns, I have updated the score in my review accordingly
Summary: This paper claims to propose a novel theoretical framework that exploits the data structure using Markov Random Fields (MRFs) to provide a dimension independent converge rate for structured density estimation. It shows that using Markov Random Fields allows capturing local dependency between pixels while considering far-away pixels as nearly independent. It addresses the curse of dimensionality complementary to the manifold hypothesis. Claims And Evidence: The paper suggests using power graphs to model possible arbitrary long-range dependencies in the data structure, which is supposed to be more realistic compared to path and grid MRFs that only model extremely local correlations. This is motivated through the computation of correlation between pixel values in CIFAR-10 in Figure 3 and in COCO in Appendix D. The paper proposes convergence rates for ReLU fully connected networks that are applicable to spatial (e.g., images) and sequential data (audio, text), spatiotemporal (e.g., videos), and hierarchical data through the main theoretical result which is the Theorem 4.2. Methods And Evaluation Criteria: This paper is mostly theoretical, the evaluation of the approach compared to path and grid MRFs is qualitative with the correlation plots of Figure 3. Theoretical Claims: For ReLU fully connected networks, the paper claims to obtain a convergence rate in $O(n^{-1/(4+r)})$ where $n$ is the number of training samples and $r$ is the size of the largest clique in G, where G is the graph structure associated with the data distribution. This graph G can be obtained using the power graph of the original graph that represents the data structure to capture long-range dependencies. This is compared in the argumentation with $O(n^{-1/(2+d)})$, thus providing a large improvement with $r + 2 \ll d$. Furthermore, $r$ is claimed to be mostly independent of the data dimension, thus providing Dimension-Independent Rates that alleviate the curse of dimensionality. 
Experimental Designs Or Analyses: Figure 3 shows a scatter plot that motivates the choice of using Markov random fields to model the data distribution, showing that (a) the correlation decreases with the distance and (b) conditioning on a neighboring pixel decreases the correlation. Furthermore, Figure 4 demonstrates the motivation for using Power Graphs instead of standard grid structures to model image distributions. Supplementary Material: I did a check of the Supplementary material which seems correct. Relation To Broader Scientific Literature: The paper is based on the factorization property for Markov Random Fields (Hammersley & Clifford, 1971). This is applied to ReLU neural networks considering Theorem 5 of Schmidt-Hieber (2017). It also discusses the more general manifold hypothesis (Bengio et al., 2013; Brahma et al., 2016) in section 3.3 and in Appendix E. It would be useful to provide more explanation to the unfamiliar reader for the four examples of section E. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is very well written and provides a comprehensive overview of the field and the problem setup. The contribution is mostly theoretical with the provided convergence rate of Theorem 4.2, and it is then illustrated with numerous examples. Other Comments Or Suggestions: While the theorem claims are applicable to diverse data types (audio, text, image), the illustrations are mostly performed with images. It would strengthen the claims to provide plots similar to Figures 3 and 8 (and possibly, an estimate of $r$) with another data type (e.g., with sequential data such as audio or text). I would also suggest defining $n$ in the abstract. Questions For Authors: The theoretical result holds for ReLU Networks. Any insights on how it may generalize (or not) to other spaces of networks? Code Of Conduct: Affirmed. Overall Recommendation: 5
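The rate gap described in this review, $O(n^{-1/(4+r)})$ versus $O(n^{-1/(2+d)})$, can be made concrete with a quick numeric sketch; the values $r=8$ and $d=3072$ below are our own hypothetical choices for illustration, not values from the paper:

```python
# Illustrative numbers only: r=8 and d=3072 (= 32*32*3, a CIFAR-sized image)
# are hypothetical choices, not values from the paper.
def structured_rate(n, r):
    """Claimed rate O(n**(-1/(4+r))) under the power-graph MRF assumption."""
    return n ** (-1.0 / (4 + r))

def unstructured_rate(n, d):
    """Classical nonparametric rate O(n**(-1/(2+d))) in ambient dimension d."""
    return n ** (-1.0 / (2 + d))

n, r, d = 10**6, 8, 3072
assert structured_rate(n, r) < 0.32    # ~= 10**-0.5: meaningful error decay
assert unstructured_rate(n, d) > 0.99  # essentially no decay at this n
```

With $r + 2 \ll d$, the structured bound shrinks at a usable pace while the unstructured one is effectively flat, which is the "dimension-independent" point of the theorem.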
Rebuttal 1: Rebuttal: __“It would be useful to provide more explanation to the unfamiliar reader for the four examples of section E.”__ We will extend this discussion. In our response to Reviewer ZT7H, we included a more technical explanation of how the manifold hypothesis connects to dependence. We can incorporate a similar discussion in the main text or appendix. __“While the theorem claims are applicable to diverse data time (audio, text, image), the illustrations are mostly performed with images. It would strengthen the claims to provide plots similar to Figures 3 and 8 (and possibly, an estimate of $r$) with another data type (e.g., with sequential data such as audio or text).”__ We will gladly include a similar plot for a different data type in the next draft. It's worth noting that using Markov random fields to model images is well-established—line 215 cites two textbooks that are entirely on the subject. Our goal was simply to provide images to help the reader grasp the intuition. --- Rebuttal Comment 1.1: Comment: Thank you for your response! However, you did not answer to my question: "The theoretical result holds for ReLU Networks. Any insights on how it may generalize (or not) to other spaces of networks?" --- Reply to Comment 1.1.1: Comment: We apologise for the oversight! This is a great question. Our results are not specific to ReLU networks and can be extended to any class of neural networks for which appropriate approximation bounds are available. For example, our results can be extended to non-ReLU activation by using the results of https://arxiv.org/abs/1906.06903 (applicable to sigmoid, tanh, swish, etc.). We omitted this technical detail since ReLU networks are the most common choice in practice, and the proof remains the same.
Summary: This paper analyzes density estimation using a neural network under a conditional independence (MRF) assumption and shows that density estimation is possible with a dimension-independent rate under this assumption. Claims And Evidence: I don't think there's enough evidence here to support the claim that images and audio have the kind of independence structure assumed. The evidence provided seems to just be that conditioning on some particular pixel for CIFAR images seems to give a plot where the pixel correlation is smaller, which is extremely weak. Methods And Evaluation Criteria: There are no real experiments here. Theoretical Claims: The theoretical claims are correct. Experimental Designs Or Analyses: No real experiments Supplementary Material: No supplementary material Relation To Broader Scientific Literature: I'm extremely surprised that the result presented here was not already known, perhaps as a corollary of some stronger result. No real comparison with any prior work is provided. Essential References Not Discussed: I'm not familiar enough with the literature here to know for sure, but am skeptical that the results in this paper were not already known. Other Strengths And Weaknesses: Half of this paper is spent on just explaining MRFs, which most people in the ML community are probably already at least somewhat familiar with. Many of the claims about images having this conditional independence structure are not well-supported. I am generally skeptical that the theoretical results are not already known. Other Comments Or Suggestions: 1) Explain relationship to prior work much more thoroughly 2) Need much more convincing evidence to support the claim that images have the kind of conditional independence structure assumed. 3) Most of the paper needs to be spent on explaining your contribution, not on establishing preliminaries, as is currently the case. Questions For Authors: Please look at comments above. Code Of Conduct: Affirmed. 
Overall Recommendation: 1
Rebuttal 1: Rebuttal: __"There are no real experiments here."__ __"No real experiments"__ We appreciate the reviewer’s concern about the limited empirical validation. While we acknowledge this limitation, we note that: * The paper's primary contribution is theoretical: Proving rigorously that neural networks can achieve dimension-independent rates in density estimation under commonly accepted assumptions. This provides yet another compelling justification for using DNNs in practice. * The local dependency structure we identify is already implicitly leveraged in successful practical methods, particularly in convolutional architectures and patch-based approaches to anomaly detection such as SoftPatch (NeurIPS 2022). Convolutional structures inherently exploit local dependencies through limited receptive fields, closely aligning with the Hammersley-Clifford theorem, which we use to prove our results. * Due to strict space constraints (we are already at the page limit), we focused on developing the theoretical and intuitive foundations thoroughly. A comprehensive empirical study would require significant additional space to properly evaluate different architectures, datasets, and parameter settings. __“I don't think there's enough evidence here to support the claim that images and audio have the kind of independence structure assumed. The evidence provided seems to just be that conditioning on some particular pixel for CIFAR images seems to give a plot where the pixel correlation is smaller, which is extremely weak.”__ __“Many of the claims about images having this conditional independence structure are not well-supported.”__ __"Need much more convincing evidence to support the claim that images have the kind of conditional independence structure assumed."__ This model is extensively well-supported: On line 215 (left) we cite two entire textbooks on the application of Markov random fields to image analysis: * Li, S. Z. 
Markov Random Field Modeling in Image Analysis, 2009 (3000+ citations) * Blake, A., Kohli, P., and Rother, C. Markov random fields for vision and image processing. 2011 Other textbooks on Markov random fields for machine learning (e.g., A Blake, P Kohli, C Rother, Markov models for pattern recognition: from theory to applications, 2011) contain applications to text and audio data. For example the classic n-gram text model is exactly a Markov random field. We included the images simply to give the reader an idea of how this model is manifest in real world data. __“I'm extremely surprised that the result presented here was not already known, perhaps as a corollary of some stronger result. No real comparison with any prior work is provided.”__ __“I'm not familiar enough with the literature here to know for sure, but am skeptical that the results in this paper were not already known.”__ __“I am generally skeptical that the theoretical results are not already known.”__ __"Explain relationship to prior work much more thoroughly"__ To the best of our knowledge estimators that achieve these rates do not exist, let alone that they can be achieved with a neural network using a reasonably computationally tractable loss function. Our literature review is quite extensive, with over 40 citations, including up to date citations from 2023-24. We even discuss alternative approaches to obtaining faster rates at lines 125-143. __“Half of this paper is spent on just explaining MRFs, which most people in the ML community are probably already at least somewhat familiar with.”__ __"Most of the paper needs to be spent on explaining your contribution, not on establishing preliminaries, as is currently the case."__ To maximize accessibility we tailored our work towards providing intuition to those who may not be so comfortable with these models, which has been appreciated by other reviewers (e.g. 
Reviewer 2ejB, “The paper is very well written and provides a comprehensive overview of the field and the problem setup.”) __"No supplementary material"__ There are 9 pages of supplementary material in the appendix.
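The rebuttal's remark that "the classic n-gram text model is exactly a Markov random field" can be made concrete in a few lines: a bigram model factorizes the joint over size-2 cliques of a path graph. A toy sketch (our own, with a made-up corpus, not from the paper):

```python
# Toy illustration of the n-gram/MRF remark above: an unsmoothed bigram
# model factorizes the joint probability over size-2 cliques (r = 2).
import math
from collections import Counter

def bigram_logprob(tokens, corpus):
    """Log-probability of `tokens` under an unsmoothed bigram model."""
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    logp = math.log(unigrams[tokens[0]] / len(corpus))  # P(x_1)
    for a, b in zip(tokens, tokens[1:]):
        logp += math.log(bigrams[(a, b)] / unigrams[a])  # P(x_i | x_{i-1})
    return logp

corpus = "a b a b a".split()  # made-up corpus
p = math.exp(bigram_logprob(["a", "b"], corpus))
assert abs(p - 0.4) < 1e-12  # P(a) * P(b|a) = (3/5) * (2/3)
```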
Summary: Estimating probability densities is a recurring task in machine learning and statistics. However, this becomes challenging in high-dimensional settings due to the curse of dimensionality, where the number of required samples grows exponentially with the dimension. This paper addresses the challenge of high-dimensional density estimation by leveraging structured dependencies in real-world datasets such as images, videos, and text. These datasets often exhibit underlying structures that can be represented as graphs with small cliques, enabling more efficient estimation. The authors show theoretically the existence of deep neural networks trained to learn the density from samples that can achieve convergence rates that are independent of the dimension. In particular, when data dependencies are modeled by a power graph Markov random field, the neural network estimator converges at a rate of $n^{-1/(4+r)}$ in the $L^2$ norm, where $n$ is the number of samples and $r$ is the size of the largest clique in the graph of MRF. The notion of power graphs extends classical Markov random fields by capturing higher-order interactions. ## Update after rebuttal I would like to thank the authors for their response. I am maintaining my score. Claims And Evidence: Overall, the paper is well-written, and its claims are clearly stated, well-supported, and their consequences are well discussed. A mild criticism is that while the theoretical results establish improved convergence rates when the probability distribution follows a known clique structure, they do not address how to infer this structure from data. Methods And Evaluation Criteria: The contribution of the article is theoretical and does not rely on empirical evaluation or benchmark datasets. While this provides valuable theoretical insights, the lack of an explicit method for learning the clique structure limits its immediate practical applicability. 
A discussion on how these theoretical findings could be integrated into practical density estimation frameworks would strengthen the paper. Theoretical Claims: I checked the proofs and did not find any glaring mistakes. Experimental Designs Or Analyses: N/A Supplementary Material: I checked the proofs. Relation To Broader Scientific Literature: Density estimation is a recurring task in machine learning and statistics. This work contributes to the field by establishing dimension-independent convergence rates for structured probability densities. Essential References Not Discussed: - Other Strengths And Weaknesses: - Other Comments Or Suggestions: 1) The paragraph beginning with 'While the manifold hypothesis...' is quite confusing. The discussion seems to conflate the concept of a data distribution being supported on a manifold with the statistical independence of samples. For instance, two independent samples from a uniform distribution on a manifold are still independent, even though they lie within the manifold. I believe it would be helpful to clarify this distinction and refine the formulation of this idea. 2) Figure 3: The visualization would be more informative if the entire dataset were used instead of just a subset. Questions For Authors: 1) The theoretical results assume that the clique structure of the probability distribution is given. Could you elaborate on possible approaches for inferring this structure from data? 2) Is it possible to achieve the same rate of convergence with shallow neural networks? 3) Do you think it is possible to design a kernel specifically adapted to the clique structure of the probability distribution that achieves a similar convergence rate? If so, how would such a kernel compare to the neural network approach in terms of efficiency and theoretical guarantees? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: __"A mild criticism is that while the theoretical results establish improved convergence rates when the probability distribution follows a known clique structure, they do not address how to infer this structure from data."__ __"The theoretical results assume that the clique structure of the probability distribution is given...?"__ A significant advantage of the settings we consider is that they naturally suggest reasonable MRF structures that can be verified empirically, as we have shown. Indeed, datatypes for which DNNs have historically shown exceptional success—such as images, audio, and text—naturally exhibit conditional dependence structures closely related to the MRF framework we propose. This connection is well-established, as evidenced by comprehensive textbooks dedicated to applying Markov Random Fields to image processing tasks [1,2] . Outside of these settings, we acknowledge that identifying the graph or $r$ from data can be challenging. With the extra space allotted for the camera ready we will include related work on the estimation of MRF graphs. __"The paragraph beginning with 'While the manifold hypothesis...' is quite confusing. The discussion seems to conflate the concept of a data distribution being supported on a manifold with the statistical independence of samples. For instance, two independent samples from a uniform distribution on a manifold are still independent, even though they lie within the manifold. I believe it would be helpful to clarify this distinction and refine the formulation of this idea."__ We appreciate the reviewer’s observation and agree that the example is correct. In retrospect, our point was too vague, and we will update the paper to state this more precisely. 
First, we consider the manifold hypothesis: __Manifold hypothesis:__ A probability measure $\mu$ on $\mathbb{R}^d$ satisfies the “manifold hypothesis” if $\operatorname{supp}(\mu) \subset M$, where $M$ is a submanifold of $\mathbb{R}^d$ with dimension less than $d$. The following lemma makes our point relating the manifold hypothesis and dependence precise (and may be of interest in its own right): __Lemma:__ Let $M \subset \mathbb{R}^d$ be a submanifold of dimension $d' < d$. Let $X = (X_1, \ldots, X_d)$ be a random variable defined by a probability density function $p$ on $M$ (using the standard manifold area/volume measure), and assume that for each $i$, the marginal distribution of $X_i$ is given by a probability density function $p_{X_i}$ (i.e. they are nondegenerate). Then for any subset of $d'+1$ coordinates $I_1, \ldots, I_{d'+1}$, the collection of random variables $\bigl( X_{I_1}, \ldots, X_{I_{d'+1}} \bigr)$ cannot be mutually independent. So, for any density on a $d'$-dimensional manifold embedded in Euclidean space, where the marginal distribution for each covariate is non-degenerate, any collection of $d'+1$ covariates must exhibit some dependencies. In other words, the lower the intrinsic dimension $d'$, the more dependence must be present among the covariates. The proof of this is simple but somewhat technical: basically, if we assume $d'+1$ covariates are independent, then their joint support is the product $\prod_{i=1}^{d'+1} \operatorname{supp}(p_{X_{I_i}})$, which looks like a $(d'+1)$-dimensional set, contradicting the $d'$-dimensional manifold assumption (we will put this into the appendix). __“Figure 3: The visualization would be more informative if the entire dataset were used instead of just a subset.”__ The full dataset is 60,000 points, which would be difficult to visualize, so we felt that 100 randomly selected points made this plot easier to understand. We will happily add more points if you like. 
__"Is it possible to achieve the same rate of convergence with shallow neural networks?"__ __"“Do you think it is possible to design a kernel specifically adapted to the clique structure of the probability distribution that achieves a similar convergence rate?"__ These are interesting questions! While we do not know of a shallow method or an adaptive kernel that achieves this directly, these are fascinating problems for future work. We hope that our results provide a foundation for further investigation into how and why dimension-independent rates are attainable in models for images, audio, and text. [1] Li, S. Z. Markov Random Field Modeling in Image Analysis, 2009 (3000+ citations) [2] Blake, A., Kohli, P., and Rother, C. Markov random fields for vision and image processing. 2011 --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. Please update the manuscript accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for the acknowledgement! We are happy to update the manuscript. Although ICML does not allow revisions at this stage, we will certainly include these revisions in the camera ready version.
Investigating the Overlooked Hessian Structure: From CNNs to LLMs
Accept (poster)
Summary: In this work, the authors report a previously overlooked power-law Hessian structure in well-trained deep neural networks, including CNNs and LLMs. The authors show that both the top few thousand eigenvalues as well as the eigengaps follow approximately a power law $p(\lambda) = Z_c^{-1} \lambda^{-\beta}$, which they test statistically using the Kolmogorov-Smirnov Test (KS Test). This phenomenon is observed across multiple optimizers, datasets, and architectures and differs from the eigenspectrum of randomly initialized networks. In addition, the authors provide a theoretical interpretation via the maximum-entropy principle. The authors also provide additional ablations, including different optimizers, overparametrizing the network, the size of training data, and the batch size, and show how the fitted slope magnitude correlates well with the largest Hessian eigenvalue, the Hessian trace, and the test performance, and how the KS distance $d_{KS}$ predicts the generalization performance of LLMs, which conventional sharpness measures ($Tr(H)$ and $\lambda_{\max}$) fail to capture. Claims And Evidence: Generally, I think that all claims are supported by clear and convincing evidence. For some of the experiments, I believe that one needs to extend the figures to fully make the claims of the authors. For instance, in the experiment on varying batch size, where the authors describe three different phases of large-batch training, it would be beneficial to also add the Test Error to Figure 8 so as to tell the different phases apart more clearly than from the qualitative shape of the eigenvalue distribution alone. Actually, it would be interesting to see if one could have a plot similar to e.g. Fig. 10 or Fig. 11, where the $d_{KS}$ for each batch size is plotted against the Test Loss/Error. Methods And Evaluation Criteria: The methods and evaluation criteria seem reasonable to me.
The authors chose a reasonable range of different models, including FCN, LeNet, ResNet18, and ViTs for some image classification datasets, including MNIST, Fashion-MNIST, Cifar-10/100, and non-image Avila as well as GPT2-models on OpenWebText, Shakespeare, and MathQA. Also, the ablation studies were quite thorough, including training data size, model size, batch size, and optimizers. Of course, it would be beneficial if the results could also be reproduced in some other settings, as Figure 3, Figure 4 and Figure 7 are all conducted on LeNet trained on MNIST. Is it possible to reproduce some of the results for instance on ResNet18 trained on Cifar10? Theoretical Claims: I only checked the proof for Theorem 1 in Appendix A and think that it is fine as is. Experimental Designs Or Analyses: The experimental designs and resulting analyses are sound as I have already elaborated in the section above on "Claims and Evidence". As elaborated above, it would be nice to provide further figures in the setting of image classification, similar to the section on LLMs. e.g. plotting $d_{KS}$ against the test loss for the ablation study on large-batch training and training-data size. Supplementary Material: I reviewed parts of the supplementary material, including the proof of Theorem 1 in section A, details on the Experimental settings in section C, and supplementary experiment results of CNNs in section D. I briefly skimmed the section on the experimental results of LLMs in section E and section F on the Kolmogorov-Smirnov test, but I did not closely look at the tables 6-8. I did not review section B. Relation To Broader Scientific Literature: The analysis of $d_{KS}$ as a generalization measure in LLMs relates to previous results by Jiang et al. [2019] and Kaur et al. [2023], which uses sharpness measures such as the Hessian trace $\text{Tr}(H)$ and the largest eigenvalue $\lambda_{\max}(H)$ as a measure of generalization. 
The power-law Hessian structure relates to previous work analyzing the Hessian structure of DNNs, including work by Pennington & Bahri [2017], who study the eigenvalue distribution through random matrix theory; Singh et al. [2021], who study the rank of DNNs; Liao & Mahoney [2021], who study the Hessian spectra of nonlinear models and Dauphin et al. [2024], who investigate the Nonlinear Modeling Error (NME) matrix part of the Hessian, which has been neglected in previous analysis. Essential References Not Discussed: - Other Strengths And Weaknesses: I think the observation of the power-law distribution of the eigenvalues that the authors report is quite intriguing and it's exciting to see it occur in such various settings. It is also interesting to see how the slope magnitude correlates with other sharpness measures and $d_{KS}$ correlates well with generalization in LLMs. I believe that this is very interesting work and am open to raising my score if the authors address my comments and questions. Other Comments Or Suggestions: - some minor typos: line 258: considered, line 389: maximum, line 429: considered - it was not very clear to me what the three phases of large-batch training were until I read the section in the appendix. I would recommend the authors either to refer to this section in the main paper or to move this section from the appendix to the main paper to avoid confusion. Questions For Authors: - What is meant by the so-called equilibrium of DNNs, which is mentioned in line 124? Is it equivalent to a DNN at convergence? - I am not entirely sure, but the eigenvalue distribution of sharp minima (e.g. large-batch training with B=768) intuitively seems to obey the power-law distribution less than a randomly initialized network, which to me implies a larger $d_{KS}$. However, I would assume that the network trained with $B=768$ will still perform better than random. (This is also why I asked for additional plots.) Can the authors explain this? 
Or for which cases $d_{KS}$ is a good predictor of generalization? - I checked the work by Wu et al. [2017] and noticed therein in Figure 2 (right) that models with quite different generalization behavior (21% vs. 92.9%) have a similar shape of the spectrum (at least by eyeballing), but that the spectrum is simply shifted. Can the authors elaborate how this result fits into the analysis that the authors have conducted? (e.g. Figures 3 and 4 in this work?) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your insightful feedback. Below, we duly address your concerns through additional experiments and careful responses. Q1: Is it possible to reproduce some of the results for instance on ResNet18 trained on Cifar10? A1. Yes, it is possible. The Kolmogorov-Smirnov distance decreases from d_KS = 0.198 in randomly initialized ResNet18 to d_KS = 0.031 (< d_c) in the well-trained model, providing further empirical support for our paper's central conclusion. We expect to supply these results in the revision. Q2: The experimental designs and resulting analyses are sound, and it would be nice to provide further figures in the setting of image classification, similar to the section on LLMs. A2. Thank you for your compliment and suggestion. We will supply these results in the revision. Q3: It is also interesting to see how the slope magnitude correlates with other sharpness measures and correlates well with generalization in LLMs. A3. Thank you for your suggestion. However, we are unable to identify significant correlations between the slope and other metrics on LLMs, while it works well on CNNs. In particular, for GPT2-small fine-tuned on the Shakespeare dataset, the Pearson correlation coefficients of the slope magnitude to test loss and Hessian trace are -0.173 and 0.414, drawing a negative conclusion on the validity of using the Hessian spectrum slope as a generalization metric. Q4: What is meant by the so-called equilibrium of DNNs, which is mentioned in line 124? A4: Thank you for raising this question. We have followed the concept from statistical physics (e.g., Boltzmann machines) that we referenced in line 117. Equilibrium often indicates that a system or a network reaches a stationary distribution, which can help in understanding the optimization dynamics of NNs in some ML works. However, we would like to remove the contents on equilibrium, as it is not a necessary concept in our work.
Q5: Can the authors explain why the eigenvalue distribution of sharp minima (e.g. large-batch training with B=768) intuitively seems to obey the power-law distribution less than a randomly initialized network? A5: Thank you for your question. We have discussed the observation that large-batch trained networks (B > 768) do not exhibit power laws, resembling under-parameterized networks, in Appendix D, and summarized it as the three phases of large-batch training. Our discovery from the Hessian spectra analysis, namely that different phases exist in training, differs from traditional beliefs. In phase 2, the Hessian eigenvalues increase and break the power-law structure, while the model's performance is much superior compared to untrained neural networks. We leave a deeper investigation for future work. Q6: Can the authors elaborate on how the work by Wu et al. [2017] (Figure 2 right) fits into the analysis? A6: Thank you for your suggestion. Wu et al. utilize an attack set to control the model's performance, which differs from the principal experimental setup in our main paper. We believe the behaviour cannot be guaranteed to be consistent across different data recipes and setups. We have instead discussed a similar setting with noisy labels, presented in Figure 15, Appendix D. The red line refers to the model with the best generalization performance, trained on the fully clean dataset, and corresponds to the red line of the model with 92.92% accuracy in Figure 2 (right) of Wu et al. (2017). We may observe that the power-law behaviour of the Hessian spectrum computed on the test set is disturbed by the noisy labels introduced. We may also observe that the magnitude of the eigenvalues increases for networks trained with adversarial labels and poorer generalization performance. Finally, we sincerely thank the reviewer for the feedback. It definitely inspires us to further improve our work and clarify it for more readers.
We respectfully hope that the reviewer can reevaluate our work given the responses addressing your main concerns. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their rebuttal. A1. Thank you for the additional experiments. Have you tried using multiple optimizers to get a similar picture as Figures 3 and 4? Can you also provide the figures via an anonymized link? A3. Thank you for the answer. So it seems like the $d_{KS}$ measure is more predictive in this case. I just compared the results in Figure 11 on fine-tuning tasks with the results in Figure 31 on LoRA-Adapters, where the picture seems reversed and the Hessian trace correlates very well with generalization, while $d_{KS}$ does not. Do the authors also have a possible explanation of the ways in which full-parameter fine-tuning differs from parameter-efficient fine-tuning tasks such as with LoRA? Q5./Q6. I was raising these two questions because it would put the $d_{KS}$ measure as a predictor for generalization into question if there are certain choices in training (e.g. large-batch training) which lead to a well-performing solution while not having the power-law structure of the Hessian. Despite this intriguing observation on the power-law structure in well-trained networks (in most cases), it remains unclear to me when $d_{KS}$ is a good measure and in which cases the slope of the power law is also insightful. (A3) --- Reply to Comment 1.1.1: Comment: **Q7 (to A1):** Can you also provide the figures using multiple optimizers as Figures 3 and 4 via an anonymized link? **A7:** Thank you for your suggestions. Sadly, due to the constrained time limit in this rebuttal round, we cannot provide the full experiment results on multiple optimizers. We have provided the experiment results for the SGD and Adam optimizers at the link below: \url{https://anonymous.4open.science/r/anonymous_link_11101-5D5D/README.md}. The experiment setups followed the CIFAR-10 settings in Appendix C.1.3.
We wish to supply further results in the future. **Q8 (to A3):** Do the authors also have a possible explanation in what ways full-parameter fine-tuning differs from parameter-efficient fine-tuning tasks such as with LoRA? + In which cases the slope of the power-law is also insightful? **A8:** Regarding Figures 11 and 31, we note that no existing literature indicates that the Hessian of low-rank adapters accurately captures the true loss landscape of the original model or reliably reflects the flatness or sharpness of its minima. In pursuit of an intuitive explanation, the observed pattern of an initial increase followed by a decrease may indicate a transition from one local minimum to another within the loss landscape. We further hypothesize that the power-law slope only becomes informative when the power-law behaviour is highly consistent, as indicated by a significantly small d_KS; in such cases, the slope is more likely to have a stronger impact on the model's performance. **Q9 (to Q5/6 & A3):** Despite this intriguing observation on the power-law structure in well-trained networks (in most cases), it remains unclear when d_KS is a good measure. **A9:** Thank you for this insightful question. In our paper, we revealed the Hessian power-law structure and demonstrated its widespread existence in deep learning models. This power-law structure itself is a novel and promising perspective on understanding DNNs. Building on this power-law hypothesis and our empirical observations, we introduced the KS distance as a novel measure and explored its connection to the loss landscape, the sharpness of minima, and generalization behaviour — offering what we believe is a valuable contribution to the field. Furthermore, while existing literature highlights sharpness-based metrics as promising directions [1, 2], they often fall short for LLMs, whereas the KS distance tends to correlate more consistently with generalization in many cases. 
We recognize the importance of further investigating the conditions under which d_KS serves as a reliable metric, which we plan to pursue through more extensive experimentation across diverse settings. Nevertheless, we acknowledge that no single generalization metric can universally apply across all deep learning settings—considering the wide variability in model architectures, datasets, tasks, batch sizes, and optimizers [3]. We hope that our work, alongside other studies exploring the strengths and limitations of various generalization metrics, contributes to a deeper understanding of deep neural networks and their behaviour under different conditions. Refs. [1] Jiang, Yiding, et al. "Fantastic Generalization Measures and Where to Find Them." ICLR 2020. [2] Dziugaite, Gintare Karolina, et al. "In search of robust measures of generalization." Advances in Neural Information Processing Systems 33 (2020): 11723-11733. [3] Gastpar, Michael, et al. "Fantastic Generalization Measures are Nowhere to be Found." ICLR 2024.
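As context for the d_KS metric discussed throughout this thread, here is a minimal sketch of how a Kolmogorov-Smirnov distance between an empirical eigenvalue spectrum and a fitted power law can be computed. The function name and the maximum-likelihood fitting step (Clauset-Shalizi-Newman style) are our assumptions for illustration; the paper's exact fitting and truncation protocol may differ:

```python
import numpy as np

def powerlaw_ks_distance(eigenvalues, lam_min=1.0):
    """Fit p(lam) ~ lam^(-beta) for lam >= lam_min by maximum likelihood
    and return (beta, KS distance) between empirical and fitted CDFs.
    Illustrative sketch only, not the paper's exact procedure."""
    x = np.sort(np.asarray(eigenvalues, dtype=float))
    x = x[x >= lam_min]
    n = len(x)
    # MLE for the continuous power-law exponent
    beta = 1.0 + n / np.sum(np.log(x / lam_min))
    # Model CDF: F(lam) = 1 - (lam / lam_min)^(1 - beta)
    cdf_model = 1.0 - (x / lam_min) ** (1.0 - beta)
    cdf_emp = np.arange(1, n + 1) / n
    d_ks = np.max(np.abs(cdf_emp - cdf_model))
    return beta, d_ks
```

On data genuinely drawn from a power law the returned distance is small, while clearly non-power-law samples (e.g. uniform) give a noticeably larger value, mirroring the trained-versus-untrained contrast discussed above.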
Summary: This paper investigates the power-law structure of the Hessian matrix in deep neural networks, including Convolutional Neural Networks (CNNs) and Large Language Models (LLMs). Key contributions include:
- A maximum-entropy principle from statistical physics is proposed to explain the emergence of the power-law structure, linking it to flat minima and generalization.
- The authors identify a power-law distribution in the Hessian spectra of well-trained NNs, contrasting with random or under-trained models. This structure is consistent across CNNs, LLMs, and Vision Transformers (ViTs).
- The paper indicates that this power-law structure helps predict the generalization of LLMs.

Claims And Evidence: The claims in the submission are generally supported by a theoretical interpretation and experimental discovery. However, the theoretical analysis here is merely an intuitive explanation and does not provide a rigorous proof. Methods And Evaluation Criteria: The evaluation criteria for generalization are widely recognized and reasonable standards. Theoretical Claims: The proofs of the theory are correct but somewhat straightforward. Experimental Designs Or Analyses: From an experimental perspective, the experimental design of this paper can illustrate the claims. Supplementary Material: I have reviewed most parts of the supplementary material. Relation To Broader Scientific Literature: This article adopts the well-known maximum entropy principle to intuitively explain the power-law distribution of the eigenvalues of the network Hessian matrix. The main contribution lies in extensive experimental observations, which is not very related to prior works. Essential References Not Discussed: No. Other Strengths And Weaknesses: The theoretical analysis is the weakness of this article; the authors only somewhat tenuously invoke the maximum entropy principle and cannot provide a rigorous explanation of the relationship between the two.
However, this article still provides a new perspective on how to predict the generalization of LLMs. Other Comments Or Suggestions: See the following Questions. Questions For Authors: 1. The authors should mention the computational cost and time required to obtain the Hessian matrix and its spectrum. Could this become unaffordable when applied to advanced LLMs? 2. In the evaluation criteria of this article, when the KS distance exceeds 0.05, it is considered not to satisfy the power-law hypothesis. However, as shown in Table 1, the KS distance of untrained networks is also quite small. Is this value a widely recognized criterion, or could it be that many untrained models themselves also satisfy the power-law hypothesis? Ethical Review Concerns: No ethical concerns identified. This work focuses on the empirical performance of existing networks. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your support of our work and constructive suggestions. We do our best to structure your concerns and duly address them as follows: Q1: The authors cannot provide a rigorous explanation of the maximum entropy principle. However, this article still provides a new perspective on how to predict the generalization of LLMs. A1: Thanks for the suggestion. First, to clarify our contribution, we would like to convey that we are the first to provide an interpretation of the power-law hypothesis following the maximum entropy principle from statistical physics. We refer you to the works (Bahri et al., 2020; Torlai & Melko, 2016) we referenced in line 117 for better understanding. To ground our approach, we draw parallels with research works in the machine learning field that adopt similar theoretical frameworks: Dinh et al. studied minima flatness with similar Hessian-based measures [1], and Baldassi et al. also interpreted the minima flatness of neural networks from an entropy perspective [2]. However, we understand that this work lacks a strong justification for using the maximum entropy principle for power-law interpretation. We are willing to improve the theoretical analysis and justify the relationship between the maximum entropy principle and the minima flatness of neural networks in future revisions. Q2: The authors should mention the computational cost and time required to obtain the Hessian matrix and its spectrum. A2: Thanks for your suggestion. We employed the Stochastic Lanczos Quadrature (SLQ) algorithm with optimizations tailored for large neural networks in our work. SLQ consists of K Lanczos iterations and M integral steps to reduce estimation error. The primary computational expense arises from the repeated Hessian-vector product (HvP) computations in each Lanczos step, each requiring two backpropagation passes, which we implemented using PyTorch's auto-differentiation framework.
To manage memory overhead, we stored intermediate SLQ computations as temporary tensors on disk, reducing space requirements to a level comparable to training the network via SGD with momentum. Assuming the time complexity of a single backpropagation step is $O(N)$, the overall SLQ complexity is $O(N^2 \cdot K \cdot M)$. For all LLM experiments presented in our paper, we set M = 3, K = 1000 and a sampled batch size of 256 as in Zhang et al. (2024b). In our implementation, computing 3000 Hessian eigenvalues for the last layer of GPT2-small takes around 12-14 GPU hours on a standard RTX 4090. While PyTorch's current implementation does not support explicit storage of the computation graph, in theory, the time complexity could be further reduced to $O(N \cdot K \cdot M)$ by pre-saving gradients and the computation graph before SLQ execution. Consequently, computing the top-N eigenvalues has a time complexity equivalent to training the network for N steps. Q3: Is the small KS distance of untrained networks a widely recognized criterion, or could it be that many untrained models themselves also satisfy the power-law hypothesis? A3: We do not believe that many untrained models could satisfy the power-law hypothesis. We would like to convey that the critical values (d_C) of the KS tests are derived from the a = 0.05 significance level (Massey Jr, 1951), and we accept the power-law hypothesis only for d_KS below the critical value. As we have conducted a large number of experiments on models from CNNs to LLMs, we have not observed the d_KS of any untrained network fall below d_C. Some untrained models may have a close d_KS, but they are still far from satisfying the power-law hypothesis. Finally, we sincerely thank the reviewer for the feedback. It definitely inspires us to further improve our work and clarify it for more readers. We would highly appreciate it if you could kindly reconsider the rating of our work given the initial score of 3 and the addressed concerns.
Your suggestions and support will both be valuable contributions to our community. Refs: [1] Dinh, Laurent, et al. "Sharp minima can generalize for deep nets." ICML, 2017. [2] Baldassi, Carlo, Fabrizio Pittorino, and Riccardo Zecchina. "Shaping the learning landscape in neural networks around wide flat minima." PNAS, 2020.
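The Hessian-vector product step described in A2, two backpropagation passes per Lanczos iteration, can be sketched as follows. This is an illustrative PyTorch snippet under our own naming (Pearlmutter's double-backward trick), not the authors' implementation:

```python
import torch

def hessian_vector_product(loss, params, vec):
    """Compute H v with two backward passes, as in the HvP computations
    underlying SLQ. `params` is a list of tensors with requires_grad=True;
    `vec` is a list of tensors with matching shapes. Illustrative only."""
    # First backward pass, keeping the graph for a second differentiation
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # Dot product of the gradient with v, then a second backward pass
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(dot, params)
```

For a quadratic loss 0.5 w^T A w with symmetric A, this returns exactly A v, which is a convenient sanity check before plugging the routine into a Lanczos iteration.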
Summary: This paper investigates the Hessian structure in CNNs and LLMs from the point of view of power laws in the Hessian spectrum. It is shown that across a range of settings a power-law-like trend holds for the Hessian eigenvalues of a trained network, which, however, is not the case at random initialization. A rough theoretical interpretation is provided based on the maximum-entropy principle. In addition to power laws in the Hessian spectrum, a similar result is shown for the eigengaps. Alongside, there is a thorough empirical exploration of the effects of the power law with regard to training dataset size and batch size, and an interesting trend is shown for its ability to correlate with generalization during the course of pre-training/fine-tuning. Claims And Evidence: - In Eqn 2, when discussing the finite-sample power law, normalization is carried out by the Hessian trace. However, this may not be a suitable normalization when the Hessian is indefinite, which is definitely the case at a random initialization. A more appropriate choice would have been to consider the absolute values of the eigenvalues and then see, once they are ranked, whether the power-law trend is present or not. Currently, I'm concerned how this would change the trend for the spectrum at random initialization. - The link to maximum entropy is interesting, but measuring the flatness of the minimum with the determinant of the **Hessian inverse** is ad hoc. Normally, you would use a spectral quantity based on the Hessian, so det(H) would be fine. I believe this is solely being done for the purposes of their interpretation, else the power law has positive exponents, which would be a mismatch. Can the authors comment on this? Methods And Evaluation Criteria: - I am curious, if you were to look at all the eigenvalues, and not just the top 6000, how much of this behaviour holds.
Of course, I understand the difficulty of such an endeavour, but this could still be done with a network having around 20K parameters to see what happens. Theoretical Claims: Yes, Theorem 1; and I think the use of the Hessian inverse for the determinant is a bit ad hoc, as detailed elsewhere. Experimental Designs Or Analyses: - The connection to generalization during training is interesting, but could use some measurements to see if the correlation is really robust or not. - When showing the trend with different batch sizes, it is unclear what the underlying setting is. Is the number of training steps kept the same across batch sizes? How is the learning rate adjusted? These things could make a big difference. - The statement that "the model trained with limited training data finds minima with many sharp directions like under parameterized models" and their finding therein is a bit strange. Since underparameterization is relative to the number of parameters (which I believe is fixed), when the training set is 600 it is actually the most overparameterized setting. So I don't quite understand this statement in the paper or the trend. Also, see claims and evidence section Supplementary Material: No Relation To Broader Scientific Literature: This is a complementary perspective to the study of Hessians in deep learning, and initial results like the correlation trend with generalization seem promising. Essential References Not Discussed: I think the work of Mahoney and co-authors on heavy tails could be discussed and compared to, since essentially both these works are effectively comparing trends of spectral decay. Other Strengths And Weaknesses: The paper contains some very interesting results on the power-law structure. However, often a lot of the results are thrown at the reader, and the overall narrative is not as cohesive. Some of the theoretical discussion is not well justified.
The potential link to generalization offers an interesting avenue; perhaps that could be elaborated and studied further, to result in a more tightly-knit paper. Also, I'd move some of the other marginal analyses to the appendix. Other Comments Or Suggestions: - Table 1 caption: no $\beta$ or slopes are mentioned - It would be worth elaborating on the link between Eqns 1, 2, 3 to reach a wider readership. - The 'random' in the column label 'Training' is confusing at first sight; rather say something like 'random-init', else it could be thought of as something else. Questions For Authors: - Looking at Table 1, it seems that the more complex the setting, the higher the value of d_KS. So it seems like the power-law test might fail if you go to a setting which is more complicated. Could the authors comment on the possible reason for this? Also, it would be nice to see the results on ImageNet and see what the trend is like! - How many datapoints are used to estimate the Hessian in each of these cases in Table 1? Ideally, the power-law trend should not be too sensitive to this choice of datapoints, assuming a sufficient number of samples is used. Can the authors verify this aspect? - What do you mean by the networks being close to equilibrium, just above section 2.3? It is very vague at the moment. - What are the numbers for GPT-2 medium and large on OpenWebText? - It would have been interesting to read off some trends from looking at the use of different optimizers. But right now it's unclear if something can be said, except that this power law occurs for various optimizer choices. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your support of our work. We do our best to structure your concerns and address them as follows: Q1: Normalization by the Hessian trace in Eqn.2 may not be suitable when the Hessian is indefinite. A1: Eqn.2 attempts to describe the power law from a frequency perspective, and we reported true eigenvalues in experiments following Eqn.3 rather than trace-normalized ones. Further KS tests on the absolute eigenvalues also suggest a robust power-law structure. Q2: Measuring minima flatness with the Hessian inverse determinant is ad hoc. A2: Mathematically, the absolute value of the determinant quantifies how the transformation scales n-dimensional volume, which naturally makes the Hessian determinant a measure of minima flatness. It corresponds to the volume scaling factor of parallelepipeds in 3D, which implies sharp or flat curvature. We also draw parallels with research works in which the Hessian determinant is a kind of sharpness measure [1] and the empirical Fisher determinant is an approximation of loss curvature [2]. We will improve the clarification in the revision. Q3: I am curious how this behavior holds over all eigenvalues. A3: Experiments on LeNet show the eigenvalues quickly decay to near zero beyond a certain threshold. Researchers commonly focus on the top eigenvalues since they are significant and critically related to the network's performance compared to smaller ones. Q4: Could use some measurements to see if the connection to generalization during training is robust / What are the numbers for GPT2 medium/large? A4: In addition to Pearson's correlation (p_corr), we have re-evaluated with Spearman's rank correlation (s_corr) and OLS statistical significance (p_ols < 0.05).
For GPT2-{nano,small} finetuned on Shakespeare (Figures 10-11), s_corr={0.886,1.0} and p_ols={0.009,0.0}; for GPT2-{medium,large} pretrained on OpenWebText, p_corr={0.766,0.803}, s_corr={0.855,0.786} and p_ols={0.01,0.03}; the trend in the emergence of the Hessian power-law structure over training seems robust. Q5: When showing the trend with different batch sizes, the underlying setting is unclear. A5: We set epochs=50 with the hyperparameters stated in Table 4 of Appendix C.1.2, so all models are trained on the same total number of images. We followed your advice and used a fixed number of total training steps; early experiments suggest that with more computation at the same number of iterations, d_KS for large-batch training indeed decreases, though it remains significantly higher than for small-batch training. Q6: The statement `model trained with limited training data finds minima ... like underparameterized models' is a bit strange. A6: We mean to convey that both data and parameter scaling can cause the power law to emerge, and both scenarios, limited data and underparameterization, would similarly break the power law and lead to many sharp directions in the loss landscape. We will clarify this more clearly in the revision. Q7: It seems like the power-law test might fail if the setting in Table 1 is more complicated. A7: Thanks for this interesting question. We actually agree with your point. Neural networks that are too simple, like LeNet, may not learn complex tasks like ImageNet well, thus the training loss will be very high and the power-law structure may break due to the poorly learned representation. Training on ImageNet-size problems is, however, demanding, and we may supply such results in the revision. Q8: How many data points are used to estimate the Hessian in Table 1? A8: All Table 1 experiments utilize the full training dataset of MNIST, CIFAR-10, etc. The same experiment on 1/10 of the dataset (5000 data points) shows no noticeable difference. Q9: What do you mean by the networks being close to equilibrium above section 2.3?
A9: Equilibrium often indicates that a system/network has reached a stationary distribution, which can help interpret NN optimization dynamics in theoretical ML work. We will remove this content, as it is not necessary for our work. Q10: It would have been interesting to read off some trends in the use of different optimizers. A10: Our findings primarily demonstrate the existence of the power law and show that its slope correlates with the generalization and minima sharpness achieved by different optimizers. A finer-grained relation to optimizer settings is an open question beyond the main scope of this work. In addition to addressing the primary questions, we have incorporated the reviewers' suggestions to correct typos, clarify ambiguities in Eqns. 1-3, change the 'random' labels to 'random-init' in Figures 9, 10, and 14, and discuss the work of Mahoney on heavy tails. We sincerely thank the reviewer for the feedback and constructive suggestions, which have inspired us to further improve our work. We would highly appreciate it if you could kindly reconsider the rating in light of the improved quality of the work. Refs: [1] A Diffusion Theory For Deep Learning Dynamics: Stochastic Gradient Descent Exponentially Favors Flat Minima. ICLR 2021. [2] Where is the Information in a Deep Neural Network? arXiv, 2019.
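As a concrete illustration of the power-law fit discussed in A1 and A4 (eigenvalues decaying as lambda_i ~ i^(-s)), the exponent can be estimated by a least-squares slope in log-log space. This is a hypothetical sketch for intuition, not the authors' actual code:

```python
import numpy as np

# Hypothetical sketch: estimate the power-law exponent s of a Hessian
# eigenspectrum lambda_i ~ C * i^(-s) by least squares in log-log space.
# Not the authors' code -- an illustration of the fit described above.
def powerlaw_exponent(eigs):
    eigs = np.sort(np.abs(eigs))[::-1]      # top |eigenvalues| first
    ranks = np.arange(1, len(eigs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(eigs), 1)
    return -slope                           # s > 0 for a decaying spectrum

# Synthetic spectrum with a known exponent s = 1.5 recovers ~1.5.
eigs = 3.0 * np.arange(1, 201, dtype=float) ** -1.5
print(round(powerlaw_exponent(eigs), 3))  # -> 1.5
```

A Kolmogorov-Smirnov distance between the empirical and fitted spectra (the d_KS mentioned in A5) can then quantify how well the power law holds.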
Dataflow-Guided Neuro-Symbolic Language Models for Type Inference
Accept (poster)
Summary: The authors present a framework for enhancing language models' ability to accurately infer types from code. This is achieved by decomposing the input into a high-level program composed of evaluations and type analyses using LMs. This high-level program can then be deterministically executed to perform type inference. Claims And Evidence: The authors' primary claim is that their proposed framework improves type inference by applying smaller models (suitable for local deployment) in a structured manner. The claim is supported by their evaluation, which reports the accuracy of various approaches on various tasks. The results of this evaluation indicate that their approach achieves greater accuracy in general, compared against other symbolic and neural methods. Methods And Evaluation Criteria: Both the evaluation criteria and methodology make sense for the application; the authors use program generation and dataflow analysis to reconstruct the input as a series of modules to capture dependencies. This 'program' is passed to an interpreter for execution. Theoretical Claims: There do not seem to be any rigorous theoretical claims or proofs in the main body of the paper. Experimental Designs Or Analyses: The experimental design and analysis are valid. Mild concerns are raised in a later section of this review, but by and large there are enough results presented to justify the claims made in the paper. I am referring specifically to Table 1 (the primary results table) and the subsequent analysis in Section 4. Supplementary Material: I reviewed sections A, B, and C. I did not thoroughly review the sections regarding implementation details. Relation To Broader Scientific Literature: This paper contributes to the trend of producing neuro-symbolic frameworks that apply deep learning models, usually large language models, in a structured system that executes symbolic operations or algorithms on programs or data synthesized by the model.
This paper supports an argument for this design pattern, which is that certain abilities that are strong in truly large models but weaker in smaller models can be achieved by the smaller models if this design pattern is applied correctly. Essential References Not Discussed: To my knowledge, all relevant references are included. Other Strengths And Weaknesses: Strengths: + Broad range of LMs included in the evaluation. + Detailed methodology makes the approach clear. + Appendix addresses information omitted in the main body. + Experiments and code are made available for reproducibility. Weaknesses: + Only one purely symbolic method is included in the evaluation. + Only one target language is used in evaluation. Other Comments Or Suggestions: n/a Questions For Authors: 1. How much more expensive is NESTER inference in terms of time and energy over the naive CL and L3 models? 2. Is NESTER extensible to compiled languages, or are interpreted languages the only candidate for type inference with NESTER? 3. Why not evaluate on manytypes4typescript as well? It seems that NESTER should at least be extensible to JS/TS. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback. * * * **Q1:** How much more expensive is Nester inference in terms of time and energy over the naive CL and L3 models? **A1:** We conduct additional experiments to evaluate Nester's computational cost in terms of inference time and energy consumption using an NVIDIA RTX 4090 GPU. The results are as follows:

| Method | Time (s) | Energy (J) | Accuracy (%) |
|---------------------------------------------|---------:|------------:|-------------:|
| Naïve-CL | 4910 | 219,317 | 31.3 |
| Nester-CL | 18,396 | 1,365,714 | 68.7 |
| Nester-CL (w/o multi-step reasoning) | 4391 | 228,023 | 65.7 |
| Naïve-L3 | 436 | 20,818 | 42.4 |
| Nester-L3 | 2906 | 233,378 | 63.6 |
| Nester-L3 (w/o multi-step reasoning) | 465 | 22,248 | 59.6 |

Nester incurs higher computational costs, primarily due to multiple invocations of large LMs during inference. However, when multi-step reasoning is removed, the additional overhead is negligible, making the inference time and energy consumption comparable to those of the naive models. In this case, Nester will only locate the relevant code lines (e.g., identifier definitions) from the source code according to the high-level program. It then leverages LLMs along with predefined rules in a single step to infer types without further iteration, still yielding practical results for type inference. This provides an optional solution with fast type inference time. * * * **Q2:** Is Nester extensible to compiled languages, or are interpreted languages the only candidate for type inference with Nester? **A2:** This is a good point. Nester can be adapted to lower-level compiled languages like bytecode and assembly instructions. This will require adjusting the high-level program prompts for LMs and developing language-specific rules for inference, taking into account the existing type systems of the specific languages. We will add a discussion.
* * * **Q3:** Why not evaluate on ManyTypes4TypeScript as well? It seems that Nester should at least be extensible to JS/TS. **A3:** Our current implementation does not support certain JS/TS features due to the limitations of the underlying tools used by Nester. These include JavaScript's dynamic scoping (e.g., eval), prototype-based inheritance, event-driven asynchrony, and TypeScript's hybrid semantics, such as gradual typing and any escape hatches—none of which are present in Python. We will clarify this and discuss how to extend Nester to other languages beyond Python. * * *
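To give intuition for the rule-based side of the rule-plus-LM decomposition described in A1, a few lines of Python's `ast` module can already recover types for constant return values. This is a deliberately minimal toy, not Nester's actual rules or pipeline:

```python
import ast

def infer_return_types(source: str) -> set:
    """Toy rule: map literal return values in a function body to type names.
    Real systems like Nester add dataflow analysis and LM reasoning on top."""
    types = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Return) and node.value is not None:
            try:
                types.add(type(ast.literal_eval(node.value)).__name__)
            except ValueError:
                types.add("unknown")  # non-literal return: needs deeper analysis
    return types

code = """
def f(flag):
    if flag:
        return 1.5
    return None
"""
print(sorted(infer_return_types(code)))  # -> ['NoneType', 'float']
```

Any return of a non-literal expression (e.g., `return x`) falls outside this toy rule, which is exactly where the dataflow analysis and LM-based reasoning of a neuro-symbolic system would take over.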
Summary: The paper presents Nester, a novel neuro-symbolic technique for type inference. Nester decomposes the type inference process into sub-tasks that are aligned with the data and control flows of the input code, encapsulating these into a modular high-level program. This program executes multi-step actions, such as evaluating expressions and analyzing conditional branches, thereby integrating static typing with language models (LMs) to deduce potential types. Implemented for Python, Nester is benchmarked against various models, including symbolic, deep learning-based, and LM-based systems, using the ManyTypes4Py dataset. Although Nester does not surpass Type4Py in exact match accuracy, it excels in both exact match and match to parametric metrics for inferring argument and return types. Additionally, Nester is available as a VSCode plugin, providing users with visibility into the high-level program generated by the LM and insight into the LM's reasoning process. Claims And Evidence: The paper presents an interesting claim regarding whether LMs perform reasoning in type inference. The experiments conducted in Section 2.3 provide convincing evidence to support this claim. Methods And Evaluation Criteria: The proposed method, Nester, makes sense as it builds upon existing work in both rule-based and LM-based approaches. The evaluation criteria are convincing since the dataset, ManyTypes4Py, is commonly used in other studies, and the metrics (exact match and match to parameters) are standard for the type inference task [1], [2]. [1] Peng, Yun, et al. "Generative type inference for python." 2023 38th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2023. [2] Guo, Yimeng, et al. "Generating Python Type Annotations from Type Inference: How Far Are We?." ACM Transactions on Software Engineering and Methodology 33.5 (2024): 1-38.
Theoretical Claims: The paper does not provide specific theoretical proofs. Experimental Designs Or Analyses: The experimental design is reasonable. The paper first presents the overall performance, then separately discusses the neural and symbolic rule-based components, and finally provides a case study. Additionally, it examines the effectiveness of high-level programs. Supplementary Material: I have read the Supplementary Material, Appendix D: Details of High-Level Program Generation. The prompts for high-level program generation seem reasonable. Relation To Broader Scientific Literature: This paper presents a novel neuro-symbolic method for addressing the type inference task, specifically focusing on tackling control flow. Essential References Not Discussed: The related works are sufficiently discussed in this paper. Other Strengths And Weaknesses: Pros: 1. This paper introduces a novel methodology that seamlessly combines static analysis (symbolic) with neural perceptual representations (neural) via LMs to enhance the accuracy and effectiveness of type inference tasks. To the best of my knowledge, this work is the first to apply a neuro-symbolic approach in software engineering, specifically selecting type inference as a suitable application scenario. 2. This paper reveals the limitations of traditional rule-based methods and recent LM-based approaches by providing a motivating example. Motivated by this, the authors introduce a neuro-symbolic system that addresses the shortcomings that neither rule-based nor LM-based methods can fully resolve. Additionally, a practical and easy-to-understand example is used throughout the paper to clearly illustrate its motivation and approach. 3. This paper presents a high-level program interpreter that initially conducts data flow analysis on the target program to construct a data flow graph, which is reasonable and effective. 4.
To validate the effectiveness of the proposed Nester, the authors have conducted experiments and compared Nester against eight other systems, including three types of models: rule-based models, deep-learning models, and LM-based models. Cons: 1. Nester seems to mainly use LLMs for type inference. How does it differ from other LLM-based methods, such as TypeGen? 2. I would assume that LM-based high-level program generation might sometimes produce errors. How does the proposed Nester handle situations when the generated program is not entirely correct? 3. Although the authors provide an example of a high-level program generation in the paper, a specific definition is lacking. Could the author provide a detailed formalization of high-level programs? 4. One limitation of the approach is that Nester is only implemented for very simple code structures, neglecting complex code structures. Are there challenges in implementing it for more complex code structures? Other Comments Or Suggestions: None Questions For Authors: 1. I would like to understand your contributions in the area of type inference. How does your work differ from existing LLM-based approaches, such as TypeGen? 2. Could the author please provide a detailed formalization of high-level programs? 3. In the context of this dataflow graph, what specifically has been made coarse? How does this compare to the original dataflow graph? 4. It seems that Nester employs a symbolic execution method for type inference. How does Nester compare to traditional symbolic execution approaches? 5. Since rule-based methods are designed, there must be concerns about coverage. What is the coverage of your rules? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1:** The key differences between Nester and TypeGen. **A1:** Please see **Q2** of **Reviewer Q6Mu** for a discussion. * * * **Q2:** Could the author please provide a detailed formalization of high-level programs? **A2:** This is provided in Appendix D.1. We will highlight this better. * * * **Q3:** Comparison between the introduced coarse-grained dataflow graph and the original one. **A3:** Thanks. This information is provided in Appendix E.2 in the context of the CGDG (Coarse-Grained Dataflow Graph). Specifically, for an identifier $x_v$, we represent its three hops using three types of nodes: **Def**($x_v$), **User**($x_v$), and **Usee**($x_v$). In essence, our approach differentiates nodes based on their hop distance from $x_v$, categorizing them into three distinct types. Compared to traditional Concrete Syntax Trees (CSTs) and Abstract Syntax Trees (ASTs), the CGDG constrains each identifier to at most three hops, resulting in a more concise and structured graph representation. We will explain this approach in more detail. * * * **Q4:** How does Nester compare to traditional symbolic execution approaches? **A4:** Nester is a static analysis framework for type inference and does not rely on symbolic execution; no symbolic execution traces are generated. Unlike traditional symbolic execution, which explores possible input spaces to produce traces, Nester operates purely on static source code information. We will clarify. * * * **Q5:** What is the coverage of the rules in Nester? **A5:** The coverage of the rules is reported in **Table 3**. Our rules support primitive types (int, float, bool, str, bytes) and container types (list, tuple, dict, set), covering the vast majority of type inference scenarios encountered in real-world code. * * * --- Rebuttal Comment 1.1: Comment: I thank the authors for resolving my concerns.
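For readers unfamiliar with the three-hop structure described in A3, the Def/User/Usee idea can be mimicked over a tiny toy IR of (defined_var, used_vars) statements. This is a hypothetical illustration only, not the paper's CGDG implementation:

```python
# Toy IR: each statement as (defined_var, used_vars). Hypothetical example data.
stmts = [("a", ["b"]), ("x", ["a"]), ("y", ["x"]), ("z", ["x", "a"])]

def cgdg_nodes(stmts, x):
    """Hypothetical three-hop node sets around identifier x (Def/User/Usee)."""
    defs = [i for i, (d, _) in enumerate(stmts) if d == x]   # Def(x): lines defining x
    users = [i for i, (_, u) in enumerate(stmts) if x in u]  # User(x): lines reading x
    usees = sorted({v for i in defs for v in stmts[i][1]})   # Usee(x): vars read by Def(x)
    return defs, users, usees

print(cgdg_nodes(stmts, "x"))  # -> ([1], [2, 3], ['a'])
```

Restricting the graph to these three hop-based node types is what keeps such a representation concise compared to a full CST/AST.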
Summary: The paper introduces a neuro-symbolic approach named NESTER, which integrates language models (LMs) with program analysis for type inference in dynamically typed languages like Python. With the help of LMs, NESTER translates target code into a high-level program composed of predefined API-driven analysis units and thus decomposes type inference into modular sub-tasks guided by dataflow analysis. This program is executed via a neuro-symbolic interpreter that combines static analysis (e.g., coarse-grained dataflow graphs) and LM-based reasoning to infer types. It uses a type recognizer to initially identify types from explicit type declarations in the code. When the type recognizer fails, two language models are involved: a condition-evaluation language model to assess conditional statements in the high-level program and a type-inference language model to infer types based on context and the dataflow graph. Evaluated on the ManyTypes4Py dataset using metrics like Exact Match and Match to Parametric, NESTER outperforms the selected baselines by 3-4% in Top-1 accuracy. ## update after rebuttal The authors did not provide enough evidence to show that the proposed method can perform well for user-defined types or rare types, which are known as a common challenge in the type inference task. Moreover, the authors also did not describe how they constructed the datasets from ManyTypes4Py. Based on this, I have concerns about whether the datasets are cherry-picked from ManyTypes4Py. Claims And Evidence: The authors claim their approach to be the first neuro-symbolic framework for type inference using LMs. The existing work TypeGen also combines static analysis and LLMs. What are the key differences? Methods And Evaluation Criteria: The proposed methods and evaluation criteria are reasonable. Theoretical Claims: NA Experimental Designs Or Analyses: The experimental design is generally acceptable, with some issues that will be discussed in the weaknesses section.
Supplementary Material: No. Relation To Broader Scientific Literature: This paper is an incremental contribution relative to previous work, as the proposed method differs from existing work. Essential References Not Discussed: The references are adequate. Other Strengths And Weaknesses: Strengths: - NESTER effectively bridges neural and symbolic paradigms by leveraging LMs as neural sub-task parsers while retaining control-flow semantics through symbolic program decomposition. - The paper provides empirical analyses of practical challenges in LM-based type inference. For instance, it investigates the pitfalls of in-context learning (ICL), demonstrating that LM accuracy drops by 8.5% when exposed to misleading demonstrations. - The use of LMs to generate high-level programs avoids reliance on language-specific parsers, suggesting scalability to other dynamically typed languages (e.g., JavaScript). Weaknesses: - In the motivating example, it seems more that adding an unreachable statement fools the CodeLlama model (from float/none to bool/float/none). - While NESTER reports strong results on common and Union-like types (e.g., 53.2% accuracy on `typing.Optional`), its evaluation lacks dedicated metrics for complex parametric types (e.g., `Dict[str, List[int]]`) and user-defined types. This contrasts with some previous work [1][2][3], which explicitly benchmarks rare/complex/user-defined types. This omission raises concerns about generalizability, as real-world codebases heavily utilize such types, especially given ManyTypes4Py's known dataset distribution bias on rare types, which was reported in [2]. - The experimental improvement is relatively marginal. References [1] Allamanis, M. et al. "Typilus: Neural Type Hints." PLDI 2020. [2] Wei, J. et al. "TypeT5: Seq2seq Type Inference using Static Analysis." ICLR 2023. [3] Wang, X. et al. "TIGER: A Generating-Then-Ranking Framework for Practical Python Type Inference." ICSE 2024.
Other Comments Or Suggestions: - While the case study highlights NESTER's ability to correct TypeGen's errors (e.g., inferring HttpResponse instead of HttpResponseNotFound), it does not analyze failure modes. - The authors may consider submitting the paper to a software engineering venue, as the key contributions seem to be the high-level program generation and program interpreter parts. Questions For Authors: How does the proposed method perform for complex parametric types and user-defined types? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: **Q1:** How does the proposed method perform for complex parametric types and user-defined types? **A1:** In response to your concern, we provide Nester's performance on **complex parametric types** (depth = 0, e.g., list; depth = 1, e.g., list[str]; depth ≥ 2, e.g., Dict[str, List[int]]) and **user-defined types**, as follows.

| Method | Depth=0 | Depth=1 | Depth>=2 | User-defined |
|------------------|---------|---------|----------|--------------|
| Exact Match | | | | |
| TypeGen-CL | 40.8 | 39.3 | 7.0 | 74.7 |
| Nester-CL | 58.3 | 30.7 | 2.1 | 81.3 |
| TypeGen-L3 | 35.7 | 31.3 | 2.1 | 69.3 |
| Nester-L3 | 55.8 | 26.8 | 2.1 | 78.7 |
| Match to Para. | | | | |
| TypeGen-CL | 68.3 | 69.3 | 35.5 | 75.0 |
| Nester-CL | 79.9 | 75.2 | 58.7 | 81.2 |
| TypeGen-L3 | 51.5 | 60.5 | 31.3 | 69.5 |
| Nester-L3 | 66.3 | 73.8 | 61.7 | 78.6 |

From this table, we can observe that Nester outperforms TypeGen for user-defined types in terms of both the exact match and match to parametric metrics. Furthermore, Nester performs better on complex parametric types in terms of the match to parametric metric. Although Nester does not achieve higher exact match accuracy for depth = 1 and depth ≥ 2, this can be attributed to TypeGen's use of specially designed chain-of-thought operations for recursive type structures, which Nester has not yet incorporated. We believe that integrating such techniques into Nester could further enhance its performance, and we will explain how this can be done later. * * * **Q2:** The key differences between Nester and TypeGen (raised in the **Claims and Evidence** part). **A2:** Nester differs from TypeGen in its approach to type inference. Nester integrates static data and control flow analysis with the capabilities of LLMs. In contrast, TypeGen relies entirely on LLM-generated reasoning, treating code analysis solely as a textual processing process, a less reliable approach that is more prone to hallucination.
Our design also allows Nester to provide an intuitive, high-level program view, offering greater interpretability during type inference than TypeGen. * * * --- Rebuttal Comment 1.1: Comment: I still have concerns about the provided experimental results. From the tables, the results of user-defined types are even better than those of built-in types (e.g., list, set, dict as indicated by the results of the depth=0 column). This is quite strange. Can the authors provide explanations for this? --- Reply to Comment 1.1.1: Comment: Thank you for getting back to us. Our approach achieves better performance on user-defined types by leveraging type hints derived from import analysis, which are included in the LLM prompt (see also Appendix A.2). These hints help the LLM make more accurate predictions for user-defined types. To ensure a fair comparison, we report performance separately for user-defined and built-in types. We will clarify this in the final version. Please let us know if you have any further questions.
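For clarity about the depth buckets used in the rebuttal above, the nesting depth of a parametric type annotation can be measured by bracket nesting. The helper below is hypothetical (not necessarily how the authors computed depth):

```python
def type_depth(annotation: str) -> int:
    """Nesting depth of a type annotation string: 'list' -> 0,
    'list[str]' -> 1, 'Dict[str, List[int]]' -> 2.
    Hypothetical helper, not the authors' code."""
    depth = best = 0
    for ch in annotation:
        if ch == "[":
            depth += 1
            best = max(best, depth)
        elif ch == "]":
            depth -= 1
    return best

for t in ("list", "list[str]", "Dict[str, List[int]]"):
    print(t, type_depth(t))
```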
Summary: This paper proposes Nester, a neurosymbolic tool for performing type inference with LLMs. It uses LLMs to generate high-level versions of the code in question, and determines the return type of a function by analyzing its data and control flow. It outperforms existing SOTA type inference tools for simple as well as complex types. Claims And Evidence: I believe the claims are clear and the paper shows the evidence to support them. Methods And Evaluation Criteria: Yes, the paper presents a good evaluation with relevant baselines for the problem. Theoretical Claims: Didn't verify. Experimental Designs Or Analyses: The experiments seem sound. As for the results, I would have liked a better explanation for why Type4Py performs better in certain cases. The paper says that Type4Py is stronger in handling short code segments, but is it just based on the number of lines of code in the function? There are also cases where CodeT5-Large performs better, so an explanation for that would be nice too. Apart from that, the evaluation section tables use a lot of abbreviations that were not specified earlier (e.g., Arg., Ret., Var. in Table 1), so please add definitions for those (unless I missed them somewhere in the paper). Also, what are the main challenges preventing you from extending support to other datasets? Based on the paper, it should be relatively easy to apply it to any function defined in Python. Supplementary Material: Reviewed until Section C. Relation To Broader Scientific Literature: This paper offers a marked improvement over previous SOTA type-inference methods, and uses techniques relevant to both the type-inference and neurosymbolic fields. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: In general, the problem being tackled here is both important and difficult to solve, and the proposed technique is novel and elegant. The paper is also well explained and clear on its contributions.
I do believe, despite what is written in threats to validity, that Nester should be evaluated over larger LLMs. What is the fundamental reason why Nester couldn't just be run with a larger LLM? It would be interesting to see if Nester + LLama-70B performs better than just LLama 70B, or if the gains made by the neurosymbolic reasoning diminish with larger models. I know the main motivation was to use smaller LLMs, but section 4.2 suggests an openness to evaluating Nester with larger models. The paper also does not have a discussion on limitations of Nester with respect to the input functions themselves. For instance, does Nester only work with individual functions and local context, or can it draw from contexts outside the scope of the function? E.g., if there was a call to some custom function defined in a repository, would Nester assume the return type of the called function or in turn perform type inference for that function? Other statistics about the functions input to Nester would also give a good idea of its abilities, such as the number of lines of code in each input. Other Comments Or Suggestions: 1. Have you thought about evaluating Nester with small language models like Microsoft's Phi family? 2. In Figure 3, for line 6 in the high-level program, why are only T2 and T3 combined? The condition evaluation hasn't been run yet and it doesn't know T1 is unreachable, so should it not assume for now that T1, T2 and T3 should all be combined? Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback. * * * **Q1:** What is the fundamental reason why Nester couldn't just be run with a larger LLM? **A1:** Nester can work with LLMs of various sizes. In this work, we use a modest-size LLM as we target scenarios where users prefer running models locally (on a developer PC or cluster) to avoid sending data to the cloud due to security and privacy concerns. In the final version, we will demonstrate how Nester can be integrated with a larger LLM (70B). * * * **Q2:** Does Nester only work with individual functions and local context, or can it draw from contexts outside the scope of the function? **A2:** Our current implementation performs type inference on individual functions and local context. This can easily be extended by, for example, inlining the callee functions. We will provide a discussion. * * * **Q3:** Other statistics about the functions input to Nester, such as the number of lines of code in each input. **A3:** Agree. The distribution of lines of code in our dataset is as follows.

| Line Range | Count |
|---------------|-------:|
| ≤ 5 lines | 68,860 |
| 6–10 lines | 13,108 |
| 11–15 lines | 2,340 |
| 16–20 lines | 557 |
| > 20 lines | 340 |

We will provide this information. * * * **Q4:** Have you thought about evaluating Nester with small language models like Microsoft's Phi family? **A4:** Thanks for your comments. Based on your suggestion, we evaluated Nester on 100 instances from ManyTypes4Py using top-1 predictions with Phi-3.5-mini-instruct, due to the limited time slot. The results are as follows:

| Method | Arg. | Ret. | Var. | All |
|--------------------|------|------|------|------|
| Exact Match | | | | |
| Naive | 27.3 | 72.7 | 39.4 | 40.4 |
| Nester | 22.7 | 72.7 | 51.5 | 47.5 |
| Match to Para. | | | | |
| Naive | 31.8 | 72.7 | 43.9 | 44.4 |
| Nester | 22.7 | 72.7 | 60.6 | 53.5 |

From this table, we can see that Nester outperforms the naive method in overall accuracy.
However, it does not surpass the naive method in argument (Arg.) and return (Ret.) type predictions. This is likely due to the small number of return and argument types in the 100-instance dataset; if a single inference fails, it can result in a significant deviation. We leave further investigation of Nester on Microsoft's Phi family for future work. * * * **Q5:** In Figure 3, for line 6 in the high-level program, why are only T2 and T3 combined? **A5:** In the paper, we simplified the presentation by combining T2 and T3 in Figure 3. In actual execution, however, the interpreter processes T1, T2, and T3 separately and checks whether the returned variable is empty. For this example, the runtime check confirms that T1 is unreachable. We will clarify. * * *
SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model
Accept (poster)
Summary: The paper challenges the recent paradigm of self-supervised genomic language models (gLMs), which are trained on the DNA sequence alone, and proposes the supervised genomic profile prediction (GPP) task as a more effective pre-training method. The motivation stems from the number and complexity of different factors (epigenetic modifications, chromatin accessibility patterns, etc.) that determine the function of the DNA, in addition to the raw sequence. To overcome the limitations of current genomic profile prediction models (GPPMs) in learning the relationship between DNA sequences across species and genomic profiles, the authors implement Species-Profile Adaptive Collaborative Experts (SPACE), which extends Enformer by prepending a species embedding token and adding two Mixture of Experts (MoE)-based modules: (1) a Transformer encoder whose FFN component is replaced by an MoE layer with a shared pool of experts and a per-species gating function, allowing species-aware weighting of expert contributions, and (2) an MoE-based decoder to decode the genomic profiles. MoE learning is employed with the standard practice of noisy top-k selection to ensure sparsity. In addition, a mutual information loss is used, which encapsulates load balancing between experts (to avoid collapse) and species diversity in the training data, while encouraging a species-aware preference for a specific expert subset. SPACE is evaluated on two benchmarks commonly used in the gLM literature (GUE and Nucleotide Transformer's downstream tasks) and compared to three gLM baselines (HyenaDNA, DNABERT2, Nucleotide Transformer and its variants) and a popular GPPM baseline (Enformer). ## Update after rebuttal The authors have addressed my questions. I thus keep my initial rating and lean to accept this paper.
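The noisy top-k expert selection mentioned in the summary is a standard sparse-MoE construction (Shazeer et al., 2017). The following is a minimal NumPy sketch of that idea for a single token, not SPACE's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_topk_gate(x, w_gate, w_noise, k):
    """Sparse gate weights for one token: noisy logits, keep top-k, renormalize.
    Illustrative sketch of standard noisy top-k gating, not SPACE's code."""
    noise_scale = np.log1p(np.exp(x @ w_noise))          # softplus, per expert
    logits = x @ w_gate + rng.standard_normal(len(noise_scale)) * noise_scale
    topk = np.argsort(logits)[-k:]                       # indices of k largest
    gates = np.zeros_like(logits)
    kept = np.exp(logits[topk] - logits[topk].max())
    gates[topk] = kept / kept.sum()                      # softmax over kept experts
    return gates

d, n_experts, k = 8, 4, 2
x = rng.standard_normal(d)
w_gate = rng.standard_normal((d, n_experts))
w_noise = rng.standard_normal((d, n_experts))
g = noisy_topk_gate(x, w_gate, w_noise, k)
print(np.count_nonzero(g), float(g.sum()))  # k active experts; weights sum to 1
```

A per-species gating function, as described for SPACE, would condition this gate on the species (e.g., by adding a species embedding to x or selecting species-specific gate parameters), so that different species prefer different expert subsets.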
Claims And Evidence: The authors present two main claims: - Supervised GPP is more effective for pre-training than self-supervised gLMs trained on raw DNA sequences - An MoE-based GPPM architecture mitigates the inferior performance of previous models While the motivation for challenging the current sequence-only self-supervised gLM paradigm makes sense, there are several issues that need to be addressed: - While SPACE consistently outperforms the selected baselines on chromatin profile and regulatory element prediction tasks (Table 3), it shows inferior performance on splicing tasks and some prediction tasks in the GUE dataset (Table 3; Tables 9-10), where Nucleotide Transformer (NT) and DNA-BERT2 show better performance. The authors suggest the model capacity of NT as a possible reason for its better performance in some cases, but DNA-BERT2 is on the same order of magnitude as SPACE and employs a simple Transformer encoder architecture. - In addition, the comparison to gLMs is missing recent strong baselines, including GPN-MSA, EVO and Caduceus. The MoE modules are a novel extension to GPPMs (and gLMs) to the best of my knowledge, and the authors also present results on emerging expert capabilities (species-specific and cross-species). However, it is not completely clear if the improvement is gained simply thanks to additional model capacity or thanks to the MoE addition. This is important to understand, as MoE training at scale can introduce complexities due to load balancing and bandwidth bottlenecks.
Methods And Evaluation Criteria: The authors employ well-accepted criteria on common benchmarks for evaluating gLMs. Theoretical Claims: The paper does not contain any theoretical proofs. Experimental Designs Or Analyses: The paper employs standard and well-accepted experimental design and analysis for gLMs, as well as an ablation study that examines the contribution of the main architectural components (Table 3 in the main text, Section E in the supplementary). The Results section is well structured and is supplemented with additional results in the appendix of the paper. Supplementary Material: I have reviewed the supplementary material, giving special attention to sections B-E. Relation To Broader Scientific Literature: The paper makes two main contributions: - Pointing out the difference between natural language, where the sequence typically includes all the information needed to determine its function, and DNA, where additional factors collaboratively determine the phenotype, and thereby challenging the paradigm of many gLMs, which are trained on raw DNA sequences without additional information using self-supervised tasks (next- or masked-token prediction). - Proposing supervised GPPMs as an alternative to self-supervised gLMs and extending a current popular baseline with an MoE architecture, which outperforms several gLM baselines Essential References Not Discussed: The comparative analysis in the paper is missing some recent competitive gLM baselines, including: - Caduceus [1] - Grover [2] - Evo [3] - GPN-MSA [4] [1] Schiff, Y., et al. Caduceus: Bi-directional equivariant long-range DNA sequence modeling. ICML'24. [2] Sanabria, M., et al. DNA language model GROVER learns sequence context in the human genome. Nat Mach Intell 6, 911–923 (2024). https://doi.org/10.1038/s42256-024-00872-0 [3] Nguyen, E., et al. Sequence modeling and design from molecular to genome scale with Evo. Science (2024). DOI: 10.1126/science.ado9336 [4] Gonzalo et al.
"GPN-MSA: an alignment-based DNA language model for genome-wide variant effect prediction." bioRxiv (2024): 2023- Other Strengths And Weaknesses: Strengths: - Presenting an interesting discussion, which challenges popular paradigm on self-supervised gLMs - Introducing a novel MoE-based GPPMs, which outperforms several gLMs on commonly used benchmarks. - The paper is well written and well structured Weaknesses: - Leading gLMs are missing from the comparison and the model does not consistently outperform selected baselines, which weakens the main claim of the paper - Key claim does not address the challenge of collecting the supervised signal. While using the raw sequence alone has its disadvantages it is readily available, and related information such as conservation metrics were shown to greatly improve performance (GPN-MSA). - The exact contribution of MoE (as opposed to simple addition of capacity) is not completely clear Other Comments Or Suggestions: 1. Perhaps it is worth considering a “softer” claim, which points to the importance of adding information beyond the raw sequence (either as a prior or as a supervision signal) as a means to improve current genomic language models. This is supported by the results but does not necessitate a clear and consistent superiority over self-supervised gLMs. In addition, it is worth discussing challenges such as modelling long range interactions. 2. It is worth adding comparison, or at the very least references, to recent gLMs (see previous comments) 3. Minor - running title is incorrect (still using the default template string I think) Questions For Authors: 1. Have you tried simply increasing the capacity of Enformer by adding a decoder module and adding species embedding (instead of MoEs)? 2. Do you have an intuition to why SPACE outperforms the selected gLMs in some tasks and less so in others? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. Additional experiments are presented in https://anonymous.4open.science/r/charts-CDE6. We respond to your concerns below.
## Q1: Performance on Splicing Tasks and GUE Dataset
We clarify two key points:
* SPACE outperforms DNABERT2 across all splicing benchmarks (see Table 7 and Table 10). While SPACE does not yet surpass Nucleotide Transformer (NT) in absolute terms, it achieves substantial gains over Enformer:
  * GUE splice task: 0.8748 (SPACE) vs. 0.8593 (DNABERT2), a 0.0593 improvement relative to Enformer
  * NT splice tasks: average 0.917 (SPACE) vs. 0.851 (DNABERT2), a 0.161 average improvement relative to Enformer
* Our analysis shows that SPACE underperforms DNABERT2 only on a subset of Epigenetic Marks Prediction tasks in the GUE benchmark. This can be attributed to SPACE encountering a novel species (yeast) in these tasks. Notably, SPACE demonstrates significant improvements over Enformer under the same conditions, which provides strong evidence for its generalization capability.
## Q2: Comparison to gLM Baselines
* GROVER: We directly referenced GROVER's results on the GUE dataset. As GROVER used only human-annotated datasets, we limited our comparisons accordingly (Section 1.1 of the supplementary materials). Our results demonstrate significant outperformance over GROVER across all these tasks. Furthermore, we fine-tuned GROVER using the official hyperparameters on the revised NT downstream tasks (Section 1.2 of the supplementary materials).
* Caduceus: All baseline comparisons were derived from Caduceus' reported performance on Genomic Benchmarks (as shown in their paper). Notably, Caduceus did not evaluate the Drosophila task; consequently, the CNN baseline results were extracted from Genomic Benchmarks (shown in Supplementary Materials Section 1.3).
* GPN-MSA: We compared SPACE against GPN-MSA on mutation effect prediction tasks within the BEND benchmark. While SPACE underperforms GPN-MSA, we emphasize that SPACE uses only reference sequences during inference, aligning with standard foundation model evaluation protocols. In contrast, GPN-MSA requires multiple sequence alignments (MSAs) as input, introducing an unfair advantage for this baseline.
* Evo: While we acknowledge Evo's contributions, integrating its results during the rebuttal faced challenges: (1) Evo's 7B parameters exceed the largest baseline capacities (NT-2.5B), and current resources cannot support timely replication; (2) no official benchmark data exists for Evo; (3) Evo focuses on generation tasks and lacks results on mainstream benchmarks.
## Q3: Data Availability Challenges
We acknowledge that raw sequence data is more accessible than genomic profiles. However: (1) multi-omic data is increasingly available, and we've demonstrated SPACE's advantages using only limited data, suggesting future improvements as these resources expand; (2) while profile data is needed during pre-training, downstream applications require only sequence input, since profile understanding is embedded in the model parameters, unlike GPN-MSA, which requires additional MSA input for both pre-training and inference.
## Q4: Increasing Capacity by Decoder and Species Embedding
We appreciate your suggestion. We have incorporated this consideration in our new ablation studies, with the results presented in Section 2 of the supplementary materials. The corresponding analysis is provided in *Q3 of Reviewer oB6U*.
## Q5: Task Performance
Yes, we hypothesize that SPACE demonstrates superior performance on tasks requiring modeling of gene regulatory relationships. However, for purely sequence-based tasks, SPACE may underperform compared to gLMs pretrained on extensive sequence data, as its training data is more limited in this regard.
## Q6: Other Comments
We fully agree with the reviewer's perspective and will soften our claims. Our work primarily validates the potential of supervised pre-training for DNA foundation models, having already achieved SOTA results. Both our results and those of GPN-MSA demonstrate that reference-sequence pre-training alone is insufficient. However, we think that integrating extra information as supervision signals is particularly meaningful, as it enables downstream tasks to rely solely on raw sequences while implicitly embedding biological knowledge into model parameters, whereas GPN-MSA requires additional inputs. In the protein LM field, recent work has similarly adopted MSA information as supervision [1]. [2] has also demonstrated the advantages of using AlphaFold2's supervised-trained backbone for pLMs. Therefore, we believe incorporating additional information as supervision signals provides advantages for foundation models. A detailed discussion on long-sequence modeling is in our response to *Q5 of Reviewer ceSM*.
References:
[1] Evolution-inspired loss functions for protein representation learning
[2] Exploring evolution-aware & -free protein language models as protein function predictors
Summary: This paper introduces a novel approach using genomic profile prediction models (GPPMs) as foundation models in biological sequence modeling. The authors propose SPACE, which leverages a species Mixture-of-Experts (MoE) and an enhanced category operator to improve predictive performance. They conduct ablation studies to analyze the necessity of individual components and demonstrate that SPACE outperforms alternatives on existing benchmarks.
Claims And Evidence:
- Overall, I think this is a solid paper providing new perspectives on using a GPPM as a foundation model.
- The paper claims that SPACE, with its category operator and species MoE, significantly improves structured prediction performance. However, the ablation studies (Table 3) show only marginal performance differences between the full model and its variants, raising concerns about the necessity of these components. The authors do not provide sufficient discussion on the results of these ablations, making it difficult to assess their importance.
- The paper states the category operator decomposes the base prediction into Q profile types based on domain-specific biological knowledge. It is unclear what is meant by "domain-specific biological knowledge" and how it is incorporated.
Methods And Evaluation Criteria: Evaluation is performed using standard benchmarks, but the paper lacks a qualitative analysis of why certain components contribute to performance improvements.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses:
- The ablation studies lack interpretability; Table 3 alone is insufficient to justify the necessity of the different components.
- Additional case studies or qualitative insights would enhance understanding of why specific design choices matter.
Supplementary Material: No.
Relation To Broader Scientific Literature: The key contributions are very specific to the application, and it would be difficult for the broader scientific community to benefit from this paper.
Essential References Not Discussed: Not applicable.
Other Strengths And Weaknesses: Not applicable.
Other Comments Or Suggestions: Not applicable.
Questions For Authors:
- What specific biological knowledge is incorporated into the category operator, and how does it influence model decisions?
- Given the marginal performance differences in Table 3, how do the authors justify the necessity of the species MoE and enhanced components?
---
***Post-Rebuttal***
I'm updating the score from Weak Reject to Weak Accept based on the authors' clarifications.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and for highlighting areas where our manuscript can be improved. We have supplemented the paper with additional experiments, which are presented in https://anonymous.4open.science/r/charts-CDE6. Below, we address your concerns and outline revisions to strengthen the paper.
## Q1: Category Operator
The category operator leverages biological priors derived from the **Enformer dataset**, which groups genomic profiles into four categories based on distinct measurement technologies:
1. Transcription factor (TF) chromatin immunoprecipitation and sequencing (ChIP-seq)
2. Histone modification ChIP-seq
3. DNase-seq or ATAC-seq
4. CAGE tracks
These categories reflect critical biological distinctions. For example, **chromatin accessibility** (DNase/ATAC-seq) and **transcription initiation** (CAGE signals) are functionally linked: chromatin accessibility establishes permissive 3D topological environments necessary for transcription initiation, with transcription start site (TSS)-associated promoters and enhancers overlapping spatially with accessible chromatin domains. This biological rationale guides expert specialization in the decoder: profiles with strong interdependencies (e.g., CAGE and ATAC-seq) share experts, while distinct profiles (e.g., TF ChIP-seq) activate specialized experts.
## Q2: Necessity of the Components
The evaluation of DNA language models (LMs) typically exhibits substantial variance, and no existing DNA LM achieves absolute dominance across all datasets. For instance, DNABERT2 adopted mean ranking as its primary metric in its publication. We note that while the ablation results in Table 3 of our original paper report averaged outcomes, the complete results in Appendix E (Table 11) demonstrate robust improvements over Enformer across nearly all evaluated tasks.
To further validate the architecture, we conducted targeted ablation studies, which demonstrate that:
• The **species-aware MoE encoder** enables cross-species knowledge transfer.
• The **profile-grouped decoder** captures category-specific regulatory mechanisms.
These components are biologically grounded and critical for generalizability, particularly in data-scarce scenarios. To further strengthen our claims, we have included and analyzed new ablation results in Q3.
## Q3: Ablation Studies
Additional ablation studies are documented in Section 2 of our supplementary materials. Section 2.1 presents comprehensive ablation results on Nucleotide Transformer downstream tasks. All ablation models were pre-trained with their hidden dimension parameters halved, as detailed in Section 4.6 of the main text. SPACE demonstrates comparable or superior performance to the decoder-removed variant in 14/18 tasks, and still outperforms in 11/18 tasks even when the decoder is replaced by a parameter-matched MLP. Notably, for regulatory element classification tasks, SPACE achieves better results on 4/5 datasets, with the only exception being the TATA box dataset, which primarily examines TATA-box sequence motifs and does not require complex understanding of regulatory mechanisms. This suggests that while our decoder does not explicitly improve direct chromatin profile prediction accuracy, the MoE architecture implicitly captures cross-profile regulatory interactions by modeling their dependencies. This capability provides critical advantages for tasks requiring an integrated understanding of multiple profiles, such as regulatory element prediction. To further validate cross-species generalization, we evaluated ablation variants on the GUE benchmark (yeast and virus tasks, detailed in Section 2.1). Results reveal that the MLP-based decoder variant shows markedly weaker generalization to novel species compared to SPACE with its enhanced decoder architecture.
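For concreteness, here is a minimal, hypothetical sketch (our own illustration for this thread, not the authors' released implementation) of the kind of species-aware MoE feed-forward block being ablated: shared experts are applied to every token, while a hard gate on the species id adds a species-specific expert.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_expert(d_model, d_hidden):
    """A toy two-layer ReLU feed-forward 'expert' with fixed random weights."""
    w1 = rng.standard_normal((d_model, d_hidden))
    w2 = rng.standard_normal((d_hidden, d_model))
    return lambda x: np.maximum(x @ w1, 0.0) @ w2

class SpeciesMoE:
    """Hypothetical sketch of a species-aware MoE layer: shared experts are
    always active (cross-species knowledge), and one expert per species is
    selected by a hard gate on the species id (species-specific knowledge)."""

    def __init__(self, d_model=16, n_shared=2, n_species=3):
        self.shared = [make_expert(d_model, 4 * d_model) for _ in range(n_shared)]
        self.per_species = [make_expert(d_model, 4 * d_model) for _ in range(n_species)]

    def __call__(self, x, species_id):
        out = sum(expert(x) for expert in self.shared)   # shared pathway
        return out + self.per_species[species_id](x)     # species-gated pathway

layer = SpeciesMoE()
tokens = rng.standard_normal((5, 16))        # 5 tokens with d_model = 16
human_out = layer(tokens, species_id=0)      # same shared pathway,
mouse_out = layer(tokens, species_id=1)      # different species expert
```

In a real model the gate could be learned and soft; this hard-gated toy only illustrates why the shared experts can transfer knowledge across species while the per-species experts absorb species-specific signal.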
## Q4: Qualitative Analysis and Interpretability
In **Section 4.4 of the main text**, we analyze expert specialization patterns:
• **Encoder**: Two shared experts capture cross-species mechanisms, while one species-specific expert adapts to unique genomic contexts.
• **Decoder**: Complex profiles (e.g., TF ChIP-seq) activate dedicated experts, whereas interdependent profiles (e.g., CAGE and ATAC-seq) co-activate shared experts, aligning with their biological relationships.
We agree that deeper case studies could strengthen interpretability and will prioritize this in revisions.
## Q5: Broader Impact
We have re-examined the viability of supervised learning paradigms for DNA foundation models and proposed specifically designed optimizations. This approach effectively addresses the core limitation of traditional unsupervised pre-training: inadequate modeling of biological functional associations. We believe these insights could have profound implications for advancing research in genomic language models (gLMs).
---
Rebuttal Comment 1.1: Comment: ***Re-posting as a rebuttal comment*** Thank you for the detailed and thoughtful response. The supplementary ablation results, particularly those in Appendix E, help clarify the value of the proposed architectural components. I now better appreciate how the species-aware MoE encoder and profile-grouped decoder contribute to improved cross-species generalization and category-specific modeling.
1. **Component Justification**: The expanded ablation study and qualitative analysis in Section 4.4 demonstrate that the architectural choices are biologically motivated and result in consistent gains across diverse tasks. While some improvements are modest, they appear meaningful given the complexity of genomic prediction.
2. **Biological Interpretability**: The explanation of how expert activation aligns with known biological groupings adds useful context.
While more case studies would further strengthen interpretability, I find the current discussion reasonably convincing. The authors' revisions and clarifications have addressed my key concerns. I am updating my score to **weak accept**. --- Reply to Comment 1.1.1: Comment: Dear Reviewer oB6U, We sincerely appreciate the insightful and constructive comments, which have significantly enhanced the overall quality of our manuscript. We are particularly grateful for the positive recognition of our ablation studies, as well as the favorable assessment of both the component justification and biological interpretability of our approach. We confirm that all supplementary experimental results, including additional ablation analyses, will be systematically integrated into the final version of the manuscript. Furthermore, we fully agree that incorporating more comprehensive case studies would substantially improve the interpretability of our findings. Accordingly, we will prioritize this enhancement in our revision process to ensure a more robust and transparent presentation of our results.
Summary: The paper claims that self-supervised pre-training alone on DNA is not a good prior for later generalization to downstream tasks. Instead, this paper revisits genomic profile prediction models (GPPMs), such as Enformer, that are trained directly to predict genomic profiles. The paper proposes a further refinement of the Enformer model, incorporating a species-aware encoder and a profile-grouped decoder. Experimentation shows that the refined version of Enformer surpasses the original in several tasks.
Claims And Evidence: The paper claims that the architecture of current GPPMs is not optimal, in the sense that the encoder is shared across different species and the prediction heads are independent.
Methods And Evaluation Criteria: The paper proposes a new GPPM architecture with (1) a species-aware encoder and (2) a profile-grouped decoder, following an MoE design in both cases.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimentation is performed on 2 benchmarks: the NT downstream tasks and the GUE benchmark. In both cases, the model seems to improve on the performance of Enformer and in some cases becomes SoTA. Although the improvement over Enformer is not very significant in most cases, the experimentation is sufficient. Perhaps the most notable improvement is in the underrepresented mouse experiments.
Supplementary Material: Yes, I reviewed the experimentation details.
Relation To Broader Scientific Literature: The contribution of this paper might help in designing multi-genome foundation models.
Essential References Not Discussed: The paper's references are correct and up to date.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: Are the authors using an initial pretrained model from Enformer, or are you performing the pre-training yourselves?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the constructive feedback. We appreciate your acknowledgment of our work's potential contributions to multi-genome foundation models. We have supplemented the paper with additional experiments, which are presented in https://anonymous.4open.science/r/charts-CDE6. Below, we address your points to clarify and strengthen the manuscript:
## 1. Response to the Question
**Reviewer's Question**: Are the authors using a pretrained model from Enformer or performing pre-training from scratch?
**Response**: SPACE is **trained from scratch** on genomic profile prediction tasks without initializing weights from Enformer. We sincerely appreciate this insightful inquiry, as it suggests that initializing each expert FFN's weights with Enformer's pretrained parameters could be a promising direction for future exploration.
## 2. Significance of Improvements
**Reviewer's Observation**: Although the improvement over Enformer is not very significant in most cases, the experimentation is sufficient. Perhaps the most notable improvement is in the underrepresented mouse experiments.
**Response**: Our improvements yield **significant gains** over Enformer in cross-species generalization and new genomic tasks, demonstrating that SPACE learns more robust DNA representations. As detailed in Table 2 of the main text, we systematically quantify SPACE's cross-taxa generalization superiority over Enformer. This is further empirically validated through the Drosophila melanogaster enhancer classification task (Section 1.3 of the supplementary materials). Notably, on splice prediction benchmarks, SPACE demonstrates marked improvements of **0.0593** on the GUE splice dataset and an average of **0.161** across the NT splice tasks (Table 7 and Table 10 of the main text). These quantitative enhancements provide rigorous evidence of SPACE's enhanced generalization capacity on both evolutionarily divergent species and functionally distinct genomic tasks.
The strong performance on underrepresented species (e.g., mouse) highlights SPACE’s ability to generalize in **data-scarce scenarios**—a strength enabled by its species-aware encoder. By decoupling shared and species-specific regulatory mechanisms via MoE, SPACE effectively adapts to species with limited training data.
Summary: The paper introduces SPACE, a supervised DNA foundation model that predicts genomic profiles (e.g., chromatin accessibility) to learn effective DNA sequence representations. The authors argue that unsupervised DNA foundation models (DFMs) lack biological context, leading to suboptimal generalization. SPACE addresses this via a Species-Profile Adaptive Collaborative Experts architecture, which combines a species-aware encoder (using Mixture of Experts, MoE) and a profile-grouped decoder to capture cross-species and cross-profile dependencies. Key contributions include advocating supervised genomic profile prediction as a superior pre-training objective and proposing a biologically inspired MoE-based architecture.
Claims And Evidence: The claims:
1. Supervised pre-training > unsupervised DNA LMs
2. SPACE's architecture improves over GPPMs
However:
1. Since the method uses supervised pre-training, it is very important to evaluate generalization capability. The authors demonstrate that SPACE is better than Enformer on the yeast and viral genomes in Tab. 2, but it would be more promising to demonstrate generalization across more species on more tasks.
2. The newly proposed model architecture is rather complex compared to transformers or state-space models, while the performance improvement is not so significant.
Methods And Evaluation Criteria:
**Methods**: The MoE design for species-specific and shared features is biologically motivated (species-specific regulatory mechanisms). The profile-grouped decoder leverages functional dependencies (e.g., chromatin accessibility ↔ transcription).
**Evaluation**: Benchmarks (NT, GUE) and metrics (MCC, Pearson correlation) are standard. However,
- Benchmarking on GUE and NT only lags behind the development of DNA LMs. For example, BEND [3] develops more biologically important tasks that many DNA LMs fail.
- BEND [3] also shows that many DNA LMs cannot outperform simple supervised models trained from scratch using a ResNet or CNN. SPACE is also trained in a supervised manner, so it is also important to consider supervised expert models.
- Important baselines like Caduceus [1] and Evo [2] are missing.
[1] Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling
[2] Evo: DNA foundation modeling from molecular to genome scale
[3] BEND: Benchmarking DNA Language Models on biologically meaningful tasks
Theoretical Claims: N/A
Experimental Designs Or Analyses: N/A
Supplementary Material: Reviewed
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: As mentioned above, important baselines like Caduceus [1] and Evo [2] are missing.
[1] Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling
[2] Evo: DNA foundation modeling from molecular to genome scale
Other Strengths And Weaknesses:
- The novelty of this paper is unclear.
- If it is demonstrating that supervised pretraining is a better choice for DNA LMs, the performance improvement is not so significant.
- If it is proposing a new architecture applying MoE to DNA with a cross-species and genomic-profile design for the gate, it is important to focus on comprehensive downstream tasks for benchmarking, rather than simply adopting GUE and NT (see the next point on more benchmarks).
Other Comments Or Suggestions: N/A
Questions For Authors: What do you think is important for developing DNA LMs?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the constructive feedback. We have supplemented the paper with additional experiments, which are presented in https://anonymous.4open.science/r/charts-CDE6. We address each key issue and outline manuscript improvements below.
## Q1: Generalization Across Species and Tasks
We concur that evaluating generalization capacity is paramount for supervised DNA foundation models. To comprehensively address this concern, we performed extended benchmarking on the Genomic Benchmarks dataset, which is the only mainstream benchmark encompassing species beyond those investigated in our previous experiments, including human-or-worm classification and Drosophila enhancer classification. The results are presented in Section 1.3 of the supplementary materials. SPACE achieves robust performance on these tasks, demonstrating cross-species generalizability. While it does not achieve SOTA on the human-or-worm classification, it significantly outperforms Enformer.
## Q2: Architectural Complexity vs. Performance Gain
SPACE's architecture is biologically motivated:
* Species-aware Encoder: Explicitly models shared and species-specific regulatory mechanisms to enable cross-species knowledge transfer.
* Profile-Grouped Decoder: Captures mutual regulatory influences among distinct genomic profiles.
SPACE consistently improves performance across most tasks while maintaining biological interpretability. Additional ablation studies on module effectiveness are provided in *Q3 of Reviewer oB6U*.
## Q3: Missing Baselines
We acknowledge this limitation and address it in *Q2 of Reviewer ueQp*.
## Q4: Novelty and Contribution Clarity
We consider both aspects as key contributions: As the first work to rigorously explore supervised pretraining for DNA foundation models, SPACE achieves SOTA over NT on most benchmarks despite using significantly fewer parameters and less training data. This validates the effectiveness of supervised pretraining for genomic tasks.
While computational limitations and the scarcity of curated multi-species, multi-omics datasets prevent scaling SPACE to the extremes of NT or Evo, our work highlights supervised pretraining as a promising scale-up direction for future research. The Species-Aware Encoder and Profile-Grouped Decoder collectively enable substantial improvements over Enformer, particularly in cross-species tasks and new biological applications (e.g., splice site prediction). The detailed experimental results presented in Appendix D consistently demonstrate that our proposed modules achieve superior performance compared to Enformer. The MoE design also enhances interpretability: expert selection frequencies align with foundational biological principles. While we acknowledge the reviewer's concern about limited downstream benchmarks, we have validated SPACE on additional benchmarks, including Genomic Benchmarks and BEND (see *Q2 of Reviewer ueQp*).
## Q5: Keys for Developing DNA LMs
First, based on the scaling laws observed in NT [1] and Evo [2], expanding both model parameter size and training data scale, following practices in the LLM field, is critical for developing DNA language models. Second, increasing the model's context window length is theoretically essential to prevent information loss by accommodating full DNA sequences. However, current DNA models have not yet empirically validated this advantage. As noted in Evo 1.5 [3], training with 8K context lengths outperforms 131K configurations. While longer input lengths technically allow processing full DNA sequences, they currently lack consistent performance improvements. Additionally, most existing DNA LMs scale training data by aggregating reference genome datasets, which inherently fail to capture mutation patterns. This explains why DNA LMs underperform protein LMs [5] in zero-shot variant effect prediction [4].
Protein LMs benefit from more diverse training data (e.g., multiple sequence alignments), suggesting that incorporating population-level mutation data into DNA LM pretraining, as proposed in [6], holds significant promise. Finally, as emphasized in our work, **genomic profiles** are indispensable for advancing DNA LMs. DNA sequence functionality is regulated through cell-type-specific interactions with diverse genomic profiles (e.g., chromatin accessibility, histone modifications). These profiles collectively shape chromatin environments that govern gene expression patterns across cellular contexts.
References:
1. Nucleotide Transformer: building and evaluating robust foundation models for human genomics
2. Sequence modeling and design from molecular to genome scale with Evo
3. Semantic mining of functional de novo genes from a genomic language model
4. GPN-MSA: an alignment-based DNA language model for genome-wide variant effect prediction
5. Evolutionary Scale Modeling: pretrained language models for proteins
6. Pre-training Genomic Language Model with Human Variants for Better Understanding Functional Genomics
---
Rebuttal Comment 1.1: Comment: Thanks for the authors' efforts. The generalization experiment helps to better clarify the model's performance, and thanks also for adding the BEND benchmark. I would like to encourage the authors to gather all supplementary results into the main paper or the appendix to show comprehensive results across the different benchmarks. Personally, I am still deeply concerned about the complex model design and its influence on the future development of this DNA FM research direction. And thanks for sharing the insights. Overall, I would like to increase my rating from 2 (weak reject) to 3 (weak accept).
---
Reply to Comment 1.1.1: Comment: Dear Reviewer ceSM, Thank you for your thoughtful feedback, which has undoubtedly enhanced the quality of our work. We appreciate your positive response to our additional experiments.
We commit to incorporating all supplementary experimental results, including the BEND benchmark, into the final version of our paper. Regarding your ongoing concerns about our approach, we would like to offer the following clarifications: 1. **On model complexity**: From a model architecture design perspective, our approach primarily builds upon MoE, which aligns with the widely used MoE designs in modern LLMs [1, 2]. We extended this by introducing task-specific gating networks [3] to address the multi-task nature of our problem (where tasks correspond to different species and genomic profiles). While MoE approaches are not yet common in genomic models, they have become a fundamental component in modern LLMs. In contrast, models like AlphaFold2 [4] introduced truly complex, specialized architectures that are quite uncommon in the broader ML community. 2. **On future research direction**: Self-supervised DNA FMs and sequence-to-function models represent the two most critical model types in genomic modeling. While the ML community has actively advanced architectural improvements for self-supervised DNA FMs in recent years, few researchers have focused on architectural innovations for sequence-to-function models (typically emphasizing data expansion instead [5,6]). Given that genomic profile prediction is a complex task spanning multiple species and profiles, designing more suitable architectures is necessary rather than relying on the conventional approach of a unified encoder with parallel profile decoders. Our work demonstrates that bio-inspired modifications to the Enformer architecture can transform sequence-to-function models into competitive DNA foundation models. We hope our research encourages more ML researchers to participate in optimizing sequence-to-function models. **References**: [1] Liu, Aixin, et al. "Deepseek-v3 technical report." arXiv preprint arXiv:2412.19437 (2024). 
[2] "The Llama 4 herd: The beginning of a new era of natively multimodal AI innovation," https://ai.meta.com/blog/llama-4-multimodal-intelligence/ [3] Chen, Zitian, et al. "Mod-squad: Designing mixtures of experts as modular multi-task learners." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [4] Jumper, John, et al. "Highly accurate protein structure prediction with AlphaFold." Nature 596.7873 (2021): 583-589. [5] Chen, Kathleen M., et al. "A sequence-based global map of regulatory activity for deciphering human genetics." Nature genetics 54.7 (2022): 940-949. [6] Linder, Johannes, et al. "Predicting RNA-seq coverage from DNA sequence as a unifying model of gene regulation." Nature Genetics (2025): 1-13.
Geometry-Informed Neural Networks
Accept (poster)
Summary: The paper proposes geometry-informed neural networks (GINNs), a general framework that allows for generating a diverse set of implicit shapes, all satisfying a set of constraints while minimizing an objective function. The paper formulates this problem as a probabilistic generative problem with a novel diversity loss that prevents the model from mode-collapsing. It also provides a set of constraints, including connectedness, smoothness, interface, and design region, that can be used to enforce design intent. ## Post Rebuttal I appreciate the authors' rebuttal addressing most of my concerns. Therefore, I'd like to keep my original rating as it is. However, as discussed by other reviewers dCHc and 9JbR, I agree that the current experimental results do not validate the robustness of the system when optimizing for a wide range of setups. Thus, I also encourage the authors to include more settings in their experiment section for their revised version. Claims And Evidence: The paper claims to introduce GINN, a new type of neural field that is able to optimize a given objective while satisfying a set of constraints. The optimized network is further able to create diverse outputs, all of which satisfy the constraints to some extent. This claim is well supported by the experiments shown in the paper and the supplement. Methods And Evaluation Criteria: The paper proposes a general framework that is able to handle a wide range of modeling tasks. To support this claim, the authors evaluated GINN on multiple tasks, all of which require using GINN on different sets of data. The paper also provides an extensive ablation study, which I found very helpful. Theoretical Claims: I did not find discrepancies in the theoretical claims made by the paper, although I did not go into detail on the derivations presented in the supplement. 
Experimental Designs Or Analyses: The experiment design is comprehensive, with many qualitative and quantitative metrics provided to support the effectiveness of GINN. The ablation study in the supplement is also helpful in analyzing the effect of different losses and components of GINN. Overall I found the experiments to be complete. Supplementary Material: I briefly looked at the supplementary material. I found it very helpful as it provides details of the metrics, experiment setups and additional results regarding GINN. Relation To Broader Scientific Literature: The paper proposed the first data-free generation method that is able to generate a diverse set of outputs given a set of constraints and an objective function. This can potentially open up a new interesting research direction. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: Overall I don't have major concerns regarding the paper. I found its methodology insightful and its experimental results complete. However, the experiments do seem like toy problems without significant practical applications. For example, in designing the engine bracket, often the design criteria cannot be straightforwardly expressed as a differentiable constraint like those presented in the paper (e.g., parameters only obtainable after physics simulation). How would the paper tackle such cases? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback. We would merely like to answer the question concerning the extension of the method to constraints that require running a physics solver. Conceptually, this is straightforward by adopting a solver-in-the-loop [1], which we already do with the persistent-homology (PH) constraint. Loosely speaking, such a solver maps an input geometry to a differentiable scalar objective by discretizing the domain and solving an algebraic problem. This high-level procedure is the same for a PDE solver. Our framework can reuse PDE solvers from classical (differentiable) topology optimization, e.g., [2]. We are currently developing the use of a PDE solver with the proposed framework in a follow-up work. Despite the conceptual similarity, there are technical differences; for example, PDE solvers are generally more expensive, require more careful discretization, and have more complicated loss landscapes, which is the reason we chose to omit this discussion in this submission. [1] Um, K., Brand, R., Fei, Y., Holl, P., and Thuerey, N. Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers. Advances in Neural Information Processing Systems, 2020. [2] Jia, Y., Wang, C., and Zhang, X. S. FEniTop: a simple FEniCSx implementation for 2D and 3D topology optimization supporting parallel computing. Structural and Multidisciplinary Optimization, August 2024.
Summary: This paper studies a novel and interesting problem of learning a generative model of shapes under certain geometric constraints. It tackles this question in a simple yet effective manner: perform constrained optimization with a diversity penalty term. Some applications are included, and some analyses are conducted to show the properties of the proposed framework. ## Update after rebuttal Thanks for the rebuttal! I agree with the authors and other reviewers that the problem and the solution are both interesting. However I keep my opinion that only two use cases are not enough to fully support the soundness of the proposed method. Given that this paper also does not have adequate theoretical contribution, I do expect extensive qualitative evaluations for this paper to meet the bar of ICML. Therefore I would keep my original evaluation. Claims And Evidence: - **Claim 1: Proposes a novel and important problem.** This claim is supported by the survey of related works (Fig 2 and Sec 2). To the best of the reviewer's knowledge, the problem setting itself is novel. The potential application of constraint-driven shape design also makes sense to me. - **Claim 2: The proposed method is effective.** This claim is partially supported. - (**Q1**) This claim is **not** sufficiently supported solely by the existing experiments. Although the authors propose six questions, only two of them are related to the core task (shape generation, problem 5 and problem 6). Although the experiment results look convincing, just working on one or two problems does not show the effectiveness of the proposed method on a wide range of problems. - (**Q2**) There is no theoretical guarantee on the convergence of the proposed optimization process. The task itself can also be ill-defined from a theoretical perspective: how should we evaluate whether the generative model indeed recovers the desired distribution, or, what is the desired distribution in the first place? 
Methods And Evaluation Criteria: The method itself is straightforward and sound. However the evaluation does not make sense to me. As mentioned above, only two of the problems studied are related to the central problem (shape generation). The other four problems, although providing some insights, have no direct relationship with the actual problem (shape generation). Theoretical Claims: There are no proofs. Experimental Designs Or Analyses: Yes, I checked the soundness/validity of the experimental designs. As mentioned above and in Q1 and Q2, the experiment design is not sound. The experiments are so sparse that we can barely make any conclusions from them. **(Q3)** The ablation studies are also sparse. Only Fig. 8 ablates the influence of different network choices, and only for one task. (Fig. 3 cannot be regarded as an ablation study most of the time.) I expect more results to be included, such as performing ablation studies on every single task, and performing ablations on the other techniques used, such as optimization methods. How is the usage of ALM influencing the effectiveness? How useful is the diversity constraint in the wheel and jet engine tasks? I noted Table 2, which serves as part of the ablations I mentioned above; however, no ablations are performed for the other cases, e.g., the wheel. Supplementary Material: Yes, I reviewed all the appendices, though not in detail. I didn't review the code. Relation To Broader Scientific Literature: The contribution of this paper is properly discussed in Fig. 2 and Sec. 2. The closest fields, as mentioned, are neural fields, generative modeling, and theory-informed learning. Essential References Not Discussed: - (**Q4**) Enforcing geometric constraints on neural fields and/or learning generative models for neural fields have been well studied before. 
There is some missing literature to be discussed: - Mehta et al., A Level Set Theory for Neural Implicit Evolution under Explicit Flows - Yang et al., Geometry Processing with Neural Fields - Schwarz et al., GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis Other Strengths And Weaknesses: Beyond the points discussed above, there are several other strengths and weaknesses as detailed below: Other strengths: - The paper is well-written. The writing is coherent with a consistent logical flow, which presents the novel task and proposed solution effectively. - The studied problem is interesting. This problem might be useful in several communities, especially in CAD (computer-aided design). - The properties of the learned generative model are interesting. I find Fig. 6 particularly interesting, as the latent space seems to be organized and interpretable. Other weaknesses: - The proposed system seems to be fragile and hard-to-tune. The authors use different network architectures and different training parameters for different tasks, which indicates the complexity of tuning the optimization process. Other Comments Or Suggestions: N/A Questions For Authors: If we have a limited amount of ground-truth data, how would the learning framework benefit from such expert data? Code Of Conduct: Affirmed. Overall Recommendation: 2
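The diversity penalty term this review's summary refers to is worth making concrete. The following is a generic log-barrier over pairwise distances that guards against mode collapse; it is an illustrative sketch of the idea, not the paper's exact formulation.

```python
import numpy as np

def diversity_penalty(samples, eps=1e-8):
    """Penalize similarity among a batch of flattened shape samples.

    Uses the mean of -log pairwise distances, so the penalty grows sharply
    as any two samples collapse onto each other -- a common mode-collapse
    guard; the paper's actual diversity term may differ.
    """
    n = len(samples)
    penalty = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(samples[i] - samples[j])
            penalty += -np.log(d + eps)
    return penalty / (n * (n - 1) / 2)

distinct = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
collapsed = [np.array([0.0, 0.0]), np.array([1e-4, 0.0]), np.array([0.0, 1e-4])]
print(diversity_penalty(distinct) < diversity_penalty(collapsed))  # True
```

Minimizing this penalty alongside the constraint losses pushes the generator toward a set of mutually distinct feasible shapes.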
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and balanced review. In the following, we address the main questions. ## Q1 The reviewer writes that while the experiments look promising, their scope is insufficient and, in particular, only two studied problems are relevant to the core research question. We believe the first four problems do support the central theme by isolating two key aspects: shape optimization with NN representations (Plateau and mirror) and training a generative model through a diversity constraint (physics and obstacle). We invite the reviewer to view these tasks as ablations of solution multiplicity and shape representations. The reviewer might also find relevant the rebuttal pdf (linked below) and the discussion with reviewer 8unU, where we provide a matrix overview of all problems and tasks and discuss their variety. However, we do agree that we can strengthen the existing problems with more ablations, which we do under Q3. ## Q2 While the reviewer is right that our method does not come with convergence guarantees, this is the case for most adjacent methods in machine learning, physics-informed learning, and topology optimization. Convergence is a complex interplay between the model, the problem, and the optimizer; we do not foresee that such guarantees are easily obtained for the investigated non-linear differential losses on moving domains. Concerning the desired distribution, our best mental model is the Boltzmann distribution induced by the objective function over the feasible set. As described in Section 2.3, this distribution is over a function space, which prohibits the direct use of Boltzmann generators (BGs) and motivates the use of an explicit diversity term that plays the role of the entropy term in BGs. Since this is an unexplored setting, we also call for further investigation under limitations and future work. 
## Q3 We have performed additional studies ablating each constraint for three tasks (obstacle, wheel, bracket) with the metrics collected in the [rebuttal pdf](https://github.com/ginn-rebuttal-icml-2025/ginn-rebuttal/blob/main/GINN_rebuttal_icml_2025.pdf). The majority of these results quantitatively confirm the obvious roles of the constraints (e.g., interface losses improve interface metrics, etc.). The diversity ablation reduces the diversity in the bracket and wheel solutions roughly five and ten-fold, respectively. Notably, diversity and smoothness losses are competing (less diversity allows for lower smoothness). The reviewer might also be interested in how these ablations impact the computational speed discussed in the response to reviewer 9JbR. We also include the ablation of the augmented Lagrangian. For the simpler problems, the constraint satisfaction is similar. For the more complex bracket problem with diversity and smoothness terms, we find that ALM finds more diverse shapes with more latent structure. This is best summarized visually ([this 3x3 plot](https://postimg.cc/SJyZfVx7) of ALM ablation can be juxtaposed with the 5x5 plot in Figure 6): while the overall structure of the shape is learnt, we see almost no interesting diversity or latent structure, and there are floaters and spurious surface features. We believe this is because ALM helps provide a stronger training signal throughout the training by adaptively increasing the weight of each loss. For some losses, the Lagrangian weight changes over two orders of magnitude. ## Q4 We thank the reviewer for the additional references. ## Other comments The reviewer writes that the “system is fragile and hard-to-tune”. We must briefly comment that the wheel and bracket experiments share the exact same setup. While we chose simpler models for simpler tasks, all experiments (including the generative PINN task which is very sensitive to initialization) can be repeated with WIRE. 
However, in under-determined problems (those without objective, specifically, obstacle), the inductive bias of the model impacts the identified solutions, so the reviewer is right that there is a setup-dependence in this sense. Correspondingly, the low-frequency biased softplus MLP provides smooth and visually pleasing solutions, motivating such a choice. The reviewer might appreciate the [following plot](https://postimg.cc/DJhtq1nS) illustrating the role of inductive biases and diversity in connection to Q3. We are open to adding this discussion and experiments to the manuscript if the reviewer suggests this would strengthen our work. Lastly, the reviewer asks how we could incorporate ground-truth data. Conceptually, this is simple by adding a supervision loss. Using auto-decoder style training, the model should learn to organize the ground-truths in the latent space (imagine placing ground-truth shapes in Figure 6). Alternatively, the latent positions of the ground-truths can be enforced with a specific interpolation in mind (e.g., vertices of a hypercube). We believe this is a very exciting future direction. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal! I agree with the authors and other reviewers that the problem and the solution are both interesting. However I keep my opinion that only two use cases are not enough to fully support the soundness of the proposed method. Given that this paper also does not have adequate theoretical contribution, I do expect extensive qualitative evaluations for this paper to meet the bar of ICML. Therefore I would keep my original evaluation.
Summary: Paper introduces geometry-informed neural networks — a gradient-based way to optimize a neural implicit function (SDF) without data, based on local and global geometric constraints. Authors propose a set of practically relevant constraints (particular design region, particular interface, connectedness, smoothness, topology in the form of Betti numbers) and formulated them in a differentiable manner to be used in the proposed optimization. Authors demonstrate the validity of the approach via a set of qualitative and quantitative evaluations. =====POST REBUTTAL===== As I said in my initial review, I think this is a very promising work that might lead to a lot of interesting follow-up work as well. Authors have addressed some of my concerns in the rebuttal: time ablation with respect to different constraints, question about sharp features. I maintain my rating of 'weak accept' and I am willing to champion this paper if needed. I am not going to raise my rating because I agree with reviewer dCHc on the notion that "the proposed system seems to be fragile and hard-to-tune" but I think that this is something to be studied in follow-up work. I strongly recommend including additional discussion in the revised version's supplementary material. Claims And Evidence: - Proposed method can be trained without data to satisfy design constraints and produce diverse shapes. This claim is supported by the method design and experimental evaluation with different sets of constraints. - Proposed method is applicable to a variety of problems: geometric optimization, engineering design, physics. This claim is supported via toy examples in the respective domains. Overall, I find the claims and provided examples convincing (see more detailed comments in other sections). 
Methods And Evaluation Criteria: - The method is evaluated qualitatively in the main paper using several toy examples; - Additional quantitative evaluation is provided in the supplementary (Tables 2 and 3), where the authors introduce a set of metrics to evaluate adherence to the constraints. - Design region adherence is evaluated via the ratio of volume/surface area outside the design region, normalized by the total volume/area; - Fit to interface is computed via the one-sided Chamfer Distance between the GT interface and the interface area; - Connectedness is evaluated on the resulting shape via connected component analysis and Betti numbers. - Diversity is computed as the variance across pairwise Chamfer distances across the generated set. - Smoothness is evaluated via Monte Carlo estimation of surface strain. Overall, I find the measures well designed for the proposed tasks, and the quantitative results look good to me. However, I think that the qualitative evaluation is limited — almost all qualitative examples in the paper use one particular set of constraints (Figures 3 and 6). The paper might benefit from additional qualitative examples (and corresponding quantitative results). Theoretical Claims: - The method optimizes a neural implicit function (SDF) via the adaptive augmented Lagrangian method to satisfy a set of constraints (examples are given in Table 1). These losses are formulated in a differentiable manner in the supplement. I have checked the losses and differentiable formulations and I haven't found any issues with them. However, I think the paper should discuss a couple of important theoretical considerations: - To me, it looks like the proposed method can only produce smooth surfaces. This is an important limitation because in CAD design surfaces are often non-smooth; - Authors mention that wall-clock time for the method is 10 min per single shape. Since qualitative evaluation in the paper is limited, I assume it is the same set of constraints as depicted in Figure 3. 
I think the paper might benefit from an additional ablation that investigates the trade-off between the number of constraints and optimization speed. Experimental Designs Or Analyses: Almost all experiments in the paper are done using one set of constraints (Figure 3). It is not clear how well the method generalizes to a diverse set of constraints. This is the main reason for my rating. The paper could heavily benefit from additional examples derived from diverse sets of constraints. Supplementary Material: I have checked the supplementary and refer to it in my review. Relation To Broader Scientific Literature: This paper can be viewed as constrained optimization of a neural implicit function (SDF) via a set of differentiable, practically relevant constraints. The main theoretical contribution here is the observation that a neural implicit function can be optimized via a set of differentiable constraints instead of a sampled ground-truth implicit function. The main practical contribution (based on a limited set of examples) is that this optimization leads to plausible shapes. Essential References Not Discussed: Mariem Mezghanni, Malika Boulkenafed, Andre Lieutier, Maks Ovsjanikov; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 9330-9341 Mezghanni M, Bodrito T, Boulkenafed M, Ovsjanikov M. Physical simulation layer for accurate 3d modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022 (pp. 13514-13523). Terzopoulos D, Witkin A. Physically based models with rigid and deformable components. IEEE Computer Graphics and Applications. 2002 Aug 6;8(6):41-51. Yang L, Kim B, Zoss G, Gözcü B, Gross M, Solenthaler B. Implicit neural representation for physics-driven actuated soft bodies. ACM Transactions on Graphics (TOG). 2022 Jul 22;41(4):1-0. Schirmer L, Novello T, da Silva V, Schardong G, Perazzo D, Lopes H, Gonçalves N, Velho L. Geometric implicit neural representations for signed distance functions. 
Computers & Graphics. 2024 Dec 1;125:104085. Zhao M, Wang Y, Yu F, Zou C, Mahdavi-Amiri A. SweepNet: Unsupervised Learning Shape Abstraction via Neural Sweepers. In European Conference on Computer Vision 2024 Sep 29 (pp. 302-320). Cham: Springer Nature Switzerland. Pujol E, Chica A. Rendering piecewise approximations of SDFs through analytic intersections. Computers & Graphics. 2024 Aug 1;122:103981. Xiang H, Jianbing S, Wang J, Ji M, Zhang C. Boundary Adaptive Physics-Informed Neural Network for the Fluid Flow Around Circular Cylinders with Complex Boundary Conditions. Available at SSRN 5110040. Other Strengths And Weaknesses: + Authors provided code - Examples in the paper could be more diverse Other Comments Or Suggestions: I do think that the paper is somewhat poorly titled: any neural implicit function that was trained on some shapes is geometry-informed in some sense. I think that the name should be something like “Constraint-based optimization of neural implicit functions”, but this does not hold weight in my decision. Questions For Authors: - Am I correct that the method is only capable of producing smooth surfaces? If yes, this should probably be clearly stated in the paper. - How well does the method scale with respect to the number of constraints? Imagine that we add constraints the same way it is done in Figure 3. Does optimization run the same amount of time for all sets of constraints? Or does convergence slow down when we increase the set of constraints? - Do you have additional qualitative examples beyond Figure 3? Especially ones that have thin wire-like constraints (e.g. a shape that contains a thin spring-like shape as interface; or swiss-cheese interfaces with a large number of small and large holes). - Are there any failure cases for your method that you have observed? Code Of Conduct: Affirmed. Overall Recommendation: 3
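The one-sided Chamfer distance named in this review's evaluation criteria has a simple brute-force form. The sketch below (O(nm) nearest-neighbor search) is illustrative only, not the paper's implementation.

```python
import numpy as np

def one_sided_chamfer(src, tgt):
    """Mean distance from each point in src to its nearest neighbor in tgt.

    Note the asymmetry: one_sided_chamfer(a, b) != one_sided_chamfer(b, a)
    in general, which is why the metric is called 'one-sided'.
    """
    d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=-1)  # (|src|, |tgt|)
    return float(d.min(axis=1).mean())

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [3.0, 4.0]])
print(one_sided_chamfer(a, b))  # 0.5: nearest neighbors at distance 0 and 1
```

For large point clouds, the pairwise distance matrix is usually replaced by a k-d tree query, but the metric itself is unchanged.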
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review. In the following, we will address the questions. ## Sharp features The method is not limited to smooth surfaces. The smoothness of the surface is inherited from the NN used to represent the shapes. The smoothness properties of MLPs are governed by the activation function. We use at least $C^2$ activations to have twice continuously differentiable surfaces, for which the Hessian and, hence, curvatures are well-defined. If this is not required, sharp features can be easily represented with, e.g., ReLU MLPs. We will also add that the log-surface-strain objective promotes sharp features that can be approximately represented with $C^2$ NNs, such as WIRE (see top row in Figure 1 or [this image](https://postimg.cc/75Pn1ScB) for a close-up). ## Impact of different constraints Per the reviewer’s inquiry, we have done a more comprehensive constraint ablation study collecting the time per iteration and metrics. The timings reported below are for the 3D jet engine bracket problem, which has the highest number and complexity of constraints. The other experiments show similar behaviour, so we omit these for brevity but can include them upon request. Most constraints have little effect on the time per iteration. Of the total runtime, the surface strain takes ~10% (due to the Hessian) and the PH solver ~75% (expensive multi-processed CPU task). To pre-empt a potential confusion – the runtime *increases* when ablating the eikonal constraint as it destroys the geometric regularity of the implicit function, hindering efficient surface point sampling that usually takes ~15% of total time. We use an A100-SXM GPU, and the peak memory usage for a batch of 9 shapes does not exceed 16 GB. These two losses also have a strong impact on the iterations needed. 
As we describe around line 430, adding the smoothness loss increases the number of iterations roughly two-fold, and adding the diversity increases it roughly five-fold. The [rebuttal pdf](https://github.com/ginn-rebuttal-icml-2025/ginn-rebuttal/blob/main/GINN_rebuttal_icml_2025.pdf) also collects the impact of these ablations on the metrics. Overall, the biggest effect is not so much the number of constraints, but rather losses that are ill-conditioned as described in “Limitations and Future Work”. This conditioning issue is also known in the PINN literature, and its mitigation is an open and active research topic. | Ablated constraint | ms/it | |---|:---:| | (None) | 260 | | Eikonal | 291 | | Interface | 264 | | Design region | 262 | | Prescribed normal | 265 | | Diversity | 265 | | Surface strain | 230 | | Surface strain & Diversity (no boundary point sampler) | 187 | | Connectedness | 94 | ## Additional examples We would first like to clarify a potential misunderstanding that “almost all qualitative examples in the paper use one particular set of constraints (Figures 3 and 6)”. Figures 1, 3, and 6 illustrate the same jet engine bracket problem for presentation continuity. However, we also solve five other problems illustrated in Figures 4 and 5 and discussed in Section 4.2. We have prepared a visual overview of the problem-constraint matrix in the [rebuttal pdf](https://github.com/ginn-rebuttal-icml-2025/ginn-rebuttal/blob/main/GINN_rebuttal_icml_2025.pdf). If this does not answer the reviewer’s question for “additional qualitative examples beyond Figure 3”, we kindly ask for clarification. ## Failure cases A few failure cases are discussed in the Appendix of the submission. For example, Figure 10 shows the bracket trained with softplus, SIREN, and WIRE, demonstrating the importance of the model’s inductive bias. 
Around line 329, we also discuss the impact of the surface sampling strategy, which should ensure that point samples are distributed uniformly and that high-curvature regions are not missed. We observed that naive sampling strategies lack these properties and lead to ridge artifacts that contain all the curvature but are missed during the optimization. We provide a basic illustration [here](https://postimg.cc/s1jLspm2). We discuss another failure mode around line 415: the emergence of latent space structure is sensitive to the diversity constraint. One could also view the ablations as failure cases. Appendix B.2 and the additional ablation study now contain more examples. We are glad to produce additional figures illustrating these failure modes if the reviewer suggests this would further strengthen the submission. --- Rebuttal Comment 1.1: Comment: I am glad that the authors found my review 'detailed' if not 'balanced', 'positive' or 'considerate'. I also appreciate the additional effort that went into the rebuttal, especially the clarification on the time impact of different constraints and the illustration of failures of the proposed method. As I said in my initial review, I think this is a very promising work that might lead to a lot of interesting follow-up work as well. So, after the rebuttal, I maintain my rating of 'weak accept' and I am willing to champion this paper if needed. I am not going to raise my rating because I agree with reviewer dCHc on the notion that "the proposed system seems to be fragile and hard-to-tune", but I think that this is something to be studied in follow-up work. I strongly recommend including additional discussion (smoothness concerns, constraint time analysis, additional failure cases) in the revised version's supplementary material.
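For context on the eikonal constraint discussed in the rebuttal's runtime ablation: a signed distance function satisfies ||∇f|| = 1, and the eikonal loss penalizes deviation from that. Below is a minimal finite-difference check of the residual; the sampling and the `eikonal_residual` helper are illustrative, not the paper's scheme.

```python
import numpy as np

def eikonal_residual(f, pts, h=1e-4):
    """Mean squared deviation of ||grad f|| from 1 at sample points.

    Gradients are estimated by central finite differences. A true SDF
    satisfies the eikonal equation ||grad f|| = 1, so this residual is
    the kind of penalty used to keep an implicit field SDF-like.
    """
    res = []
    for p in pts:
        g = np.array([(f(p + h * e) - f(p - h * e)) / (2 * h)
                      for e in np.eye(len(p))])
        res.append((np.linalg.norm(g) - 1.0) ** 2)
    return float(np.mean(res))

sphere_sdf = lambda p: np.linalg.norm(p) - 1.0  # exact SDF of the unit sphere
pts = np.random.default_rng(0).normal(size=(32, 3))
print(eikonal_residual(sphere_sdf, pts))  # ~0 for a true SDF
```

This also illustrates the rebuttal's point that ablating the eikonal term destroys the geometric regularity of the field: without it, nothing ties the implicit function's gradient magnitude to actual surface distance.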
Summary: The paper introduces Geometry-Informed Neural Networks (GINNs), a framework that trains shape-generative neural fields without relying on data. Instead, the method leverages user-specified design constraints (e.g. connectivity, smoothness, and topology) to drive the generation of feasible shapes. A key novelty of the paper is the explicit incorporation of a diversity constraint to avoid mode collapse, which is essential when multiple solutions are desired. The framework is demonstrated on a range of problems, including real-world engineering design challenges such as wheel design and jet-engine bracket design. Claims And Evidence: Most claims made in the submission are supported by clear and convincing evidence. However, I have some concerns about the overall applicability to real-world problems. Methods And Evaluation Criteria: The proposed methods make sense for the problem. Theoretical Claims: The theoretical claims are valid. Experimental Designs Or Analyses: The experimental designs are valid but limited to a small range & scale of applications. Supplementary Material: Supplementary material contains code of the method, which I did not inspect in detail. Relation To Broader Scientific Literature: The method is likely to be significant and inspire many downstream applications. Essential References Not Discussed: References are adequate. Other Strengths And Weaknesses: Strengths: - The paper tackles the important problem of shape generation in domains lacking large datasets. GINN is a new paradigm that does not require any training data at all. The pipeline learns neural implicit fields using only analytical objectives and constraints provided by the user. This approach draws inspiration from physics-informed neural nets, and is novel in the field of 3D generation. - A major contribution is introducing an explicit diversity term in the optimization. By penalizing similarity among solutions, the framework yields diverse shapes that all meet the design criteria. 
This addresses mode collapse and is particularly valuable for design tasks where multiple viable solutions are desired. - The method allows fine-grained control of shape properties through constraint formulation. The experiments show GINNs can enforce geometric constraints like connectivity, smooth surfaces, and topological features such as a specified number of holes in the shape. This level of control demonstrates the flexibility of the framework to incorporate domain-specific knowledge and design intent. Weaknesses - *Limited Evaluation of Generality*: While GINN is demonstrated on several tasks, the set of problems is still relatively narrow and somewhat tailor-made. It’s unclear how generally the method would perform on other shape domains or more complex design scenarios beyond these settings. The paper establishes its own metrics for each constraint (which is understandable, since no standard benchmarks exist), but this makes it difficult to judge if GINN’s success extends broadly. A more convincing evaluation would include additional diverse tasks or show that the method scales to higher-complexity shapes. - *Lack of Baseline Comparisons*: The study does not compare GINN against existing approaches, largely because “data-free shape-generative modeling is an unexplored field with no established baselines”. However, the absence of any baseline or alternative (even a simplified classical method) leaves the improvements unclear. For instance, in the engineering bracket task, the authors did not compare to classical topology optimization methods or other generative strategies, making assessment more difficult. - *Computational Cost*: Training GINNs appears computationally expensive and potentially impractical. The paper reports that even a single shape optimization (for the jet engine bracket) requires 10k iterations (1 hour), and incorporating diversity (multiple shapes) jumps to 50k iterations (72 hours) on a single GPU. 
Such high time costs raise concerns about scalability for real-world design use-cases. Additionally, managing many constraints (up to seven losses) is complex; while the adaptive augmented Lagrangian helps balance them, some losses (e.g. smoothness, diversity) remain hard to optimize and could significantly slow down convergence. The paper would be stronger if it discussed or mitigated these efficiency issues, or at least compared computational load with alternative methods.

Other Comments Or Suggestions: Overall the method is clean, straightforward, and could inspire further studies of PINN-style approaches to the problem of 3D generation. The related work section is extremely well-written and informative. However, the authors are encouraged to incorporate more comprehensive and generalized experiments under a more complex setting, ideally with fair baseline comparisons. The authors could consider trimming the related work a bit to fit in such experiments.

Questions For Authors: See Other Strengths And Weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
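To make the diversity term discussed in this review concrete, here is a minimal, hypothetical sketch of a pairwise-similarity penalty over candidate shapes, each represented by its implicit-field values at a shared set of probe points. This is illustrative only; the paper's actual distance measure and loss formulation may differ.

```python
import math

def diversity_penalty(field_samples):
    """Mean pairwise cosine similarity between candidate shapes, where each
    shape is a vector of implicit-field values at shared probe points.
    Lower values indicate a more diverse set of solutions (a sketch of the
    idea only, not the paper's actual diversity constraint)."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb + 1e-12)

    n = len(field_samples)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cos(field_samples[i], field_samples[j]) for i, j in pairs) / len(pairs)
```

In a training loop, such a penalty would be added (or enforced as a constraint) alongside the geometric losses so that collapsed, near-identical solutions are discouraged.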
Rebuttal 1: Rebuttal: We thank the reviewer for their considerate review. In the following, we address the three weaknesses.

## Task diversity

While we agree that more experiments are almost always better, we would like to present a matrix of problems and constraints that illustrates the variety of considered settings, available in the [rebuttal pdf](https://github.com/ginn-rebuttal-icml-2025/ginn-rebuttal/blob/main/GINN_rebuttal_icml_2025.pdf), together with an additional ablation study. Not captured by this matrix are also the differences in domain dimension, shape representation, and problem symmetries (e.g., the jet engine bracket has no symmetries, making this a very general problem). Since this is an unexplored research setting, unfortunately, there does not exist a catalogue of standard problems, so defining each new problem in addition to the main research question is a considerable effort. However, we agree this is necessary to develop this research direction further and hope this is addressed in future work.

## Baselines

While the reviewer and we seem to agree that a fully satisfactory comparison to a baseline is difficult to produce, we have followed the reviewer’s advice and implemented at least partially comparable baselines.

### Topology optimization

As suggested, we consider classical TO, specifically FeniTop [1] which implements the standard SIMP method with a popular FEM solver. We define a TO problem that is as similar as possible, applying a diagonal force to the top cylindrical pin interface and allowing a 7% volume fraction in the same design region. The other interfaces are fixed in the same way. The shape compliance is minimized for 400 iterations on a 104x172x60 FEM grid (taking 190 min on a 32 core CPU to give a sense of runtime, although a fair timing comparison requires a more nuanced discussion). The produced shape is visualized [here](https://postimg.cc/1n4KD0QN).
Then, we compute the surface strain (the objective we use) for this TO shape and, conversely, the compliance for a GINN shape (Section 4.2; illustrated in the penultimate column of Figure 3). Unsurprisingly, both shapes perform best at the objective they were optimized for while also satisfying the constraints up to the relevant precision. At the very least, this serves as a sanity check and confirmation of the constraint satisfaction.

| Metric | Topology optimization | GINN |
|---|---|---|
| ↓ Connectedness (0-th Betti-number) | 1 | 1 |
| ↓ Interface (Chamfer distance) | 0.00 | 0.00 |
| ↓ Design region (Volume outside) | 0.00 | 0.00 |
| ↓ Curvature | 442 | **144** |
| ↓ Compliance | **0.99** | 0.344 |

### Human-expert dataset

A unique aspect of GINN is the data-free shape-generative aspect. A diversity comparison to classical TO is trivial since it is inherently limited to a single solution, and its diversity would be 0. Instead, we use the simJEB [2] dataset to give some intuitive estimate of the diversity of the produced results. The dataset originates from a design challenge on a related problem described in Section 4.2. The shapes in the dataset are produced by human experts, many of whom also used topology optimization. To compute the diversity metric, we sample 196 clean shapes from the simJEB dataset and 14x14 equidistant samples from the 2d latent space generative GINN model. Even though these sets are not directly comparable as they optimize for different objectives, this comparison indicates that GINNs can produce diversity of the same or larger magnitude than a dataset that required an estimated collective effort of 14 expert human years [2].

| Metric | SimJEB | GINN |
|---|---|---|
| ↑ Diversity | 0.099 | **0.167** |

## Computational cost

We thank the reviewer for highlighting the runtimes, which made us realize that we have reported an outdated runtime in the submission.
The up-to-date runtimes on an A100-SXM GPU are 30 min for 10k iterations of a single-shape model and 4.5 h for 50k iterations of the generative model, not 72 h as mistakenly reported in the submission. This large discrepancy was due to previously logging plots at every training iteration. Since reviewer 9JbR also asks about runtimes, we invite the reviewer to read our other rebuttal, where we report a more comprehensive constraint ablation study. In brief, the two difficult losses are surface strain and diversity, both in terms of runtime and iterations needed. Respecting the character limit, we are happy to continue with a more detailed discussion upon the reviewer's request.

[1] Jia, Y., Wang, C., and Zhang, X. S. FEniTop: a simple FEniCSx implementation for 2D and 3D topology optimization supporting parallel computing. Structural and Multidisciplinary Optimization, August 2024.
[2] Whalen et al., SimJEB: Simulated Jet Engine Bracket Dataset, Computer Graphics Forum 2021.
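For readers unfamiliar with the constraint-handling scheme mentioned in this thread, here is a generic, textbook augmented-Lagrangian loop on a 1D toy problem. This is not the paper's adaptive variant; all names and numbers are illustrative.

```python
def augmented_lagrangian_1d(f_grad, c, c_grad, x0, mu=10.0, outer=20, inner=200, lr=0.01):
    """Textbook augmented-Lagrangian method for min f(x) s.t. c(x) = 0:
    inner gradient descent on L = f + lam*c + (mu/2)*c^2, followed by the
    outer multiplier update lam += mu * c(x). Illustrative sketch only."""
    x, lam = x0, 0.0
    for _ in range(outer):
        for _ in range(inner):
            # d/dx [f + lam*c + (mu/2)*c^2] = f' + (lam + mu*c) * c'
            g = f_grad(x) + (lam + mu * c(x)) * c_grad(x)
            x -= lr * g
        lam += mu * c(x)
    return x
```

For example, minimizing f(x) = x^2 subject to x - 1 = 0 drives the iterate to the constrained optimum x = 1 (with multiplier converging to -2).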
Protein Structure Tokenization: Benchmarking and New Recipe
Accept (poster)
Summary: This study considers the problem of protein structure tokenization, which it defines as the problem of distilling protein 3D structure into discrete or continuous representations. The authors introduce two main contributions: 1) a set of benchmark datasets for evaluating protein structure tokenization models and algorithms and 2) AminoAseed, a new method for protein structure tokenization that leverages codebook gradient updates.

Claims And Evidence: Most of the claims made throughout the paper are well supported by the evidence provided. The explanation about the datasets used is convincing and the results demonstrating that their new method outperforms the state-of-the-art are also convincing. The ablation study is both complete and convincing.

The paper benchmarks two distinct qualities: Distinctiveness of the vectors in the codebooks and Efficiency of the codebook utilisation. Intuitively, these two values should be related to each other. A more distinct codebook should lead to more efficient utilisation. However, the empirical results show the opposite: Foldseek, which has the worst curve shape for the distinctiveness evaluation, has much better performance on the Efficiency benchmark. This discrepancy suggests to me that there is some problem in how either efficiency or distinctiveness is measured.

I would argue that the claim in the abstract that ESM3 is "the leading model" is not supported by the evidence in the benchmark results provided, where MIF is the leading model. Maybe this should be rephrased to "the leading VQ-VAE-based model".

Methods And Evaluation Criteria: One of the main focuses of the paper is to define the datasets and evaluation criteria to benchmark structure tokenization methods. I think that the choices made by the authors make sense and are well reasoned.
Benchmark construction
---
- Effectiveness: the datasets emphasize the detection of local substructures with functional relevance (binding site, catalytic, conserved site prediction), which greatly overlap with each other, and only one of global information (remote homology); however, I generally agree with this choice as they are the closest tasks to real-world data.
- Sensitivity: the general idea makes a lot of sense and is quite reasonable. It assumes that TM-score is the gold standard, which it is not; it is the state-of-the-art structure search algorithm, and I think an acknowledgement of this limitation would be useful. On the other hand, the target value of representation similarity is computed via dynamic programming, but it is not clearly specified whether the alignment is global (optimal) or local (suboptimal). It should be clearly stated both in the main text and the appendix.
- Distinctiveness: I find this benchmark/metric the one with the weakest foundation, as it makes the assumption that cosine similarity is a good measurement of the distinctiveness of the vectors, which I do not think is properly justified. This might be the source of the discrepancy with efficiency.
- Efficiency: I think this evaluation makes sense.

New tokenization method
---
- The study of optimal codebook size and dimension is quite informative; the methodology is simple and sound.
- The finding of discretised representations obtaining similar performance to continuous ones is intriguing.
- The finding that structural representations are less robust than sequence representations is interesting and shows that there is still room for improvement in the field on how to combine these types of representations.

Theoretical Claims: There are no significant theoretical claims. The study is mostly interested in practical application of well established concepts.

Experimental Designs Or Analyses: Yes, I think that the experimental design is sound.
I've already discussed some ambiguous experimental descriptions that I think should be clarified. Otherwise, I think the experimental design is sound.

Supplementary Material: I've reviewed Appendices A, B, C, D, and F. I think they are quite useful contributions that address and complement some of the ambiguities in the main text, providing more context. I particularly appreciate the analysis of train/test similarity in the effectiveness evaluation and Figure 9.

Relation To Broader Scientific Literature: The work studies Protein Structure Tokenization, which is a problem first tackled through the lens of deep learning by [1]. It is also related to the problem of inverse folding, which has been tackled by [2, 3] and is also related to protein design. It is also related to multimodal integration of protein sequence and structure [5, 6]. The benchmarking of protein representation techniques is related to [9-11].

1. van Kempen M, Kim SS, Tumescheit C, Mirdita M, Gilchrist CL, Söding J, Steinegger M. Foldseek: fast and accurate protein structure search. Biorxiv. 2022 Feb 9:2022-02.
2. Hsu C, Verkuil R, Liu J, Lin Z, Hie B, Sercu T, Lerer A, Rives A. Learning inverse folding from millions of predicted structures. In International Conference on Machine Learning 2022 Jun 28 (pp. 8946-8970). PMLR.
3. Dauparas J, Anishchenko I, Bennett N, Bai H, Ragotte RJ, Milles LF, Wicky BI, Courbet A, de Haas RJ, Bethel N, Leung PJ. Robust deep learning–based protein sequence design using ProteinMPNN. Science. 2022 Oct 7;378(6615):49-56.
4. Yang, K. K., Zanichelli, N., and Yeh, H. Masked inverse folding with sequence transfer for protein representation learning. Protein Engineering, Design and Selection, 36:gzad015, 2023.
5. Heinzinger M, Weissenow K, Sanchez JG, Henkel A, Mirdita M, Steinegger M, Rost B. Bilingual language model for protein sequence and structure. bioRxiv. 2023 Jul 25:2023-07.
6. Su J, Han C, Zhou Y, Shan J, Zhou X, Yuan F.
Saprot: Protein language modeling with structure-aware vocabulary. bioRxiv. 2023 Oct 2:2023-10.
7. Rao R, Bhattacharya N, Thomas N, Duan Y, Chen P, Canny J, Abbeel P, Song Y. Evaluating protein transfer learning with TAPE. Advances in Neural Information Processing Systems. 2019;32.
8. Capel H, Weiler R, Dijkstra M, Vleugels R, Bloem P, Feenstra KA. ProteinGLUE multi-task benchmark suite for self-supervised protein modeling. Scientific Reports. 2022 Sep 26;12(1):16047.
9. Kucera T, Oliver C, Chen D, Borgwardt K. ProteinShake: building datasets and benchmarks for deep learning on protein structures. Advances in Neural Information Processing Systems. 2024 Feb 13;36.
10. Zhang C, Zhang X, Freddolino PL, Zhang Y. BioLiP2: an updated structure database for biologically relevant ligand–protein interactions. Nucleic Acids Research. 2024 Jan 5;52(D1):D404-12.
11. Unsal S, Atas H, Albayrak M, Turhan K, Acar AC, Doğan T. Learning functional properties of proteins with language models. Nature Machine Intelligence. 2022 Mar;4(3):227-45.

Essential References Not Discussed: None that I've been able to identify.

Other Strengths And Weaknesses: None; the main strengths and weaknesses have already been discussed in the prior sections.

Other Comments Or Suggestions: I would like to congratulate the authors on a really interesting and solid study. I would update my recommendation to strong accept if my concerns are satisfactorily addressed. Particularly, the issue of the discrepancy between distinctiveness and efficiency.

Questions For Authors:
- Do you have any thoughts/insights into the relationship of distinctiveness and efficiency? In my mind, they are related to each other, but I would like to know what your thoughts are on the topic and whether my intuition might be wrong.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
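For concreteness, the efficiency quantities debated in this review (utilization rate, entropy, and perplexity of code usage) can be sketched as follows. This is my paraphrase of the general definitions, not the paper's exact formulas.

```python
import math
from collections import Counter

def codebook_usage_stats(token_ids, codebook_size):
    """Toy versions of codebook-efficiency metrics.
    token_ids: list of assigned code indices over a corpus.
    Returns (utilization rate, entropy of the usage distribution,
    perplexity = exp(entropy)). Paraphrased definitions, illustrative only."""
    counts = Counter(token_ids)
    utilization = len(counts) / codebook_size
    total = len(token_ids)
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    perplexity = math.exp(entropy)
    return utilization, entropy, perplexity
```

Note how a small codebook trivially achieves a high utilization rate, while these metrics say nothing about how geometrically distinct the used code vectors are — which is exactly the tension between the efficiency and distinctiveness evaluations raised above.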
Rebuttal 1: Rebuttal: Thanks for your positive review and insightful comments! We respond to your concerns below:

> R1: Discrepancy observation that FoldSeek, which has the worst curve shape for the distinctiveness evaluation, has much better performance on the efficiency benchmark. However, the distinctiveness and efficiency metrics should be related to each other. A more distinct codebook should lead to more efficient utilization.

We would like to humbly argue that **the two metrics, distinctiveness and efficiency, are related, but it is not necessarily the case that as one increases, the other increases**.

1. Importantly, **the discrepancy stems from the different codebook sizes of different PST methods**. The reviewer's intuition would be more sound when the codebooks share the same size: a more distinct codebook should lead to more efficient utilization. **For example**, utilization is measured by the ratio of #used codes to #total codes (i.e., the codebook size). The smaller the codebook size, the higher the chance for the utilization rate to be high. **FoldSeek has a very small codebook size K=20** (see Tab. 4), so it is natural for it to have a high utilization rate, despite the observation that its used codes are not that distinct from each other.
2. We would like to clarify **the concept of distinctiveness**. The codebook has two parts of codes: used and unused ones. For distinctiveness, **among the utilized codes in a codebook**, the more distinct they are, **the better chance for them to avoid ambiguous token-substructure mappings**. We would like to emphasize the distinctiveness of the used codes, and we show it in Fig. 3 (right panel), because in reality, those used codes matter the most. **We are open to hearing more opinions on this topic from the reviewer**, and are always eager to learn more to enrich our knowledge.

> R2: Question on the meaning of having the distinctiveness evaluation, given its discrepancy with "codebook utilization".

1. The **"Distinctiveness" evaluation was motivated by the observation in ESM3** (Fig. S5) that ambiguous structural token mapping can hinder interpretability and harm model capability. **"Distinctiveness" measures the similarity within codebook vectors, which serves as a proxy to assess token ambiguity.**
2. We understand the reviewer's concern that simply using cosine similarity for codebook vectors might be limited. **We would like to continue working on a better measurement metric than simple cosine similarity.**
3. Please see **R1** for more discussion on the "discrepancy between the distinctiveness metric and the codebook utilization metric". In general, we would like to humbly argue that the discrepancy mainly stems from **the different codebook sizes used in different PST methods**, not the metrics themselves. We are always open to hearing more opinions on this topic from the reviewer, as we are also in the exploration stage of finding a good way to benchmark PST methods.

> R3: "ESM3 is the leading model" should be rephrased to "the leading VQ-VAE-based model"

We agree and will modify this in the paper.

> R4: In the sensitivity evaluation, TM-score is not a gold standard for structure similarity, but a state-of-the-art structure search algorithm. Acknowledgement of this limitation would be useful.

Thanks for the valuable suggestion. We will acknowledge in both the main text and the appendix that TM-score is not a gold standard for measuring structure similarity. Also, commonly used metrics to measure structure similarity include TM-score (more global) and RMSD (more local). In App. F.2, we explored using RMSD instead of the TM-score in the "Sensitivity" evaluation. As shown in Tab. 10, using RMSD is less effective than TM-score. This might be because RMSD is sensitive to outliers and local structural discrepancies, while TM-score focuses on global structure and is less sensitive to local structure variations.
> R5: It's unclear whether the dynamic programming in sensitivity used global or local alignment, which should be stated both in the main text and the appendix

We used the global alignment algorithm by adopting Biotite's "align_optimal()" function. Thanks for pointing out that this essential detail was missing. We will add this content to both the main text and the appendix.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts in resolving and clarifying my concerns. Regarding the question of the difference between efficiency and distinctiveness, I thank you for explaining the difference so clearly; I did not appreciate the effect of codebook size and how that makes the two metrics distinct. It may have been my own issue with reading comprehension, but perhaps it could be useful for other readers to spell it out either in the main text or the appendix. However, I leave that decision to the authors' judgment, as it may have been just my own personal problem understanding it. Otherwise, I am satisfied with the answers from the authors and the changes they've proposed. I think that the paper is solid, and therefore will keep my current score (4: Accept).

---

Reply to Comment 1.1.1: Comment: Dear Reviewer,

Thanks so much for your support of our project! In the final version, we will include these additional discussions and especially spell out in the main text the relation between utilization and efficiency according to your great suggestions.

Best,
Authors
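As background for the global-alignment point in R5: a score-only optimal global alignment of two structural token strings follows the classic Needleman–Wunsch recurrence (Biotite's `align_optimal()` implements this kind of optimal alignment; the sketch below is a generic textbook version with illustrative scoring parameters, not the benchmark's actual substitution scheme).

```python
def nw_global_score(a, b, match=1, mismatch=-1, gap=-1):
    """Score-only Needleman-Wunsch global alignment between two token
    strings, using a rolling DP row. Illustrative parameters; a real
    setup would use a substitution matrix over structural tokens."""
    n, m = len(a), len(b)
    prev = [j * gap for j in range(m + 1)]  # aligning a prefix of b to nothing
    for i in range(1, n + 1):
        cur = [i * gap] + [0] * m
        for j in range(1, m + 1):
            sub = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            cur[j] = max(sub, prev[j] + gap, cur[j - 1] + gap)
        prev = cur
    return prev[m]
```

Unlike local (Smith–Waterman-style) alignment, every position of both strings must be accounted for, which is what makes the resulting similarity a global, whole-chain comparison.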
Summary: The paper presents a new framework, StructTokenBench, for evaluating protein structure tokenization (PST) methods, which break down protein 3D structures into discrete or continuous representations. This framework is critical because existing methods for protein structure tokenization (PST) lacked a unified evaluation system.

Key highlights:
- StructTokenBench evaluates PSTs across four axes: effectiveness, sensitivity, distinctiveness, and efficiency. The framework focuses on local, fine-grained protein substructures, unlike typical benchmarks that focus on global structures.
- AminoAseed is introduced as an improved method over existing VQ-VAE-based PSTs. It addresses the issue of "codebook collapse," where many codes in the latent space remain unused. The proposed solution introduces codebook reparameterization and Pareto-optimal codebook configurations. These methods help increase the efficiency and utilization of the codebook.
- Benchmarking Results: The paper compares AminoAseed with other leading PSTs like ESM3, ProTokens, FoldSeek, and ProteinMPNN, showing a performance improvement on some tasks.
- Challenges Identified: Current PST methods, especially VQ-VAE-based approaches, struggle with efficient codebook utilization, as large portions of the codebook remain unused, leading to reduced model expressiveness.
- AminoAseed Performance: The proposed method performs well in tasks that require a high sensitivity to structural changes and achieves significant improvements in codebook utilization.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: N.A.

Experimental Designs Or Analyses: Yes.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: Yes.

Essential References Not Discussed: N.A.

Other Strengths And Weaknesses: AminoAseed does not perform well on some tasks such as Binding Site Prediction, as indicated in Figure 4.
Although the paper demonstrates the benefits of scaling the model (e.g., increasing codebook size or encoder size), it also shows diminishing returns as the model scales. Large-scale models still encounter sub-exponential improvements, meaning that simply increasing compute resources or dataset size does not guarantee a proportional performance boost.

The Pareto-optimal codebook configuration and codebook reparameterization strategies, while innovative, add complexity to the model. The additional computational overhead could be a disadvantage in real-world applications where time and resources are constrained, and simpler methods may be preferred for quick prototyping or high-throughput applications.

Structural tokenization methods, including AminoAseed, tend to lose reliability under high noise levels in the input protein data.

Other Comments Or Suggestions: N.A.

Questions For Authors:
- It was noted that AminoAseed performed well in structural tasks but not as well in sequence-based tasks like remote homology detection. Do you plan on integrating AminoAseed with sequence models to improve its performance on such tasks? If so, how would you approach this integration?
- The study suggests diminishing returns when scaling up the model. In your opinion, what would be the next steps for scaling up AminoAseed more effectively? Do you believe that improvements to the architecture or optimization strategies could yield better performance with additional compute?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the invaluable suggestions. We respond to your questions below.

> R1: AminoAseed doesn't perform well on binding site prediction in Fig. 4

We'd like to clarify our observation:
1. **Fig. 4 shows that when using continuous structural representations**, AminoAseed beats ESM3 on all three binding site prediction tasks, while being comparable to the continuous version of VanillaVQ on two of the tasks.
2. **When using discrete structural tokens**, which is the most important application setting for PSTs, AminoAseed prevails on most of the tasks, and gains improvement on averaged task performance (see Tab. 2).

> R2: The paper shows the benefits of scaling the model (e.g., increasing codebook size or encoder size) and the diminishing returns as the model scales

We'd like to clarify this rephrased statement from the paper:
1. We show in the paper the benefits of data-driven Pareto-optimal scaling (i.e., balancing) of the codebook size and codebook dimension. It is important to notice that we kept the total codebook parameters the same.
2. We show the diminishing sub-exponential returns in Fig. 7 when increasing encoder sizes, keeping the codebook configuration unchanged.

> R3: Pareto-optimal codebook configuration and codebook reparameterization, while innovative, add complexity to the model

We'd like to clarify that **only the codebook configuration adds complexity** due to multiple runs of model training, while **reparameterization shares the same complexity** as the ablation model VanillaVQ.

In general, for the codebook configuration, **we leveraged the additional resources for exploration and provide insights for others** who aim to develop better PSTs: under fixed computational resources, balancing codebook sizes and dimensions could be useful.
This is a discovery that was previously unknown in the field. Simpler methods that do not use more computational resources are indeed favored, and we would like to explore them in our future work.

> R4: PST methods, including AminoAseed, tend to lose reliability under high noise levels in the input protein data

We would like to clarify that in Fig. 5, the noise is added to the induced structural representations, not to the raw protein structures. This is achieved by replacing them with the [Mask] representation under a given mask rate. Essentially, it is expected to observe large performance degradation at high noise levels (up to 90% of tokens masked), since most structural information is removed.

> R5: AminoAseed does not do well in sequence-based tasks like remote homology detection

We would like to clarify that the remote homology detection task in StructTokenBench (i.e., the "Homo" task) **is not a sequence-based task, but a structure-based task**. Namely, we use the backbone structure as input, and do not leverage the residue sequences. As explained in App. A.1.1, our "Homo" task is from the same source (TAPE) as the popular sequence-based remote homology detection task, which might be the reason for the confusion.

Also, **we appreciate the suggestion and would like to humbly point out that this statement does not align with our observation**. As shown in Tab. 2, on the "Homo" task, **our method AminoAseed performs the best, achieving 27.31% improvement over ESM3**.

> R6: Do we plan on integrating AminoAseed with sequence models to improve its performance on remote homology detection? If so, how would we approach the integration?

1. **We explored the setting of adding sequence information directly to PST** (i.e., using structural representations + sequence token + positional encoding as input, and passing it to the probing layer). As shown in Tab. 5, adding sequence can mostly improve PSTs' performance, as expected.
2.
Further exploring sequence models like ESM2 on this specific homology task may shift away from our main focus, i.e., benchmarking structural representations induced from PSTs. **We would like to leave it for future work.**
3. **For the integration method**, one way is to combine pretrained protein sequence representations from models like ESM2 with AminoAseed's structural representations. Fusion attention layers can be added after the two models for better combination. Per-residue and per-protein contrastive learning can be used as training objectives.

> R7: What's the next step to scale up AminoAseed more effectively? Could improvements to the architecture or optimization strategies yield better performance with more compute?

1. **We recommend data scaling**: gathering more diverse pretraining datasets to improve AminoAseed, since its pretraining data includes only around 10% of the PDB database, with 48k protein single chains.
2. **Improving the architecture design** of the VQ-VAE encoder might be useful, because the structure encoder is usually SE(3)-invariant and its capacity can be improved with better design.
3. **Improving optimization** would be helpful, which shares the same motivation as AminoAseed's use of codebook reparameterization to enable better codebook gradient updates.
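As background for the discrete-token setting discussed throughout this thread: the quantization step of a VQ-VAE-based PST assigns each per-residue embedding to its nearest codebook vector (during training, gradients flow through this non-differentiable step via the straight-through estimator mentioned in App. E). A minimal, illustrative sketch of the assignment step only, not AminoAseed's actual code:

```python
def tokenize(embeddings, codebook):
    """Assign each per-residue embedding (list of floats) to the index of
    its nearest codebook vector under squared Euclidean distance -- the
    quantization step common to VQ-VAE-based PSTs. Illustrative sketch."""
    def sqdist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    return [min(range(len(codebook)), key=lambda k: sqdist(e, codebook[k]))
            for e in embeddings]
```

The resulting index sequence is what downstream consumers (e.g., multimodal LLMs) treat as a "structure sentence" over the token vocabulary.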
Summary: In order to fully evaluate the performance of existing PST methods, the authors constructed a comprehensive evaluation benchmark called StructTokenBench. AminoAseed, a new improvement scheme, is proposed to address the problem of "codebook collapse" in the traditional VQ-VAE method; its main innovations include codebook reparameterization and Pareto-optimal codebook configuration.

Claims And Evidence:
- Claim 1: Adding an MLP can alleviate codebook collapse. Evidence Concern: It is not fully clear why the additional MLP layer effectively propagates gradients to all codebook vectors. The explanation could be enhanced by further theoretical analysis and ablation experiments.
- Claim 2: IF-based methods outperform VQ-VAE–based methods on many supervised downstream tasks. Evidence Concern: Although experimental results indicate this trend, it raises questions about the advantages of using a VQ-VAE framework and whether the proposed modifications (AminoAseed) sufficiently close this performance gap.

Methods And Evaluation Criteria: The paper proposes modifications including an MLP for codebook reparameterization and a data-driven Pareto-optimal configuration. These modifications are conceptually interesting; however, more detailed descriptions and comparative evaluations (e.g., against related methods such as AIDO.St) are needed to validate their effectiveness.

Theoretical Claims: There is an implicit theoretical claim that the MLP layer can mitigate codebook collapse by ensuring all code vectors receive gradient updates. However, the paper lacks a rigorous proof or in-depth discussion of this mechanism. Further theoretical justification would strengthen the contribution.

Experimental Designs Or Analyses: The experimental setup covers multiple supervised tasks and evaluations (effectiveness, sensitivity, distinctiveness, efficiency).

Supplementary Material: Yes. Appendix sections (e.g., Appendices A, B, C, D, E, F, G) have been viewed.
Relation To Broader Scientific Literature: The paper builds on prior work in protein structure tokenization, VQ-VAE, and inverse folding.

Essential References Not Discussed: The paper does not discuss related work such as AIDO.St, which appears to be relevant in the context of efficient protein structure tokenization. Including a comparison with AIDO.St would provide a more complete picture of the current state-of-the-art.

Other Strengths And Weaknesses:
Strengths:
- The paper proposes a comprehensive benchmark covering effectiveness, sensitivity, distinctiveness and efficiency.
- The paper tackles an important problem by addressing codebook collapse in VQ-VAE–based PST.
- The introduction of a data-driven codebook configuration is innovative and supported by extensive experiments.

Weaknesses:
- The explanation of how an MLP helps alleviate codebook collapse is insufficiently detailed.
- Experimental results show that IF-based methods outperform VQ-VAE–based ones on many supervised tasks, raising questions about the benefits of the proposed approach.
- The lack of comparison with related methods such as AIDO.St [1] limits the scope of the evaluation.
- The paper does not provide a detailed description or diagram of the AminoAseed model architecture.
- The absence of a complete code release undermines the reproducibility and reliability of the results.

[1] Balancing Locality and Reconstruction in Protein Structure Tokenizer.

Other Comments Or Suggestions: Typo: In Section 2.3, **VQ-VAE** in 'As illustrated in Fig. 1(b), VQ-VAE can be summarized as' should be **Inverse-Folding**.

Questions For Authors: Please see Other Strengths And Weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments >R1: The claim “adding “a MLP” to reparametrize the codebook can alleviate codebook collapse” needs theoretical analysis and ablation experiments 1. We'd like to clarify that in Sec. 4.2 (L246), we **add a simple linear layer** (instead of a MLP) to reparameterize the codebook $C$ as $Q$=Linear($C$), and keep $C$ fixed during training 2. **Ablation experiments**: VanillaVQ is the ablation model, which shares the same pipeline and configurations as AminoAseed, except using $C$ instead of Q=Linear($C$). Main results can be found in Tab. 2/3/4/5 and Fig. 3/4/5/6/7. Generally, AminoAseed outperforms VanillaVQ across all four benchmarking perspectives 3. **Theoretical analysis**: We analyze the gradient flow on $Q$ in AminoAseed and $C$ in VanillaVQ, respectively. We will add this theoretical analysis in the appendix to enhance the method design justification **(1) Setup**: Assume $C$ of shape (K, D) is fixed, Linear() denotes a linear transformation with weight $W$ of shape (D, D) without a bias, and the reparameterized codebook $Q$=Linear(C)=$C * W$. We denote $L$ as loss, batch size as 1 for simplicity, and $p$ different codes are selected during tokenization with each code selected for once **(2) Gradient on $Q$**: For the gradient $\nabla_Q L$, only the $p$ selected rows of this gradient matrix are non-zero. The gradient updates $W$ via the chain rule: $\nabla_W L = C^T\nabla_Q L$, which is now a dense matrix with all non-zero values. Thus, all entries of $W$ are updated, enabling global adjustments to the entire codebook space through $Q=C * W$ **(3) Gradient on $C$**: if $C$ is trainable, the gradient matrix $\nabla_C L$ only has the $p$ selected rows as non-zero. Thus, unused codes get no updates, leading to limited changes for the entire codebook space and eventually distribution shift (see Fig. 
2)

>R2: Why use the VQ-VAE framework when IF-based PSTs beat VQ-VAE-based methods on supervised tasks? And does AminoAseed sufficiently close the perf. gap?

1. IF-based PSTs excel in supervised tasks but lack sensitivity to conformational changes, limiting their overall capability.
2. IF-based PSTs only support continuous tokens, while VQ-VAE-based PSTs produce both continuous and discrete tokens. As discussed in App. C, discrete tokens offer several advantages: **(1) Multimodal LLM integration**: enabling seamless fusion with sequence and functional text data, allowing direct use of NLP optimization techniques developed for LLMs. **(2) Simplified structure modeling**: eliminating the need to explicitly encode symmetry and physical constraints. **(3) Reduced overfitting risk**: discrete representations may help mitigate overfitting compared to continuous features [a]
3. **We acknowledge that there is still much room for further improvement to close the performance gap** between AminoAseed and IF-based PSTs on supervised tasks. AminoAseed proposes a simple yet effective strategy for improving over vanilla VQ-VAE, enjoys the benefits of VQ-VAE methods, and shows promising results in the “sensitivity” evaluation

[a] DeProt: Protein language modeling with quantized structure and disentangled attention

>R3: Benchmarking AIDO.st

As suggested, we add AIDO.st and report the results below (some supervised tasks are not finished due to the rebuttal timeline). As shown, AIDO.st is less effective than AminoAseed on supervised downstream tasks, and also falls short in “sensitivity”.
However, AIDO.st achieves very high codebook utilization

|**AIDO.st Effectiveness**|Con|Rep|BindInt|CatInt|BindBio|CatBio|
|---|---|---|---|---|---|---|
|Fold|56.64|77.69|44.66|57.30|65.50|73.72|
|SupFam|73.79|78.08|84.21|81.94|66.70|78.66|

|**Sensitivity**|Metric|**AIDO.st**|
|---|---|---|
|apoholo|pearsonR|43.02|
||spearmanR|54.25|
|foldswitching|pearsonR|61.59|
||spearmanR|66.11|

|**AIDO.st Codebook Utilization**|UR%|Perplexity|Entropy|
|---|---|---|---|
|CASP14|88.05|0.7729|0.01165|
|CAMEO|95.12|0.8266|0.01181|

>R4: A detailed description or diagram of the AminoAseed model architecture is not provided

1. We provided the **detailed description** of AminoAseed **in App. E**, including **(1)** the overall pipeline of protein frame input, encoding, tokenization, decoding, and structure reconstruction objectives; **(2)** the geometric self-attention layer used in the encoder; **(3)** an explanation of the straight-through estimator of gradients in tokenization; **(4)** the standard Gram-Schmidt algorithm used to create protein frames as input
2. We will add a **diagram** for AminoAseed in the appendix in the next edition

>R5: The absence of a complete code release undermines reproducibility.

We apologize for not mentioning it in the paper. **We will release all the code and processed data in the next months**, because we're currently in the process of code cleanup and preparation.

>R6: Typo in Sec. 2.3

Thanks for spotting the typo and we will modify it
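The sparse-vs-dense gradient argument in R1 above can be checked numerically. A minimal sketch, where sizes and random values are arbitrary illustrative choices: with a trainable codebook $C$ only the selected rows receive gradient, while with the reparameterization $Q = CW$ a single step on $W$ moves every row of $Q$.

```python
import numpy as np

# Hypothetical sizes: K codes of dimension D, of which p are selected in a batch.
K, D, p = 8, 4, 2
rng = np.random.default_rng(0)

C = rng.normal(size=(K, D))   # fixed codebook
W = rng.normal(size=(D, D))   # trainable linear reparameterization, Q = C @ W

# Upstream gradient dL/dQ: non-zero only on the p selected rows.
grad_Q = np.zeros((K, D))
grad_Q[:p] = rng.normal(size=(p, D))

# VanillaVQ: C itself is trainable, so unused codes receive zero gradient.
rows_updated_vanilla = np.count_nonzero(np.abs(grad_Q).sum(axis=1))

# Reparameterized: chain rule gives dL/dW = C^T @ dL/dQ, a dense matrix,
# so a step on W changes every row of Q = C @ W.
grad_W = C.T @ grad_Q
delta_Q = C @ grad_W          # direction of change in Q induced by a step on W
rows_updated_reparam = np.count_nonzero(np.abs(delta_Q).sum(axis=1))

print(rows_updated_vanilla, rows_updated_reparam)   # 2 8
```

Only 2 of 8 codebook rows move in the vanilla case, versus all 8 under the reparameterization, matching the distribution-shift argument above.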
Summary: This paper presents a benchmark for comparing methods of tokenizing proteins. They divide structure tokenization methods into two categories: those which hand-design structure-based tokens, and those which learn the tokenization. Of the learned methods, they distinguish between those that produce learned codebooks, and those that use inverse folding (i.e., the “tokenization” is a sequence of amino acids that are supposed to fold into the given backbone). They suggest evaluating methods along four axes: (1) Effectiveness as an input to supervised learning tasks, (2) sensitivity to distinguish similar structures, (3) distinctiveness of codebook vectors, and (4) efficiency, i.e. how uniformly the different codebook elements are used. Evaluation of (1), effectiveness, is the most involved, and entails training a 2-layer MLP on the structure tokens (to which a positional encoding is added) for a variety of different supervised tasks, including prediction of binding sites, conserved sites, and catalytic sites. Applying their benchmark to baselines, they find that some under-utilize the learned codebooks. As a result, they define a new tokenization method, “AminoAseed”, that (1) uses reparametrization during the codebook gradient update to prevent “codebook collapse”, and (2) uses a data-based heuristic for trading off the number of codebook elements and the dimension of each element. AminoAseed does better on the new benchmark than existing approaches. Finally, they conduct ablations and scaling experiments. Claims And Evidence: For the most part, yes, but with a few minor exceptions below: The claim that their benchmark focuses on “fine-grained local substructures rather than global structures, as typical in existing benchmarks” (from the abstract). It wasn’t clear to me how this is true — is it due to the nature of the supervised tasks chosen for the “efficiency” section? 
I recommend this be explained more explicitly in the main body, if the claim is made in the abstract. There are also claims about the importance of distinctiveness (how different the codebook elements are from one another) and efficiency (how much the different codebook elements appear), which are used to motivate these aspects of the benchmark, but Section 6 (L415) itself shows that efficiency and reconstruction accuracy aren’t correlated. Shouldn’t this call into question the efficiency metric? L179 also makes a claim about the utility of distinctiveness in downstream tasks, without a reference/citation to support it. Methods And Evaluation Criteria: Yes Theoretical Claims: n/a Experimental Designs Or Analyses: Not in very great detail. However, I do question, in the supervised tasks, why an MLP was used with positional encodings — first, if the intended downstream use case for PSTs are generally LLMs, would it not make more sense to train a transformer? Second, if one is committed to using an MLP, what’s the justification for a positional encoding? An MLP is not permutation invariant. Supplementary Material: I have read: Appendix C, relevant discussions Appendix D, benchmark details Appendix E, method details Appendix G, related work details Relation To Broader Scientific Literature: This work is relevant to essentially all prior works that use protein tokenization as a way of dealing with 3D structures in a LLM setting. To my knowledge, there has not been as thorough of a systematic comparison of these methods before, and having a standardized benchmark is quite valuable to the community. AminoAseed builds on ESM3 and other prior VQ-VAE approaches. Essential References Not Discussed: An essential reference not discussed in detail is “Tokenized and Continuous Embedding Compressions of Protein Sequence and Structure” by Lu et al 2024. The reference is noted in the appendix, but not the main body. I presume this is because it is an all-atom tokenization, i.e. 
it essentially includes the residue information (not just the backbone), and indeed, it takes the amino acid sequence as input. However, I think this is a key reference because their evaluations are aimed at comparing different protein tokenizations, and also have a similar narrative of “finding and correcting an existing problem in a PST”. In addition, the paper “BindGPT: A Scalable Framework for 3D Molecular Design via Language Modeling and Reinforcement Learning” by Zholus et al 2024 is one example of a paper that uses a simple atom-wise coordinate tokenization, essentially just tokenizing the coordinates as numbers. (Similarly, Geo2Seq in “Geometry Informed Tokenization of Molecules for Language Model Generation” by Li et al 2024 tokenizes 3D coordinates, but in spherical coordinates.) These kinds of simple methods — which are inefficient but very accurate as compressors — should be benchmarked as well. Other Strengths And Weaknesses: Strengths: The problem addressed is an important one and addresses a gap in the literature. Having a standardized benchmark for comparing protein tokenization methods (that is relatively independent of the downstream autoregressive model, and in particular does not require training many large models from scratch with different tokenizations to figure out which one is best), is quite useful. The benchmark is thoughtful, covering a wide range of tasks and facets, which have seemingly been implemented with very careful attention to detail. The empirical analyses (ablations and scaling studies) provide valuable insights for future work. Weaknesses: First, the paper leaves out several PSTs from its framing and experiments. As mentioned in the “related work” section of the review, there are many structure tokenizations that consist of tokenizing the 3D coordinates of the backbone atoms. Generally speaking, the introduction mentions “heuristic” tokenizations, but as far as I can tell, doesn’t compare to any of them. 
Also, a tokenization benchmark would be useful for structure tokenizations that include sequence as well (not just backbone), but methods like CHEAP are not included. Moreover, the “distinctiveness” and “efficiency” desiderata articulated in the benchmark itself — feel a bit ad-hoc to me. This is supported by the paper’s own findings in Section 6, which counterintuitively show that reconstruction accuracy and efficiency are not correlated. In fact, the “efficiency” metric is seemingly quite distribution-dependent. A PST might induce a uniform distribution over tokens on one set of proteins, but a non-uniform distribution on a very different data distribution. This would not be a failing of the PST, but potentially an ability of it to accurately reconstruct diverse proteins. Indeed, this generalizability is one advantage of the coordinate-wise tokenizations (e.g. used in BindGPT and others), but they may fail at the “efficiency” metric. A more standard metric from the channel compression formalism of information theory is simply to assess how effective the methods are as compressors, i.e. for a fixed reconstruction accuracy (based on solving the “reconstruct structure from PST” learning task), how many bits are used to represent the structure (on average, on a given dataset)? Finally, it seems like the benchmark assumes that the proposed probing method is a good proxy for how a PST will perform in downstream tasks, but this is not supported. It would be good to have evidence for this e.g. on a single task, ablating architecture choice and other factors. Overall, I think this paper still makes very valuable contributions, and recommend acceptance. But if the weaknesses above were addressed, I would probably increase my rating further. Other Comments Or Suggestions: Minor writing suggestion: I would suggest more explicitly motivating how PSTs are used downstream, so that each aspect of the benchmark can be traced back to a concrete goal for PSTs. 
This is currently not clear to me, specifically efficiency and distinctiveness. Minor writing suggestion: For clarity, I would recommend being more explicit (even in the introduction, or perhaps a footnote/appendix) about when the tokenizations capture backbone only vs residues also. This can be confusing to newcomers, especially distinguishing from representations of structure in the literature that are referred to as “all atom” or “joint sequence and structure”. (For example, what does “amino acid tokens” near L052 refer to?) Minor writing suggestion: This is subjective, but in my opinion “effectiveness” and “efficiency” are not the most intuitive terms for the attributes the benchmark is measuring. (I often had to check back to figure out which term I wanted to use when writing this review.) Perhaps something like “codebook utilization” instead of “efficiency” would be preferable, or “downstream effectiveness” instead of just “effectiveness”. L1029: typo, “linear layer, ,” Questions For Authors: 1. For the “efficiency” part of the metric, why is the positional encoding necessary when the probing network is an MLP? Positional encodings are traditionally used for transformers, which are permutation invariant by default. On a related note, why use an MLP over a transformer? Wouldn’t we expect a transformer to correspond more faithfully to downstream performance, if most of the PSTs are going to be passed into a transformer-based architecture in practice? 2. Did you consider the computational efficiency of computing the structure-based tokens at all? This could be a future addition to the benchmark, e.g. compute-controlled efficiency. Alternatively, perhaps one cares about compute time in the downstream application (e.g. for a transformer, the context length, which doesn’t intrinsically have to be the number of amino acids). 3. Why do “distinctiveness” and “efficiency” inherently matter? 
As mentioned above, these concepts might be at odds with faithfully representing structure under distribution shift. If a tokenization is useful for downstream tasks and enables small context window (i.e. the number of tokens is small), why would we care about these specific properties of the codebook elements? Instead, why not just frame as “effectiveness as a compressor” (i.e. expected reconstruction accuracy vs size of compressed representation in bits, or context length)? 4. What is the intention of the noise robustness analysis? It’s not clear to me if we should expect a good tokenization to be robust to generic noise or not. It seems like certain perturbations *should* meaningfully change the PST, whereas others (e.g. if the result is non-physical in some way) should not. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your valuable comments! We address the concerns below

>R1: Explain "StructTokenBench focuses on local (per-residue) over global (per-protein) structures"

PSTs tokenize per-residue substructures, matching StructTokenBench: **supervised tasks and “Sensitivity” are at residue level**. Current benchmarks (Sec. 6.1) cover per-protein tasks, justifying our per-residue focus. We'll add this in the main text

>R2: No citation for “Distinctiveness” motivation (L179)

**We'll cite ESM3 Fig. S5**: ambiguous structural token mappings are harmful. This metric assesses token ambiguity via codebook similarity.

>R3: The Efficiency (Codebook Utilization) metric seems to conflict with Sec. 5.7 (L415): no correlation with reconstruction quality. Is it needed?

We rename “Efficiency” to “Codebook Utilization” (% of used codes). Underutilized codebooks waste resources and harm downstream perf. (Sec. 3.4), a well-known issue in CV/NLP [a]. High utilization brings gains, like LLaMA 3's 128K-token vocabulary for efficient encoding [a]. **Sec. 5.7 questions whether reconstruction quality alone reliably indicates PST quality**, given no correlation with **both supervised task perf. and utilization** (see Fig. 6, 12). We welcome further discussion

[a] LLaMA technical report

>R4: Codebook utilization is distribution-dependent. PSTs' non-uniform token use may reflect accurate reconstruction (like BindGPT)

We agree on the dependence. Our current results hold across datasets, and we'll add more diverse ones. We remain inconclusive on the exact utilization-reconstruction relation. **As shown in Sec. 5.7**, we question whether reconstruction quality alone reliably indicates PST quality, given no correlation with **both supervised task perf. and utilization**. We welcome further discussion. As for BindGPT, we can't test it as it lacks residue-level structural reprs. (see **R8**)

>R5: Why use probing for supervised tasks? Why MLP, not transformer?

Probing (fixed encoder + simple probing layer) is standard practice in CV [b].
Being simple ensures perf. reflects reprs. quality, not probing-layer capacity. For BindInt, the empirical perf. below shows: a linear layer lags the 2-layer MLP, which matches 1-/2-layer transformer perf. Limited dataset sizes (Tab. 6, 8) may explain the transformers' marginal gains

|Method|Split|Linear|2L MLP|1L Transformer|2L Transformer|
|---|---|---|---|---|---|
|**ESM3**|Fold|43.36|44.30|46.78|46.08|
||SupFam|84.50|90.77|90.98|90.96|

[b] Masked Autoencoders Are Scalable Vision Learners

>R6: Why use positional encodings (PEs), as an MLP isn't permutation-invariant?

Our MLP is shared across structural tokens, so it **is permutation-equivariant**, and thus needs PEs for residue order. Empirically, removing PEs largely hurts perf. (see the table below vs. Tab. 2)

|ESM3 no PEs|BindInt|CatInt|Con|Rep|
|---|---|---|---|---|
|Fold|48.85|53.14|51.05|51.36|
|SupFam|74.16|69.42|62.03|62.15|

>R7: Test Cheap

It was initially excluded as it models **sequence and all-atom structure** (37 atoms/residue), while our benchmark only considers the backbone (4 atoms/residue, Sec. 2.1). Cheap's perf. (vs. Tab. 2, 3) is in **https://anonymous.4open.science/r/cheap_perf-00EB** due to limited space: (1) it beats IF-based PSTs on supervised tasks (some results pending); (2) it largely lags AminoAseed in “sensitivity”; (3) no other evaluation is possible as it has no codebook

>R8: Test BindGPT, Geo2Seq

Thanks for referring to atom-wise coordinate tokenization methods. We'll discuss them in the main text and appendix. BindGPT is untestable as: (1) it encodes atom types and xyz coordinates as strings; unlike IF-/VQ-VAE PSTs, it doesn't output residue-level structural reprs.; (2) being decoder-only, BindGPT also can't produce residue-level reprs. Geo2Seq is not open-source

>R9: Heuristic PSTs not tested

We'll add them later due to the deadline

>R10: For fixed reconstruction quality, measure bits used

We use a scatter plot: X-axis: reconstruction quality (RMSD or LDDT); Y-axis: compression ratio (% byte reduction vs.
XYZ coordinates). **Plots are in https://anonymous.4open.science/r/rebuttal_fig-3050**. AminoAseed beats VanillaVQ in reconstruction but trails ESM3, while achieving better compression (smaller byte reduction %) than both

>R11: Explicitly motivate PSTs

We'll add the **R3/R4** discussions on distinctiveness and codebook utilization to the main text

>R12: State PSTs' modeling targets

We'll add **(1) a Tab. 2 footnote**: Cheap (sequence + all-atom structure) vs. others (backbone structure), and **(2) an appendix table**: PST comparison (backbone/all-atom, sequence usage), including non-open-sourced ones

>R13: Re-term

We change “efficiency” to “codebook utilization” and “effectiveness” to “downstream effectiveness” for clarity

>R14: Computational efficiency as future work

We will leave this for future work

>R15: Intention of the noise analysis

Noise is added by randomly masking structural tokens. We reveal: (1) structural reprs. are less robust than sequence reprs., **so integrating sequence and structure may help**; (2) practical guidance **for masked language modeling on structural tokens**

>R16: Typo in L1029

Will modify
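The probing setup defended in R5/R6 — a position-wise MLP on frozen structural tokens, with positional encodings injecting residue order — can be sketched as follows. The shapes, the sinusoidal PE formula, and the ReLU MLP are our illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def sinusoidal_pe(length, dim):
    """Standard sinusoidal positional encodings (sin on even dims, cos on odd)."""
    pos = np.arange(length)[:, None]
    i = np.arange(dim)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / dim)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

# Hypothetical shapes: L residues, D-dim frozen structural token embeddings.
L, D, H, n_classes = 16, 32, 64, 2
rng = np.random.default_rng(0)
tokens = rng.normal(size=(L, D))      # frozen per-residue representations

# One MLP shared across positions (hence permutation-equivariant);
# positional encodings are what supplies residue order.
x = tokens + sinusoidal_pe(L, D)
W1, b1 = rng.normal(size=(D, H)), np.zeros(H)
W2, b2 = rng.normal(size=(H, n_classes)), np.zeros(n_classes)
logits = np.maximum(x @ W1 + b1, 0.0) @ W2 + b2   # per-residue predictions

print(logits.shape)   # (16, 2)
```

Without the `sinusoidal_pe` term, two residues with identical tokens would receive identical predictions regardless of position, which is the failure mode the PE ablation table above quantifies.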
Heterogeneous Data Game: Characterizing the Model Competition Across Multiple Data Sources
Accept (poster)
Summary: This paper investigates the phenomenon of model competition across multiple data sources. The authors propose the heterogeneous data game, where each model provider decides to deploy a single model, aiming to win the choice of as many data sources as possible. Each model is characterized by its parameters and a covariance matrix, and the loss incurred when a data source chooses a model is measured via Mahalanobis distance. The authors study the cases where the data sources pick models by proximity choice and by probability choice, prove an equivalence condition for pure-Nash-equilibrium existence when the number of providers is $N=2$ in the proximity choice case, and give sufficient conditions as well as some basic properties of PNEs, if they exist, in the other cases. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I have only checked the correctness roughly, but I did not discover any technical flaw. Experimental Designs Or Analyses: Yes. The experiment focuses on synthetically studying the effect of the temperature parameter on the existence of Nash equilibrium in the probability choice model of data sources, and is generally intuitive. Supplementary Material: No. Relation To Broader Scientific Literature: No. Essential References Not Discussed: No. Other Strengths And Weaknesses: - Strength: This paper is self-contained. As a theoretical paper, the mathematical modeling of the considered real-world problem is well-defined. Most steps of the abstraction, as well as the assumptions made in proving the theorems, are justified. All the different cases within the proposed mathematical model are studied. Generally speaking, this is an interesting paper, and I did not find any serious technical flaw or vulnerability in this research that would lead to a straightforward rejection. 
- Weakness: I maintain a reserved stance regarding the significance of this paper: - The game-theoretic modeling assumes that changes in model parameters influence the outcomes linearly, which is a strong simplification of the original real-world problem. - The sufficient conditions are overly intuitive, giving readers a sense of apparent correctness, while the necessary conditions, as well as the gap between necessity and sufficiency, are less explored. Because of this, the practical guidance offered by the theoretical results seems insufficient, despite what is claimed in the abstract. Other Comments Or Suggestions: - I suggest that Assumption 4.1 be justified in more detail. The justification in Remark 4.2 gives an intuitive explanation of its satisfiability, but personally I think an elaboration on the case where the covariances are not all equal is also needed, since Assumption 4.1 is a key assumption in the rest of the paper. - In Theorem 5.6, there is an interval between case 1 and case 2. Is the existence of PNE not assured in that interval? The authors should point out what happens in that interval to make the discussion complete. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your reviewing efforts and constructive comments. > ***Linear Model Assumption*** 1. Despite the prevalence of deep learning, **linear models remain important due to their interpretability—an essential requirement in high-stakes domains** such as healthcare and the judicial system [1]. Additionally, when training data is limited, linear models have been shown to outperform deep learning methods in various settings [2, 3], and thus are still widely used in practice. 2. As discussed in the last paragraph of Section 3.3, **our model also aligns with the linear probing paradigm**, where only the final linear layer of a pretrained foundation model is updated. This approach is common when fine-tuning full models is costly or risks overfitting [4]. In this context, our framework can be interpreted as modeling competition over linear heads on fixed representations, offering insights into the emerging market dynamics around foundation model adaptation. > ***On the Sufficiency, Necessity, and Practical Value of Theoretical Results*** 1. While prior work discusses the existence of homogeneous and heterogeneous PNE, **our main contribution lies in rigorously characterizing *when* they arise under ML-specific conditions**—namely, distribution shifts and high-dimensional strategy spaces. As noted in our response to Reviewer C4o6, most existing models assume low-dimensional spaces with uniform distance metrics, limiting relevance to ML markets. 2. **We also further characterize the exact necessary and sufficient condition for homogeneous PNE in HD-Game-Probability** (see our response below on the interval in Theorem 5.6), addressing the gap between sufficiency and necessity. 3. **Although the existence of PNE types may seem intuitive, the technical challenge lies in identifying the exact conditions under which they emerge**. 
The difficulty stems from the high-dimensional continuous strategy space and heterogeneous $\Sigma_k$, which, per Proposition 4.1, yield a non-linear manifold. Analyzing such cases—particularly in HD-Game-Probability (Section 5.3.1)—requires non-standard tools: we first establish a local equilibrium via a fixed-point theorem and then show it is a true PNE using a careful partition of the strategy space (Section C.7). 4. From a policy perspective (see Section 1.2), our results offer actionable insights: (1) Theorem 5.4 shows that dominant data sources attract provider focus; our results help quantify how many additional providers or how much incentive is needed to support underrepresented sources. (2) Theorems 5.6 and 5.7 demonstrate that increasing the rationality of data sources can foster heterogeneous equilibria, thus helping to mitigate the risks associated with the dominance of a single model in the market. > ***Justification of Assumption 4.1*** For any fixed $\theta \in \mathbb{R}^D$, the condition $\bar{\theta}(\mathbf{q}) = \theta$ is equivalent to solving $\sum_{k=1}^K q_k \Sigma_k (\theta_k - \theta) = 0$ under the constraint $\sum_{k=1}^K q_k = 1$. When the covariance matrices $\Sigma_k$ differ, the vectors $\Sigma_k(\theta_k - \theta)$ are typically in general position—especially in high-dimensional settings where $D \gg K$. In such cases, the corresponding overdetermined linear system is injective, and the only solution (except for a measure-zero set with degenerate alignment) is the one uniquely determined by the normalization constraint. > ***Interval between case 1 and 2 in Theorem 5.6*** We have strengthened our theoretical result to address this point: **In HD-Game-Probability, there exists a threshold $t_0 > 0$ such that the homogeneous PNE exists if and only if $t \ge t_0$.** The proof uses a constructive argument: assume the homogeneous PNE exists at $t = t'$. 
For any $t'' > t'$ and deviation $\theta$, the convexity of the Mahalanobis loss implies that the utility of a player deviating to $\theta$ under temperature $t''$ is no greater than the utility of deviating to $c\theta + (1-c)\hat{\theta}^M$ with $c = t'/t''$ under $t'$, which is at most $1/N$. Thus, the deviation is not profitable, and the homogeneous PNE remains valid for all $t'' > t'$, confirming the existence region is an interval $[t_0, \infty)$. Although the exact value of $t_0$ is hard to compute analytically, our synthetic experiments suggest it is approximately a constant multiple of $2\ell_{\max}$. --- [1] Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature machine intelligence. 2019. [2] Shuming Jiao et al. Does deep learning always outperform simple linear regression in optical imaging?. Optics express. 2020. [3] Muhammad Arbab Arshad et al. The Power Of Simplicity: Why Simple Linear Models Outperform Complex Machine Learning Techniques--Case Of Breast Cancer Diagnosis. 2023. [4] Ananya Kumar et al. Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. ICLR. 2022.
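The two data-source choice rules discussed above can be sketched minimally. This is a hypothetical illustration: the Mahalanobis loss follows the summary, the softmax-over-negative-losses form of the probability choice and the specific numbers are our assumptions, and the paper's exact normalization may differ:

```python
import numpy as np

def mahalanobis_loss(theta, theta_k, Sigma_k):
    """Loss of model theta on data source k: (theta - theta_k)^T Sigma_k (theta - theta_k)."""
    d = theta - theta_k
    return float(d @ Sigma_k @ d)

def choice_probs(losses, t):
    """Probability choice: softmax over negative losses at temperature t.
    As t -> 0 this recovers the proximity (argmin) choice."""
    z = -np.asarray(losses, dtype=float) / t
    z -= z.max()                 # numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical instance: one data source, two providers' deployed models.
Sigma = np.eye(2)
theta_k = np.zeros(2)
losses = [mahalanobis_loss(np.array([1.0, 0.0]), theta_k, Sigma),
          mahalanobis_loss(np.array([2.0, 0.0]), theta_k, Sigma)]

print(choice_probs(losses, t=1.0))    # soft split between the two providers
print(choice_probs(losses, t=0.01))   # nearly deterministic: proximity choice
```

Lowering `t` makes the data source more "rational" in the rebuttal's sense, which is exactly the regime where Theorems 5.6 and 5.7 say heterogeneous equilibria become possible.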
Summary: The paper explores the HDG problem, aiming to identify the Pure Nash Equilibrium (PNE) in distributing data resources among machine learning (ML) model producers. Each producer provides parameters, influencing resource allocation based on both their own and the ML parameters. The analysis covers conditions for PNE existence in scenarios involving one producer, two producers, and multiple producers. ## update after rebuttal I keep the score because I think the presentation can be improved. Claims And Evidence: Yes, every claim has clear and convincing evidence. Methods And Evaluation Criteria: The evaluation criterion is the Nash equilibrium, a well-known notion. For the method, the paper focuses on proving existence. They also show an efficient method for finding the lower bound of t in the experiments. It is unclear whether the Nash equilibrium is a good solution concept here. What is the role of pure Nash equilibria in this scenario? How can it be used in the real world? Theoretical Claims: Yes. I have checked all proofs and I think they are all correct. Experimental Designs Or Analyses: Yes. I have checked the experiment designs and analysis, and I think they are sound and valid. Supplementary Material: I have reviewed the supplementary material, including the omitted example, the algorithm for finding a PNE in HD-Game-Probability, and the proofs. Relation To Broader Scientific Literature: Unsure Essential References Not Discussed: Unsure Other Strengths And Weaknesses: The paper is technically sound and the result is significant. However, I think the writing is not satisfying. Other Comments Or Suggestions: Maybe dividing each theorem into two or three would make it better. Also, the proofs are tedious. Questions For Authors: In your approach to finding a heterogeneous PNE in HD-Game-Probability, you start with a PNE for HD-Game-Proximity. However, how do you get that PNE? By enumerating all possible profiles in $(\theta_1, \ldots, \theta_K)^N$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your reviewing efforts and constructive comments. > ***It is unclear if Nash equilibrium is good solution concept here. What is the role of pure Nash equilibria in this scenario? How to use it in the real world?*** 1. **Nash equilibrium is a well-established and widely used solution concept in game theory for analyzing strategic interactions among competing agents [1]**. In the context of ML model markets, recent works have also focused on characterizing equilibrium outcomes among model providers [2, 3]. Beyond ML model markets, the use of Nash equilibrium extends to other competitive domains such as recommender systems [4] and targeted advertising [5], where it is employed to model stable market outcomes. Therefore, applying the concept of Nash equilibrium in our setting is both standard and well-supported in the literature. 2. **In our model, a pure Nash equilibrium represents a stable state of the market**, where each model provider (player) chooses a strategy (i.e., model parameters) such that no one can unilaterally deviate to improve their utility. This notion of stability is central to real-world markets, where firms adapt until no player can gain from further adjustments, given the choices of others. Thus, analyzing PNE offers insight into the long-term strategic behavior and positioning of providers in a competitive ML ecosystem. 3. **Beyond theoretical interest, studying the structure of Nash equilibria can offer practical guidance for market design and policy-making**. As discussed in the last paragraph of Section 1.2, our results provide actionable insights: - When certain data sources have dominant weights, providers tend to focus on them exclusively (Theorem 5.4). Our analysis quantifies how adding more providers or incentivizing attention to underrepresented sources can shift the equilibrium toward more balanced coverage. 
- To avoid the risks of model monoculture (i.e., all data sources converging to the same model), Theorems 5.6 and 5.7 show that increasing the rationality of data sources—i.e., improving their ability to select high-performing models—encourages heterogeneous equilibria, leading to more diverse model deployment across the ecosystem. > ***I think the written is not satisfying. Maybe dividing each theorem into two or three will make it better. And also the proof is tedious.*** 1. We thank the reviewer for the valuable suggestion. **We will revise the paper to improve clarity, organization, and overall readability.** In particular, we will consider breaking down complex theorems—such as Theorems 5.4 and 5.6—into smaller components to make them easier to follow. If there are specific theorems or sections you found especially difficult to read, we would greatly appreciate further feedback, which would help us target our revisions more effectively. 2. **Regarding the proofs, we acknowledge that some arguments are lengthy, but this reflects the non-trivial nature of the results.** The main technical challenges arise from the high-dimensional continuous strategy space and heterogeneous covariance matrices $\Sigma_k$, which together form a non-linear manifold of PNE strategies (Proposition 4.1). This complexity is particularly evident in HD-Game-Probability (Section 5.3.1), where we first use a fixed-point theorem to establish a local equilibrium, and then, via a structured partition of the strategy space (Section C.7), show that it satisfies full PNE conditions. We will revise the proofs to improve clarity and provide more intuitive explanations in the future version of the paper. > ***How you get that PNE? 
Enumerating all possible profiles in $(\theta_1, ..., \theta_K)^N$?***

**The initial PNE for HD-Game-Proximity is derived using Theorem 5.4 and Corollary 5.5.** These results show that, at equilibrium, the number of model providers focusing on each data source is approximately proportional to that source’s weight. The specific allocation is computed using Equation (11), which determines how many providers should select each $\theta_k$. **This method provides an efficient way to construct a candidate PNE and does not rely on exhaustive enumeration over all possible strategy profiles in $(\theta_1, ..., \theta_K)^N$.** As a result, it is computationally efficient and scalable in our experimental setup.

---

[1] Noam Nisan et al. Algorithmic game theory. Cambridge Univ. Press. 2007.
[2] Omer Ben-Porat et al. Regression equilibrium. ACM Conference on Economics and Computation. 2019.
[3] Meena Jagadeesan et al. Improved bayes risk can yield reduced social welfare under competition. NeurIPS. 2023.
[4] Meena Jagadeesan et al. Supply-side equilibria in recommender systems. NeurIPS. 2023.
[5] Ganesh Iyer et al. Competitive model selection in algorithmic targeting. Marketing Science. 2024.
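As an aside, the proportional-allocation idea described above can be sketched in a few lines. The rule below is a generic largest-remainder split and is only an illustration; Equation (11) itself is not reproduced in this rebuttal, so the exact formula is an assumption, and the example weights are hypothetical.

```python
# Hedged sketch (not the paper's Equation (11)): assign N providers to K
# data sources roughly in proportion to the source weights, using a generic
# largest-remainder rounding rule.
def proportional_allocation(weights, n_providers):
    raw = [w * n_providers for w in weights]
    counts = [int(r) for r in raw]          # floor of each ideal share
    remainder = n_providers - sum(counts)
    # hand leftover slots to the largest fractional parts
    order = sorted(range(len(weights)),
                   key=lambda k: raw[k] - counts[k], reverse=True)
    for k in order[:remainder]:
        counts[k] += 1
    return counts

print(proportional_allocation([0.5, 0.3, 0.2], 10))  # -> [5, 3, 2]
```

The output gives the number of providers selecting each $\theta_k$ in a candidate profile, which can then be checked against the PNE conditions without enumerating all of $(\theta_1, ..., \theta_K)^N$.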
Summary: This work analyzes competition through Nash equilibria between multiple ML model providers across heterogeneous data sources. The game is characterized under two different data source choice models, and conditions are provided for each type of equilibrium. Synthetic experiments are conducted.

Claims And Evidence: Characterization of conditions for PNE existence is supported with theoretical analysis. Stronger evidence is the practical generalizability beyond linear models.

Methods And Evaluation Criteria: Theoretical findings are evaluated with synthetic experiments for various parameters such as model choice temperature and number of model providers. Experiments are rather basic, and additional real-world examples would have strengthened the evaluation.

Theoretical Claims: Statements in the main text look right. Proofs in the appendix were not checked.

Experimental Designs Or Analyses: Only experiments with synthetic data are conducted, which validate the theoretical analysis.

Supplementary Material: N/A

Relation To Broader Scientific Literature: The related work section seems comprehensive, and the work is well-situated in the literature. The main novelty seems to be extending the study of equilibrium from homogeneous data sources to heterogeneous data sources.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: A limitation of this analysis is its focus on linear models and IID distributions. Limited empirical validation with real data.

Other Comments Or Suggestions:
* In line 128, "There are N model providers (players) that need to compete the models in these K data sources."
* Explain how a data source queries each model provider for $\ell_{n,k}$. Is this the validation loss of each data source? Mention why assuming this is available is reasonable for a data market setting.

Questions For Authors:
* How do the model providers optimize $\hat{\theta}$ in practice?
* How do the model providers learn their loss on each data source? This is not discussed.
* Does this assume stationarity of the data source distributions? How would distribution shifts affect equilibria? How robust are your equilibrium results to non-stationarity in data distributions?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your reviewing efforts and constructive comments.

> ***Focus on Linear Models, IID Assumptions, and Lack of Empirical Validation***

1. **Linear Models**: Despite the rise of deep learning, linear models remain widely used for their interpretability, particularly in high-stakes domains [1], and can outperform deep models when data is limited [2]. Our framework also aligns with the linear probing setup, where only the final layer of a foundation model is updated—common when full fine-tuning is costly or prone to overfitting [3]. This makes our model relevant for analyzing competition in foundation model markets.
2. **IID Assumption**: Our setting is inherently non-IID, as data sources follow different distributions, reflected by distinct covariance matrices $\Sigma_k$. This heterogeneity is central to our analysis. Extending to more complex or dynamic distributions is a promising direction for future work.
3. **Empirical Validation**: Our synthetic experiments are designed to rigorously verify our theoretical results and explore cases beyond analytical coverage. Synthetic data allows us to exhaustively check whether a strategy profile constitutes a PNE, which is difficult to do reliably in real-world high-dimensional settings. While experimental validation in real applications is indeed valuable, we emphasize that our theoretical results apply to such settings and remain relevant even when empirical verification is challenging.

> ***In line 128: "There are N model providers (players) that need to compete the models in these K data sources."***

Thank you for pointing this out. If the concern is about clarity, we are happy to revise the sentence to: "There are $N$ model providers (players), each deploying a single model to compete across $K$ data sources." Please let us know if a different clarification was intended.

> ***Explain how a data source queries each model provider for $\ell_{n,k}$.
Is this validation loss of each data source?***

Yes, $\ell_{n,k}$ can be interpreted as the validation loss of model provider $n$ on data source $k$. The availability of such loss information is a standard modeling assumption in prior work on ML model competition [4]. In practice, this value can be obtained through interactions between model providers and data sources. A common scenario is: the data source shares a private validation set with model providers, who return predictions. The data source then evaluates the loss and may report it back to the provider.

> ***How do the model providers optimize $\hat{\theta}$ in practice?***

1. Our focus is on analyzing the properties of equilibrium among model providers, representing the stable market states. Nash equilibrium is a standard and widely used solution concept in both ML model markets [4] and broader competitive settings [5].
2. The process by which providers optimize $\hat{\theta}$ is an interesting direction for future work. While not the focus of our paper, a common approach—also used in prior work on ML model markets without heterogeneous data [6]—is **best-response dynamics**, where each provider updates $\hat{\theta}_n$ to maximize their utility in Equation (3), given others' strategies. A formal analysis of convergence in our setting remains open.

> ***How do the model providers learn their loss on each data source?***

As noted above, model providers can obtain loss information through interactions with data sources. For example, a data source may provide a validation set, and after receiving predictions from the provider, compute and share the resulting loss. This feedback allows providers to estimate $\ell_{n,k}$, which in turn guides how they weight different data sources during model training.

> ***Stationarity Assumption and Robustness to Distribution Shifts***

Our current analysis assumes fixed (stationary) data distributions.
However, the equilibrium is robust to moderate distributional shifts, as key parameters (e.g., $\ell_{\max}$, $\hat{\theta}^M$) vary smoothly with the distribution. Thus, small changes in the distributions lead to only small adjustments in the PNE, and our theoretical insights continue to hold under mild distributional shifts. Studying dynamic or non-stationary settings remains an important direction for future work.

---

[1] Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence. 2019.
[2] Muhammad Arbab Arshad et al. The Power Of Simplicity: Why Simple Linear Models Outperform Complex Machine Learning Techniques--Case Of Breast Cancer Diagnosis. 2023.
[3] Ananya Kumar et al. Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. ICLR. 2022.
[4] Omer Ben-Porat et al. Regression equilibrium. ACM Conference on Economics and Computation. 2019.
[5] Noam Nisan et al. Algorithmic game theory. Cambridge Univ. Press. 2007.
[6] Omer Ben-Porat et al. Best response regression. NeurIPS. 2017.
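For illustration, the best-response dynamics mentioned above can be sketched on a toy proximity-choice instance. The source parameters, weights, squared-error loss, and equal tie-splitting below are illustrative assumptions, not the paper's exact game or utility in Equation (3).

```python
# Toy best-response dynamics in a proximity-choice game (illustrative only).
THETAS = [0.0, 1.0, 3.0]      # hypothetical per-source optimal parameters
WEIGHTS = [0.5, 0.3, 0.2]     # hypothetical source weights (dominant first)

def utility(profile, n):
    """Weight captured by provider n: each source splits its weight equally
    among the providers with the smallest squared error on it."""
    u = 0.0
    for theta_k, w in zip(THETAS, WEIGHTS):
        losses = [(theta_k - s) ** 2 for s in profile]
        best = min(losses)
        winners = [i for i, l in enumerate(losses) if l == best]
        if n in winners:
            u += w / len(winners)
    return u

def best_response_dynamics(profile, max_rounds=50):
    profile = list(profile)
    for _ in range(max_rounds):
        stable = True
        for n in range(len(profile)):
            current = utility(profile, n)
            for cand in THETAS:
                trial = profile[:n] + [cand] + profile[n + 1:]
                if utility(trial, n) > current + 1e-12:
                    profile = trial
                    current = utility(trial, n)
                    stable = False
        if stable:
            return profile  # no profitable unilateral deviation remains
    return profile

print(best_response_dynamics([3.0, 3.0]))  # -> [0.0, 0.0]
```

Starting both providers away from the dominant source, the dynamics converge to both selecting the source with weight 0.5, consistent with the intuition that providers gravitate toward dominant sources; convergence in the general high-dimensional game, as noted above, remains open.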
Summary: This work considers a game between model providers who choose which data sources to include in their models' training. The approach is one like facility location games: the data sources are also the customers, and so e.g. a model provider could prioritize one data source to ensure that they win business from that customer. Or, in the presence of no competition, the model provider could equally weight all data sources, or weight them according to their sizes within the market. The example given up front is hospitals: they are both providing the data for training the algorithms, and they are the consumers of medical models that are trained on data from their and other hospitals.

The paper considers pure Nash equilibria, and whether or not equilibria: a) do not exist, b) exist where all model providers converge to the same outcome, or c) where model providers specialize on different sources or mixtures of sources. These are considered under two regimes: in which the facilities / data providers choose exactly the best model provider, or noisily choose providers. With two competing firms, the firms converge to the dominant data source if there is one; otherwise, there is no pure Nash equilibrium. However, this is called Heterogeneous for some strange reason. I do not understand that. With more than two competing firms, the paper gives conditions on PNE existence that translate to usually finding heterogeneous strategies when the data source's provider is chosen optimally, and can be heterogeneous or homogeneous under noisily chosen parameters. Examples and characterizations of the spaces that lead to either are included.

Claims And Evidence: I did not review the proofs in detail. The claims do not seem to be problematic.

Methods And Evaluation Criteria: It is helpful that the methods include both theoretical proofs and examples.

Theoretical Claims: I did not verify the correctness of the proofs in the supplemental appendix.
Experimental Designs Or Analyses: The paper simulates these games experimentally, but the results are just aggregated by showing 10 sample runs, not any statistics aggregated over the outputs or treatment of whether these are representative.

Supplementary Material: No.

Relation To Broader Scientific Literature: This paper proposes a model based on facility location, which has an extensive literature. That relationship is an interesting one to draw on. Those connections should be further discussed - what are the closest results? Is the intuition similar or different when moving from Euclidean space into ML model competition?

Essential References Not Discussed: Not to my knowledge.

Other Strengths And Weaknesses: The topic is interesting, and facility location makes for a good literature to draw on. I don't know that the results and model formulation are of the utmost interest. It seems clear that sometimes firms would consolidate, and sometimes differentiate. For a full treatment, this should be compared to other results in the facility location literature and general economics competition literature - what is different about the existence, homogeneous vs heterogeneous results in this setting vs other settings?

The paper speaks to a desire to have policy impacts, but the results are not discussed at the level that would result in policy discussions. What are the policy implications here? How should a policy maker evaluate this in the example scenario of training on hospital data?

The paper is poorly written, which makes it hard to follow. Here are some examples.
- Proposition 4.1 is described as characterizing the "strategy set" for each player, but it is really talking about best response strategies.
- In the definition of The Heterogeneous Data Game, the type parameter $\theta$ on the left hand side of the equation does not show up on the right side; it is implicit in $\ell_{n,k}$. This is confusing.
Adjust your notation so that you have enough space to include the type explicitly.
- $g_n(\ell_{1,k}, ..., \ell_{N,k})$ is the weight for facility $k$ on data provider $n$, which must satisfy $\sum_{n \in N} g_n(\ell_{\cdot, k}) = 1$. That seems strange, for it to be a choice of $k$ and not $n$. Perhaps put a $k$ superscript on $g$. Then you could say: $g_n^k(\ell(\hat{\theta}))$. That notation makes much more sense.

Other Comments Or Suggestions: Nit to pick: In 3.2, notationally, you should be defining $g$, not $g^{\mathrm{prop,prox}}$. You could though say $g = g^{\mathrm{prop}} = ...$.

Questions For Authors: How do your heterogeneous and homogeneous equilibrium results compare to the findings in the facility location literature? If a policy maker takes their intuition from general facility location games, what do they lose out on vs. understanding the model you present?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your reviewing efforts and constructive comments.

> ***Comparison with Competitive Location Models***

**1. Differences in Setting**

Our model captures two key features of ML markets often missing in prior work: (1) **Source-specific distance metrics** from distribution shifts, and (2) **High-dimensional strategy spaces** due to many sources and model parameters. In contrast, most competitive location models focus on **low-dimensional spaces** or **networks** with **uniform distance metrics**, driven by two factors: (1) applications in urban planning naturally fit 1D, 2D, or network settings; and (2) many models involve additional variables like price or quantity, limiting tractability and leading to smaller-scale formulations. While a few papers explore high-dimensional competition, they do so in the context of quantity competition [1] or pricing [2], rather than spatial or parameter competition.

**2. Technical Distinctions and Contributions**

While prior models also observe homogeneous and heterogeneous equilibria, our key contribution lies in characterizing *when* they arise under ML-specific conditions.
- **Heterogeneous distance metrics**: When $\Sigma_k$ differ across sources, Proposition 4.1 shows that the PNE strategy set becomes a non-linear manifold in $\mathbb{R}^D$, making equilibrium analysis substantially more complex.
- **High-dimensionality**: To our knowledge, no prior work studies PNE in high-dimensional continuous spaces, even under Euclidean distance. As prior results rely heavily on geometric properties [3, 4], existing theories do not extend to our setting.

We acknowledge connections to low-dimensional results. For example, [3] analyzes a 2D probabilistic model with conditions for homogeneous PNE, while [5] studies location–quality competition, showing that proximity choice leads to homogeneous PNE, and probabilistic choice allows both types of PNE.
However, these insights do not extend to **high-dimensional settings with heterogeneous metrics**. Classical economic models (e.g., Bertrand, Cournot) focus on price or quantity competition under simpler assumptions and do not capture the geometric complexity introduced by heterogeneous loss functions in our setting.

Technically, analyzing PNE requires new tools. To prove Theorem 5.7, we apply a fixed-point theorem to find a local equilibrium and show, via a partition of the strategy space (Section C.7), that it satisfies PNE conditions. The analysis is non-trivial due to the geometric complexity involved.

> ***Policy Implications***

**Our model captures key aspects of ML markets overlooked in prior work, as previously noted.** As discussed in Section 1.2, our theoretical results offer the following novel policy insights:
1. When certain data sources (e.g., hospitals) have dominant weights, providers tend to focus on them (Theorem 5.4). Policymakers can counter this by adding more providers or incentivizing attention to smaller sources; our results help quantify both.
2. To promote model diversity and avoid convergence to a single dominant model (e.g., in medical ML), Theorems 5.6 and 5.7 show that increasing the rationality of data sources—i.e., improving their ability to choose better models—encourages heterogeneous PNE.

> ***Writing Issues***

We thank the reviewer for the helpful suggestions and will revise the paper accordingly. In addition, in HD-Game-Proximity with $N=2$, we label the PNE as "heterogeneous" since the strategy is $(\theta_1,\theta_1)$, not the homogeneous PNE given in the paper, consistent with the heterogeneous structure observed for $N>2$.

> ***Justification of Experiments***

Our synthetic experiments verify the theoretical results and explore scenarios beyond the theory. Ten representative simulations align well with theory.
We further ran 100 random instances and report the average and standard deviation of temperature thresholds, confirming, for example, that the minimal temperature for a homogeneous PNE is typically well below $2\ell_{\max}$.

| N | 5 | 10 | 15 | 20 | 25 | 30 |
|:--|:--|:--|:--|:--|:--|:--|
| Homogeneous PNE: Minimal $t/(2\ell_{\max})$ | 0.11 (0.07) | 0.15 (0.08) | 0.17 (0.08) | 0.17 (0.08) | 0.18 (0.09) | 0.18 (0.09) |
| Heterogeneous PNE: Maximal $t/(2\ell_{\max})$ | 0.09 (0.07) | 0.12 (0.07) | 0.12 (0.07) | 0.11 (0.07) | 0.11 (0.07) | 0.11 (0.07) |

---

[1] Simon P. Anderson et al. Spatial Competition a la Cournot: Price Discrimination by Quantity‐Setting Oligopolists. Journal of Regional Science. 1990.
[2] Helmut Bester. Noncooperative bargaining and spatial competition. Econometrica. 1989.
[3] Dodge Cahan et al. Spatial competition on 2-dimensional markets and networks when consumers don't always go to the closest firm. International Journal of Game Theory. 2021.
[4] Zvi Drezner et al. Competitive location models: A review. European Journal of Operational Research. 2024.
[5] M. Elena Sáiz et al. On Nash equilibria of a competitive location-design problem. European Journal of Operational Research. 2011.
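To make the role of the temperature concrete, the following is a hedged sketch of a standard softmax-with-temperature choice rule; the paper's exact probabilistic choice function $g^{\mathrm{prob}}$ is not reproduced in this rebuttal, so the generic form below and the example losses are assumptions.

```python
import math

# Generic softmax-with-temperature choice rule (illustrative assumption):
# a data source picks provider n with probability proportional to
# exp(-loss_n / t). Small t approximates proximity choice; large t is
# near-uniform (low rationality).
def choice_probs(losses, t):
    scores = [math.exp(-l / t) for l in losses]
    z = sum(scores)
    return [s / z for s in scores]

losses = [0.2, 0.5, 0.9]                 # hypothetical validation losses
print(choice_probs(losses, t=0.05))      # near-deterministic: best provider dominates
print(choice_probs(losses, t=50.0))      # near-uniform: sources barely discriminate
```

This matches the qualitative trend above: as the temperature falls (higher rationality), choices sharply favor the best provider, which is the regime where heterogeneous equilibria are sustained.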
How to Move Your Dragon: Text-to-Motion Synthesis for Large-Vocabulary Objects
Accept (poster)
Summary: The paper aims to generate motions for different rigs from text inputs. The paper identifies a problem in the current 3D content creation community: it lacks high-quality motion datasets with annotations, and it lacks methods for handling heterogeneous skeleton templates. Therefore, the paper presents a high-quality text-annotated text-to-object-motion dataset, and a motion diffusion model that supports different skeleton templates. Specifically, the paper presents three rig augmentation methods, and incorporates Tree Positional Encoding to extend MDM to different skeleton structures. Experiments compare with several component-removed baselines and motion retargeting, proving the effectiveness of the rig augmentation strategy and the generalized motion diffusion model. Experiments with partial data also demonstrate adaptability to various rigs, motions, and unseen objects.

Claims And Evidence: The claims are well supported.

Methods And Evaluation Criteria:
+ My major concern here is the physical plausibility of the augmented rigs. As these rigs are augmented by language models, the augmentation could potentially break the physical plausibility of the original rigs (e.g., feet of uneven lengths lead to unstable standing; bones might intersect). Does the paper provide evaluation for these properties? I acknowledge that even without physical plausibility, an MDM can still benefit from the augmented skeletons. However, are these augmented rigs really animation-usable?

Theoretical Claims:
+ Since different objects might behave drastically differently with the same verb (e.g., how a scorpion and a horse attack looks drastically different, as shown in Figure 10), will this actually harm the cross-embodiment generalizability of the MDM?

Experimental Designs Or Analyses:
+ The paper compares the baseline with other methods using 'pose-level' encoders but not 'motion-level' ones.
Though I acknowledge that it's hard to train such a motion-level encoder, the paper needs to provide evaluation in terms of the quality of motion, such as motion smoothness, or even a perceptual study.

Supplementary Material: I reviewed the supplementary material.

Relation To Broader Scientific Literature:
+ The dataset would be the major contribution, which might interest both the animation and robotics communities.
+ The cross-skeleton MDM model is also interesting.

Essential References Not Discussed: Please include these works that have the ability to generate character motion with different rigs, or overlap with the rigging augmentation techniques proposed in the paper.
(1) CharacterMixer: Rig-Aware Interpolation of 3D Characters
(2) SkinMixer: Blending 3D Animated Models
Please also include some papers using surface representations or multi-view image representations for text-to-motion synthesis. Here I name some as examples:
(1) L4GM: Large 4D Gaussian Reconstruction Model
(2) DreamGaussian4D: Generative 4D Gaussian Splatting

Other Strengths And Weaknesses:
+ I can see that the paper, especially the rigging augmentation part, might also have broader impact in articulated object augmentation, which might benefit the 3D design or robotics communities.
+ The paper would also be interesting for studying skill transfer across embodiments, which might interest the robotics community.

Other Comments Or Suggestions:
+ The paper uses the phrase "large-vocab" to describe the text-to-motion dataset. However, detailed stats of word frequency and word count are not provided.
+ Please specify the RestPE in the supplementary material, if it is not provided.

Questions For Authors:
+ Are the examples in Figure 1 generated results or examples from the dataset? Please specify.
+ The dataset normalized all skeletons, as mentioned in Section 4.1.
However, since different creatures have different sizes, will normalizing object sizes confuse GPT or any text-related components, i.e., the text-to-motion model?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer’s detailed review and thoughtful insights.

### Q1: Physical plausibility of the augmented rigs.

As noted in our response to Q5 of Reviewer 2SeY, we carefully designed the augmentation pipeline and visually inspected ~10K augmented motions, which we found sufficient at this scale. The resulting skeletons appeared visually plausible and were successfully used for training. That said, we acknowledge that these rigs were not derived from physically simulated or animation-ready meshes, and may not be suitable for production use. As discussed in response to Q3 of Reviewer 4SvT, their primary goal was to increase skeletal diversity and improve model generalization—e.g., robustness to unseen rigs or cross-object transfer.

### Q2: Does using the same verb for semantically different motions across objects hurt cross-embodiment generalizability?

Rather than harming generalization, we believe this diversity encourages the model to learn embodiment-aware interpretations of similar textual prompts. The key lies in context-awareness. While both model design and data quality are important, we view this primarily as a data-centric challenge—facilitated by the adoption of a proven architecture (e.g., DiTs)—and best addressed through exposure to diverse, high-quality, and context-rich examples. To this end, we provide multi-level captions that describe actions, part-level dynamics, and initial poses. As shown in Section 7.3, these annotations help the model learn contextualized verb semantics and produce more structured motions.

As a promising direction, scaling up with diverse resources like Objaverse-XL (~100K animated 3D meshes), along with automated pipelines for multi-level captioning, could greatly enhance cross-embodiment generalization through fine-grained contextual understanding. We plan to explore this in future work.

### Q3: Additional evaluation in terms of the quality of motion.
As suggested, we conducted additional evaluations on the test sets from Figure 4-(b), where all baselines are available.

For motion smoothness, we used the Motion Stability Index (MSI) [1]; higher values indicate smoother, more stable motion. GT scored 12.01e3. Results:
- Retargeting: 9.93e3, Ours: 8.78e3, NoAug: 7.39e3, GPT-Caption: 7.30e3, SO-MDMs: 7.21e3.

Our method achieves the highest MSI among data-driven baselines, closest to GT. Retargeting scores highest overall due to direct motion transfer but lacks flexibility for novel prompts.

We also conducted a user study with 30 participants, who selected the most caption-aligned motion. Results:
- Ours: 65.6%, NoAug: 12.4%, SO-MDMs: 10.667%, Retargeting: 9.067%, GPT-Caption: 2.267%.

These results show that our method produces smooth and semantically accurate motions, both quantitatively and perceptually. We will include them in the revision.

References:
[1] Kim et al., Audio-driven Talking Face Generation with Stabilized Synchronization Loss, CVPR 2023.

### Q4: Essential references not discussed & details on RestPE / Figure 1.

We will include the relevant references in the revised manuscript. Details on RestPE will be added to the supplementary material. Also, the examples in Figure 1 are from the dataset (not generated); we will clarify this to avoid confusion.

### Q5: Word stats to support “large-vocab” claim.

In our work, “large-vocab” refers to the diversity of object categories and skeletal structures, 70+ distinct categories with unique topologies, rather than textual or action diversity within a fixed skeleton, as in prior human motion literature. That said, we appreciate the suggestion and will include text-level stats in the revision. As a preview:
- Avg. words per caption: short (12.2), mid (29.5), long (56.2)
- Verb counts: short (302), mid (495), long (707)
- Noun counts: short (402), mid (560), long (1000)
- Top verbs (short): stand (145), walk (99), raise (95), lower (75), strike (73)
- Top nouns (short): head (223), body (177), tail (165), leg (150), ground (107)

For reference, the full list of object categories is also provided in Appendix Table 2, which may help clarify the intended meaning of “large-vocab” in our current draft.

### Q6: About normalizing scales of different creatures.

We apply size normalization, motivated by early observations that it led to faster loss convergence during training. Nonetheless, we believe normalization does not hinder the model’s ability to learn size-specific motion patterns when such cues are present in the data. For example, even if a small and a large cat are normalized to the same scale, their motions may still differ in agility, stride, or posture—captured through joint rotations, or temporal dynamics and timing. Although our annotated captions do not specify size explicitly, skeletal topology and relative proportions are preserved, providing structural cues that help the model infer such differences.
Summary: This paper proposes a novel problem, text-driven motion synthesis for different skeletal structures, constructs a dataset, and develops a new model structure. The key innovation is the explicit incorporation of skeletal configuration information through Tree Positional Encoding (TreePE) and Rest Pose Encoding (RestPE). Experiments show the method generates realistic, coherent motions from textual descriptions for diverse and even unseen objects, setting a strong foundation for motion generation across diverse object categories with heterogeneous skeletal templates.

Claims And Evidence: Overall, the claims are mostly supported. However, I am still skeptical about the generalization. Could the author provide some human motion in a zero-shot way?

Methods And Evaluation Criteria: The methods are good. And some human user studies may be needed for evaluation.

Theoretical Claims: No problem here.

Experimental Designs Or Analyses: Why did the author choose PE to encode the rest pose? Did the author try other ways, like attention?

Supplementary Material: No Supplementary Material found.

Relation To Broader Scientific Literature: Previous methods make it hard to process different skeletons in one model. This paper proposes using PE to solve it.

Essential References Not Discussed: No problem here.

Other Strengths And Weaknesses:
Strengths: This paper is interesting and meaningful. The dataset and the architecture design make sense.
Weaknesses: See the questions.

Other Comments Or Suggestions: Typo: 1. Page 2, Line 93, 'Similarly' should not be a reference.

Questions For Authors:
1. Could the author provide the video demo link again? I could not open the link in the main paper PDF file.
2. Can this model generate human skeleton motions in a zero-shot way?
3. I did not get the process of rig augmentation. Did the author do retargeting after the augmentation? If yes, how do you keep the motion quality here?
If not, could you explain further how to adjust the original motion to the new skeleton?
4. Could the author separate the TreePE and RestPE ablations to see what actually works?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable feedback and questions.

### Q1: Typo & Link for the demo.

We will correct the typo. For the demo, please visit: t2m4lvo.github.io

### Q2: Human study.

Please refer to our response to Q2 of Reviewer qPq2, where we address a similar point.

### Q3: Why choose to use PE to encode the rest pose.

We opted not to explore alternative methods, as our primary goal was to preserve the scalability and architectural simplicity of the backbone, Diffusion Transformers (DiTs/SiTs), which are known for their scalable and generalizable design. Positional Encoding (PE) offered a straightforward and effective way to incorporate rest pose information while maintaining full compatibility with our transformer-based framework. That said, we acknowledge that exploring alternative conditioning strategies could be a fruitful direction for future work.

### Q4: Zero-shot inference on humans.

While our method is designed to accommodate diverse and heterogeneous skeletal templates, we find that it does not extend to human skeletons. This limitation stems from the characteristics of our training data: the Truebones Zoo dataset, even with rig augmentation, does not contain human-like skeletal structures. Additionally, the motion dynamics and textual descriptions in our dataset differ significantly from those in human motion domains.

A natural question is why human motion data was omitted from our training set. Our primary objective is to expand motion synthesis to underrepresented non-human species within a unified framework. To this end, we used the Truebones Zoo dataset, which consists of ~1K motion clips over 70 non-human object categories. In contrast, existing human motion datasets often comprise tens of thousands to millions of samples.
Incorporating human motion would introduce a significant imbalance, likely skewing the model’s capacity toward human motion and limiting its ability to learn meaningful representations for underrepresented yet far more diverse categories. We will include a qualitative example and further discussion of this limitation in the revision.

In future work, we plan to address this by scaling to larger datasets, such as Objaverse-XL, to construct a more balanced corpus of human and non-human motions. This would enable training a unified model capable of generalizing across a broader spectrum of skeletal structures and motion styles.

### Q5: How to keep the motion quality of the augmented rigs.

Yes, we retarget the original motion to the augmented skeletons, using Blender’s retargeting pipeline. To ensure motion quality, we carefully designed the augmentation process to minimize disruption to motion dynamics. Changes like rest pose adjustments and bone subdivision have limited impact, while the more sensitive bone length adjustment and bone erasing were applied conservatively.
- Length adjustments were restricted to 0.8×–1.2× of the original and applied symmetrically when symmetry existed.
- Bone erasing was limited to distal appendages (toes, head tips, tail ends) and redundant spine/root bones.

Under these constraints, we visually verified ~10K augmented results and found them suitable for training, contributing to improved generalization across diverse skeletons. We acknowledge that these details were not fully described in the original submission, and we will include a more detailed explanation in the revision to clarify the design choices and safeguards used in the rig augmentation process.

To further ensure plausibility, exploring quantitative checks (e.g., joint velocity, momentum shifts, foot contact, end-effector deviations), followed by rejection sampling, could be a promising direction.
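The length-adjustment constraint above can be sketched as follows. This is a hedged illustration only: the symmetry-pair map, the example skeleton, and per-bone independence are assumptions, and the actual pipeline runs through Blender's retargeting tools rather than this standalone function.

```python
import random

# Hedged sketch of the bone-length augmentation constraint: scale factors
# drawn from [0.8, 1.2], with left/right pairs sharing one factor so that
# symmetry is preserved.
def augment_lengths(lengths, sym_pairs, seed=0):
    rng = random.Random(seed)
    out = dict(lengths)
    paired = set()
    for a, b in sym_pairs:
        f = rng.uniform(0.8, 1.2)   # one shared factor per symmetric pair
        out[a] *= f
        out[b] *= f
        paired.update((a, b))
    for bone in out:
        if bone not in paired:
            out[bone] *= rng.uniform(0.8, 1.2)
    return out

skeleton = {"spine": 1.0, "leg_L": 0.6, "leg_R": 0.6, "tail": 0.8}
augmented = augment_lengths(skeleton, [("leg_L", "leg_R")])
assert augmented["leg_L"] == augmented["leg_R"]   # symmetry preserved
```

Keeping the factor range tight and shared across symmetric pairs is what keeps the retargeted motions visually plausible after this step.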
Another promising direction is two-stage training: pretraining on augmented rigs to improve generalization and bone manipulation, then fine-tuning on physically grounded data for higher realism.

### Q6: Further ablation study on TreePE vs. RestPE
We conducted an ablation study to isolate the effects of the PEs. This experiment was performed under the same setting as Table 1 in the paper.

| RestPE | TreePE | Rig Aug. | Train | | | Test | | | Test+ | | |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| | | | FID (↓) | R@1 (↑) | Align. (↑) | FID (↓) | R@1 (↑) | Align. (↑) | FID (↓) | R@1 (↑) | Align. (↑) |
| X | X | O | 0.98 | 0.33 | 0.66 | 2.27 | 0.26 | 0.68 | 0.86 | 0.39 | 0.74 |
| X | O | O | 0.15 | 0.85 | 0.90 | 1.17 | 0.53 | 0.83 | 0.48 | 0.58 | 0.84 |
| O | X | O | 0.02 | 0.95 | 0.95 | 0.69 | 0.59 | 0.87 | 0.42 | 0.66 | 0.89 |
| O | O | O | 0.01 | 0.97 | 0.97 | 0.68 | 0.60 | 0.89 | 0.26 | 0.67 | 0.93 |

The results confirm that both are beneficial: RestPE has a stronger impact, but the best performance is achieved when both are used. We believe the strong performance of RestPE alone is partly due to the model's ability to implicitly infer parent-child relationships from relative offsets and bone lengths.
Unlike prior works that focus primarily on a limited set of object types representable with a single fixed skeletal topology, our goal is to explore whether a unified framework can accommodate the complexity and variability of morphologically diverse categories. That said, while the Truebones dataset is a highly valuable resource due to its diversity and quality, it remains relatively small in scale—especially when compared to the datasets used to train foundation models in language or vision domains, where zero-shot inference is often the primary focus of evaluation. Given this limitation, we believe it is unreasonable to expect strong zero-shot capabilities in our setting, akin to those demonstrated by much larger models. Moreover, even with the inclusion of a small curated set of human skeletons, the broader challenge of generalizing to truly underrepresented object categories likely remains unresolved. Nevertheless, we believe our experiments on the Truebones Zoo dataset offer promising evidence that our method can effectively handle a wide range of skeletons and motion styles—and even generalize to previously unseen categories. We view this as a meaningful step toward broader motion synthesis for open-vocabulary objects, extending well beyond the limited subset of categories addressed in prior work or covered in our current study. We hope this response further clarifies the motivation and contributions of our work, and we sincerely appreciate the reviewer’s constructive feedback and the opportunity to elaborate on our approach.
Summary: This work presents a unified framework for motion synthesis across a diverse range of objects with varying skeletal structures and rest poses. To generate training data, the authors augment the Truebones Zoo dataset by modifying skeletal structures and rest poses, and by providing textual descriptions at multiple levels of detail. The diffusion model is built upon DiT, where the technical contributions lie in the new TreePE and RestPE layers introduced to accommodate arbitrary skeletal topologies and rest poses. Additionally, a two-stage training strategy is implemented to address computational resource constraints and data limitations. Based on the reported quantitative and qualitative results, the proposed method demonstrates strong performance compared to baselines and exhibits generalizability to novel skeletons and motion descriptions.

## update after rebuttal
After reading the rebuttal, I will maintain my score as weak accept. I do feel the solution proposed by the authors is interesting, and with the release of the augmented dataset, it should have broader impact for subsequent works in related areas such as artificial agents and robotics.

Claims And Evidence:
- The necessity of the proposed positional encoding layers and data augmentation techniques is validated through ablation studies.
- The method's generalizability to diverse rigs and motion descriptions is demonstrated through both quantitative and qualitative experiments.

Methods And Evaluation Criteria: The proposed method and evaluation are reasonable.

Theoretical Claims: n/a

Experimental Designs Or Analyses:
- The ablation study examines the necessity of positional encodings and rig augmentation. It looks good to me.
- The quantitative comparison in Figure 4 compares the proposed model design with other baseline architectures on text-to-motion synthesis. It is sound to me, with only one small question: why does the simple retargeting method perform better than other baselines?
Any insights here?
- Motion synthesis on novel objects and skeletons is qualitatively evaluated on different species, which is reasonable to me.
- Long motion sequence generation is achieved by conditioning on sequential descriptions with a sliding window. For this experiment, I would like to see more detailed descriptions of how the weighted blending is performed. I can understand the idea in general, but I'm a bit skeptical since I don't think naively blending the joint rotations can always ensure a natural and smooth transition. It's more like a motion in-betweening problem, so more detailed description and analysis are welcome.
- Generating motions with multi-level descriptions is sound to me.

Supplementary Material: Yes. I reviewed the contents in the appendix and webpage.

Relation To Broader Scientific Literature: This paper proposes a motion synthesis framework for a broad range of objects, extending previous motion diffusion models beyond humanoid applications. In general, this approach has potential applications in areas such as artificial agents and robotics, enabling motion synthesis for non-humanoid robots.

Essential References Not Discussed: The references are sufficient for me.

Other Strengths And Weaknesses:
[Strengths]
- The paper proposes a motion synthesis framework capable of handling diverse skeletal topologies and rest poses, covering a wide range of objects.
- The authors augment the Truebones Zoo dataset to include more skeletal variations and high-quality text descriptions. The release of this dataset should facilitate research in related areas.
- Both quantitative results and qualitative evaluations demonstrate that the proposed method surpasses baselines in terms of generation quality.
- The paper is well-written, technically correct, and of good quality.

[Weaknesses]
- No global translation. Due to the model design, all the generated motions are pinned at the origin without global translation of the root joint.
It would be beneficial to include discussions on how to generate the global translation, especially for non-humanoid skeletons.

Other Comments Or Suggestions:
- As the two-stage training strategy is specially designed for computational resource constraints and data limitations, it would be helpful to include details on the hardware specs used for model training as well as the overall training time.

Questions For Authors: n/a

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer for raising important points and providing constructive feedback.

### Q1: Why does the simple retargeting method perform better than other baselines?
Data-driven learning-based methods each have limitations in generalization or controllability:
- GPT-Caption is trained on captions automatically extracted from renderings of motions, which tend to be noisy or ambiguous. This weakens the model’s ability to learn reliable text-motion alignment, resulting in limited text-driven controllability.
- Single-Object MDM is trained on motion from a single object category and tends to overfit, making it difficult to generalize beyond the training distribution.
- w/o Rig Aug is trained on high-quality captions and diverse objects, but without rig augmentation, the model lacks sufficient exposure to structural variation, limiting its cross-embodiment transfer capability, as discussed in our response to Q3 of Reviewer 4SvT.

In contrast, retargeting directly copies bone transformations from the source motion. This naturally boosts novelty and, when source and target skeletons are similar, leads to higher coverage. That said, retargeting cannot generate motion from text and depends on existing motions and structural similarity.

### Q2: More description and analysis for long-motion synthesis.
We fully agree that naively blending joint rotations, regardless of the representation used (e.g., Euler angles, quaternions, rotation matrices, or even continuous 6D representations), does not always guarantee a smooth or physically plausible transition, due to the discontinuous and nonlinear nature of rotational spaces [1]. We also acknowledge that this is closely related to motion in-betweening, which typically requires dedicated interpolation strategies or learned dynamics. To generate long motions, we prepare $B$ consecutive text descriptions, each guiding a motion segment of length $F$, resulting in a long sequence of $B \times F$ frames.
Before denoising, we reorganize the sequence into $B$ overlapping segments. Specifically, the $b$-th segment spans frames from $(b - 1)F - (b-1)W + 1$ to $bF - (b - 1)W$ in the original sequence, introducing overlaps of $W$ frames with both the $(b - 1)$-th and $(b + 1)$-th segments. The first segment ($b = 1$) is an exception and simply takes the first $F$ frames, i.e., frames $1$ to $F$. This reorganization results in duplicated content within the overlapping regions. It’s worth noting that, as a result, the final segment ends at frame $BF - (B - 1)W$, and any frames beyond that (i.e., from $BF - (B - 1)W + 1$ onward) are ignored during sampling. Each divided segment is then denoised independently, conditioned on its respective caption. After denoising, the segments are concatenated to reconstruct the full sequence, while the overlapping regions are blended by simple averaging to resolve duplication. While our implementation uses uniform blending (i.e., averaging), more sophisticated strategies—such as linear ramps or cosine-weighted curves—can also be applied. Despite its simplicity, we empirically found that uniform averaging produces smooth, coherent transitions, especially when the overlap size $W$ is moderate (e.g., $W = 5$) and the text prompts transition naturally between segments. As also noted by Reviewer 4SvT, we evaluated long motion quality via semantic fidelity and smoothness, comparing it to short sequences ($F=90$) used during training, to assess whether quality degrades over time. Please find our response to Q2 of Reviewer 4SvT for details. References: [1] Zhou et al., On the Continuity of Rotation Representations in Neural Networks, CVPR 2019. ### Q3: About global translation. In our current setup, we model only joint-wise euler angle rotations and omit global translation of the root joint. However, this was a design choice made for simplicity, not a fundamental limitation of the model. 
The model operates on input-output tensors of shape $F \times J \times D$, where $D = 3$ corresponds to joint rotations. This can be naturally extended to include additional joint-level features—such as global root translation or relative translations for soft-constrained joints—by increasing $D$ (e.g., $D = D_\text{rot} + D_\text{trans}$), or further expanded to incorporate physically informative signals like joint velocities, foot contact indicators, or positions to enhance temporal coherence, realism, and physical plausibility. We will include a discussion of this direction in the revised manuscript. ### Q4: About computational resources. All models were trained on a Linux system with either an NVIDIA RTX 48GB A6000 or 40GB A100 GPU. - Pose Diffusion Model: \~29GB VRAM, batch size 512, 400K iterations (\~30 hours) - Motion Diffusion Model: \~38GB VRAM, batch size 4, sequence length 90, 1M iterations (\~4 days) We will include this information in the revised manuscript.
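The overlap-split and uniform-averaging blend described in Q2 can be sketched in a few lines of NumPy. This is our own minimal illustration (function names and the round-trip check are not from the paper's code); in the actual pipeline, each segment would be denoised independently before blending:

```python
import numpy as np

def split_segments(x, F, W):
    """Split a sequence of length B*F - (B-1)*W into B overlapping
    F-frame segments; consecutive segments share W frames."""
    B = (len(x) - W) // (F - W)
    return [x[b * (F - W): b * (F - W) + F] for b in range(B)]

def blend_segments(segments, W):
    """Concatenate segments back into one sequence, resolving the
    duplicated overlap regions by uniform averaging."""
    F = len(segments[0])
    B = len(segments)
    total = B * F - (B - 1) * W
    out = np.zeros((total,) + segments[0].shape[1:])
    cnt = np.zeros(total)
    for b, seg in enumerate(segments):
        s = b * (F - W)
        out[s:s + F] += seg
        cnt[s:s + F] += 1
    # Broadcast the per-frame counts over any trailing joint dimensions.
    return out / cnt.reshape((-1,) + (1,) * (out.ndim - 1))
```

With $F = 90$ and $W = 5$ as in the rebuttal, splitting and re-blending an unmodified sequence is a lossless round trip; once the segments are denoised independently, the averaging smooths the $W$-frame boundary regions.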
Summary: This work presents a major advancement in text-driven motion synthesis for large-vocabulary objects with heterogeneous skeletal structures. By augmenting datasets, introducing novel rig adaptation techniques, and extending motion diffusion models, the authors enable realistic motion synthesis for both seen and unseen objects. This framework lays a strong foundation for animation, gaming, virtual reality, and robotics applications.

Claims And Evidence:
1. Claim: The proposed method significantly outperforms all existing approaches in text-to-motion generation. Partially supported, but lacks direct baseline comparisons. While the model is tested against ablated versions (e.g., without rig augmentation or using GPT-generated captions), there is no direct comparison with existing SOTA models like OmniMotion-GPT or SinMDM. The paper would benefit from side-by-side quantitative evaluations against prior text-to-motion models using the same datasets.
2. Claim: The framework enables high-fidelity, temporally coherent long-motion synthesis. Limited empirical validation. Figure 7 shows generated longer sequences, but there are no explicit temporal consistency metrics (e.g., FID over extended motion sequences, smoothness scores). The paper could strengthen this claim by adding quantitative evaluations of motion continuity and stability over time.

Methods And Evaluation Criteria: N.A

Theoretical Claims: No Theoretical Bound for Generalization to Novel Skeletons: The claim that the model generalizes well to unseen skeletons (Section 7.1) is based on empirical evidence, but there is no theoretical proof explaining why TreePE and RestPE should generalize across arbitrary skeletal configurations.

Experimental Designs Or Analyses: N.A

Supplementary Material: Yes, A~C.

Relation To Broader Scientific Literature: N.A

Essential References Not Discussed: N.A

Other Strengths And Weaknesses: N.A

Other Comments Or Suggestions: N.A

Questions For Authors: N.A

Ethical Review Concerns: N.A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s valuable time and feedback.

### Q1: Comparisons with SOTA models like OmniMotion-GPT or SinMDM.
We agree that comparisons to SOTA models are valuable. However, OmniMotion-GPT and SinMDM address different goals and operate under different assumptions, making direct, meaningful comparisons less feasible.

OmniMotion-GPT leverages human motion priors for text-driven motion synthesis of SMAL-based quadrupeds. It relies on:
- Predefined joint correspondences between humans and animals, supporting only fixed, four-limbed skeletons (e.g., quadrupeds), and failing on animals with non-analogous structures (e.g., snakes, fish, insects).
- Text input alone is insufficient: relevant human motion is required at inference (e.g., animating “a lion pushing a box” needs a human pushing motion), limiting text-only synthesis.
- Since our dataset (Truebones Zoo) contains a wide variety of non-human skeletons for which such paired human motion does not exist, applying OmniMotion-GPT in this context is infeasible.

SinMDM, while based on MDM like ours, differs fundamentally:
- It aims to discover and recombine submotions (i.e., motion motifs) from a single clip using local attention to increase the diversity within a single motion, rather than focusing on text-driven synthesis or spanning across object categories.
- It assumes a fixed skeleton and rest pose, making it incompatible with varied or unseen structures, and is not text-conditioned.
- Adapting SinMDM to our setting would require substantial changes: introducing text conditioning and training a separate model per object type. In that case, the closest equivalent in our experiments is Single-Object MDM, which we already include as a baseline.

That said, we view SinMDM as orthogonal and potentially complementary to our work. While SinMDM enhances diversity within a fixed structure and motion, our model focuses on text-driven synthesis and cross-category modeling.
Combining these directions would be a promising avenue for future research. ### Q2: Evaluation on synthesized long motions. We thank the reviewer for the suggestion. We evaluated both semantic consistency and temporal smoothness in long-form motion with respect to the original short-form motion. To assess semantic fidelity, we conducted a retrieval test using the pretrained pose encoder: - Using 10 captions from Sec. 7.2, we generated 100 long motions (900 frames each = 10 × 90-frame segments), with shuffled caption order. These were split into 1000 90-frame segments. - Separately, we generated 100 90-frame reference clips per caption (1000 total). - For each segment, we retrieved the top match from the reference set using cosine similarity. A correct match retrieved a clip from the same caption. This yielded 95.6% top-1 accuracy, indicating that segments remain semantically aligned after blending. To assess temporal smoothness, we measured joint velocity norm (lower = smoother) and Motion Stability Index (MSI [1,2]; higher = smoother) over 5-frame clips sampled from three conditions, using the same generated and reference sequences as above: - Upper Bound: from random mid-segments within reference clips — velocity = 0.094, MSI = 79,602 - Ours: from overlapping segments with blending in long-form generated motion — velocity = 0.188, MSI = 26,716 - Baseline: from hard concatenation of independently generated clips — velocity = 0.444, MSI = 3.11 These results show that our method significantly reduces boundary discontinuity compared to naive concatenation, while preserving motion dynamics close to intra-clip smoothness. We will include these results in the revised manuscript. References: [1] Ling et al., StableFace: Analyzing and Improving Motion Stability for Talking Face Generation, ECCV 2022. [2] Kim et al., Audio-driven Talking Face Generation with Stabilized Synchronization Loss, CVPR 2023. 
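The retrieval test above (embed each segment with the pretrained pose encoder, retrieve the nearest reference clip by cosine similarity, and check that the caption matches) reduces to a few lines. This is our own sketch of the metric, assuming embeddings and caption labels are given as arrays; it is the kind of computation behind a top-1 accuracy figure like the reported 95.6%:

```python
import numpy as np

def top1_retrieval_accuracy(query_emb, query_labels, ref_emb, ref_labels):
    """Fraction of queries whose cosine-nearest reference has the same label."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    best = (q @ r.T).argmax(axis=1)  # index of the most similar reference
    return float(np.mean(query_labels == ref_labels[best]))
```

In the setting described above, the queries would be the 1000 segments cut from blended long motions and the references the 1000 independently generated 90-frame clips, with captions serving as labels.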
### Q3: Generalization to unseen skeletons TreePE and RestPE are introduced to make the model skeleton-aware, aiding learning, while rig augmentation is designed to drive generalization. Our intuition behind rig augmentation is simple: First, by constructing and showing the model a distribution of skeletons that share the same motion and caption, we encourage it to focus on the motion’s semantic meaning rather than its specific structural form. In addition, by displaying a variety of skeleton topologies and rest poses for each motion, we aim to teach the model how to manipulate bones to generate the motion. Moreover, as these distributions expand through augmentation, they may begin to overlap across different object classes, which we believe acts as a bridge and helps the model transfer motion patterns more effectively to novel skeletons. We’ll clarify this in the revision—our intent was for PE to support learning via structural awareness, and for rig augmentation to promote generalization by encouraging structural invariance.
Near Optimal Non-asymptotic Sample Complexity of 1-Identification
Accept (poster)
Summary: This paper studies fixed-confidence $1$-identification for sub-Gaussian distributions, where the learner should output one arm whose mean exceeds a given threshold if such an arm exists, and output None otherwise. This pure exploration problem with multiple correct answers has been considered under different names in the literature, e.g., Good Arm Identification or any low arm, and with slightly different variants. The authors propose an algorithm named Sequential-Exploration-Exploitation (SEE), see Algorithm 1. This is a phase-based algorithm that calls two subroutines at each phase: an exploration algorithm whose sampling rule is based on UCB indices (Algorithm 2) and an exploitation algorithm that only pulls the candidate answer from Algorithm 2 to verify that it is actually a good arm (Algorithm 3). A non-asymptotic upper bound on the expected sample complexity of SEE is given in Theorem 5.3. For positive instances, the upper bound remarkably only features the gap between the mean of the best arm and the threshold. Two lower bounds have been derived on the expected sample complexity for negative instances (Theorem 5.4) and positive instances with only one good arm (Theorem 5.5). They also derived a lower bound on the number of pulls of suboptimal arms for positive instances with only one good arm (Theorem 5.6). The empirical performance of SEE is compared with that of several algorithms from the literature on several instances.

**## update after rebuttal** Including the discussions will improve the paper in its revised version, hence I maintain my positive score.

Claims And Evidence: **Non-asymptotic lower bounds only on instances with a unique correct answer.** Theorem 5.4 holds for negative instances where the only correct answer is None. Theorems 5.5 and 5.6 hold for positive instances with a unique good arm. Therefore, there is no non-asymptotic lower bound when the instance has multiple correct answers, which seems to be restrictive.
Could the authors comment on the challenges posed by instances with an arbitrary number of good arms ? Taking two arms as an example would already be valuable. My interest is sparked by the fact that non-asymptotic lower bounds are quite challenging to obtain for a pure exploration problem with multiple correct answers, as being correct on one instance doesn't necessarily imply that we are incorrect on an alternative answer due to non-unique correct answers. For example, in terms of $\log(1/\delta)$ dependency, the only tight lower bound in this case is asymptotic in $\delta \to 0$. As I am less familiar with lower bounds based on the random permutation model, I would appreciate having insights on the limitations of multiple correct answer for this kind of lower bounds. **Misleading statement on the matching upper and lower bounds on the pulling complexity.** Given the above comment on the fact that there is no lower bound for positive instances with more than one good arm, it is important to reformulate the statement that suggests matching upper and lower bounds for any instances. For example, Line 27-28 “achieve near optimality, in the sense of matching upper and lower bounds on the pulling complexity” or “our proposed algorithm achieves the non-asymptotic optimality in the sample complexity both the positive case when there is a qualified arm, i.e. an arm with mean reward at least the threshold, and the negative case when there is no qualified arm. We prove matching upper and lower sample complexity bounds, and the gap between these upper and lower bounds is up to a polynomial logarithm factor.” **Gaussian instances.** To the best of my understanding based on reading the proofs, Theorems 5.5 and 5.6 require the instances to be Gaussian and not solely sub-Gaussian (as in Theorem 5.4). Therefore, this should be added to the statements, or the proof should be adapted. For Theorem 5.5, this comes from Lines 1444-1446. 
For Theorem 5.6, this comes from Lemma C.1, which is used in the proof.

**Non-asymptotic upper bound of Theorem 5.3.** Even though $\gamma$ seems to be an absolute constant, it would be valuable to have a more explicit closed-form constant. This would allow one to better understand the impact of each of the hyperparameters used in SEE. Based on a detailed reading of the proofs, it is not possible to clearly extract a formula from the proofs, which heavily rely on $O(\cdot)$ notation. Moreover, it also seems like this constant will be very large.

**Non-asymptotic lower bound of Theorem 5.5.** While I understand the need to restrict the parameter $\delta$, it is not clear why the authors restrict themselves to mean parameters in $[0,1]^{K+1}$. Could the authors comment on where they used this condition and why it is important? This seems unnecessary based on the existing literature that adopts the random permutation model to prove lower bounds, e.g. Simchowitz et al. (2017, The simulator: Understanding adaptive sampling in the moderate-confidence regime), Chen et al. (2017, Nearly instance optimal sample complexity bounds for top-k arm selection), Al Marjani et al. (2022, On the complexity of all $\epsilon$-best arms identification) or Poiani et al. (2024, Best-Arm Identification in Unimodal Bandits).

**Non-asymptotic lower bound of Theorem 5.6.** Similar question, namely why the boundedness of the mean parameters is important. In that case, the condition $\delta < e^{-8} \approx 3 \cdot 10^{-4}$ starts to become restrictive for practical applications. Could the authors comment on whether a finer analysis would allow a weaker condition on $\delta$.

Methods And Evaluation Criteria: See “Experimental Designs Or Analyses” section for details on the empirical evaluation.

Theoretical Claims: I checked the correctness of the theoretical results. The depth of my proofreading is detailed in the question on “Supplementary Material”.
I highlighted some typos or minor errors in the question “Other Comments Or Suggestions”. Experimental Designs Or Analyses: For varying dimension ($K \in \{10,20,30,40,50\}$) and confidence level ($\delta \in \{10^{-k}\}_{k \in \{2,3,4\}}$), the experiments consider multiple instances with and varying that address specific cases of interests: one negative instance (AllWorse) and several positive instances. The positive instances have either two groups of arms and varying number of good arms (Unique versus OneQuarter), or have linearly decreasing mean (Linear). The experiments are repeated $1000$ times and errors bars are shown. While the experimental setup is convincing, three important benchmarks are missing: Track-and-Stop, Murphy Sampling and lil’HDoC. Updating the plots with those benchmarks will go a long way to provide satisfying empirical evidence on the performance of SEE compared to existing algorithms. This is important and possible during the author-reviewer discussion, at least to have preliminary results. - Track-and-Stop (TaS) algorithm from Garivier and Kaufmann (2016) adapter to GAI. Namely, on top of forced exploration and based on C-Tracking, the optimal allocation is the Dirac distribution on the empirical best arm when the empirical best mean exceeds the threshold, and it is inversely proportional to the inverse square mean gap to the threshold otherwise. A more detailed description exists in Appendix I.2.3 of Jourdan and Reda (2023). As TaS is a reference algorithm for fixed-confidence pure exploration problems, it is important to include it. Appendix I.5 of Jourdan and Reda (2023) suggests that it performs well. - Murphy Sampling (MS) algorithm from Kaufmann et al. (2018). The improved stopping rule of MS is tailored to a slightly different pure exploration problem (Section 5), i.e., return whether the instance is positive without returning a good arm. 
However, the sampling rule of MS can be used for GAI when combined with the recommendation/stopping rule in Section 4 of Jourdan and Reda (2023). Appendix I.5 of Jourdan and Reda (2023) suggests that MS performs well.
- lil’HDoC algorithm from Tsai et al. (2024, “lil’HDoC: An Algorithm for Good Arm Identification Under Small Threshold Gap”). Since it numerically outperforms HDoC, it is important to understand whether SEE outperforms it. See “Relation To Broader Scientific Literature” for more details.

Supplementary Material: Appendices A, B.1, B.2, C, D, E in detail. I didn’t check all the steps of all the proofs in Appendix B.3.

Relation To Broader Scientific Literature: The authors should rephrase the following false (or at least misleading) statement on the literature (L 147-150). “HDoC and APGAI are both $\delta$−PAC algorithm, but they are not $(\Delta, \delta)$-PAC, as they both suffer from the infinite complexity issue, i.e. the upper bound of $\mathbb E[\tau] = + \infty$ if there exists an arm $a$ whose $\mu_a = \mu_0$”. While the known upper bound on the expected sample complexity of both algorithms tends to infinity, this doesn’t imply that the expected sample complexity goes to infinity. In order to prove such a statement, one would need a lower bound on the expected sample complexity that tends to infinity when there exists an arm $a$ whose $\mu_a = \mu_0$. Therefore, the currently known results don’t imply that HDoC and APGAI are not $(\Delta, \delta)$-PAC. While it seems legitimate to conjecture that APGAI might not be $(\Delta, \delta)$-PAC based on the empirical evidence, HDoC might still be $(\Delta, \delta)$-PAC.

Essential References Not Discussed: To the best of my understanding, the authors totally omitted one key related work, namely Tsai et al. (2024, “lil’HDoC: An Algorithm for Good Arm Identification Under Small Threshold Gap”). This paper proposes lil’HDoC, which builds on HDoC by using the law of the iterated logarithm.
Similarly to HDoC, lil’HDoC is designed to sequentially return all the arms above a given threshold. It is $\delta$-correct for this task (Theorem 1) and enjoys guarantees on the sample complexity to return the $k$-th best arm above the threshold (Theorem 3). Using their Theorem 3 for the best arm of a positive instance yields a high-probability upper bound on the sample complexity scaling as $O(\Delta_{0,1}^{-2} \log(\log(\Delta_{0,1}^{-2}/\delta)/\delta))$. While these are high-probability guarantees, it seems reasonable to expect that lil’HDoC is also $(\Delta, \delta)$-PAC. Moreover, given the improved performance of lil’HDoC compared to HDoC, it is quite important to compare it with SEE.

Other Strengths And Weaknesses:
**Lack of conclusion.** The paper sorely lacks a conclusion that summarizes the contributions and discusses open problems. Could the authors write a sketch of the conclusion that they would add in a potential camera-ready version? Even in the submitted version, some space should have been saved to write a proper conclusion. For example, one could put in the Appendices some plots showing the impact of $\delta$ for all the instances in Section 6.
**Lack of algorithmic simplicity.** It’s worth noting that SEE is quite convoluted algorithmically speaking, and some components exist only for the sake of analysis. First, this is a phase-based algorithm which calls two distinct subroutines at each phase (Algorithms 2 and 3) that do not share samples. This seems to be wasteful. Could the authors elaborate on whether the lack of sharing of observations is only to facilitate the analysis, or whether it is rooted in more fundamental reasons? Second, the authors mention that the temporary container $Q$ exists only to “facilitate our theoretical analysis”. Could the authors discuss the challenges that arise from simplifying the algorithm by removing the temporary container $Q$? It would be great to actually remove this altogether.
Third, there are four hyperparameters with arguably “subjective” choices of default values, i.e., $C$ and $(\alpha_{k}, \beta_{k}, \delta_{k})$. Fourth, it is unclear to me what the intuition is behind the exploration and exploitation horizons $(T_{k}^{ee}, T_{k}^{et})$, which are very large. For example, with default hyperparameter values, we obtain $T^{ee}_{1}/K \approx 20{,}000$. Therefore, the first exploration horizon is actually larger than the empirical stopping time in the experiments of Section 6. Given the simplicity of the GAI setting, it seems unsatisfactory to solve it with a convoluted approach.

Other Comments Or Suggestions:
- Theorem 5.4 could be stated more precisely instead of using the $\Omega(\cdot)$ notation. The authors show that $\mathbb E[\tau] \ge \log(1/(2.4 \delta)) H_{1}^{neg}$.
- Theorem 5.5 could be stated more precisely instead of using the $\Omega(\cdot)$ notation. The authors actually have a result that is not too difficult to state in detail.
- End of page 12 to beginning of page 13. The $\delta_{k-1}$ and $\alpha_{k-1}$ should not be moved outside the sum over $k$.
- Line 617. $\beta_k = 2^k$.
- Lines 618-621. There seems to be an error in the block equation. Since $3^{\log_2(x)} = x^{\alpha}$ where $\alpha = \ln(3)/\ln(2) > 1$, the authors should obtain that $1/\delta_{L’} = O((H_{1}^{pos}/K)^\alpha)$ and $1/\delta_{L’’} = O(\Delta_{0,1}^{-2\alpha})$. Instead, they claim that the same upper bound holds with $\alpha = 1$. This doesn’t seem to change the main result, as only $\log(1/\delta_{k})$ terms appear.
- Lines 702-705. The first $\delta$ should be a $\delta_{k-1}$ and the last inequality should be $\log(K/\delta)$.

Questions For Authors:
1. Could the authors discuss in more detail the differences between their sampling and stopping rules and the ones used previously in the literature, e.g., HDoC, LUCB-G, APGAI and lil’HDoC.
2.
To the best of my understanding, the sampling rule used in Algorithm 2 seems to bear similarity with lil’UCB from Jamieson et al. (2014). Could the authors highlight the differences? 3. Could the authors compare the lower bound of Theorem 5.6 with Proposition 2 in Kaufmann et al. (2018)? Both seem to say something about the expected number of selections of suboptimal arms. Several other questions have been asked in the previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your suggestions, and we will correct the typos in the revision. Due to the length limit, we can only answer part of the questions in comments. We look forward to discussing more in the next iterations. 1. "no non-asymptotic lower bound when the instance has multiple correct answers" + Please check the 1st and 2nd points in the response to reviewer M2Aq. + If there are multiple qualified arms, our upper bound is nearly optimal regarding the $\delta$-dependent part, but the $\delta$-independent part is loose. + Despite that, **none of the existing algorithms can achieve the same theoretical guarantee as us**. 2. "Motivation of temporary container Q and complicated alg design" Please check the 3rd point in our response to Reviewer eELW. 3. "Theorems 5.5 and 5.6 require the instances to be Gaussian and not solely sub-Gaussian" + A Gaussian distribution with variance $\leq 1$ is 1-sub-Gaussian. Theorems 5.2 and 5.3 work for all sub-Gaussian instances, while the lower bounds (Theorems 5.5, 5.6) show they are nearly tight on a sub-class of sub-Gaussian distributions. + It is generally impossible to expect the lower bound to hold for all 1-sub-Gaussian distributions. For example, if all the rewards are deterministic, an algorithm pulling all the arms once can be $\delta$-PAC with sample complexity $K$. 4. "differences of sampling and stopping rules among HDoC, LUCB-G, APGAI, lil’HDoC, SEE" Please check the last part in our reply to reviewer M2Aq. The main difference between ours and existing work is + we adopt an increasing radius of the confidence interval. + **we do not eliminate any arm.** 5. "compare the lower bound of Theorem 5.6 with Proposition 2 in Kaufmann et al. (2018)" Summary of result: + Summing up the lower bound in our Theorem 5.6, we get $\mathbb{E}\tau\geq \Omega(\sum_{a=2}^K\frac{\log 1/\Delta_{1,0}^2 }{\Delta_{1,a}^2})$, meaning the term $\log 1/\Delta_{1,0}^2$ in Theorem 5.3 might be required. + In Kaufmann et al. 
(2018), they derive the lower bound $\Omega(\frac{1-K^3\delta}{K\max\\{\Delta_{1,0}^2, \Delta_{0,K}^2\\} })$. We have $\log 1/\Delta_{1,0}^2$ in the numerator and no $K$ in the denominator, suggesting that our bound is stronger. But we require more assumptions: + Only $\mu\_1$ is above $\mu\_0$ + $\mu\_1-\mu\_0$ is sufficiently small + we are required to output an arm while they do not 6. "Discussions on lilHDoC" + It synergizes HDoC with the lil rule, and its framework requires a warm-up stage + It derives an extra high-probability upper bound for each $N\_a(\tau)$: $\Pr(N\_a(\tau) < O(\frac{ \log\frac{K}{\delta}+\log\log\frac{K}{\delta\Delta_{0,a}^2} }{\Delta_{0,a}^2})) > 1-\delta$ + In the case $\Delta_{0,a}=0$, this upper bound is infinite + It is still suboptimal, as a $\log\log\frac{1}{\delta}$ term exists, and the upper bound is infinite for some $\nu\in \mathcal{S}^{pos}\cup \mathcal{S}^{neg}$ + We will include this discussion in our literature review. 7. "lilHDoC, HDoC and APGAI might be $(\Delta, \delta)$-PAC algorithms" Here we rigorously prove lilHDoC is not $(\Delta, \delta)$-PAC. Consider a two-arm instance with $\mu_1>\mu_0=\mu_2$. Arm 1 follows a unit-variance Gaussian and arm 2 returns a constant reward. + With non-zero probability $p$, $\text{UCB}_1(t) < \mu_0$ holds after the warm-up stage. + Then arm 1 will get removed from the arm set. In this case, lilHDoC will keep pulling arm 2 without termination. + In an event with positive probability, $\tau=+\infty$, so we can conclude $\mathbb{E}\tau=+\infty$. A similar idea applies to APGAI and HDoC. They are not $(\Delta, \delta)$-PAC. 8. "Compare with TaS, MS, lilHDoC in numeric part" While we implement lilHDoC and an adapted version of TaS, the algorithm MS is incompatible with our model. + TaS is not designed for 1-identification. 
The only available stopping condition is provided in Theorem 10, Degenne & Koolen 2019 (S-TaS), which **doesn't provide an explicit value for "large enough C"**. We use the pulling rule of TaS, while adopting the stopping rule of S-TaS with a lower bound on the required C. **Equivalently, we remove the "Sticky" part in S-TaS**. + The length of lilHDoC's warm-up stage is already much larger than SEE's empirical stopping time in our numerical settings. We set T=200 as the warm-up pulling times for each arm. + MS only determines if an instance is positive or negative, but does not identify a good arm in the former case. MS does not solve the GAI problem, unlike SEE, which both solves GAI and identifies whether an instance is positive or negative. Thus, MS and SEE cannot be compared on the same ground. Point 5 for reviewer K8WR shows the results. References + Jamieson et al. 2014, lil’ UCB : An Optimal Exploration Algorithm for Multi-Armed Bandits + Kaufmann et al. 2018, Sequential Test for the Lowest Mean: From Thompson to Murphy Sampling + Degenne & Koolen 2019, Pure Exploration with Multiple Correct Answers --- Rebuttal Comment 1.1: Comment: I thank the authors for their thorough and detailed answers, as well as the additional experiments. For the time being, I will keep my positive score. For the sake of discussion, I detail some follow-up comments below. Feel free to use the extra space to add comments on questions not previously addressed due to the space limit. **1**. In the revised version, it would be great to include a more detailed discussion on this lower bound for instances with multiple good arms. This would allow substantiating and nuancing the currently misleading claims on matching upper and lower bounds for all instances. Does the sum really involve the term $\Delta_{1,m}$ that is independent of $a$? 
Given the gap in the non-asymptotic dependency of the lower/upper bound for those instances, do the authors have an intuition as to whether the lower and/or the upper bound could be improved? **3**. I don’t expect the lower bound to hold for all sub-Gaussian instances. Yet, the Gaussian assumption should still be explicitly mentioned in Theorems 5.5 and 5.6. **8**. We thank the authors for including lilHDoC in their experiments. Here are two clarifications on statements made for TaS and MS. - **TaS**. The sentence “The only available stopping condition is provided in Theorem 10” is false. You can use the stopping rule from Jourdan and Reda (2023), see equation (5) in Section 4. Their Lemma 2 ensures that it is $\delta$-correct for sub-Gaussian distributions for any sampling rule. Using a loose upper bound on $C$ probably explains why TaS has poor performance in the provided additional experiments, which are not consistent with the empirical results in Appendix I.5 of Jourdan and Reda (2023). - **MS**. From an empirical perspective, the sentence “MS and SEE cannot be compared on the same ground” is misleading. As detailed in my initial review, MS has been adapted for GAI with good empirical performance, see Appendix I.5 of Jourdan and Reda (2023). From a theoretical perspective, this modified MS and SEE cannot be compared yet. **Miscellaneous**. On the “linear” instance, the mean of APGAI is lower than the one of SEE. Is bold used to highlight the one with the lowest mean + std? --- Reply to Comment 1.1.1: Comment: Thanks for your careful and detailed reply. First, we want to answer some questions missed in our first rebuttal. 1. "why restrict $\mu\_a\in [0, 1]$" + About the upper bound, $\mu\_a\in [0, 1]$ guarantees $\Delta\_{i,j} < 1$, and further $\log 1/\Delta\_{i,j}^2 >0$. If $\Delta\_{i,j}>1$, we need to turn to $\lceil ( \log \lceil 1/\Delta\_{i,j}^2 \rceil^+)\rceil^+,\sum\_a \lceil 1/\Delta\_{1,a}^2 \rceil^+$, which is strenuous, but the analysis still holds. 
+ About the lower bound, $\mu\_a\in [0, 1]$ is required. We need to construct "hard" enough problem instances to show any algorithm must suffer some complexity. 2. "“subjective” choice of default values, i.e., $C,\beta\_k,\alpha\_k, \delta\_k$" We analyze the requirements on $C,\beta\_k,\alpha\_k, \delta\_k$ in Appendix B.1. + An exponentially decreasing/increasing speed for $\delta\_k, \beta\_k$ is required in our theoretical analysis. + Taking $\beta\_k=2^k,\alpha\_k=5^k, \delta\_k=1/3^k$ seems to be a natural choice. 3. "why $T\_k^{ee}$ so large" + Recall the definition $T\_k^{ee}=1000(C+1)^2K\beta\_k\log(4K/\delta\_k)$. + The constant 1000 is mainly for **simplifying the calculation in Lines 809-871, which is for the upper bound for** $L^{pos}\_{ee}, L^{pos}\_{et}, L^{neg}$. 1000 is large enough such that solving those inequalities becomes easier. + If we remove the coefficient 1000, it will only affect the constant factor in our upper bound, and the theoretical analysis still holds. 4. "No sharing of samples between ee and et, wasteful" We admit it is possible to further improve our current result by merging the samples collected in the exploration and exploitation periods. + Our current design is to **simplify the proof**, as it is easier to treat $\kappa^{ee}, \kappa^{et}$ independently. + If we merge these two phases, we may need to distinguish the smaller one between $\delta / \alpha\_{\kappa^{et}}, \delta\_{\kappa^{ee}}$, which makes the current analysis more complicated. 5. "On the “linear” instance, the mean of APGAI is lower" The mean stopping time of APGAI is indeed smaller in the group Linear. But we want to clarify that our result only estimates a lower bound on its empirical mean; a larger forced-stopping threshold results in a larger value. We will elaborate more on APGAI's result in our revision. We will also follow your suggestions about discussing the Gaussian assumption and the lower bound when multiple good arms exist. Now, we turn to discuss TaS and MS. 
We acknowledge your reply, and we indeed missed the numerical performance of the two combinations TaS+GLR and MS+GLR, where GLR is the stopping rule in Lemma 2, Jourdan & Reda (2023). Here we clarify the following facts. + The GLR stopping rule guarantees TaS+GLR and MS+GLR are both $\delta$-PAC. But unlike the other benchmarks, there is no theoretical analysis regarding $\tau$; applying the pulling rules of these two algorithms is based on heuristic experience. + Given the current numerical results, our proposed **SEE is the best among all the algorithms with a performance guarantee on $\tau$**. + According to the numerical results in Jourdan & Reda (2023), we admit it is possible that these two will significantly outperform SEE in positive instances where K is not large. But we were unable to rerun all the numerical experiments for these two algorithms because of the time limit. Here we present two groups of experiments to discuss the pros and cons of TaS+GLR and MS+GLR. Take mu0=0.5, 1000 repetitions, delta=0.001, and i.i.d. noise N(0, 1). + Linear: the reward vector is an arithmetic array with mu1=0.3, muK=0.7. 
The gap is larger compared to the Linear group in the current submission, as the time limit required us to shorten the experiments. + AllBetter: mu1=...=muK=0.7

For group Linear,

| Method | K=50 | K=100 | K=150 | K=200 |
| -------------- | ------------------ | ------------------- | ------------------ | ------------------ |
|SEE(This work)|4323$\pm$91|**6539$\pm$110**|**8960$\pm$139**|**10582$\pm$141**|
|TaS+GLR|3881$\pm$39|7528$\pm$69|11632$\pm$107|15403$\pm$142|
|MS+GLR|**3851$\pm$45**|6680$\pm$72|9280$\pm$95|11839$\pm$117|

Some observations: + TaS+GLR and MS+GLR perform pretty well when K=50 + When K gets larger, TaS+GLR and MS+GLR become worse than SEE

The phenomenon is clearer in the instance AllBetter, in which TaS+GLR and MS+GLR don't perform well

| Method | K=50 | K=100 | K=150 | K=200 |
| -------------- | ------------------ | ------------------- | ------------------ | ------------------ |
|SEE(This work) |**4077$\pm$60**|**6198$\pm$100**|**7963$\pm$131**|**9988$\pm$167**|
|TaS+GLR|4854$\pm$54|9422$\pm$94|14309$\pm$142|19211$\pm$192|
|MS+GLR|6103$\pm$78|10609$\pm$125|14615$\pm$165|18519$\pm$201|

In our revision, we will set up more experiments regarding **arm number and reward vector**, and release the code for the numerical experiments. And we still want to emphasize that **our main contribution still lies in the new theoretical analysis.**
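For concreteness, the two instance families used in the tables above can be generated in a few lines. This is a minimal sketch: the function names are ours, and only the numeric values (mu1=0.3, muK=0.7 for Linear, 0.7 for AllBetter, N(0, 1) noise) come from the setup described above.

```python
import random

def make_instance(name, K):
    """Mean-reward vectors for the two instance families discussed above."""
    if name == "Linear":
        # arithmetic sequence from mu1 = 0.3 up to muK = 0.7
        return [0.3 + 0.4 * i / (K - 1) for i in range(K)]
    if name == "AllBetter":
        # mu1 = ... = muK = 0.7
        return [0.7] * K
    raise ValueError(f"unknown instance family: {name}")

def pull(mu, a, rng):
    """One reward sample of arm a with i.i.d. N(0, 1) noise."""
    return mu[a] + rng.gauss(0.0, 1.0)
```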
Summary: This paper studies the 1-identification problem, a multi-armed bandit exploration problem with the goal of identifying an arm whose mean exceeds a given threshold. The paper introduces a new algorithm that achieves near-optimal non-asymptotic sample complexity. Theoretical guarantees establish its efficiency in both positive and negative instances. The authors also conduct numerical experiments that demonstrate SEE outperforms some baseline algorithms. Claims And Evidence: I find it concerning that SEE does not outperform all other baselines, even in the synthetic experiments. Based on the evidence provided by the authors, the APGAI method performs better overall, hence the limited impact of this work. Methods And Evaluation Criteria: The method was only evaluated on synthetic benchmarks. Evaluating it on real-world datasets, e.g., the Yahoo! Today Module User Click Log, would strengthen the results. Theoretical Claims: I did not check the proofs. Experimental Designs Or Analyses: The experimental design of the synthetic benchmark described in Section 6 and Appendix E.1 seems sound. Supplementary Material: Appendix E. Relation To Broader Scientific Literature: I am not familiar with the literature in the 1-identification domain, but based on these empirical results, the method seems insignificant. Essential References Not Discussed: N/A Other Strengths And Weaknesses: See other comments. Other Comments Or Suggestions: I believe the paper could be written much more clearly. 
For example, (the list is not exhaustive) * Abstract begins with: _Motivated by an open direction in existing literature_ - all papers are * L015: _or to output None_ - this is clear only to people using Python * L104: _It is obvious to see (∆, δ)-PAC is stronger than δ-PAC._ - you should be more precise in what stronger means * Page 4: All three algorithms took too much time to study, and their presentation could be simplified with a diagram * The paper could use a Conclusions section summarizing the main takeaways and discussing future work Questions For Authors: Can you evaluate your method and the baselines on a real-world dataset and share the results? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Our primary contributions lie in the theory towards optimal performance on the 1-identification problem, and there could be a misunderstanding about the numerical performance of AP-GAI. We are thankful for the perspective that motivates us to clarify more, and we look forward to discussing more in the next iteration. Let us provide the following clarifications: 1. As discussed in the reviewing process, the main contribution is that **we design a new algorithm with the best performance guarantee**, compared to all the existing algorithms. 2. Regarding numerical performance + HDoC's curve is slightly below (within 3 standard errors) SEE's in the group "Unique, $K=10$". In the other groups, its curves are all above SEE's. + LUCB\_G's curves are all above SEE's. + APGAI's curves are below SEE's in the groups "AllWorse" and "Linear, $K\geq 30$"; the others are all above SEE's. 3. The numerical performance of AP-GAI has to be interpreted carefully, differently from the other algorithms. + It is generally impossible for an algorithm to outperform all the others across all scenarios. For example, a naive algorithm that keeps pulling arm 1 can defeat all the existing algorithms if the optimal arm is exactly arm 1. + In the plots in Section 6, our proposed SEE outperforms all other benchmarks except AP-GAI. It appears from the figure that APGAI can outperform our proposed SEE in some of the plots. + However, as stated in Section 6 and Appendix E, the plotted points in the figure are only **lower bounds of the realized empirical stopping time $\tau$ of AP-GAI**; thus the plotted lines for AP-GAI only serve as a lower bound and not the actual performance of AP-GAI. + In the "Unique" experiment group, AP-GAI always fails to terminate even after $8000\cdot K$ rounds, making its performance much worse than all the others. In our evaluation, AP-GAI is the only algorithm that suffers from the non-termination issue. 
+ In the other groups, with some small but non-zero probability (Quarter 14/1000-52/1000, Linear 1/1000-3/1000), AP-GAI gets stuck in a non-stopping pulling procedure, and we have to terminate it before its real termination. This results in the large error bars in the graph. + Considering stability of overall performance, we feel that SEE is better than AP-GAI, since non-termination is a serious issue and effectively means that the expected stopping time can be infinite. In our revision, we will emphasize the non-termination issue for clarification. 4. The notations and definitions ("None", delta-PAC, ...) follow the convention in existing research, such as Kano et al. 2017 and Degenne & Koolen 2019. We thank the reviewers for the suggestions on improving the intuition about SEE. While the complicated algorithm design is necessary, we plan to provide more intuitions in our revision. 5. Additional numerical experiments Following the reviews, we implement + two extra benchmarks, Adapted-TaS (A-TaS) and lilHDoC (see the 8th point in our reply to reviewer d2eW). Due to the time and space limit, we only present the result for K=30, $\delta=0.001$, with 1000 independent repetitions + an extra RealLife instance from Appendix I.1.1 of Jourdan & Reda 2023, **whose mean reward vector comes from real-world therapeutic protocols data**, $\delta=0.001$, with 1000 independent repetitions The following table reports the empirical stopping times; we omit decimal digits. 
| Method | All Worse | Unique | One Quarter | Linear | RealLife(Extra) |
|---------------|--------------|----------|----------|----------|------|
| SEE(This work) | 19885 $\pm$ 60 | **11029 $\pm$ 180** | **7106 $\pm$ 154** | **9266 $\pm$ 150** | **281 $\pm$ 2** |
| HDoC | 23601 $\pm$ 42 | 12023 $\pm$ 122 | 11313 $\pm$ 108 | 13620 $\pm$ 143 | 472 $\pm$ 3 |
| LUCB\_G | 23601 $\pm$ 42 | 18401 $\pm$ 179 | 15114 $\pm$ 136 | 18234 $\pm$ 176 | 395 $\pm$ 3 |
| APGAI | **16496 $\pm$ 33** | 143266 $\pm$ 3548 | 14716 $\pm$ 1504 | 8762 $\pm$ 1262 | 1568 $\pm$ 571 |
| A-TaS(Extra) | 280904 $\pm$ 377 | 32111 $\pm$ 180 | 37393 $\pm$ 277 | 38562 $\pm$ 302 | 1766 $\pm$ 8 |
| lilHDoC(Extra) | 37099 $\pm$ 50 | 14930 $\pm$ 108 | 17542 $\pm$ 125 | 19363 $\pm$ 160 | 3610 $\pm$ 0 |

SEE ranks 2nd in the "AllWorse" group and is the best in all the other groups. The extra numerical experiments also suggest the superior numerical performance of our proposed SEE. References + Kano et al. 2017, Good arm identification via bandit feedback + Degenne & Koolen 2019, Pure Exploration with Multiple Correct Answers + Jourdan & Reda 2023, An Anytime Algorithm for Good Arm Identification --- Rebuttal Comment 1.1: Comment: I want to thank the authors for all the clarifications. I also read the other reviews, and it is clearer to me where the paper's contributions lie. I also commend another empirical evaluation in such a short time. I updated my score accordingly. --- Reply to Comment 1.1.1: Comment: Thanks for your additional effort in the evaluation and your appreciation of our contributions. We will improve our manuscript based on your suggestions.
Summary: This paper addresses the problem of 1-identification in stochastic multi-armed bandits. In particular, given a reward threshold $\mu_{0}$, an algorithm solving the 1-identification problem has to return an arm whose associated expected reward is greater than $\mu_{0}$, if such an arm exists. The authors propose the fixed-confidence SEE (Sequential-Explore-Exploit) method, exhibiting a non-asymptotic sample complexity. Additionally, the authors propose a lower bound for the sample complexity in the considered setting, showing that their proposal matches (up to logarithmic factors) the lower bound, thus being near-optimal. Finally, the authors validate their method against baselines, evaluating the empirical stopping time of the methods. ## Update after rebuttal: I believe that including the comments provided by the authors during the rebuttal will enhance the paper. I maintain my positive score. Claims And Evidence: The claims made by the authors are supported by both theoretical and experimental evidence. Methods And Evaluation Criteria: The methods and the evaluation criteria are appropriate for validating the claims made by the authors. Theoretical Claims: I briefly went through the proofs, and the theoretical claims seem to be sound. Experimental Designs Or Analyses: I checked the validity and the soundness of the reported experiments. The only issue I would highlight is that in the experimental section the authors state that they made 1000 runs of the same experiment, but the reported plots do not show the confidence intervals. I suggest showing also the confidence intervals of the presented curves, in order to assess the statistical significance of the results. Supplementary Material: I checked the provided code. 
Relation To Broader Scientific Literature: The authors propose, as they highlight and to my knowledge, the first method solving the 1-identification problem providing a non-asymptotic expected stopping time (in the fixed-confidence setting). Moreover, the authors seem to provide the first lower bound for the sample complexity in this setting, and they show their method to be nearly optimal. Essential References Not Discussed: The related works section is complete and enriched by additional discussion in the appendix. The authors provide the reader with a complete overview of the contributions of the recent literature. I feel (to my knowledge) all the essential references are discussed properly in this work. Other Strengths And Weaknesses: **Strengths** This paper makes a step towards closing the 1-identification problem for stochastic multi-armed bandits. The paper is complete in the sense that the authors not only provide a novel method and its theoretical analysis, but also the lower bound for the setting, allowing one to check the optimality of the proposed method (and of its predecessors). Additionally, I appreciated the rich related works section and the parallel drawn with best arm identification, motivating why it is not convenient to just apply a BAI method. Finally, I have appreciated the presence of the proof sketches. **Weaknesses** 1. The paper is well-written and easy to follow until Section 4, where the algorithm is explained. Indeed, the presented pseudo-codes are a little bit hard to follow, even if they are explained in the following part. I suggest inserting in the main paper a more conceptual version of the pseudo-code, leaving the currently proposed version in the appendix. 2. This feeling of excess complexity remains even in the section about the main theoretical results. 
I suggest reporting just the core results, without the presented level of detail in the main paper, for instance for what concerns lines 349--364 (left). 3. At line 304 (left) the authors cite a lemma in the appendix; I suggest at least saying what this lemma is about in the context of the main paper. 4. Besides the comment that I previously posted on the experimental validation, and even if I think the amount of experiments is adequate for a theoretical paper, I suggest the authors compare their method to some algorithms designed for best arm identification, in order to further highlight the inefficiency of such methods in this setting. 5. Minors: lack of a running title; lack of text in Lemma 5.1; SEE in the abstract is expanded as Sequential-Exploration-Exploitation but in the following it is expanded as Sequential-Explore-Exploit; Lemma 4.2 needs to be restated in my opinion. Other Comments Or Suggestions: Typos: 1. line 133 (right) 2. line 149 (right) 3. line 182 (left) 4. line 262 (left) Questions For Authors: See previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your suggestions. We will correct the typos, and we look forward to discussing more in the next iteration. We will also remove part of the sketch proof to allow space for the algorithm description. 1. "the reported plots do not show the confidence intervals" In fact, all figures indeed include the error bars, but many of these error bars are too small to be visible, especially in the experiment group "All Worse". Please check the notebook Visualize-Demo-Delta\_0p15.ipynb in the supplementary files for details. 2. "compare their method to some algorithm thought to address best arm identification" + The LUCB\_G algorithm, one of the benchmarks, can be considered **an adaptation of the BAI algorithm UCB** in Jamieson \& Nowak 2014. + In fact, a BAI algorithm without an adapted stopping rule would fail to terminate with a qualified arm in the two-arm instance $\mu_1=\mu_2>\mu_0$. Thus, we only focus on BAI benchmarks with adaptations like the previous point. 3. (Also for d2eW) "The paper is well-written and easy to follow until Section 4" We will try our best to provide more intuitions on SEE in our revision. The complicated design seems to be needed for our current algorithm framework. In the following, we will show a simpler version and discuss why this version fails to work. Then, we explain why we adopt the current design. An **informal and simpler SEE** is as follows. (We only consider positive instances) --alg starts-- For phase index $k=1,2,\cdots$ + (Exploration) Run LUCB\_G with the previous exploration history and tolerance level $\delta_k$. Stop when + LUCB\_G stops and returns $\hat{a}_k$, or + The total pulling times in all exploration phases $\geq T_k^{\text{ee}}$ + (Exploitation) If $\hat{a}_k\in [K]$, keep pulling $\hat{a}_k$. 
Stop when + $\text{LCB}_{\hat{a}\_k}^{et}(\delta) > \mu_0$, or + Total pulling times of $\hat{a}_k$ $\geq T_k^{\text{et}}$ --alg ends-- where LUCB\_G is from Kano et al. 2017, very similar to the UCB algorithm in the BAI literature. We wish that at a phase $k$ such that $k\geq \max\{\kappa^{ee}, \kappa^{et}\}$ (i.e., concentration inequality (3) holds) and $\beta_k\geq \Omega(1/(\omega^2 \Delta_{1,0}^2))$, we have + the LUCB\_G algorithm can guarantee $\mu_{\hat{a}\_k}\geq \omega \mu_1+(1-\omega)\mu_0$. + Then the above algorithm terminates at the end of phase $k$. To achieve this, we need to guarantee that **at the start of an exploration phase k,** $LCB\_{a}^{ee}(\delta\_k) < \mu\_0$ **holds for all** $a\in [K]$. However, the above informal SEE **cannot** guarantee this. + It is possible that at the end of phase k-1, we have $LCB\_{\hat{a}\_{k-1}}^{ee}(\delta\_{k-1}) > \mu\_0$, + the last collected sample of arm $\hat{a}\_{k-1}$ is so large that $LCB\_{\hat{a}\_{k-1}}^{ee}(\delta\_{k}) > \mu\_0$ also holds. This issue ruins the theoretical analysis of the above informal SEE. To fix this issue, we introduce a **temporary container** Q. + If LUCB\_G returns arm $\hat{a}\_{k-1}$, we transfer the last collected sample of $\hat{a}\_{k-1}$ into Q from the exploration history $\mathcal{H}^{ee}$. + Notice that $LCB^{ee}\_{\hat{a}\_{k-1}}(\delta\_{k-1}) < \mu\_0$ holds **without the latest collected sample**; then at the start of the next phase, we can guarantee $LCB\_{a}^{ee}(\delta\_k) < \mu\_0$ **holds for all** $a\in [K]$. This is indeed lines 14-17 in our Algorithm 2. If phase k pulls $\hat{a}\_{k-1}$ again, + we transfer back the latest collected sample from Q to $\mathcal{H}^{ee}$, + as concentration inequality (3) requires the empirical mean $\hat{\mu}\_{a,t}^{ee}=\sum_{s=1}^t X\_{a,s}^{ee}/t$ to be a **consecutive summation over the indices of collected samples**; we cannot drop any collected sample. This is lines 5-9 in our Algorithm 2. 
To rigorously formalize the above idea, we have to spend much space on the details. Also, we improve the concentration inequality in Kano et al. 2017, which ends up as our current Section 4. There are also other ways to fix the issue, such as + Dropping all the samples collected in previous phases, and proving an independent concentration inequality for each phase + Using an extra concentration event, bounding $|\hat{\mu}\_{a,t}-\hat{\mu}\_{a,t-1}|$ + Creating multiple algorithm copies like Katz \& Jamieson 2020 or Chen \& Li 2015 But all these methods result in one or more of the following problems: + worse numerical performance + harsh restrictions on the parameters $C, \beta\_k, \delta\_k, \alpha\_k$ + a strictly larger logarithmic factor Considering all these pros and cons, we adopt our current algorithm design. References + Katz \& Jamieson 2020, The true sample complexity of identifying good arms + Chen \& Li 2015, On the optimal sample complexity for best arm identification + Kano et al. 2017, Good arm identification via bandit feedback + Jamieson \& Nowak 2014, Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting
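To make the informal explore-exploit loop above concrete, here is a minimal, hypothetical Python sketch of the simplified variant (without the temporary container Q, i.e., the version whose analysis breaks down). The tolerance schedule $\delta_k = 1/3^k$ follows the default choice above, but the budgets and all other constants are purely illustrative, not the paper's.

```python
import math
import random

def conf_radius(n, conf):
    # 1-sub-Gaussian confidence radius at tolerance `conf` after n pulls
    return math.sqrt(2.0 * math.log(1.0 / conf) / n)

def informal_see(pull, K, mu0, delta, max_phases=10):
    """Informal explore-exploit loop: alternate a LUCB_G-style UCB
    exploration period at tolerance delta_k with an exploitation period
    that verifies the candidate at the target tolerance delta.
    `pull(a)` returns one reward sample of arm a."""
    sums = [0.0] * K
    cnts = [0] * K
    total_explore = 0
    for k in range(1, max_phases + 1):
        delta_k = 1.0 / 3 ** k           # decreasing tolerance schedule
        t_ee = 100 * (2 ** k) * K        # illustrative exploration horizon
        t_et = 100 * (2 ** k)            # illustrative exploitation horizon
        cand = None
        # Exploration: pull the arm with the largest UCB at level delta_k
        while total_explore < t_ee:
            a = max(range(K),
                    key=lambda i: (sums[i] / cnts[i] + conf_radius(cnts[i], delta_k))
                    if cnts[i] else float("inf"))
            sums[a] += pull(a); cnts[a] += 1; total_explore += 1
            if sums[a] / cnts[a] - conf_radius(cnts[a], delta_k) > mu0:
                cand = a                 # LCB above threshold: candidate found
                break
        if cand is None:
            continue                     # horizon exhausted, next phase
        # Exploitation: re-verify the candidate at the target tolerance delta
        s, n = 0.0, 0
        while n < t_et:
            s += pull(cand); n += 1
            if s / n - conf_radius(n, delta) > mu0:
                return cand              # certified qualified arm
    return None                          # no arm certified within max_phases
```

On an easy positive instance (e.g., one arm well above the threshold, low noise), the loop typically certifies the good arm within the first phase; the sketch deliberately omits the container Q, so it exhibits exactly the sample-reuse issue discussed above.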
Summary: In this paper, the authors consider the 1-identification problem, a pure exploration problem with bandit feedback, where the objective is to determine whether there exists an arm whose mean reward is at least a known threshold, or to output None if it believes such an arm does not exist. They proposed an algorithm termed Sequential-Exploration-Exploitation and provided a non-asymptotic analysis of the sample complexity. More precisely, they provided non-asymptotic lower and upper bounds on the expected sample complexity and proved that they match up to a constant. Finally, they conducted experiments on synthetic environments and demonstrated that the proposed method overall outperforms the baselines. ## update after rebuttal My major concern was the poor readability of the paper. However, the authors provided a more intuitive explanation of the algorithm in their response, and I believe the paper could be improved in light of the points raised during the author-reviewer discussion phase. Therefore, I will keep my positive score. Claims And Evidence: The main contribution of this paper is a non-asymptotic analysis of the sample complexity with matching lower and upper bounds. The claims are supported by Theorem 5.3 and the theorems in Sec 5.2. Methods And Evaluation Criteria: Theoretical results show the proposed method is $\delta$-PAC with a nearly optimal sample complexity. Although the experiments use synthetic environments, considering the theoretical nature of the paper, I believe it is a standard experimental setting. Theoretical Claims: I have not checked the proofs. A sketch of a proof is provided after Theorem 5.3; I have not checked its correctness. Experimental Designs Or Analyses: As I wrote above, the experimental design is standard for a theoretical paper. Supplementary Material: I have not reviewed the supplementary material. 
Relation To Broader Scientific Literature: I believe that the 1-identification problem is practically important, and an optimal algorithm for it would be beneficial even outside the ML community. Essential References Not Discussed: To the best of my knowledge, related works are adequately discussed. Other Strengths And Weaknesses: - Strengths - They provided a non-asymptotic analysis of the sample complexity and proved that the proposed algorithm is nearly optimal. - Experimental results on several problem instances support the effectiveness of the proposed method. - Weaknesses - As far as I understand, the lower bound provided in Theorem 5.5 holds only for some problem instances $\nu$. Also, the same theorem requires the condition $\mu_1 > \mu_0 \ge \mu_2 \ge \dots \ge \mu_K$, which does not cover all cases of positive instances. - The readability of the paper could be improved. For instance, after Theorem 5.3, the authors provided a sketch of a proof over half a page. It is partially due to the short review period, but I was unable to follow the technical details. Would it be possible to move the technical details to the appendix and focus on the main idea of the algorithms and proofs? Other Comments Or Suggestions: Related to the second weakness, is it possible to add theoretical properties (as propositions or lemmas) of each sub-algorithm instead of providing very detailed descriptions of the algorithms? Questions For Authors: 1. By the statement of Theorem 5.5, there exists a positive instance $\nu$ such that the inequality holds. Does the lower bound provided in Theorem 5.5 hold whenever $\mu_1 > \mu_0 \ge \mu_2 \ge \dots \ge \mu_K$ holds, or does it hold only for specific instances? Is such an analysis standard for related problem settings (e.g., the BAI problem)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your suggestions. Let us respond below to the points raised, and we look forward to discussing more in the next iteration. 1. "If $\mu_1>\mu_0\geq\mu_2\geq...\geq\mu_K$, does the lower bound work for all permutations of [K]?" We cannot expect the current lower bound to hold for any permutation of [K]; we can only guarantee there exists a permutation for which our lower bound holds. Consider a "naive" algorithm which keeps pulling arm 1 until either + LCB_1 > $\mu_0$ holds, or + UCB_1 < $\mu_0$ holds. If UCB_1 < $\mu_0$, it switches to pulling arm 2 until similar conditions hold. Then, if the mean reward of arm 1 is indeed $\mu_1$, its pulling complexity is $\Theta(\frac{\log\frac{1}{\delta}}{\Delta^2_{0,1}})$, and the lower bound $\Omega\left(\frac{\log\frac{1}{\delta}}{\Delta_{0,1}^2} + \sum_{a=2}^K\frac{1}{\Delta_{1,a}^2}\right)$ does not hold. But we can "punish" this naive algorithm by considering another permutation: let the mean reward of arm K be $\mu_1$; then this algorithm suffers complexity $\mathbb{E}\tau\geq \Omega\left(\sum_{a=1}^K\frac{\log\frac{1}{\delta}}{\Delta_{0,a}^2}\right)$, which is much larger than the lower bound. 2. "The lower bound if $\mu_1>\mu_0\geq\mu_2\geq...\geq\mu_K$ does not hold" + For instances with multiple qualified answers, i.e. $m:=|\{a:\mu_a>\mu_0\}|>1$, we only know the lower bound $\Omega(\frac{\log\frac{1}{\delta}}{\Delta_{0,1}^2}+\frac{1}{m}\sum_{a=m+1}^K\frac{1}{\Delta_{1,a}^2})$. + The proof is similar to Theorem 5.5, combining Theorem 1 of Katz & Jamieson 2020 with Kaufmann et al. 2016. In this case, the $\delta$-dependent part of our upper bound is nearly optimal, but the $\delta$-independent part is loose. + Our current result is still meaningful: it is the first to achieve near-optimality of the $\delta$-dependent part in this case. 
+ Also, both the $\delta$-dependent and $\delta$-independent terms in the bound are tight for the case of a unique qualified instance. + **None of the existing algorithms can achieve the same theoretical performance as ours**. 3. "The readability of the paper could be improved." Thanks for your suggestion. In our revision, we will focus more on explaining the intuitions and the reasons behind our algorithm design, which is carefully chosen so that our current bounds can be achieved. We welcome the reviewer to check out the 3rd point of our response to reviewer eELW. (Primarily for addressing Reviewer d2eW) Here we also summarize the pulling and stopping rules across different algorithms. The main differences between ours and existing work are: + we adopt a decreasing tolerance level in the UCB expression, or equivalently, an increasing radius of the confidence interval; + **we do not eliminate any arm.** Comparison of sampling rules + HDoC, lilHDoC: pull the arm $\arg\max\_{1\leq a\leq K} \hat{\mu}\_{a,t}+\sqrt{\frac{2\sigma^2 \log t}{N\_a(t)}}$. + LUCB-G: pull the arm $\arg\max\_{1\leq a\leq K} \hat{\mu}\_{a,t}+\Theta(\sqrt{\frac{2\sigma^2 \log \frac{\log N\_a(t)}{\delta}}{N\_a(t)}})$. + APGAI: pull the arm $\arg\max\_{1\leq a\leq K} \sqrt{N\_a(t)}(\hat{\mu}\_{a,t}-\mu\_0)^+$ if $\max\_a \hat{\mu}\_{a,t} > \mu_0$, and pull the arm $\arg\max\_{1\leq a\leq K} \sqrt{N\_a(t)}(\mu\_0-\hat{\mu}\_{a,t})^+$ if $\max\_a \hat{\mu}\_{a,t} \leq \mu_0$. + Our proposed SEE: a UCB rule in exploration, but the tolerance level decreases (the radius of the confidence interval increases) at each phase; keep pulling the same arm in exploitation. Comparison of stopping rules + HDoC, lilHDoC, SEE: + if the LCB defined by $\delta$ is above $\mu\_0$ for some arm, output that qualified arm; + if the UCB defined by $\delta$ is below $\mu\_0$ for all arms, output None. 
+ APGAI: + stop and output an arm if $\max\_{1\leq a\leq K} \sqrt{N\_a(t)}(\hat{\mu}\_{a,t}-\mu\_0)^+ > \sqrt{2c(t,\delta)}$; + stop and output None if $\min\_{1\leq a\leq K} \sqrt{N\_a(t)}(\mu\_0-\hat{\mu}\_{a,t})^+ > \sqrt{2c(t,\delta)}$. The radius of our confidence interval is also defined by the lil rule, but we improve the constant compared to Jamieson et al. 2014; see Appendix D.2. References + Katz & Jamieson 2020, The true sample complexity of identifying good arms + Kaufmann et al. 2016, On the Complexity of Best-Arm Identification in Multi-Armed Bandit Models + Jamieson et al. 2014, lil' UCB: An Optimal Exploration Algorithm for Multi-Armed Bandits
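The confidence-bound logic summarized above (pull the arm with the largest UCB; output an arm once its LCB clears $\mu_0$, or output None once every UCB falls below $\mu_0$) can be sketched in a few lines of Python. This is a simplified illustration only: the radius $\sqrt{2\sigma^2\log(t/\delta)/N_a(t)}$ and all numeric values are placeholder assumptions, not the exact rules of SEE, HDoC, or APGAI.

```python
import math
import random

def one_identification(means, mu0, delta=0.01, sigma=0.1, max_steps=100_000, seed=0):
    """Toy 1-identification loop with a generic confidence-bound rule:
    pull the arm with the largest UCB; stop when some arm's LCB exceeds
    mu0 (return that arm) or every arm's UCB falls below mu0 (return None).
    The radius below is a simplified stand-in, not the exact SEE/HDoC rule."""
    rng = random.Random(seed)
    K = len(means)
    counts, sums = [0] * K, [0.0] * K

    def pull(a):
        counts[a] += 1
        sums[a] += rng.gauss(means[a], sigma)

    for a in range(K):                     # one initial pull per arm
        pull(a)
    for t in range(K + 1, max_steps):
        radius = [math.sqrt(2 * sigma**2 * math.log(t / delta) / counts[a])
                  for a in range(K)]
        mu_hat = [sums[a] / counts[a] for a in range(K)]
        for a in range(K):
            if mu_hat[a] - radius[a] > mu0:      # LCB above threshold
                return a                         # output a qualified arm
        if all(mu_hat[a] + radius[a] < mu0 for a in range(K)):
            return None                          # all UCBs below threshold
        pull(max(range(K), key=lambda a: mu_hat[a] + radius[a]))  # optimistic pull
    return None
```

With a qualified arm present (e.g., means `[0.9, 0.2, 0.1]` against `mu0=0.5`), the loop returns that arm's index; with all means below the threshold, it returns None.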
Diversity By Design: Leveraging Distribution Matching for Offline Model-Based Optimization
Accept (poster)
Summary: This paper introduces DynAMO - a method for offline model-based optimization whose objective is to produce a diverse distribution of various designs. The objective is clearly formulated, the method is carefully derived and empirically evaluated. The empirical evaluation is very extensive and it is certainly impressive. The results show that the method is very strong in terms of the reward of the designs, and unusually strong in terms of design diversity. Some elements of the paper pertaining to the technical clarity and the rigor of the mathematical statements could be slightly improved, but it definitely qualifies as a good paper. ## Update after rebuttal The authors addressed my comments and did extra work to communicate that to me. The proposed method is now well- and correctly-motivated theoretically. It showcases the desired properties (performance + diversity), the empirical evaluation is extensive, and the limitations are described. I strongly advocate for accepting this paper. Claims And Evidence: The claims are supported by evidence. There are several theoretical claims, all of which come with mathematical proofs. Some things look a bit problematic: > Lemma 3.1: You need to make some assumptions about q(x) or loosen the statement to make it hold. q(x) could be non-zero on some measure-zero set of designs, and then your Inequality (16) in the proof does not hold. > The formulation of the objective in Equation (5) is neat. But we can prove that its solution is an EBM distribution like the one in Equation (6), but with a different temperature parameter. Thus, I do not understand why one would have a separate pure reward term. Lemma 3.3 makes my point even more explicit. Methods And Evaluation Criteria: The experimental design is very reasonable, and broad. The authors cover enough tasks and many baselines. The results are satisfying. I would, however, ask to clarify what is happening in the experiments: > Do I understand correctly? Does "Baseline" (e.g. 
Adam) mean the reward is trained with the Baseline optimizer and then the designs are trained with that optimizer too? Theoretical Claims: I have read the theorems and the proofs. Read my comments about their rigor in "Claims and Evidence". Experimental Designs Or Analyses: I confirm the experimental design is valid. I had one question about clarity (see "Methods and Evaluation Criteria"). I have one qualm though: > As your Table A1 shows, your method trades off reward for diversity. Maybe you get a good argmax but the quality of remaining designs, like the median, can be unimpressive. Your results in A1, especially in comparison to baseline Adam, show that you generally produce suboptimal, but very diverse, designs. It is not a deal breaker but it is an important limitation. I offer a deal: if you write it in Limitations, I will raise your score, and if you address my other concerns, I will raise your score further. But the Limitations paragraph is a condition. Supplementary Material: I verified both the proofs and the code. Relation To Broader Scientific Literature: The paper is relevant for the field of offline MBO. It addresses an important problem of design diversity that is often omitted in the MBO literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Algorithm 1. Could you please make it more legible? Right now it looks messy. And I doubt you have while loops inside while loops. I haven’t found it in the code either. Table 1. Adam. TFBind8. Shouldn’t ROMO and GAMBO be in bold? Line 403 (right). “Trabucco et al. (2022) used REINFORCE-style methods similar to Williams (1992) to learn a myopic sampling policy…” - what do you mean by myopic here? Questions For Authors: If the limitation of the trade-off between diversity and quality is acknowledged and discussed, I will raise my score. Then, if my other concerns, regarding clarity and rigor, are addressed, I will consider raising my score further. 
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer sKFn for their helpful and constructive feedback on our manuscript, and appreciate their efforts in helping us improve our work. Please find our responses below - we would be happy to discuss further and answer any follow-up questions as necessary. Thank you! 1. **(Assumption in Lemma 3.1)** Thank you for this comment. The inequality in Equation (16) follows directly from monotonicity of the integral and holds even if $q^\pi(x)$ is non-zero in a measure-zero set. If we have misunderstood your concern, please let us know. 2. **(Reward Term in Eq. (5))** Thank you for this comment. We include a pure reward term in (5) because the forward surrogate model $r_\theta(x)$ provides an important (albeit sometimes inaccurate) signal of design quality outside the dataset $\mathcal{D}$. This strategy builds off prior work in offline MBO (e.g., [Trabucco et al. ICML (2021).](https://proceedings.mlr.press/v139/trabucco21a.html), [Yu et al. NeurIPS (2021).](https://arxiv.org/abs/2110.14188), [Yao et al. NeurIPS (2024).](https://arxiv.org/abs/2402.06532)). Without this reward term, the Shannon entropy term in **Lemma 3.3** would not appear in our derivation and so the generative policy is not explicitly encouraged to find designs that maximize against the $r_\theta(x)$ surrogate signal. 3. **(Clarification of "Baseline")** Thank you for this question. In general, there are two "optimizers" used in our experiments: (1) a ***model-training*** optimizer used to train a surrogate reward function on the offline dataset; and (2) a ***generative*** optimizer used to generate new designs by optimizing against the trained surrogate function from step (1). "Baseline" only refers to the latter **generative** optimizer. The reward functions used for all the experiments in **Table 1** were trained to approximate the oracle reward function using Adam as the model-training optimizer. We have better clarified this point in our revised manuscript. 4. 
**(Limitations of DynAMO)** We thank the Reviewer for this comment. You are correct - our method does indeed trade reward for diversity. In many applications of offline MBO, we argue it is most important to obtain a good maximum score over a good median score. We highlight this potential limitation of DynAMO in our revised Limitations section, included at [this link](https://postimg.cc/YjfWXyf4). 5. **(Algorithm 1 Presentation)** Thank you for this comment. We have revised **Algorithm 1** [here](https://postimg.cc/5jGz48wM) to improve its legibility. Regarding the while loops, the outer while loop terminates when the optimizer $a^b$ no longer improves, and is implemented as lines 281-308 of our `main.py` source code file in our Supplementary Material. The inner while loop terminates when the optimal Lagrange multiplier is found, and is implemented in lines 226-256 in the `src/dogambo/models/mlp.py` file (which is invoked in line 304 of our `main.py` source code file). We would be happy to make any additional changes to the presentation of Algorithm 1 to improve its legibility. 6. **(Table 1 Formatting)** We appreciate your careful eye for detail! For the TFBind8 optimization task using the Adam optimizer, the top performing method in terms of design quality was RoMA+ with a score of $96.5\pm 0.0$ (lower bound on 95% CI is 96.5). However, ROMO scores $95.6\pm 0.0$ (upper bound on 95% CI is 95.6) and GAMBO scores $94.0\pm 2.2$ (upper bound on 95% CI is 96.2). Because the 95% CIs of ROMO and RoMA+ (and also GAMBO and RoMA+) are non-intersecting, ROMO and GAMBO are correctly left unbolded. 7. **(Meaning of "Myopic")** We follow [Deisenroth et al.](https://dl.acm.org/doi/10.1561/2300000021), [Ngo](https://www.alignmentforum.org/posts/GqxuDtZvfgL2bEQ5v/arguments-against-myopic-training), and others to refer to policies that are trained according to a value function with a discount factor of $\gamma=0$ in reinforcement learning as "myopic". 
Intuitively, this means that the surrogate reward function used by the REINFORCE algorithm only considers the current input design in computing the reward, and not future possible states like in traditional RL applications. We have better clarified this point in our revised manuscript. --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for your response. Most importantly, thank you for adding the limitation of your work. I will raise my score. Before that, I would like to ask a few more questions. >1. I disagree with you about the inequality in Equation (16), although it is not a big issue. Suppose the space $\mathcal{X}$ is continuous and the only optimum is $x^{\star}$. Then, your probability distribution can be given by $q^{\pi}(x) = \delta (x-x^{\star}) + I(x=x_0)$, where $x_0$ is a sub-optimal point and $I(\cdot)$ is an indicator function. This function is non-negative and integrates out to 1, so it is a valid density function, even though $x_0$ shows up with probability zero. This is a minor thing, but if you want to be maximally correct, you should prove that the distribution $q^{\pi^{\star}}$ has no support on sub-optimal points almost surely (no sub-optimal non-measure-zero set gets support). This would actually strengthen your theorem because then you can just say that $q^{\pi^{\star}}$ has positive support on a, possibly infinite, set of optimal solutions, but no support on a non-measure-zero set of sub-optimal ones. The proof technique is the same, you just have to change the wording. >2. Regarding the explicit reward term in your optimization objective - I disagree again. You say that it is necessary for the entropy term to show up in Lemma 3.3. My point is, this term does not do much. Using linearity of expectation (which entropy and the KL divergence are) you can represent Equation (11) as a single KL divergence between $q^{\pi}$ and, let's call it $\hat{p}^{\tau}(x)$ - an EBM distribution like $p^{\tau}(x)$ but with a different temperature. 
Thus, the same optimization objective can be obtained from trying to minimize the KL divergence between an EBM, albeit with a different temperature than $\tau$. I'm curious to see what your thoughts are! --- Reply to Comment 1.1.1: Comment: Thank you for the additional feedback and opportunity for discussion! We are grateful for your support of our work and appreciate your efforts in helping us improve our manuscript. Please find our responses below: 1. **(Equation (16))** Thank you for the clarification, we apologize for misunderstanding your original comment. We have revised Lemma 3.1 in accordance with your feedback [here](https://postimg.cc/sQ5QdDQr). 2. **(Reward Term)**: We apologize for the confusion and believe that we might have misunderstood your initial comment. Indeed, we agree with you that our modified MBO optimization objective in Equation (5) is equivalent to a distribution-matching objective (i.e., minimizing the KL divergence with an EBM) for some different distribution $\hat{p}^{\tau}\_{\mathcal{D}}$. We show how to arrive at this result [here](https://postimg.cc/XXWCbdVB) in our revised Appendix. However, we cannot guarantee that $\hat{p}\_{\mathcal{D}}^{\tau}$ has the form $p_{\mathcal{D}}^{\tau'}$ for some $\tau'$ in the most general case. Separately, we include a pure reward term in Equation (5) since it explicitly shows that our objective both maximizes reward while also performing distribution matching, which is consistent with the formalism of prior MBO literature (e.g., [Trabucco et al. ICML (2022)](https://proceedings.mlr.press/v162/trabucco22a); [Yao et al. NeurIPS (2024)](https://arxiv.org/abs/2402.06532); [Trabucco et al. ICML (2021)](https://proceedings.mlr.press/v139/trabucco21a.html); [Yu et al. NeurIPS (2021)](https://arxiv.org/abs/2110.14188)), as recommended by Reviewer kDhU (Other Comments Or Suggestions). We would be happy to answer any additional questions you might have!
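For concreteness, the reviewer's observation can be written out under simplifying assumptions (not taken from the paper): suppose the objective combines a KL term with a $\lambda$-weighted entropy term, and that $p^{\tau}_{\mathcal{D}}(x)\propto\exp(r(x)/\tau)$ holds exactly. Then, by linearity of expectation,

```latex
\begin{align*}
D_{\mathrm{KL}}\!\left(q^\pi \,\|\, p^\tau_{\mathcal{D}}\right) - \lambda H(q^\pi)
  &= \mathbb{E}_{q^\pi}\!\left[\log q^\pi - \log p^\tau_{\mathcal{D}}\right]
     + \lambda\, \mathbb{E}_{q^\pi}\!\left[\log q^\pi\right] \\
  &= (1+\lambda)\, \mathbb{E}_{q^\pi}\!\left[\log q^\pi
     - \tfrac{1}{1+\lambda}\log p^\tau_{\mathcal{D}}\right] \\
  &= (1+\lambda)\, D_{\mathrm{KL}}\!\left(q^\pi \,\|\, \hat{p}\right) + \mathrm{const},
  \qquad \hat{p} \propto \left(p^\tau_{\mathcal{D}}\right)^{1/(1+\lambda)}.
\end{align*}
```

Under the exponential-form assumption, $\hat{p}\propto\exp\!\big(r(x)/(\tau(1+\lambda))\big)$ is the same EBM with temperature $\tau(1+\lambda)$; when $p^{\tau}_{\mathcal{D}}$ does not have this exact form, $\hat{p}$ need not equal $p^{\tau'}_{\mathcal{D}}$ for any $\tau'$.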
Summary: This paper proposes a distribution-matching-based adversarial optimization framework (DynAMO) aimed at addressing the issue of insufficient design diversity in offline model-based optimization (MBO) tasks. By explicitly modeling the diversity objective as a matching problem between the generative design distribution and the offline dataset distribution, this method significantly improves diversity while ensuring the quality of candidate designs, and validates its effectiveness in multiple scientific fields such as biological sequence design. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. All parts. Relation To Broader Scientific Literature: * By reparameterizing the optimization objective as a combination of weighted entropy and divergence using Lemma 3.3, a closed-form solution (Lemma 3.4) is provided, which solves the algebraic complexity problem that previous studies (such as those related to the chi-squared divergence) could not directly address. This theoretical contribution expands the application scope of f-divergences in offline optimization, complementing broader probability distribution alignment studies such as divergence optimization in generative models. * Section 6 validated the computational feasibility of the method in large-scale design spaces through empirical comparisons (such as with the model proposed by Yu et al., 2021), filling the practical gap of model-independent methods (such as Fu & Levine, 2021). Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Strengths: * The experiment covers multiple scientific fields (such as Table A10) to verify the generalization ability of the method. * Table A10 clearly distinguishes the performance of different methods (DynMO, AMO, DynAMO) and enhances readability through confidence intervals and bold annotations. 
Weaknesses: * The method relies on existing techniques such as the KL divergence penalty and adversarial constraints, and further evidence is needed to demonstrate the unique value of their combination. * The theoretical guarantee section (D.4) was not elaborated in the abstract, which may indicate a lack of original breakthroughs in theoretical depth. * The limitations of the method in ultra-high-dimensional design spaces or sparse-reward scenarios have not been discussed. * The trade-off parameters between the diversity and quality objectives (such as the τ value) may rely on domain-specific tuning and lack universal guidance. Other Comments Or Suggestions: N/A. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer DFtV for their thoughtful feedback and careful consideration of our work. Please find our responses to your comments below. 1. **(Combination of KL Divergence and Adversarial Constraints)** Thank you for this comment - indeed, we believe that it is a *strength* of our method that DynAMO is able to leverage existing individual techniques--such as KL divergence penalization from imitation learning and adversarial constraints from the GAN literature--and combine them together in a unique way to solve a new problem (offline MBO). Namely, leveraging KL divergence penalization has not been shown in prior MBO literature, and leveraging adversarial regularization has only recently been explored by [Yao et al. NeurIPS (2024)](https://arxiv.org/abs/2402.06532). **Combining them together in a meaningful and principled way to solve the problem of offline MBO is the main novelty and contribution of our work.** Regarding the unique value of combining both techniques, we refer the Reviewer to **Supp. Tables A8-A10** in **Appendix E.2**, which shows that using both KL-divergence and adversarial regularization **together in combination** significantly improves the performance of DynAMO than when using either technique alone. 2. **(Theoretical Guarantee in Section D.4)** To clarify, our primary contribution of this work is the DynAMO algorithm. The goal of the theoretical bound presented in Section D.4 is to use standard, validated techniques from the literature to show theoretically what we already observe in our presented experiments: DynAMO can improve the diversity of designs obtained using different generative optimization methods. Indeed, we use fairly standard methods and techniques to arrive at our result in Section D.4 - for example, see [Ma et al. Proc NeurIPS (2022)](https://arxiv.org/abs/2206.03023), [Huang et al. 
(2024)](https://arxiv.org/abs/2407.13399), and [Liang (2016)](https://web.stanford.edu/class/cs229t/notes.pdf) for additional background. 3. **(Method Limitations)** Thank you for this comment. We highlight that many of our discrete MBO tasks (i.e., ChEMBL, Molecule) are problems over high-dimensional design spaces as discussed in prior work ([Maus et al. NeurIPS (2022)](https://arxiv.org/abs/2201.11872), [Yao et al. NeurIPS (2024)](https://arxiv.org/abs/2402.06532), [Trabucco et al. ICML (2021)](https://proceedings.mlr.press/v139/trabucco21a.html)). Furthermore, the DKitty task in our work includes a sparse and highly-sensitive oracle reward model as discussed in [Trabucco et al. ICML (2022)](https://proceedings.mlr.press/v162/trabucco22a.html). We have better highlighted these points in our revised manuscript, and have also included an additional paragraph explicitly discussing the challenges with optimization in these settings in alignment with the Reviewer's feedback. 4. **(Setting Hyperparameters)** Thank you for this comment. We highlight that **all experimental results presented in the main text use the same hyperparameter values (e.g., $\tau=1.0$ and $\beta=1.0$) across *all optimizers and MBO tasks***. Furthermore, **Supp. Figures A4-A6** show that the performance of our method is robust to the choice of hyperparameters. In short, while domain expertise and hyperparameter tuning might improve our method further, it is by no means necessary to achieve good experimental results using DynAMO.
Summary: This paper presents a novel approach to incorporating design diversity as an explicit objective in the offline model-based optimization problem. Specifically, the original optimization objective is modified to enhance the diversity of generated samples using a distribution matching technique inspired by recent advances in imitation learning. Additionally, a source critic constraint is introduced to mitigate out-of-distribution evaluations of the reward surrogate model. Extensive experiments demonstrate the effectiveness of DynAMO in promoting diversity while maintaining the discovery of high-quality candidates. Claims And Evidence: The claims in this paper are well-supported by clear and compelling evidence. Methods And Evaluation Criteria: The proposed method and evaluation criteria, including benchmark tasks and baselines, are well-suited to the problem at hand. Theoretical Claims: I have verified the correctness of the proofs for Lemma 3.1 and Lemma 3.3. Experimental Designs Or Analyses: I have verified the soundness and validity of the experimental designs and analyses presented in the main text, as well as the additional results in the appendix. Supplementary Material: I have reviewed almost the entire appendix, except for parts C, D.3, and D.4. Relation To Broader Scientific Literature: This paper presents a promising direction in offline optimization with potential applications in designing more effective drugs, engineering new materials with desirable properties, and solving various scientific challenges. While previous works (COMs, ExPT, BOSS) focus solely on identifying the best designs, this paper introduces a method to balance the trade-off between optimal quality and diversity. Essential References Not Discussed: No, there are no essential related works missing that are crucial for understanding the key contributions of this paper. 
Other Strengths And Weaknesses: Strengths: - This paper proposes a novel approach to explicitly balancing the trade-off between reward optimality and design diversity in offline optimization. - The writing is clear and well-structured. - The results demonstrate that DynAMO enhances diversity while maintaining high-reward candidates to some extent. Weaknesses: - The experimental results in both the main text and appendix are poorly presented, primarily as large tables with multiple bold elements, making it difficult for readers to analyze the data. - While the paper shows an improvement in diversity, it comes at the cost of reward optimality. Specifically, although the mean rank appears low, DynAMO does not achieve state-of-the-art performance across all settings. Other Comments Or Suggestions: I suggest presenting the experimental results in a clearer and more structured manner to improve readability and analysis. Questions For Authors: 1. How does the proposed regularization in Eq. (5) enhance the diversity of candidate designs? According to line 159, $\tau$-weighted probability distribution has nonzero support only at offline data points, meaning $p_D^\tau(x)=0$ for $x \not \in D$. Consequently, the KL regularization does not ensure diversity of samples in out-of-distribution (OOD) regions, regardless of the choice of $\tau$. 2. How does the introduction of the source critic constraint prevent the forward surrogate model from evaluating wildly out-of-distribution inputs? Additionally, could $c^*$ in this constraint be replaced by the forward surrogate $r_\theta$? 3. There may be potential unfairness in the comparisons presented in Table 1. First, the reported baseline results using non-original optimizers may not be optimal. For instance, the COMs baseline was originally designed with the Grad optimizer, meaning its hyperparameters were tuned accordingly. When integrated with other optimizers such as Adam or CMA-ES, did the authors retune its hyperparameters? 
Second, since each dataset has its own value range, normalizing results to a (0,1) scale is necessary to ensure fair comparisons, especially for metrics like the Optimality Gap (Opt. Gap). 4. Although DynAMO achieves a lower Pairwise Diversity score and competitive Best@128 performance compared to other methods, it exhibits poor Median@128 performance in Table A1. Does the lower Pairwise Diversity score stem from weaker performance at other percentile levels? If so, this suggests that the reward distribution of candidates has a wide range, which contradicts the goal of offline optimization—finding a distribution concentrated on high-value designs. The authors should conduct additional experiments to validate this hypothesis. Furthermore, I am curious about a straightforward approach to improving the diversity of existing baselines. For instance, COMs initializes the K-best designs from the offline dataset and then applies gradient ascent to refine them. Instead, could selecting K random designs as initialization increase the diversity of the final candidates? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate Reviewer scBu for their insightful feedback and thorough evaluation of our work. We have provided our responses to your comments below and would be happy to answer any follow-up questions. Thank you! 1. **(Presentation of Results)** To improve the legibility of our results, we summarize the main findings on the task-averaged Rank and Optimality Gap metrics in the [main table](https://postimg.cc/ygg6fyPt) in our revised manuscript, and move the detailed breakdown of algorithm performance by task to the Appendix for the interested reader. 2. **(Diversity versus Quality)** To be clear, **we do not claim that DynAMO will propose designs of higher *quality* than other baseline methods across all settings.** (In fact, *no baseline method* is state-of-the-art across all tasks.) The goal of DynAMO is to increase the diversity of designs without significantly reducing the quality of the best design proposed. Indeed, across multiple offline MBO tasks evaluated, there is often no statistically significant difference between the quality of designs proposed by DynAMO and other baseline methods according to the Best@128 evaluation metric. **However, DynAMO is consistently state-of-the-art in producing diverse sets of designs across all settings.** We have better clarified this primary goal and use case of DynAMO in our manuscript. 3. **(Equation (5) for Diversity)** To clarify, we do not want to sample designs that are truly OOD, as we cannot guarantee the correctness of these designs using an arbitrary surrogate $r_\theta(x)$. Ideally, it would be great to discover designs that are both novel and significantly OOD, but this is not possible while guaranteeing correctness in the offline setting. Consistent with prior work (e.g., [Yao et al. NeurIPS (2024)](https://arxiv.org/abs/2402.06532)), DynAMO seeks to generate **interpolated, in-distribution points** to balance diversity with naive reward maximization. Through the regularization in Eq. 
(5), we aim to learn a generative policy that proposes designs with coverage of the search space similar to the offline dataset $\mathcal{D}$. If $\mathcal{D}$ is too small or has poor coverage, then our method would likely be less effective - we discuss this in further detail in our Limitations section of our revised manuscript. 4. **(Source Critic Constraint)** The primary goal of the source critic is to penalize designs that are OOD with regards to the offline dataset $\mathcal{D}$. If $c^*(x)$ is large, then $x$ is likely from the same distribution as $\mathcal{D}$ and so we can trust that the prediction $r_\theta(x)$ is a good estimate of $r(x)$ with reasonable confidence. If $c^*(x)$ is small, then $x$ is likely not from $\mathcal{D}$ and so we penalize the reward associated with $x$ to avoid choosing the design as a proposed candidate. Note that this source critic is different from the forward surrogate $r_\theta$ (approximating oracle $r$), and so we cannot replace $c^*$ with the forward surrogate. We note that our use of $c^*(x)$ builds on prior literature from [Yao et al. Proc NeurIPS (2024)](https://arxiv.org/abs/2402.06532) and [Arjovsky et al. Proc ICML (2017)](https://arxiv.org/abs/1701.07875) - we refer the Reviewer to these prior works for additional discussion. 5. **(Fairness in Comparisons)** To ensure fairness in comparisons, we tuned the hyperparameters of DynAMO according to *only on its performance using the Grad. optimizer*, following the same paradigm as COMs, ROMO, and GAMBO. All hyperparameter values for all methods (including DynAMO) were then held constant for all of the other optimizers. While each of the methods (including DynAMO) might benefit from further optimizer-specific and task-specific hyperparameter fine-tuning, such an approach is not feasible in real-world applications. 
All of the quality metrics reported have already been max-min normalized to a (0, 1) scale according to the equation $\hat{y}=\frac{y-y_{\text{min}}}{y_{\text{max}}-y_{\text{min}}}$ in alignment with prior work. However, we then multiplied the normalized metrics $\hat{y}$ by 100 to improve the legibility of our results, as highlighted in the table captions where relevant. 6. **(Optimization Initialization Strategy and Distribution of Oracle Objective Values)** We thank the Reviewer for suggesting an alternative strategy to improve the diversity of designs through random initialization. We compared top-$k$ and random-$k$ initialization strategies for first-order optimization methods in this [table](https://postimg.cc/SndKjsRT). For most methods, neither initialization strategy results in consistently more diverse designs (except for DynAMO, which benefits from random-$k$ initialization). Furthermore, both COMs and RoMA suffer from *lower quality of designs* proposed when a random-$k$ initialization strategy is used. **These results highlight the significance of DynAMO as a novel algorithm to consistently increase the diversity of final candidates.** --- Rebuttal Comment 1.1: Comment: Thank you for your responses. However, my main concerns remain unaddressed: ### 1. Potential Trade-off Between Diversity and Performance Your empirical results suggest that the observed increase in diversity might come at the expense of overall performance in the proposed designs. Specifically, this could occur if, among the 128 final selected candidates, only a few (or even just one) achieve a high score, leading to a high **128@best** value. Meanwhile, the remaining candidates may be widely distributed across the search space, increasing **Pairwise Diversity** but lowering their individual performance, which in turn reduces **128@median** and other percentile-based metrics. 
This raises concerns about whether the method truly enhances diversity while maintaining strong performance or if the reported diversity increase results from a broader but suboptimal search. If this is not the case, the authors should provide explicit evidence demonstrating that the improvement in diversity does not come at the cost of performance degradation. ### 2. Unfair Comparisons Despite tuning hyperparameters for the **DynAMO-Grad** optimizer, its reported performance appears less competitive than other existing methods, as shown in **Table A5**. Additionally, my concern regarding the reported performance values after normalization remains unresolved. If the authors indeed normalize the scores to the range **(0,1)** and then multiply by **100**, all resulting values should theoretically fall within **[0,100]**. However, in the **Molecule** task, some reported performance values exceed **100**, contradicting this normalization process. This discrepancy raises questions about whether the normalization was correctly applied, whether there were inconsistencies in scaling, or if additional transformations were performed. --- Reply to Comment 1.1.1: Comment: Thank you for your additional comments and opportunity for discussion! We would be happy to try to address your remaining concerns: 1. **(Diversity)** We acknowledge the concern that the observed increase in diversity may come at the expense of overall performance, particularly if only a few of the 128 selected candidates achieve high scores. As an example, **Supp. Fig. A2** (top left panel) provides evidence of how DynAMO improves diversity while still identifying high-performing designs. Compared to the baseline, DynAMO produces a distribution of designs with a longer tail over better-than-average oracle scores. This tail of designs is unique to DynAMO and is why our method achieves a strong Best@128 score value. 
However, we acknowledge that the rest of the distribution (i.e., the majority of designs) is indeed suboptimal. This is an inherent tradeoff in our method: by encouraging the exploration of a broader search space, DynAMO is able to uncover a **diverse set of optimal and near-optimal designs that are meaningfully distinct from one another**; however, a significant number of sub-optimal designs might also be included as a side effect. We agree with the Reviewer that the inclusion of these suboptimal designs could be viewed as a potential limitation. We have explicitly addressed this point in our revised Limitations section of our manuscript to discuss the trade-off between maintaining high diversity and ensuring consistently high performance across all selected candidates. We appreciate the opportunity to clarify this aspect of our work. 2. **(DynAMO-Grad Performance and Normalization)** Thank you for this comment. Regarding the performance of DynAMO-Grad, Grad. is a relatively weak backbone optimization method: a clear example is that the optimality gaps of baseline CMA-ES, BO-qEI, and BO-qUCB are all significantly better than that of baseline Grad. At its core, DynAMO is an **objective-modifying algorithm** and still relies on an underlying optimizer to optimize against the DynAMO-modified objective. If the optimizer is relatively weak, such as with Grad., then the best performance achievable with the method may still be subpar even though DynAMO-Grad. outperforms baseline Grad. Regarding the normalization schema, the normalization results in a score of 100 if a design achieves an oracle score equal to $y_{\text{max}}$, the best score in the offline dataset. A score greater than 100, such as what we observe in the Molecule task, means that a design was proposed that was *even better* than the best design in the offline dataset. 
This is indeed the overarching motivation of offline MBO: to discover designs that are better than what we have ever seen previously in the offline dataset. We confirm that normalization was correctly applied (source code available in the Supplementary Material), and there were no inconsistencies in scaling or additional transformations performed. We would be happy to answer any additional comments or questions. Thank you!
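As a quick illustration of the normalization scheme discussed in this thread (a minimal sketch, not the authors' code): a score above 100 simply means the proposed design's oracle value exceeds $y_{\text{max}}$, the best score in the offline dataset.

```python
def normalize_score(y, y_min, y_max):
    """Max-min normalize an oracle score to (0, 1), then scale by 100
    for legibility, as described in the rebuttal. y_min and y_max are
    the worst and best oracle scores in the offline dataset."""
    return 100.0 * (y - y_min) / (y_max - y_min)

# Hypothetical oracle scores: the offline dataset spans [2.0, 8.0].
print(normalize_score(8.0, 2.0, 8.0))   # 100.0: matches the best offline design
print(normalize_score(11.0, 2.0, 8.0))  # 150.0: beats the best offline design
```

This makes the Molecule-task observation concrete: values above 100 are not a scaling error but the intended signature of out-performing the offline dataset.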
Summary: The paper introduces Diversity in Adversarial Model-based Optimization (DynAMO), a new approach for offline model-based optimization (MBO) that aims to generate diverse and high-quality design candidates. The core idea is to frame diversity as a distribution matching problem by optimizing a KL-divergence-based objective, ensuring that the generated candidates capture the diversity inherent in the offline dataset. The challenge is not only to find high-reward designs but also to ensure diversity, avoiding solutions that cluster around a single optimum. Additionally, an adversarial source critic is incorporated to prevent the surrogate model from overestimating rewards on out-of-distribution designs, improving the robustness of offline optimization. ## update after rebuttal The authors addressed most of my concerns. I update my rating towards weak acceptance. Claims And Evidence: - DynAMO improves both design diversity and quality compared to baseline methods: While DynAMO consistently improves pairwise diversity and achieves competitive results against baselines, concerns remain regarding the actual performance improvements. For example, in the D’Kitty evaluation, DynAMO does not surpass the best designs already present in the offline dataset, contradicting the fundamental goal of offline MBO—to improve upon the given data. Additionally, normalizing the results, as is standard in offline MBO literature, would likely show that DynAMO’s improvements are marginal and possibly indistinguishable from other methods. Even without normalization, results on tasks like ChEMBL are too close across methods, making it difficult to determine whether DynAMO truly provides a performance advantage. If the goal is to discover multiple high-quality design modes, then a slight sacrifice in top performance might be expected. However, when analyzing Median@128, which should reflect a more consistent quality across diverse designs, DynAMO lags behind other methods. 
This raises concerns about whether diversity is translating into useful variety or merely scattering designs without maintaining quality. Since the goal of offline MBO is to improve upon the dataset, the results question whether DynAMO's diversity actually contributes to better design candidates or simply increases search space coverage without meaningful gains. - DynAMO is optimizer-agnostic and improves performance across all optimization strategies: Extensive results confirm that DynAMO is compatible with different optimization strategies. Methods And Evaluation Criteria: - Pairwise Diversity (PD) and the other discussed metrics are reasonable measures to evaluate whether DynAMO increases diversity in design generation. - Best@128 and Median@128 are standard offline MBO performance metrics and are appropriately used. - ChEMBL results are too close across methods, making the task unsuitable for comparing baselines. Similarly, UTR is known to produce similar results across methods, as shown in prior work: Bidirectional Learning for Offline Infinite-width Model-based Optimization (https://arxiv.org/pdf/2209.07507) and Conservative Objective Models for Effective Offline Model-Based Optimization (https://arxiv.org/pdf/2107.06882v1). - Non-normalized results may exaggerate reported gains—preferably, results should be normalized as done in offline MBO literature. - In offline RL, KL-divergence prevents OOD actions as a regularization constraint, keeping optimization within the dataset. This setup may enhance diversity but not necessarily high-quality diverse designs, as seen in Median@128 results. The intuition from imitation learning is not valid since the offline dataset contains both good and bad examples, not only expert designs. - Figure 1 is intended to highlight DynAMO selecting diverse designs, but most appear scattered, raising questions about the trade-off between diversity and maintaining quality. 
The figure suggests that DynAMO generates varied designs, but they are not concentrated around the different global maxima as intended. - The paper states that secondary objectives (e.g., manufacturing cost, drug toxicity) can benefit from diversity, yet the chosen tasks do not include any secondary objectives. Would it be more appropriate to validate this claim using offline multi-objective optimization tasks, such as those discussed in Offline Multi-Objective Optimization (https://arxiv.org/pdf/2406.03722) Theoretical Claims: No. Experimental Designs Or Analyses: Yes. Please check the previous sections. Supplementary Material: Yes. B, C and E appendices. Relation To Broader Scientific Literature: DynAMO builds on offline MBO methods, incorporating an adversarial critic to mitigate overestimation in surrogate models. Its KL-divergence regularization aligns with offline RL techniques, which constrain policies to remain within the offline dataset distribution to prevent OOD actions. The method also relates to diversity-driven optimization, sharing similarities with quality-diversity algorithms and generative modeling approaches that promote multi-modal exploration. Additionally, DynAMO leverages distribution matching through KL-divergence, guiding optimization to maintain design diversity while staying close to the offline data distribution. Essential References Not Discussed: The authors provide a strong coverage of literature across various topics; however, some works in offline MBO are missing, including: 1. Parallel-mentoring for Offline Model-based Optimization (https://arxiv.org/pdf/2309.11592). 2. Importance-aware Co-teaching for Offline Model-based Optimization (https://arxiv.org/pdf/2309.11600). 3. Offline Model-Based Optimization via Policy-Guided Gradient Search (https://arxiv.org/pdf/2405.05349). 4. Learning Surrogates for Offline Black-Box Optimization via Gradient Matching (https://arxiv.org/pdf/2503.01883). 5. 
Robust Guided Diffusion for Offline Black-Box Optimization (https://arxiv.org/pdf/2410.00983). (3) is particularly relevant to the reinforcement learning section in related work as it leverages offline RL techniques to guide the search for quality designs. Other Strengths And Weaknesses: Strengths: - Diversity is an important factor often ignored in offline MBO literature, and this paper explicitly integrates it into the optimization process. - Extensive discussion of the method and its components. - The authors shared the code. Other Comments Or Suggestions: - Why frame the offline optimization problem using RL terminology and reward functions instead of the standard terminology used in offline model-based optimization literature? Questions For Authors: - Please check the previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
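The Pairwise Diversity metric the review refers to can be illustrated with a minimal sketch. This assumes the common instantiation as the mean pairwise Euclidean distance over a batch of continuous designs; the exact distance used in the paper (e.g., for discrete sequence designs) may differ.

```python
import math

def pairwise_diversity(designs):
    """Mean pairwise Euclidean distance over all unordered pairs in a
    batch of candidate designs. Higher values indicate candidates that
    are more spread out in design space."""
    n = len(designs)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += math.dist(designs[i], designs[j])
    return total / (n * (n - 1) / 2)

batch = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(pairwise_diversity(batch))  # (1 + 1 + sqrt(2)) / 3 ≈ 1.138
```

Note that this metric rewards any spreading of the batch, which is exactly why the review pairs it with Best@128 and Median@128: diversity alone does not certify that the spread-out designs remain high quality.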
Rebuttal 1: Rebuttal: We thank Reviewer kDhU for their thoughtful comments and feedback on our proposed work, which we believe have substantially improved the quality of our manuscript. Please find our responses to your comments below. 1. **(Performance According to the Median@128 Metric)** Thank you for this comment. DynAMO does indeed trade reward (according to the Median@128 metric) for diversity. Diversity enables us to find a greater subset of optimal and sub-optimal designs than baseline methods; naturally, this may result in a lower median score since DynAMO cannot simply return points near the global maximum (e.g., see **Supp. Fig. A2**). In many applications of offline MBO, it is more important to obtain a good maximum score than a good median score, and we therefore prioritize the Best@128 and pairwise diversity metrics in our evaluation of DynAMO. We highlight this potential limitation of DynAMO in our revised Limitations section, included at [this link](https://postimg.cc/YjfWXyf4). 2. **(ChEMBL and UTR Tasks)** We would be happy to move the results associated with these two tasks to the Appendix in accordance with this feedback. (Notably, DynAMO's average optimality gap consistently *increases* and its average ranking stays the same or improves after excluding the ChEMBL and UTR tasks.) 3. **(Normalization of Results)** All of the quality metrics reported have already been max-min normalized (0,1) according to the equation $\hat{y}=\frac{y-y_{\text{min}}}{y_{\text{max}}-y_{\text{min}}}$ in alignment with prior work in the offline MBO literature. We multiplied the normalized metrics $\hat{y}$ by 100 to improve the legibility of our results, as highlighted in the table captions. 4. **(Good and Bad Examples in Offline RL)** We note that KL constraints are also commonly used in offline RL, where the dataset can include non-expert trajectories. 
Intuitively, the direction of the KL term keeps the generated samples inside the support of the offline dataset, while failing to match the offline dataset outside its support is penalized less. We can further tune the tradeoff by using the temperature hyperparameter $\tau$ defined in Equation (6). With $\tau>0$, good designs are upweighted in the reference distribution $p^\tau(x)$ compared to bad designs - see **Fig. A1** for examples. As a result, we are able to use the standard intuition and techniques derived from imitation learning, and show in our work that our approach works both theoretically and empirically. 5. **(Figure 1)** We thank the Reviewer for this comment. Indeed, DynAMO generates a diversity of designs, many of which are not concentrated around the global maxima. However, the point of DynAMO (and **Figure 1**) is that at least *a few* of the proposed designs are located around the different global maxima. This is in contrast with the baseline method, which proposed a batch of designs that might all be near-optimal, but are all clustered around the same optimum and region of the input space. The primary motivation of DynAMO is that obtaining a small number of diverse and high-quality designs is better than obtaining a large number of high-quality but near-identical designs. If a particular application of offline MBO does not suit this use case, then DynAMO may not be well-suited for the application at hand. We have better clarified this point in **Figure 1** of our revised manuscript. 6. **(Diversity for Secondary Objectives)** To demonstrate how diversity can enable better downstream exploration of secondary objectives, we compare DynAMO-enhanced optimization methods against their baseline counterparts and report the results [here](https://postimg.cc/yDbHbzVP). 
We find that if an optimization method achieves a higher pairwise diversity score, it also consistently achieves a larger variance in the values of the secondary objectives, supporting the idea that secondary objectives can benefit from diversity. To be clear, we do not claim that DynAMO will discover designs that maximize multiple objectives while only optimizing against a single objective. The goal is that by increasing diversity, we can better interrogate other design properties, which will hopefully be different from one another. 7. **(Additional Baselines)** We thank the Reviewer for sharing these additional baselines with us. We have included all 5 referenced methods in our updated design [quality](https://postimg.cc/m17L4ZKS) and [diversity](https://postimg.cc/jwpxBn35) results. Our results do not significantly change with the inclusion of these additional baselines. 8. **(Choice of Terminology)** Thank you for this comment. We initially chose to leverage terminology from both the offline MBO and RL literature to more explicitly highlight how DynAMO adapts techniques inspired by offline RL to the MBO setting. We have revised our manuscript to remove references to reward functions and other RL-centric terminology to better align with existing offline MBO literature.
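The temperature-weighted reference distribution $p^\tau(x)$ discussed in this rebuttal can be illustrated with a hedged sketch. A softmax-style weighting over the offline designs is one plausible parameterization and is merely assumed here (the exact form in the paper may differ): $\tau = 0$ weighs every design uniformly, while $\tau > 0$ upweights designs with higher oracle scores.

```python
import math

def reference_weights(scores, tau):
    """Softmax-style weights over offline designs' oracle scores.

    Hypothetical sketch of a temperature-weighted reference
    distribution: tau = 0 gives uniform weights; larger tau > 0 shifts
    probability mass toward higher-scoring designs."""
    m = max(tau * y for y in scores)  # subtract max for numerical stability
    w = [math.exp(tau * y - m) for y in scores]
    total = sum(w)
    return [x / total for x in w]

scores = [1.0, 2.0, 3.0]
print(reference_weights(scores, tau=0.0))  # uniform: each weight is 1/3
print(reference_weights(scores, tau=2.0))  # mass shifted toward the best design
```

This makes the intuition from the rebuttal concrete: matching a $\tau$-reweighted reference lets imitation-style KL arguments apply even when the offline dataset mixes good and bad designs.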
Sample Complexity of Correlation Detection in the Gaussian Wigner Model
Accept (poster)
Summary: The paper focuses on detecting correlations between a pair of random graphs, using the Gaussian Wigner model where edge weights are drawn from a Gaussian distribution. This is framed as hypothesis testing, determining whether the graphs are independent (null hypothesis) or edge-correlated (alternative hypothesis). In the latter case, each pair of corresponding edge weights is generated from a bivariate Gaussian with correlation coefficient $\rho$, then vertex labels are permuted. The study focuses on scenarios where induced subgraphs are sampled from the original graphs, which the authors motivate by the fact that (among other factors) graphs are often only possible to query via node sampling with an API. The authors derive the optimal rate for the sample size for correlation detection, and propose (but do not theoretically study) an efficient approximate algorithm to reduce running time. Numerical studies back up their theoretical claims. *Update after rebuttal*. Whilst I still have some reservations about the utility to ML researchers, this is a strong, well-written paper with nice mathematical contributions. I have raised my score to 'weak accept'. Claims And Evidence: The core claim, given in Thm 1.4, is a result for the optimal rate for the sample size s required for correlation detection in this model. It depends on the number of nodes and the correlation coefficient. The evidence seems convincing, and leads to specific conditions for the possibility and impossibility of detection. The authors also propose an efficient algorithm to approximate the estimator with lower time complexity, and Sec 5 gives strong empirical evidence that it works on synthetic data (the histograms are well-separated for the independent and correlated models). All claims made in the submission appear to be well-supported by evidence. Methods And Evaluation Criteria: As described above, Sec 5 presents numerical experiments to verify the authors’ theoretical results. 
This involves generating 100 pairs of graphs with n=50 that are either independent or correlated, then plotting a histogram for the (approximated) estimator. The authors also compare the test statistic under different settings, plotting the ROC curves with varying detection threshold, recording the Type II error against the Type I error. They compute the AUC for different sample sizes s -- which naturally increases as s grows. The authors do not compare to baselines or evaluate on benchmark datasets, but it’s not clear that this would make sense for the problem at hand. Theoretical Claims: I did not check the mathematical proofs in detail, but the claims seem reasonable. Experimental Designs Or Analyses: See above. The experimental designs make sense for the paper’s current scope -- namely, correlation detection for the Gaussian Wigner model. Supplementary Material: The supplementary material provides detailed proofs of all theoretical claims. Relation To Broader Scientific Literature: The paper tackles the problem of correlation detection between random graphs. It builds upon a broader literature on graph matching for correlated random graphs, including in the Erdős–Rényi model (Cullina & Kiyavash (2016; 2017); Hall & Massoulié (2023)) where edges are Bernoulli (cf. Gaussian) random variables. It specifically focuses on the graph sampling setting, where not all nodes are available (Leskovec & Faloutsos, 2006; Hu & Lau, 2013). I understand that this is a significant area of study in computational mathematics, but it is less clear to me that this will be of interest to researchers in machine learning. Essential References Not Discussed: The references seem reasonable. Other Strengths And Weaknesses: **Strengths**: - Clearly written paper with rigorous mathematical results and concrete algorithms. 
The authors convincingly demonstrate that this will be of interest to folks working on correlation analysis between graphs, and it seems likely that similar ideas will extend to random graph models beyond Wigner. - Efficient implementation that performs well in practice in numerical experiments. **Weaknesses**: - Whilst mathematically sophisticated and well-presented, it is unclear to me that this problem will be of interest to the audience at ICML. Other Comments Or Suggestions: Please see above and below. Questions For Authors: - Can you provide convincing evidence that correlation analysis between random graphs is a problem of interest in the setting of modern machine learning? For instance, can you cite papers at top conferences addressing this problem, or demonstrate more concretely how your algorithms can be used in computational biology or NLP (lines 44 and 47)? I think it’s great to have high-calibre mathematical work at ML conferences (often lacking!), but it does need to be relevant to the community. - To what extent do you expect your results to generalise to other random graph models like Erdős–Rényi, and to what extent are they peculiar to Gaussian Wigner? In my mind, the first question is the big one – I will be ready to raise my score if you can provide more evidence ML folks should care about this. Would a computational mathematics journal not be more suitable for your strong new results? Code Of Conduct: Affirmed. Overall Recommendation: 3
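The hypothesis-testing setup described in the review's summary can be sketched in a few lines. This is an illustrative reconstruction (not the paper's code): matched edge weights are drawn with correlation $\rho$ via the standard Gaussian coupling, the second graph's vertex labels are permuted, and an induced subgraph of s nodes is sampled from each graph; drawing the two node samples independently is an assumption here.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_wigner_pair(n, rho, rng):
    """Alternative hypothesis of the Gaussian Wigner model (sketch):
    two n-node weighted graphs whose matched edge weights have
    correlation rho, with the second graph's vertex labels permuted."""
    iu = np.triu_indices(n, k=1)
    a = rng.standard_normal(len(iu[0]))
    # b = rho * a + sqrt(1 - rho^2) * independent noise  =>  corr(a, b) = rho
    b = rho * a + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(len(iu[0]))
    A = np.zeros((n, n))
    B = np.zeros((n, n))
    A[iu], B[iu] = a, b
    A, B = A + A.T, B + B.T
    perm = rng.permutation(n)
    return A, B[np.ix_(perm, perm)]

def induced_subgraph(G, nodes):
    """Weighted adjacency matrix of the induced subgraph on `nodes`."""
    return G[np.ix_(nodes, nodes)]

A, B = correlated_wigner_pair(n=50, rho=0.9, rng=rng)
S1 = rng.choice(50, size=30, replace=False)  # sample s = 30 of n = 50 nodes
S2 = rng.choice(50, size=30, replace=False)
A_sub, B_sub = induced_subgraph(A, S1), induced_subgraph(B, S2)
```

The detection problem is then to decide from `A_sub` and `B_sub` alone whether the underlying graphs were generated independently or with correlation $\rho$.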
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments. **Q1: The correlation analysis between random graphs and its relation with modern machine learning.** There are many papers on top machine learning conferences and journals addressing related problems. Our hypothesis testing problem is also known as graph similarity studied by (Barak, Chou, Lei et al., 2019, NeurIPS), which provides additional references and surveys on this topic. Correlation analysis is also a natural first step towards the graph matching problem, which is studied by (Araya, Braun and Tyagi, 2024, JMLR), (Lyzinski, Fishkind and Priebe, 2014, JMLR), (Lyzinski, Fishkind, Fiori et al., 2015, TPAMI), (Racz and Sridhar, 2021, NeurIPS), (Gaudio, Racz, and Sridhar, 2022, COLT). The algorithm in our paper can be used in various domains: - In NLP, nodes correspond to words or phrases, and the weighted edges represent syntactic and semantic relationships (Hughes and Ramage, 2007, EMNLP). The algorithm can be used to test whether knowledge graphs from different languages are correlated. - In computational biology, proteins are the nodes, and the interactions between them are weighted edges (Singh, Xu, and Berger, 2008, PNAS). The algorithm can be applied to test whether two protein networks are correlated. - In computer vision, nodes are subregions and weighted edges represent adjacency relationships between different regions (Berg, Berg, and Malik, 2005, CVPR). The algorithm can be applied to test whether two graphs represent the same object. We will add more discussions on those applications. **Q2: Extension to Erdős–Rényi model.** Most results in this paper can be extended to the Erdős–Rényi model. The key difference lies in the additional parameter $p$ controlling the edge connection probability. For the details on extending our possibility and impossibility results, please refer to our response to Q1 from Reviewer vzDn. We will add more discussions on this extension. 
**Q3: Baselines or benchmark datasets.** We are not aware of any suitable benchmark datasets or publicly available baselines for comparison on the correlation detection problem in the Gaussian Wigner model. There are established methods for graph distance and graph similarity such as classical graph edit distance (GED). Under the same numerical experiments, our algorithm achieves superior performance. When $n=50, s=30$, and $\epsilon = 0.01$, the AUC values for the GED-based test at $\rho = 0.98, 1-10^{-6}, 1-10^{-7}$ are $0.53, 0.73, 0.88$, respectively. In contrast, the AUC values in our algorithm for the same values of $\rho$ are $0.92,1,1$. We will add the comparison in our paper. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I'll still have some reservations about the utility to ML researchers (whilst I certainly buy that graph matching is an important problem). Some toy demonstration might really help the optics here. But it's a strong, well-written paper with nice mathematical contributions, so I'll raise my score and recommend acceptance. --- Reply to Comment 1.1.1: Comment: We sincerely thank you again for reviewing our paper and providing instrumental comments for improving it, and we truly appreciate your decision to increase the score. We understand the concern regarding the utility to the machine learning community. We hope the following example offers some intuition for potential relevance. In social network analysis, anonymity is an important concern and is closely related to privacy. For instance, aligning user graphs from LinkedIn and Twitter may unintentionally reveal private information. Although such real-world scenarios are compelling, social network datasets are too large for our current work. To provide a simple illustration, we conduct an experiment on Freeman’s EIES networks (Freeman, 1979), a small dataset of 46 researchers, where edge weights represent communication strength at two time points. 
We apply our method to test for correlation between these two temporal networks. We examine how sample size affects privacy protection by analyzing the normalized similarity score, defined as the similarity score (line 347, column 2) divided by $\binom{s}{2}$. Indeed, a lower score suggests weaker correlation and greater support for the null hypothesis of independence. We apply our algorithm to the EIES dataset at different sample sizes, $s$ = 10, 20, 40 and compute the corresponding normalized similarity scores: -1.066, -0.905, and -0.651. The scores increase with sample size, indicating stronger detected correlation. The lower scores at small sample sizes reflect failed correlation detection, quantifying the reduction in re-identification risk. We acknowledge that this is a toy demonstration based on a sparse and small dataset with real identities. In future work, if suitable social network datasets become available, we would be happy to explore applications to anonymized data with denser edge weights.
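The AUC comparison cited in Q3 above can be reproduced in miniature with the standard empirical AUC, which equals the area under the ROC curve obtained by sweeping the detection threshold. The test-statistic values below are hypothetical, purely for illustration.

```python
def roc_auc(null_scores, alt_scores):
    """Empirical AUC of a threshold test: the fraction of (alt, null)
    score pairs in which the correlated-model statistic exceeds the
    independent-model statistic, with ties counting one half."""
    wins = 0.0
    for a in alt_scores:
        for b in null_scores:
            if a > b:
                wins += 1.0
            elif a == b:
                wins += 0.5
    return wins / (len(alt_scores) * len(null_scores))

# Hypothetical test statistics under the two hypotheses:
print(roc_auc([0.1, 0.2, 0.3], [0.9, 1.0, 1.1]))  # 1.0: histograms fully separated
print(roc_auc([0.0, 1.0], [0.0, 1.0]))            # 0.5: no better than chance
```

An AUC near 1 corresponds to the well-separated histograms reported in Sec 5, while an AUC near 0.5 (as for the GED-based test at $\rho = 0.98$) indicates a statistic that barely distinguishes the two hypotheses.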
Summary: This paper addresses the problem of detecting correlation between two random graphs generated from the Gaussian Wigner model. In particular, the authors focus on the scenario where two induced subgraphs are sampled. It establishes the optimal sample complexity for correlation detection and proposes two test statistics. They introduce a fast approximate algorithm based on clique matching and iterative mapping extension, which improves computational efficiency. Finally, they confirm the effectiveness of the algorithm with numerical simulations. Claims And Evidence: All claims are supported by convincing evidence. Methods And Evaluation Criteria: The methods are well-suited for this problem. Theoretical Claims: I have checked the proofs for the main theorem and the possibility results. I have no issues to discuss. Experimental Designs Or Analyses: I have no issues to discuss. Supplementary Material: I have reviewed sections A, B.1, B.2, D. Relation To Broader Scientific Literature: The paper builds on the literature on correlation detection in the Gaussian Wigner model. While previous works focused on detection thresholds for fully observed graphs, this paper extends the results to the sampled subgraph case, using established techniques like the conditional second moment method. Moreover, it builds on existing work in graph and clique matching to propose an efficient algorithm. Essential References Not Discussed: I’m not aware of any relevant references that have been omitted. Other Strengths And Weaknesses: The paper addresses a relevant problem with applications in biological networks, natural language processing, and social network analysis. It benefits from a clear and well-structured presentation, rigorous and complete theoretical results, and convincing numerical simulations that support the claims. The authors provide a thorough discussion of related literature and clearly state the limitations of their work and potential future directions. 
I have not identified any major weaknesses. Other Comments Or Suggestions: I would suggest to include an overview of the paper's structure in the Introduction, providing a brief description of the topics covered in each section and highlighting all the contributions of the work. Questions For Authors: I do not have any specific questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback. We will add an overview of the paper and highlight the contributions in the introduction.
Summary: This paper studies the problem of correlation detection in the Gaussian Wigner Model, formulating it as a hypothesis testing problem. The authors analyze the sample complexity required for correlation detection when only two induced subgraphs are observed. The main theoretical contribution is the derivation of the optimal sample size required for detection, based on a second-moment analysis. Additionally, the paper presents an efficient approximate algorithm for detecting correlation, which improves computational efficiency compared to brute-force approaches. The results provide insight into the statistical limits of graph correlation detection when only partial observations are available. Claims And Evidence: The paper makes several claims about the optimal sample complexity needed to distinguish between independent and correlated Gaussian Wigner graphs. The results align well with previous work on full-graph correlation detection but extend them to subsampled settings, which has not been studied extensively before. However, a notable limitation is that the role of graph topology is largely absent from the analysis. While the authors assume a dense setting, real-world graphs often have sparse structures (e.g., Erdős–Rényi models), and it is unclear how their results generalize to such cases. Additionally, the paper does not account for structural noise, such as missing or spurious edges, which has been a key challenge in previous works on graph alignment. The authors also introduce an approximate detection algorithm, which reduces the computational burden of exhaustive search. The empirical validation suggests that the method performs well on synthetic data, but a theoretical complexity analysis of the algorithm is missing, which would strengthen the practical relevance of the results. 
Methods And Evaluation Criteria: The theoretical results are derived using standard tools in high-dimensional hypothesis testing, particularly likelihood ratio methods and second-moment analysis. The total variation distance (TV) is used as a measure of distinguishability, and the paper establishes strong and weak detection conditions for correlation inference. These methods are rigorous and align well with prior work in this field. However, there are some limitations: Graph topology is largely ignored – The analysis assumes a dense Gaussian Wigner model, but many real-world applications involve sparse graphs (such as Erdős–Rényi graphs). It would be useful to discuss whether the topology can be incorporated as additional weights. No consideration of edge noise – In realistic settings, edges might be randomly missing or spurious. The results assume that the edge structure is perfectly known, which is a strong assumption. Comparison with prior message-passing approaches is missing – The authors do not compare their method to existing graph alignment algorithms that use message passing (see appendix A in arXiv:2112.13079). It would be interesting to explore whether their algorithm could be extended to sparse settings using those techniques. The empirical evaluation demonstrates the effectiveness of the detection algorithm, but it is limited to synthetic data with idealized conditions. Real-world datasets or at least non-Gaussian noise models could be considered to validate the robustness of the method. Theoretical Claims: The theoretical results appear correct and well-supported by derivations. The main contribution is the identification of the optimal sample complexity threshold for detecting correlation in subsampled Wigner graphs, which is consistent with previous results in the full-graph setting but extends them to the case of induced subgraphs. The use of the conditional second moment method is appropriate for establishing lower bounds on sample complexity. 
However, the assumption that graph structure does not affect correlation detection is somewhat counterintuitive. In previous work on graph alignment, including results by Semerjian and Massoulié (arXiv:2209.13723), the presence of tree-like structures was shown to have an impact on the detectability of correlation. The authors should clarify whether their results implicitly assume the graph is dense, or if the topology effects can be incorporated into the framework.

Experimental Designs Or Analyses: The experiments focus on synthetic graphs generated from the Gaussian Wigner model, with varying correlation strengths and sample sizes. The results confirm the theoretical predictions. There are a few limitations:
- The graphs are fully observed apart from subsampling, meaning that they do not test settings where edges themselves are noisy or missing.
- There is no comparison to other detection methods. How does the proposed approach compare to existing graph matching techniques, such as those based on Otter's constant for tree alignment?
- The runtime analysis of the algorithm is missing. The approximate algorithm is stated to be efficient, but there is no complexity bound or scalability analysis.

Supplementary Material: The supplementary material contains additional proofs and technical details, which are thorough.

Relation To Broader Scientific Literature: The paper is connected to graph alignment and graph matching problems. There are strong similarities with prior work on Erdős–Rényi graph alignment, where matching thresholds have been studied extensively. However, the authors do not sufficiently acknowledge prior works that consider sparse graphs and topology effects. Specifically: The results in Hall & Massoulié (2023) on partial graph alignment are relevant, as they consider detection under more general assumptions.
The role of Otter's constant in graph alignment, as discussed in Semerjian and Massoulié's work, should be acknowledged, as it might be relevant for understanding detectability limits in tree-like graphs. The message-passing approach for weighted graphs from arXiv:2112.13079 should be discussed as a potential extension for sparse settings.

Essential References Not Discussed:
- arXiv:2112.13079, which discusses message-passing approaches for graph matching in weighted settings.
- Hall & Massoulié (2023), which explores partial recovery in graph alignment under more general conditions.
- Otter's constant and tree alignment results from Semerjian and Massoulié's work, which may have implications for detection thresholds.

Other Strengths And Weaknesses:
Strengths:
- The paper provides clear theoretical results on the sample complexity of correlation detection.
- The use of induced subgraph sampling is novel and relevant for practical applications.
- The proposed approximate algorithm reduces computational costs compared to brute-force methods.

Weaknesses:
- Graph topology is ignored – The results assume a dense setting, which may not generalize to sparse graphs.
- No analysis of edge noise – Real-world graphs often contain missing or spurious edges, which are not accounted for.
- No comparison with prior message-passing methods – The paper does not discuss existing methods for weighted graph alignment.
- No complexity analysis of the algorithm – The proposed algorithm is claimed to be efficient, but no formal runtime bounds are given.

Other Comments Or Suggestions: The authors should clarify whether graph topology can be incorporated into their model. A discussion of how the results extend to Erdős–Rényi graphs would be useful. The paper should cite Semerjian's work on Otter's constant in graph alignment, as it may be relevant. A runtime analysis of the detection algorithm should be provided.
Questions For Authors: How does the method extend to sparse graphs, such as Erdős–Rényi models? Could the results change if edges were noisy or missing? How does the proposed algorithm compare to message-passing approaches for graph alignment? Can Otter’s constant play a role in the detection threshold? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments.

**Q1: Graph topology and structural noise.** Thank you for the question. The graph topology and structural noise can be incorporated into our analytical framework. In particular, most results in this paper can be extended to the Erdős–Rényi model. The key difference lies in the additional parameter $p$ controlling the edge connection probability.
- Possibility results. The estimator is similar to equation (2), with the function $f$ selected via MLE under the Erdős–Rényi model. Both Type I and Type II errors can be controlled using the Chernoff bound for the binomial distributions in place of the Gaussian.
- Impossibility results. When the edge connection probability satisfies $p = n^{-\Omega(1)}$, the optimal results follow a reduction analogous to Proposition 3.1, leveraging existing lower bounds given the location of common vertices. However, such a reduction does not yield tight bounds when $p = n^{-o(1)}$. In this regime, a more delicate event is required for the conditional second moment analysis similar to Proposition 3.5. Nevertheless, the reduction to the core set defined in (10) remains valid. We are currently pursuing a complete resolution to this problem.

**Q2: Essential references.** Thank you for suggesting those references. The partial recovery in Hall and Massoulié (2023) is relevant to the correlation detection problem. This is cited in line 106, column 2. Message-passing approaches in Piccioli, Semerjian, Sicuro and Zdeborová (2022) and the Otter's constant in Ganassali, Massoulié and Semerjian (2024) are also related to our problem under the extension to the Erdős–Rényi model, especially in the analysis of efficient algorithms. We will incorporate the citations and expand our discussion accordingly.

**Q3: Runtime analysis.** The time complexity is $O(N_1\cdot s^{K_1}+N_2^{K_2})$ (see line 368, column 1). Our algorithm comprises three main steps.
In the first step, we select $N_1$ vertex sets $V_1,\cdots, V_{N_1}$ of size $K_1$ and search for injections $\pi_i$ from $V_i$ to $V(G_2)$, which requires $O(N_1\cdot s^{K_1})$ time. In the second step, we search over all subsets $U\subseteq [N_2]$ with $|U| = K_2$, which takes $O(N_2^{K_2})$ time. In the third step, we iteratively expand the mapping based on our seeds, which takes $O(m^2 s^2)$ time. We typically choose $N_1\asymp s^{K_1}$ and $K_1\ge 3$, leading to an overall time complexity of $O(N_1\cdot s^{K_1}+N_2^{K_2})$. For the trade-off between performance and runtime, please refer to our response to Q1 from Reviewer stro.
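The cost breakdown quoted in this rebuttal can be made concrete with a small helper; the parameter values below are made up for illustration and are not taken from the paper's experiments.

```python
def clique_algo_cost(n1, s, k1, n2, k2):
    """Dominant cost of the three-step algorithm sketched in the rebuttal:
    O(N1 * s^K1) for matching the N1 candidate cliques, plus O(N2^K2) for
    searching seed subsets of size K2.  The final seed-expansion step,
    O(m^2 * s^2), is lower order for the typical parameter choices
    mentioned (N1 on the order of s^K1, K1 >= 3)."""
    return n1 * s**k1 + n2**k2

# Illustrative (made-up) parameters: s = 100 sampled vertices,
# cliques of size K1 = 3, N1 = s^K1 candidate sets, and K2 = 2.
s, k1, k2 = 100, 3, 2
n1 = s**k1
n2 = 50
cost = clique_algo_cost(n1, s, k1, n2, k2)
```

Here the first term dominates, matching the rebuttal's observation that runtime grows with $K_1$, $K_2$, and $s$.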
Summary: The paper studies the detection of correlations in pairs of graphs generated by the Gaussian Wigner model when only induced subgraphs are observed. It establishes nearly sharp sample complexity thresholds for successful detection via both possibility and impossibility results and introduces two estimators (one maximizing overlap and one based on mean-squared error) alongside an efficient clique-based algorithm. Synthetic experiments demonstrate promising separation between the null and alternative hypotheses.

Claims And Evidence:
### Main Claims:
- Derivation of nearly sharp sample complexity thresholds for detecting correlation under induced subgraph sampling.
- Presentation of two estimators tailored for different regimes.
- Proposal of an efficient, clique-based approximation algorithm.
### Evidence:
- Theoretical results are supported by detailed proofs using the conditional second moment method and concentration inequalities.
- Synthetic experiments (ROC curves, histograms) validate the method's ability to distinguish between independent and correlated graphs.

Methods And Evaluation Criteria:
- Use of induced subgraph sampling and rigorous probabilistic tools (e.g., Chernoff bounds, hypergeometric concentration).
- A two-pronged estimation strategy (maximal overlap vs. mean-squared error) to handle various correlation regimes.
- A novel, efficient algorithm for approximating the ideal estimator via clique-seeding and iterative matching.

Theoretical Claims: I have not reviewed the proofs.

Experimental Designs Or Analyses:
- The synthetic experimental design is sound and well-conceived.
- Performance is measured via ROC curves and histograms of the test statistic.

Supplementary Material: I have not reviewed the supplementary.

Relation To Broader Scientific Literature: The paper builds upon and extends recent work on graph matching and correlation detection in random graphs, particularly in the Gaussian Wigner model.
Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
### Strengths:
- The paper makes theoretical contributions by deriving near-optimal detection thresholds.
- Its blend of rigorous probabilistic analysis with an efficient algorithm is especially commendable.
### Weaknesses:
- The paper does not sufficiently explore how the key parameters (e.g., clique size $K_1$, combining size $K_2$, number of samples) affect the performance and running time of the algorithm. More discussion or analysis of these trade-offs would be valuable.
- Some sections of the theoretical analysis could benefit from additional explanatory detail to ensure accessibility, particularly for readers less familiar with advanced probabilistic methods.

Other Comments Or Suggestions: N/A

Questions For Authors: Please refer to my previous comments.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments.

**Q1: The impact of the parameters on runtime and performance.** Thanks for the suggestion. We will add more discussions on this point after Algorithm 1. The time complexity is $O(N_1\cdot s^{K_1}+N_2^{K_2})$ (see line 368, column 1). Hence, the runtime increases with $K_1$, $K_2$, and $s$. The performance improves with $s$; it initially improves with $K_1$ and $K_2$ but degrades when $K_1$ and $K_2$ become too large:
- $s$: A larger sample size $s$ leads to larger common vertex sets (see line 133, column 2), and thus increases the number of correct mappings in Step 1.
- $K_1$: A larger $K_1$ corresponds to matching larger cliques in the first step. This increases the proportion of correct mappings within the $N_2$ candidate pairs when $K_1$ is below the size of common vertex sets. However, choosing $K_1$ beyond this size introduces wrong mappings.
- $K_2$: In the second step, we search over all $U\subseteq [N_2]$ with $|U| = K_2$ to identify the seeds. While a larger $K_2$ imposes a stricter matching criterion, choosing $K_2$ beyond the number of available correct mappings from Step 1 will degrade performance.

**Q2: Additional explanatory details.** We will add more explanatory details in Sections 2 and 3, including the following before the presentation of technical results:
- Section 2: The quantity $\mathsf{e}(\mathcal{H}_\pi^f)$ measures the similarity score of a mapping $\pi$. 1) Under the null hypothesis, $\mathsf{e}(\mathcal{H}^f_\pi)$ has a zero mean for all $\pi$, whereas under the alternative hypothesis, its mean with $\pi=\pi^*$ is strictly positive owing to the underlying correlation. We derive concentration inequalities to ensure that $\mathsf{e}(\mathcal{H}^f_{\pi^*})$ exceeds the maximum spurious score arising from stochastic fluctuations under the null, as shown in Propositions 2.2 and 2.3.
2) The choice of two similarity scores is based on the maximum likelihood estimate with $f(x,y)=-\rho^2(x^2+y^2)+2\rho xy$, as discussed in Remark 2.5. 3) Since vertices are sampled without replacement from the two graphs, the number of intersecting vertices follows a hypergeometric distribution, which is analyzed in Lemma 2.1.
- Section 3: The key is to identify the bottleneck for distinguishing the two hypotheses. 1) Under weak correlation, the bottleneck is detecting the existence of the latent mapping $\pi^*$. The detection is impossible even with the additional knowledge on the location of common vertices, as shown in Proposition 3.1. 2) Under strong correlation, detecting $\pi^*$ is no longer the bottleneck. We prove Proposition 3.5 by conditioning on a high probability event (8) on the number of intersecting vertices. One key step is the reduction to a subset $I^*$ of intersecting vertices, as discussed in Remark 3.6.
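As a toy illustration of the separation this rebuttal describes, the sketch below averages the MLE-based score $f(x,y)=-\rho^2(x^2+y^2)+2\rho xy$ over correlated and independent Gaussian pairs. This is a simplified simulation under our own assumptions (scalar pairs, fixed $\rho$), not the paper's actual statistic $\mathsf{e}(\mathcal{H}^f_\pi)$.

```python
import math
import random

RHO = 0.8  # assumed correlation strength, chosen for illustration

def f(x, y, rho=RHO):
    # MLE-based score from the rebuttal (cf. Remark 2.5):
    # f(x, y) = -rho^2 (x^2 + y^2) + 2 rho x y
    return -rho**2 * (x**2 + y**2) + 2 * rho * x * y

def mean_score(correlated, n=20000, seed=0):
    """Average f over n pairs drawn under the chosen hypothesis."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        if correlated:
            # y = rho*x + sqrt(1 - rho^2)*z gives Corr(x, y) = rho
            y = RHO * x + math.sqrt(1 - RHO**2) * rng.gauss(0.0, 1.0)
        else:
            y = rng.gauss(0.0, 1.0)
        total += f(x, y)
    return total / n

corr_score = mean_score(correlated=True)
null_score = mean_score(correlated=False)
# Correlation lifts E[x*y] from 0 to rho, raising the mean score by
# 2*rho^2 and separating the two hypotheses on average.
```

The gap of roughly $2\rho^2$ between the two averages mirrors, in miniature, why the true mapping's score dominates the spurious ones.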
Splitting with Importance-aware Updating for Heterogeneous Federated Learning with Large Language Models
Accept (poster)
Summary: The manuscript addresses federated LLM fine-tuning, highlighting how existing methods often lead to catastrophic forgetting, diminishing the global model's generality, and failing to properly balance model updates across clients with different downstream task datasets. The authors propose the FedICU method, which cleverly enhances the global model's generalization capabilities through importance-aware parameter uploading and consensus-divergence splitting. Experimental results demonstrate the effectiveness of the authors' proposed framework.

Claims And Evidence: The claims made in this submission are well-supported by compelling evidence.
1. The authors provide detailed theoretical foundations for both key components in the framework. What's more, the authors provide comprehensive and accurate convergence analysis to elucidate the correctness of the theory.
2. The authors conducted training on heterogeneous datasets and provided complete and detailed test results. Experiments show that the proposed framework outperforms state-of-the-art methods on multiple test benchmarks, demonstrating significant performance improvements.

Methods And Evaluation Criteria: The scarcity of public datasets has made federated LLM fine-tuning a promising research direction. This paper effectively addresses the issue of decreased global model generalization capability during federated large model fine-tuning due to dataset heterogeneity, providing a novel solution for the field.

Theoretical Claims: The paper provides theoretical analysis for both components of their framework. The proofs for convergence analysis (Sections E, F, and G in the appendix) establish the mathematical foundations for their approach.

Experimental Designs Or Analyses: This paper's experimental design is robust, comparing FedICU against the state-of-the-art methods across different evaluation criteria. The ablation studies also effectively isolate the contribution of each component.
Supplementary Material: I have read all parts of the supplementary materials.

Relation To Broader Scientific Literature: The authors focus on addressing the growing challenge of enhancing global model generalization when fine-tuning LLMs in distributed heterogeneous environments. This work is situated within the broader context of diminishing high-quality public training datasets for LLMs and the increasing adoption of federated learning approaches. By building on federated learning foundations, the paper specifically tackles the problems arising from heterogeneous data distributions across clients when using LoRA for LLM fine-tuning.

Essential References Not Discussed: No

Other Strengths And Weaknesses:
Strengths:
1. Studies on combining federated learning with large language models are crucial given the growing scarcity of public training data and increasing demand for privacy-preserving distributed training. This work addresses a critical challenge in this emerging field by tackling non-IID instruction scenarios.
2. The proposed consensus-divergence splitting mechanism and importance-aware updating strategy work synergistically to balance global model capabilities with domain adaptation. The theoretical foundation is solid with comprehensive convergence analysis, and the implementation is elegant with clear architectural design.
3. The framework's approach to decomposing and balancing model updates has broader implications beyond LLMs. This framework offers new insights for federated learning in other domains where maintaining general capabilities while enabling specialization is crucial.
Weakness:
1. The paper lacks explanation regarding the communication efficiency of the proposed method.

Other Comments Or Suggestions: No

Questions For Authors:
1. Is the proposed framework orthogonal to other LoRA training methods?
2. The Importance-aware Updating mechanism focuses on parameter selection based on importance.
Have the authors explored the communication efficiency benefits this might provide? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer CCAN: Thank you very much for your recognition of our work. Below we provide detailed responses to your questions. Thank you for taking your valuable time to offer suggestions for our work.

**Q1: Is the proposed framework orthogonal to other LoRA training methods?** (Questions For Authors)

**A1:** Yes, our method can be divided into two components, applied to both client and server parts. On the client side, we select important parameters based on generalization importance and specialization importance, and perform sparse updates on these important parameters. This is compatible with existing LoRA training methods since this process doesn't affect the training logic but simply adds a masking module. On the server side, we split and aggregate LoRA parameters, which is equivalent to first splitting LoRA into two parameter matrices on the server, applying the same training logic to both matrices separately, and then aggregating them back into a single LoRA parameter matrix at the end. Therefore, it doesn't interfere with the LoRA training process either, as it can be considered as only modifying the input/output pipeline stage of the server's LoRA training. In conclusion, our method is orthogonal to other methods for training LoRA.

**Q2: Have the authors explored the communication efficiency benefits this might provide?** (Other Strengths And Weaknesses & Questions For Authors)

**A2:** Thank you for your suggestions. Regarding FedICU, since LoRA splitting and merging can be performed locally on either the client or server, it doesn't introduce additional communication overhead compared to standard federated learning. For the Importance-Aware Updating component, we'll analyze it mathematically. Assuming there are $K$ clients, each with $N$ parameters, and in the mask built by the parameter importance selection upload component, the proportion that needs to be uploaded is $\alpha$.
So the communication cost of standard federated learning would be $C_{std} = O(K \cdot N)$, while the communication cost of our method is $C_{imp} = O\left(\sum_{k=1}^{K} N \cdot \alpha_k\right)$, where $\alpha_k < 1$, which demonstrates the communication savings of our method.
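The per-round cost comparison above can be sketched in a few lines; the client count, parameter count, and mask fractions below are illustrative stand-ins, not numbers from the paper.

```python
def upload_cost(num_params, alphas):
    """Parameters uploaded per round by the clients, where alphas[k]
    is the fraction kept by client k's importance mask.  Setting all
    alphas to 1.0 recovers standard federated learning, C_std = K * N."""
    return sum(num_params * a for a in alphas)

# Illustrative (made-up) numbers: 4 clients, 1M LoRA parameters each,
# each client uploading a quarter of its parameters after masking.
N = 1_000_000
c_std = upload_cost(N, [1.0] * 4)    # C_std  = K * N
c_imp = upload_cost(N, [0.25] * 4)   # C_imp  = sum_k N * alpha_k
```

With these assumed fractions the masked upload is a quarter of the standard cost, matching the $\alpha_k < 1$ argument in the rebuttal.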
Summary: This paper proposes FedICU to address client heterogeneity in FL. FedICU consists of two key components:
1. **Consensus-Divergence Splitting**: This method decomposes client updates into magnitude and direction, treating magnitude as consensus and direction as divergence. The two components are then aggregated separately.
2. **Importance-Aware Parameter Selection**: This technique selects the most important parameters for sparse updating, enhancing efficiency.

Numerical results demonstrate the superior performance of the proposed method compared to baseline approaches.

## update after rebuttal
I remain unconvinced by the following points: (1) The rationale for using direction (cosine similarity) to measure differences in magnitude is unclear. (2) Whether the observation that "client updates tend to have similar magnitudes but vary in direction" sufficiently demonstrates that magnitudes represent consensus while directions indicate divergence remains unaddressed. (3) Whether the proposed method preserves knowledge in scenarios *where base models perform well* is still not adequately explained. However, I acknowledge and respect the efforts made by the authors, and I am increasing my rating to 3.

Claims And Evidence: The claim regarding the relationship between magnitude-direction and consensus-divergence does not appear to be particularly strong. The authors have demonstrated that client updates tend to have similar magnitudes but vary in direction. However, it is unclear whether this observation is sufficient to substantiate their claim.

Methods And Evaluation Criteria: Some aspects of the method design are unclear:
1. **Notation Confusion**: The notation appears inconsistent. According to Eq. (1), we have \(\Delta \theta \in \mathbb{R}^{d \times k}\). However, before Eq. (5), this changes to \(\theta_i \in \mathbb{R}^{r \times d}\), and before Eq. (9), it becomes \(\Delta \theta_i^{k} \in \mathbb{R}^{d \times d}\).
Could the authors clarify which part of the LoRA parameters is actually being updated?
2. **Assumption in Eq. (7)–(8)**: The design of Eq. (7)–(8) relies on the assumption that "the noise of the consensus is the same." The authors state that this is supported by the results in Figure 3. However, they do not specify which metric is used for Figure 3. If cosine similarity is used, Figure 3 may not sufficiently support this assumption. Could the authors provide further clarification?

Theoretical Claims: The convergence rates are given.

Experimental Designs Or Analyses: Currently, only the final performance on domain knowledge is reported, which may not be sufficient to fully support the key motivation of this paper. To strengthen the claims, please consider including the following experiments:
1. **Preservation of General Knowledge**: Demonstrate that FedICU does not disrupt the general knowledge of LLMs, whereas other baselines do. This can be achieved by evaluating performance on benchmarks where the backbone LLMs typically perform well.
2. **Effectiveness of Sparse Gradients**: Show that sparse gradients effectively prevent catastrophic forgetting while maintaining generalization performance.
3. **Convergence Improvement**: Provide evidence that Consensus-Divergence Splitting accelerates convergence compared to the baselines.
4. **Handling Client Drift**: Illustrate how FedICU excels in managing varying levels of client drift.
5. **Case Studies on Client Drift**: Include case studies where client drift occurs, highlighting instances where baseline methods fail while FedICU performs well.

Supplementary Material: I did not carefully check the proofs.

Relation To Broader Scientific Literature: NA

Essential References Not Discussed: NA

Other Strengths And Weaknesses: The paper is easy to follow.
Other Comments Or Suggestions: Since LLMs are generative models, a key concern is whether FedICU could unintentionally expose user data when local LoRA updates are uploaded to the server. Could the authors clarify how FedICU prevents this risk? Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer UVcT: Thank you very much for your time and review suggestions on our paper. We hope the following responses can address your concerns.

**Q1: The claim regarding the relationship between magnitude-direction and consensus-divergence does not appear to be particularly strong.** (Claims And Evidence)

**A1:** During model training, parameters are optimized toward both global and local optima, and direct aggregation leads to decreased general ability. We decompose parameters into consensus and divergence, where the former represents similar update behaviors across clients, while the latter represents domain-specific update characteristics. We find that during training, magnitude (consensus) updates remain relatively consistent, while directional (divergence) updates differ significantly. This shows that clients maintain general capabilities through consensus parameters and domain-specific capabilities through divergence parameters across downstream tasks. To further support our claim, we conduct an experiment with results shown in the tables below. We find that the consensus components are similar across clients, while the divergence components differ more significantly, supporting our argument. We will add this experiment to the revised manuscript to further substantiate our claim.

*Table: Similarity of consensus matrices.*

|Clients|1|2|3|4|
|-|-|-|-|-|
|1|1|0.85|0.89|0.90|
|2|0.85|1|0.85|0.91|
|3|0.89|0.85|1|0.90|
|4|0.90|0.91|0.90|1|

*Table: Similarity of divergence matrices.*

|Clients|1|2|3|4|
|-|-|-|-|-|
|1|1|0.23|0.49|0.04|
|2|0.23|1|0.34|0.05|
|3|0.49|0.34|1|0.11|
|4|0.04|0.05|0.11|1|

*Table: Similarity of unsplit matrices.*

|Clients|1|2|3|4|
|-|-|-|-|-|
|1|1|0.32|0.54|0.05|
|2|0.32|1|0.35|0.07|
|3|0.54|0.35|1|0.14|
|4|0.05|0.07|0.14|1|

**Q2: Notation Confusion** (Methods And Evaluation Criteria)

**A2:** We apologize for the inconsistent notation.
- In Eq. 1, $\Delta\theta \in \mathbb{R}^{d\times k}$.
- For the others, the dimension of LoRA should be corrected to $d \times k$. In the Eq. 9 part, we introduce $A$ and $B$ as its decomposition matrices, with dimensions $d \times r$ and $r \times k$ respectively.

**Q3: The assumption in Eqs. 7–8 is not fully supported by Figure 3.** (Methods And Evaluation Criteria)

**A3:** In Figure 3, we use cosine similarity to support the assumption that the "consensus noise" is the same. To further support it, we conduct an experiment, as shown in the table below.

*Table: the $\mu$ and $\theta$ of consensus parameters' updates. $\mu$ represents the mean of the consensus vector updates, and $\theta$ represents the fluctuations during the update process.*

|Client|$\mu$|$\theta$|
|-|-|-|
|1|37.44|1.06|
|2|37.44|1.06|
|3|37.44|1.06|
|4|37.44|1.06|

From the table, the similarity of $\mu$ and $\theta$ across different clients indicates that the consensus noise is consistent.

**Q4: Need experiments to strengthen the claims in different aspects.** (Experimental Designs Or Analyses)

**A4:** Thank you for your advice. We discuss our experiments and conduct more to further support our claims below.
- Preservation of General Knowledge: Table 1 in the paper details our model's performance. We measure FedICU on MT-Bench [1] to test general capabilities. The optimal performance of ours demonstrates that it can preserve the model's general knowledge to a certain extent.
- Effectiveness of Sparse Update: According to related research [2, 3], sparse updates can help prevent catastrophic forgetting. Additionally, Table 2 in our paper shows that the model improves when including Sparse Update, proving its effectiveness.
- Convergence Improvement: We conduct an experiment measuring the rounds needed to reach the average loss, with the table below. We can find that FedICU optimizes convergence.
- Handling Client Drift & Case Studies: In Table 1 of our paper, we test FedICU with other methods across various domains.
FedICU achieves the optimal average domain knowledge, meaning it effectively handles domain knowledge without excessive bias.

*Table: Convergence rounds of FedICU and other methods.*

|Method|FedAvg|FedProx|FedAvgM|Scaffold|FedAdam|FedYogi|FedICU|
|-|-|-|-|-|-|-|-|
|Round|53|43|45|38|37|39|34|

We will add this discussion and experiment in the revised manuscript to support our claims.

[1] Openfedllm: Training large language models on decentralized private data via federated learning. **KDD 2024.**
[2] A simple and effective pruning approach for large language models. **arXiv 2023.**
[3] LoRASculpt: Sculpting LoRA for Harmonizing General and Specialized Knowledge in Multimodal Large Language Models. **arXiv 2025.**

**Q5: A key concern is whether FedICU could unintentionally expose user data to the server.** (Other Comments Or Suggestions)

**A5:** FedICU follows FL protocols by only sharing parameter updates, not raw data. With the Importance-Aware Update mechanism, FedICU can further reduce exposure risk by filtering parameters. Additionally, FedICU follows standard FL logic and is compatible with other privacy-enhancing solutions.
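A minimal sketch of the magnitude/direction decomposition that the similarity tables in this rebuttal probe: each client update is split into a scalar magnitude (treated as consensus) and a unit direction (treated as divergence). The vectors here are toy stand-ins, not the paper's LoRA matrices or its exact splitting procedure.

```python
import math

def split_update(update):
    """Split an update vector into its magnitude (consensus) and a
    unit-norm direction (divergence)."""
    mag = math.sqrt(sum(x * x for x in update))
    return mag, [x / mag for x in update]

def cosine(u, v):
    """Cosine similarity, the metric the rebuttal uses to compare
    divergence components across clients."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Two toy clients whose updates share a magnitude but not a direction,
# mirroring the "similar magnitudes, varying directions" observation.
m1, d1 = split_update([3.0, 4.0])   # magnitude 5, direction (0.6, 0.8)
m2, d2 = split_update([5.0, 0.0])   # magnitude 5, direction (1.0, 0.0)
```

Here the consensus scalars agree exactly while the divergence directions have cosine similarity well below 1, which is the pattern the tables above report at scale.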
Summary: The paper proposes FedICU, a novel federated learning framework for large language models (LLMs) in heterogeneous settings. It decomposes client updates into consensus and divergence components. In the global aggregation phase, it balances these components based on their contribution to the global model performance. At the client level, it uses an importance-aware parameter updating strategy. Experiments across various domains show that FedICU outperforms existing federated learning approaches in generalization performance and domain adaptation.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem. The Consensus-Divergence Splitting method decomposes LoRA updates in a way that effectively captures general and client-specific knowledge, which is crucial for handling heterogeneous downstream instructions. The Importance-Aware Updating method reduces computational overhead and prevents catastrophic forgetting. For evaluation, using metrics like generalization (first turn's score from MT-Bench), contextual understanding (final score from MT-Bench), and domain-specific metrics (e.g., HumanEval for code, MMLU for finance, GSM8k for math) comprehensively assesses the model's performance in different aspects relevant to LLMs in federated learning.

Theoretical Claims: The authors provide theoretical guarantees for the convergence of their proposed algorithms. For the Consensus-Divergence Splitting method, under standard assumptions such as smoothness, bounded gradient, and unbiased and bounded-variance gradient estimates, they prove convergence to a stationary point for non-strongly convex functions and to the unique global optimum for strongly convex functions. For the Importance-Aware Updating mechanism, they also show convergence under similar assumptions, given that the masking error is bounded. The proofs seem to be well-structured and based on established mathematical concepts in optimization theory.

Experimental Designs Or Analyses: The experimental designs are reasonable. The authors train the model on four different datasets (Taori et al., 2023; Xiang Yue, 2023; CodeAlpaca-20k; FinGPT) to simulate heterogeneous data distribution. They compare FedICU with multiple state-of-the-art methods in the same experimental setting. The hyperparameters are set following a benchmark, and all experiments are repeated three times to ensure statistical significance. The ablation study of key components and the hyperparameter study further validate the effectiveness and sensitivity of the proposed framework.

Supplementary Material: No, the author did not provide any supplementary material.

Relation To Broader Scientific Literature: The key contributions of the paper are closely related to the broader scientific literature. The paper builds on existing works in parameter-efficient fine-tuning for LLMs, such as LoRA, and federated learning for LLMs. It addresses the limitations of previous approaches, like the lack of consideration for heterogeneous downstream instructions and poor understanding of the interaction between parameter-efficient fine-tuning and federated learning. By proposing FedICU, it extends the research in this area, offering a more effective way to fine-tune LLMs in federated learning.

Essential References Not Discussed: There do not seem to be any essential references not discussed in the paper.

Other Strengths And Weaknesses: A major strength of the paper is its innovative framework, which effectively addresses the challenges in heterogeneous federated learning for LLMs. The combination of Consensus-Divergence Splitting and Importance-Aware Updating is novel and shows significant performance improvement. However, a weakness is that the method relies on LoRA, which may not achieve the same performance as full-parameter fine-tuning. Also, the experimental results are mainly based on specific datasets and models, and the generalization to other scenarios may need further investigation.

Other Comments Or Suggestions: Please refer to the "Other Strengths And Weaknesses" box.

Questions For Authors:
1. In the Importance-Aware Updating mechanism, the threshold for parameter selection (comparing generalization and specialization importance) is based on a binary decision. Have you considered using a more flexible thresholding method, and how would it affect the performance? A more flexible method might better balance the trade-off between preserving general knowledge and adapting to specific domains. If it leads to a significant improvement in performance, it could strengthen the proposed approach.
2. The experiments are conducted on specific datasets and a single model (Llama-2-7b-hf). How do you expect the performance of FedICU to scale when applied to larger models or different model architectures? If FedICU can show consistent performance improvement across different model scales and architectures, it would greatly expand its applicability.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer UEdC: Thank you very much for your recognition of our paper and detailed suggestions. We will answer and explain the issues in detail below. **Q1: The method relies on LoRA, which may not achieve the same performance as full parameter fine-tuning.** (Other Strengths And Weaknesses) **A1:** Thank you very much for your suggestion. Using LoRA is a common method for fine-tuning large language models today, providing significant performance improvements at relatively low cost. In the future, we will further explore how to use LoRA more efficiently to achieve the ability to fine-tune nearly all parameters. **Q2: Consider using a more flexible thresholding method in the Importance-Aware Updating mechanism.** (Questions For Authors) **A2:** Thank you very much for your suggestions. We conduct a supplementary ablation study to investigate the impact of momentum-based parameter selection and continuous-valued parameter weighting on model generalization capability, as shown in the table below. In the Momentum column, we indicate whether we include the momentum component to smooth the mask construction process. In the Smooth column, we use a smoothing mask approach based on the formula below to filter uploaded parameters and their weights. The results validate that our binary component is a simple and effective method. We will add this supplementary experiment to the appendix of our revised manuscript to enhance the description of the effectiveness of our component. 
*Formula: Smooth updates component.*

$$ W[v] = \begin{cases} \min\left(1, \frac{G[v]}{I[v] + G[v]}\right) & \text{if } G[v] > I[v] \\ 0 & \text{otherwise} \end{cases} $$

*Table: Experiment results of ablation study about Importance-Aware Update.*

| Momentum | Smooth | MT-1 | MT-2 | MT-Final |
| -------- | -------- | ---- | ---- | -------- |
| ❌ | ✅ | 4.59 | 3.20 | 3.90 |
| ✅ | ❌ | 4.60 | 3.33 | 3.97 |
| ✅ | ✅ | 4.65 | 3.37 | 4.01 |
|✅(Ours)|❌(Ours)|**4.83**|**3.43**|**4.13**|

**Q3: How do you expect the performance of FedICU to scale when applied to larger models or different model architectures?** (Questions For Authors & Other Strengths And Weaknesses) **A3:** We greatly appreciate your suggestion. We've conducted supplementary experiments with Mistral-7B to verify that our method is effective across a broader range of model architectures. The results of these supplementary experiments are as follows. We will add this experiment to our revised manuscript to further support our approach.

*Table: The performance of FedICU and other methods applied in the Mistral-7B. MT-1 shows the model's general capability, while MT-2 shows the model's level of contextual understanding. MT-Final serves as a comprehensive metric that combines both of the aforementioned indicators. **The results show that FedICU also demonstrates excellent performance across models with different architectures.***

| Method | MT-1 | MT-2 | MT-Final |
| -------- | ---- | ---- | ----------- |
| Base | 4.10 | 3.29 | 3.69 |
| FedAvg | 5.48 | 3.80 | 4.64 |
| Scaffold | 5.46 | 3.92 | 4.69 |
| FedYogi | 5.54 | 3.85 | 4.70 |
| FedProx | 5.50 | 3.83 | 4.67 |
| FedAdam | 5.56 | 3.91 | 4.73 |
| FedAvgM | 5.55 | 3.73 | 4.64 |
| Ours | **5.58** | **4.11** | **4.84** |
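For concreteness, the binary selection described in the rebuttal and the smooth weighting it is compared against in the ablation above can be sketched as follows. This is a minimal sketch; the array shapes and example importance values are illustrative, not taken from the paper:

```python
import numpy as np

def binary_mask(G, I):
    """Binary selection: upload a parameter iff its specialization
    importance G exceeds its generalization importance I."""
    return (G > I).astype(float)

def smooth_weights(G, I):
    """Smooth variant from the formula above:
    W[v] = min(1, G[v] / (I[v] + G[v])) if G[v] > I[v], else 0."""
    w = np.minimum(1.0, G / (I + G))
    return np.where(G > I, w, 0.0)

# illustrative importance scores for three parameters
G = np.array([0.9, 0.2, 0.5])
I = np.array([0.1, 0.8, 0.5])
mask = binary_mask(G, I)    # array([1., 0., 0.])
w = smooth_weights(G, I)    # array([0.9, 0., 0.])
```

Per the ablation table, the binary mask outperforms the smooth weighting, suggesting hard selection of domain-relevant parameters is sufficient.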
Summary: This paper introduces FedICU, a framework designed to enhance fine-tuning of Large Language Models in Heterogeneous Federated Learning. The paper presents two core innovations: - **Consensus-Divergence Splitting**: A technique that decomposes client updates into **consensus (common capabilities)** and **divergence (domain-specific features)**. This allows the global model to **retain fundamental general knowledge** while **capturing client-specific information** effectively. - **Importance-Aware Updating**: A method that evaluates the importance of each parameter update to **reduce unnecessary updates**, **improve communication efficiency**, and **prevent catastrophic forgetting**. Claims And Evidence: The statement in **Lines 319-322**: *"The significance of parameter updates varies across different downstream tasks, with some parameters showing minimal activation. These inactive parameters not only increase computational overhead during global aggregation but also can lead to catastrophic forgetting."* This claim is unclear. For example, consider a **CIFAR-10 classification scenario** where the central global model is trained to generalize across all classes: - **Client 1** has a well-balanced dataset covering all classes, resulting in minimal drift from the global model and smaller parameter updates. - **Client 2** only has samples from classes 0 and 1, causing **significant client drift** and larger parameter updates. In this situation, would **excluding Client 1** (which exhibits less parameter variation) actually **benefit the global model’s performance**? If so, could you clarify how inactive parameter updates contribute to catastrophic forgetting in this context? Methods And Evaluation Criteria: 1. The physical meaning of the importance measures defined in **Equations (14) and (16)** is not very intuitive. They appear to be defined arbitrarily rather than derived from a concrete motivation. 
It would be helpful to visualize their distributions in an **LLM FL setting** to demonstrate their necessity and relevance. 2. The justification for **Equations (17) and (18)** should be further reinforced with supporting evidence to strengthen their validity. Theoretical Claims: I noticed that the appendix contains theorems, but since there are no theoretical claims presented in the main text, and they do not appear to be central to the paper, I did not review them. Experimental Designs Or Analyses: I did not review the experimental designs or analyses in detail but briefly looked over the results. Supplementary Material: I reviewed the supplementary material by checking the structure, including the theoretical components in the appendix, and briefly skimmed through it. Relation To Broader Scientific Literature: Unlike previous studies, this paper proposes a direct PEFT (Parameter-Efficient Fine-Tuning) method to address catastrophic forgetting, which makes it a novel contribution to the field. Essential References Not Discussed: In **Line 193**, the paper states: *"Moreover, indiscriminate parameter updates risk catastrophic forgetting, where the model loses previously acquired knowledge and suffers degraded generalization capabilities."* To strengthen this claim, it would be helpful to cite works that have empirically demonstrated **catastrophic forgetting in FL**, particularly in **image classification** settings. The following references provide relevant insights: - **[1]** Preservation of the global knowledge by not-true self knowledge distillation in federated learning, **NeurIPS 2022**. - **[2]** Flashback: Understanding and Mitigating Forgetting in Federated Learning, **ArXiv 2024**. - **[3]** FedDr+: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning, **TMLR 2025**. 
Including these references would help contextualize the discussion of **catastrophic forgetting in FL** beyond LLM applications and provide additional experimental evidence supporting the argument. Other Strengths And Weaknesses: Apart from the points mentioned earlier, I have no additional comments. Other Comments Or Suggestions: There are inconsistencies in the notation throughout the paper, and the lack of detailed explanations for some metrics makes it very difficult to understand. There are several inconsistencies and ambiguities in the notation throughout the paper: - In **Section 3**, the FL round index is denoted as **i**, but in the pseudo-code, it is written as **t ∈ [T]**. The notation should be consistent. - In **Equation (9)**, **S_{p,q}** includes **v_p, v_q**, but it is unclear how these relate to **A_k^i** and **B_k^i**. Providing a clear explanation would improve clarity. - In **Equation (10)**, the possible values for **j** should be explicitly stated, as the current notation is ambiguous and confusing. - In **Equation (11)**, the temperature parameter is denoted as **T**, but in the pseudo-code, it is written as **τ (tau)**. It would be better to use a consistent notation throughout the paper. - In **Equation (14)**, the normalization process lacks clarity regarding which samples were used to derive the **mean and variance**. Providing a precise description would enhance readability and understanding. Questions For Authors: Apart from the points mentioned earlier, I have no additional comments. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer fzFX: Thank you for your valuable review of our paper. We hope our responses will address your concerns and improve our score. **Q1: Clarify how inactive parameters contribute to catastrophic forgetting.** (Claims And Evidence) **A1:** We apologize for the confusion in our original statement. In FedICU, we refer to inactive parameters, not inactive clients. In neural networks, only a small portion of parameters play crucial roles. To prevent clients from focusing solely on their local datasets and uploading all parameters, which causes forgetting, FedICU calculates parameter importance and sparsely uploads parameters to balance the global model's general and domain capabilities. In the example, FedICU will upload the parameters with higher importance during training, balancing contributions from both clients. We conduct an experiment with results shown in the table below, demonstrating that FedICU is more stable than baseline methods and selects a substantial number of parameters from Client 1 for uploading rather than ignoring them.

*Table: The accuracy of FedAvg and FedICU on CIFAR-10.*

|Method/Round|78|79|80|81|82|83|84|
|-|-|-|-|-|-|-|-|
|FedAvg|41.37|42.38|42.54|41.66|42.48|40.13|40.84|
|FedICU|45.57|46.08|46.29|42.42|46.25|46.54|46.75|

*Table: The selection ratio in FedICU on CIFAR-10. The ratio means the proportion of parameters selected for upload. **It shows that FedICU does not ignore Client 1 despite its similarity with the global model**.*

|Client/Round|78|79|80|81|82|83|84|
|-|-|-|-|-|-|-|-|
|1|0.4853|0.4852|0.4866|0.4864|0.4866|0.4852|0.4855|
|2|0.4850|0.4851|0.4849|0.4855|0.4849|0.4852|0.4857|

**Q2: The physical meaning of Eq. 14, 16 is not intuitive.** (Methods And Evaluation Criteria) **A2:** The internal logic of Eq. 14, 16 is that we define parameter importance based on the magnitude of updates [1, 2]. To compare client and global importance, we perform normalization and introduce momentum to make the estimate stable, deriving Eq. 14, 16.
We add an experiment to show the distribution of parameter importance, with results shown in the table below.

*Table: Parameter distribution in FL LLM. Overlap indicates parameters important both globally and locally, High-I represents parameters emphasizing general ability, and High-G represents parameters emphasizing domain capability.*

|Class/Round|5|15|25|35|45|55|65|
|-|-|-|-|-|-|-|-|
|Overlap|0.24|0.16|0.13|0.12|0.11|0.10|0.08|
|High-I|0.48|0.46|0.46|0.45|0.45|0.44|0.44|
|High-G|0.50|0.40|0.38|0.38|0.37|0.36|0.35|

The results show the definition's physical significance. High-I remains stable, while the decreasing High-G proportion indicates better identification of core domain parameters. Declining overlap suggests clearer functional partitioning, effectively separating general and domain-specific knowledge. **Q3: The justification for Eq. 17, 18 should be reinforced to strengthen validity.** (Methods And Evaluation Criteria) **A3:** Sparsely updating the model is an approach to mitigate catastrophic forgetting [2, 3]. In FedICU, we construct masks and sparsely update models (Eq. 17, 18) to mitigate forgetting. To further demonstrate the effectiveness, we conduct an experiment as shown in the table below. For Momentum, we test whether to incorporate momentum components, and for Smooth, whether to use binary or smooth updates based on the formula below. Results show our method achieves the best performance, demonstrating its effectiveness.

*Formula: Smooth updates component.*

$$ W[v] = \begin{cases} \min\left(1, \frac{G[v]}{I[v] + G[v]}\right) & \text{if } G[v] > I[v] \\ 0 & \text{otherwise} \end{cases} $$

*Table: Experiment results of ablation study. MT-1 shows the model's general capability, while MT-2 shows the model's level of contextual understanding.
MT-Final is a comprehensive metric combining both indicators.*

|Momentum|Smooth|MT-1|MT-2|MT-Final|
|-|-|-|-|-|
|❌|✅|4.59|3.20|3.90|
|✅|❌|4.60|3.33|3.97|
|✅|✅|4.65|3.37|4.01|
|✅(Ours)|❌(Ours)|**4.83**|**3.43**|**4.13**|

[1] Learning both weights and connections for efficient neural network. **NeurIPS, 2015**.
[2] A simple and effective pruning approach for large language models. **arXiv 2023**.
[3] Finding sparse, trainable neural networks. **arXiv 2018**.

**Q4: Some essential references are not discussed.** (Essential References Not Discussed) **A4:** Thank you for suggesting additional references. We will incorporate these into the revised manuscript. **Q5: Some inconsistencies in the notation.** (Other Comments Or Suggestions) **A5:** Thank you for pointing out the unclear expressions. - For the second point, the vectors are defined in Lines 252-261. Specifically, we perform vector-wise aggregation by processing $A_k^i$ column-by-column to obtain $r$-dimensional vectors $v$, while for $B_k^i$, we process it row-by-row. - For the third point, $j \in \{1, 2, ..., n\}$. - For the fifth point, we use model gradients to compute the mean and variance. We will correct all of them in the revised manuscript.
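The importance recipe described in A2 (update-magnitude based, normalized, then smoothed with momentum) might look roughly like the sketch below. Since Eq. 14 and 16 are not reproduced in this thread, the zero-mean/unit-variance normalization and the momentum coefficient `beta` are assumptions for illustration only:

```python
import numpy as np

def update_importance(prev_imp, delta, beta=0.9, eps=1e-8):
    """Momentum-smoothed importance from update magnitudes.

    prev_imp: running importance estimate from earlier rounds
    delta:    parameter update (e.g., local weights minus global weights)
    beta:     momentum coefficient (assumed value, not from the paper)
    """
    mag = np.abs(delta)
    # normalize magnitudes so client and global scores are comparable
    # (zero-mean / unit-variance form is an assumption for illustration)
    norm = (mag - mag.mean()) / (mag.std() + eps)
    return beta * prev_imp + (1.0 - beta) * norm

imp = update_importance(np.zeros(2), np.array([3.0, 1.0]))  # ≈ array([0.1, -0.1])
```

Momentum keeps the mask from flipping between rounds, matching the stability motivation in A2.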
Leveraging Sparsity for Sample-Efficient Preference Learning: A Theoretical Perspective
Accept (poster)
Summary: The paper explores the impact of sparsity in preference learning, establishing a minimax lower bound on empirical error under sparse RUM and deriving upper bounds for two sparsity-regularized estimators. The experiments, conducted on both a synthetic dataset and an LLM alignment setting, validate the theoretical findings. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not check the details of the proofs due to time constraints. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, the details on experiments. Relation To Broader Scientific Literature: The paper connects the well-known role of sparsity in traditional compressed sensing to preference learning, highlighting its ubiquity across broader literature. Essential References Not Discussed: References are well discussed. Other Strengths And Weaknesses: **Strengths:** - The paper is comprehensive, establishing both a lower bound and upper bounds for different estimators, along with a clear comparison between them. - The experimental setup is well-designed and effectively validates the theoretical contributions. - The writing is clear and seamlessly integrates the two topics in a straightforward manner. **Weaknesses:** - It focuses solely on the fixed design setup (i.e., a deterministic set of pairs) and does not address generalization error. Other Comments Or Suggestions: No. Questions For Authors: - To address the limitation of the fixed design setup, can we explore ways to generalize the model? - The experimental results primarily focus on the accuracy of the reward model. Can we take this further by evaluating LLMs after applying PPO with the sparsity-regularized reward model? - Maybe I missed the details, in Appendix D.2, you report accuracy versus the number of samples. How are these samples selected? Are they guaranteed to be the same set for training with and without L1 regularization? 
- Just brainstorm, to verify the presence of sparsity in LLMs, could we train a reward model without regularization at different sample sizes, measure its sparsity, and analyze whether sparsity naturally increases as training progresses? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive feedback. We are glad that the reviewer found the theoretical contributions comprehensive, the experiments well-designed, and the writing clear in presenting the connections between sparsity and preference learning. Below, we respond to each of the comments in detail. (1) *"It focuses solely on the fixed design setup (i.e., a deterministic set of pairs) and does not address generalization error." "To address the limitation of the fixed design setup, can we explore ways to generalize the model?"* We appreciate the reviewer for this comment. While our current study focuses on a fixed design setup, this setting is commonly encountered in real-world applications, such as crowdsourced comparisons or benchmark datasets, where the pairwise comparisons are predetermined. That said, extending our analysis to the random design setup and studying generalization error is an exciting and important direction for future work. We thank the reviewer for highlighting this point. In particular, recent work on linear regression under random design (e.g., [1-3]) may provide valuable insights. For example, as suggested in [2], techniques from random matrix theory and random projections may be helpful for such an extension. [1] https://jmlr.org/papers/volume23/19-571/19-571.pdf [2] https://arxiv.org/pdf/2303.01372 [3] https://arxiv.org/pdf/2203.08564 (2) *"The experimental results primarily focus on the accuracy of the reward model. Can we take this further by evaluating LLMs after applying PPO with the sparsity-regularized reward model?"* We thank the reviewer for this suggestion. Indeed, a natural extension is to evaluate LLMs fine-tuned via PPO using the learned reward model. For example, given a set of prompts, one could compare LLM-generated responses through human evaluation or an oracle model, using win rate as a downstream metric. 
Although we initially considered such a metric, we eventually chose not to pursue it in this paper to maintain our focus on reward modeling without incorporating additional policy optimization. Nonetheless, we agree with the reviewer that this evaluation approach is a compelling direction for future work. (3) *"Maybe I missed the details, in Appendix D.2, you report accuracy versus the number of samples. How are these samples selected? Are they guaranteed to be the same set for training with and without L1 regularization?"* Yes, we use the same set of samples across models to ensure a fair comparison. Specifically, for each trial, the training data is sampled uniformly at random and fixed in advance; this identical dataset is then used to train both the regularized and unregularized models. We will make this point more explicit in the revised version. (4) *"Just brainstorm, to verify the presence of sparsity in LLMs, could we train a reward model without regularization at different sample sizes, measure its sparsity, and analyze whether sparsity naturally increases as training progresses?"* We appreciate this interesting idea. In theory, the estimation error in $\ell_2$-norm $\lVert\hat{\theta} - \theta^*\rVert_2$ without sparsity regularization will gradually become small, provided that 1) the number of samples is sufficiently large relative to the feature dimension $d$, and 2) the Gram matrix $\Sigma$ is well-behaved, e.g., satisfying the restricted eigenvalue condition. As a result, in practice, if the ground-truth parameter $\theta^*$ is sparse, the learned parameter would be sparse even without regularization under the above conditions. Nonetheless, given the high dimensionality of the feature space in LLM-based reward models, the current dataset does not meet the scale required for such behavior to emerge. We appreciate the suggestion and agree that this would be an interesting empirical direction to pursue with enough annotated data. 
We once again thank the reviewer for the recognition and thoughtful questions. We believe these discussions will help guide valuable extensions of this work.
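For readers who want to experiment with the estimator under discussion, here is a minimal sketch of an $\ell_1$-regularized pairwise MLE under a linear BTL/RUM model, where each comparison yields one-bit feedback on a feature difference. The proximal-gradient (ISTA) solver and all hyperparameters are illustrative, not the paper's exact setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_l1_btl(X, y, lam=0.01, lr=0.1, iters=2000):
    """ISTA (proximal gradient) on the Bradley-Terry / logistic
    negative log-likelihood with an l1 penalty.

    X:   (n, d) feature differences x_i - x_j, one row per comparison
    y:   (n,) one-bit labels, 1.0 if item i beat item j
    lam: l1 penalty weight (illustrative, not tuned)
    """
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (sigmoid(X @ theta) - y) / n
        theta = theta - lr * grad
        # soft-thresholding = proximal operator of the l1 penalty
        theta = np.sign(theta) * np.maximum(np.abs(theta) - lr * lam, 0.0)
    return theta

# toy data: only 2 of 20 features drive preferences (sparse theta*)
rng = np.random.default_rng(0)
d, n = 20, 500
theta_star = np.zeros(d)
theta_star[0], theta_star[1] = 2.0, -2.0
X = rng.normal(size=(n, d))
y = (rng.random(n) < sigmoid(X @ theta_star)).astype(float)
theta_hat = fit_l1_btl(X, y)
```

On this toy instance, `theta_hat` puts its largest weights on the two true coordinates while the remaining entries are shrunk toward zero, the qualitative behavior the paper's synthetic experiment validates.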
Summary: This paper proposes a sparse setting for preference learning; the authors state that human preferences are driven by some critical factors, whose dimension is generally low. Therefore, the authors study the preference learning problem in the sparse setting from a theoretical perspective, deriving bounds for estimation error under $l_0$ and $l_1$ regularizations, respectively. Claims And Evidence: I think the k-sparse claim for RUM needs further evidence. Although the authors provide better theoretical bounds for estimation error in dimension $d$, the overall claim of this paper that RUM is sparse is not validated in experiments. The first experiment directly uses this assumption to sample the ground-truth $\theta^*$, and the second experiment uses the prediction accuracy as the evaluation metric, which is supportive but not directly linked to the sparsity assumption. I wonder if there're other baselines the authors could use in the literature to support this assumption. I have no problems with other assumptions. Methods And Evaluation Criteria: If the authors could have further justification for the sparsity assumption, then the proposed methods make sense. Actually, the second experiment could be conducted with one more dataset, to provide more comprehensive evaluation. Theoretical Claims: Yes, I checked the derivation of the four main theorems in the Appendix. Experimental Designs Or Analyses: The experiment 4.1 validates the effectiveness of $l_1$ regularization compared with standard MLE in the preference learning case. However, the ground-truth parameter $\theta^*$ is manually set to be sparse, which undermines this experiment. Also, this experiment does not justify the assumption that RUM in preference learning satisfies sparsity. The experiment 4.2 uses LLM alignment as the task, and the reward model accuracy on the test set is used as the evaluation metric. 
However, the rm-static dataset, which is a split of HH dataset [1], may contain noise [2], reducing the reliability of the experiment results. The authors should consider adding a cleaner dataset for experiment. [1] https://huggingface.co/datasets/Anthropic/hh-rlhf [2] Impact of preference noise on the alignment performance of generative language models, COLM 2024 Supplementary Material: Yes, I briefly scanned the code and have no problems with it. Relation To Broader Scientific Literature: The authors may consider applying the sparsity setting to other preference learning algorithms with utility models, for example, DPO [1]. The sparsity assumption might somehow relate to sparse attention [2, 3], but in different perspectives. [1] Direct Preference Optimization: Your Language Model is Secretly a Reward Model, NeurIPS 2023 [2] Linformer: Self-Attention with Linear Complexity [3] Efficient Streaming Language Models with Attention Sinks, ICLR 2024 Essential References Not Discussed: I think the authors have covered the most related literature. Other Strengths And Weaknesses: Strengths: The theoretical derivations in this paper are complete and provide insights for learning utility functions based on pairwise comparisons under the sparsity assumption. Weaknesses: 1. The sparsity assumption in this paper is not well justified. 2. The experiments lack sufficient baselines. Other Comments Or Suggestions: From a reader's perspective, I would suggest the authors add references for theorem derivations around each theorem in the main content. Questions For Authors: 1. How would the authors justify that using sparsity in this scenario is a reasonable approach? 2. Is the experiment in section 4.1 necessary? As previous research has shown that $l_1$ regularization works well under the sparsity condition in MLE, what is the main difference between preference learning under sparsity and previous literature? 3. 
Are there any other evaluation metrics for the quality of reward models in experiment 4.2? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive feedback. We greatly appreciate the recognition of our theoretical contributions and the time taken to carefully examine the proofs of our four main theorems. Below, we address the comments and questions in detail. **(1) Evidence for k-sparse assumption for RUM.** *"I think the k-sparse claim for RUM needs further evidence…"* We thank the reviewer for raising this point, which was also noted by **Reviewer ACSW**. A detailed response is provided there in **(2) On the sparsity assumption and its practical validity**. Due to space constraints, we refer the reviewer to that response. Here, we offer additional clarifications specific to the reviewer’s comments. - *"I wonder if there're other baselines the authors could use in the literature to support this assumption…"* As noted in the introduction, the theoretical and empirical foundations for sparsity in preference learning remain largely underexplored. Nevertheless, we observe increasing interest in related domains. For example, a very recent work by Jin et al. (Sparsity-Agnostic Linear Bandits with Adaptive Adversaries, NeurIPS 2024) studies sparsity in linear reward functions in the context of adaptive bandits. While not directly addressing pairwise comparisons, this work provides complementary motivation for sparse reward structures. - *"the second experiment uses the prediction accuracy as the evaluation metric, which is supportive but not directly linked to the sparsity assumption"* We thank the reviewer for the constructive feedback. Empirically, we observe that the learned parameters under $\ell_1$ regularization are indeed highly sparse. For example, in Figure 4 (frozen backbone setting), the learned sparsity level $k/d$ is around 4-8% for both datasets, and the sparsity-aware method consistently outperforms the baseline. These results help to empirically validate the sparsity assumption. 
We will include these results in the updated version. **(2) Real-Data Experiments.** - *"the second experiment could be conducted with one more dataset…" "The experiments lack sufficient baselines."* We thank the reviewer for this suggestion. In addition to rm-static, we include results on the SHP dataset in Appendix D due to page limitations. Notably, SHP contains human-written responses, while rm-static contains machine-generated ones, giving us two distinct distributions, as mentioned by Ethayarajh et al. (2022). In both cases, $\ell_1$ regularization outperforms the baseline. - *"However, the rm-static dataset… may contain noise [2], reducing the reliability of the experiment results. The authors should consider adding a cleaner dataset for experiment."* We thank the reviewer for the observation. While rm-static is widely used as a benchmark dataset for alignment-based preference learning, we agree that evaluating on cleaner datasets, or applying data-filtering techniques (e.g., confidence-based data filtering in [2]) beforehand, could provide additional value. We are very interested in exploring this direction and will consider it in future work. - *"Are there any other evaluation metrics for the quality of reward models in experiment 4.2?"* We thank the reviewer for this question. One indirect way is to evaluate the performance of an aligned LLM associated with the reward models. Specifically, one can sample prompts, generate responses from the aligned LLMs, and then assess quality via human annotation or an oracle model. The win rate of the aligned LLM can serve as a downstream measure of the quality of the reward model. **(3) Synthetic Experiments.** *"The experiment 4.1 validates the effectiveness of l1-regularization compared with standard MLE in the preference learning case. However, the ground-truth is manually set to be sparse, which undermines this experiment." "Is the experiment in section 4.1 necessary? 
… what is the main difference between preference learning under sparsity and previous literature?"* We thank the reviewer for this question. As correctly pointed out, the purpose of Experiment 4.1 is not to justify the sparsity assumption, but to validate the effectiveness of $\ell_1$ regularization in the sparse RUM setting. It serves as a sanity check for our theoretical analysis, which provides estimation error bounds under sparse ground truth. While $\ell_1$ regularization has been well-studied in sparse regression and related settings, sparse preference learning under RUM poses unique challenges. In particular, each pair of data only provides one-bit comparison feedback, rather than direct access to real-valued rewards. For completeness, we include the numerical experimental results in Section 4.1 in the paper. We thank the reviewer again for the constructive comments and the recognition of our work. We will incorporate the above clarifications, along with other suggestions from the reviewer, in our updated version. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response to my questions. Most of my concerns regarding the justification of sparsity assumption, and the soundness of experiments are resolved. Based on the overall quality of the work, I decide to maintain my score.
Summary: The authors analyze the sample complexity of RUMs where utilities are an inner product between $d$-dimensional item features $x$ and preference parameters $\theta$, and where $\theta$ is $k$-sparse. Whereas existing results on the sample complexity of learning $\theta$ find that error decays as $\Theta(d / n)$ with $n$ samples, the authors show that when $\theta$ is $k$-sparse, this rate can be improved to $\Theta(k \log (d / k) / n)$, a large improvement when $d$ is large and $k$ is small. Moreover, the authors show that the $\ell_1$-regularized MLE has a near-optimal sample complexity. In experiments on synthetic data, the authors demonstrate the sample complexity benefit of using $\ell_1$ regularization, in line with their theoretical results. They also demonstrate the usefulness of accounting for the sparsity of preferences with $\ell_1$ regularization in RLHF of LLMs. ### Update after rebuttal Ah, the squared vs not squared norm explains my confusion. Thanks! Glad to see the other reviewers agree on recommending acceptance. Claims And Evidence: All claims are well supported by theoretical, simulation, and experimental evidence. Methods And Evaluation Criteria: The methods and evaluation criteria are solid. Theoretical Claims: The claims are very clearly stated and the supplement has proofs. I skimmed the proof of Theorem 3.1, which looks reasonable. Experimental Designs Or Analyses: The experiments and subsequent analyses, while not central to the paper, are well done. Supplementary Material: I skimmed the supplement to check that it contained proofs of theoretical claims. Relation To Broader Scientific Literature: This paper provides theoretical backing for the practice of $\ell_1$ regularization in RUMs when the utility parameters are sparse, proving an asymptotic improvement of the best known sample complexity rates when preferences are sparse. 
Essential References Not Discussed: There is another ICML paper that comes to mind involving sparse RUMs (where only a subset of features are "salient" in a particular comparison) that also derives sample complexity rates: Bower & Balzano. Preference Modeling with Context-Dependent Salient Features, ICML 2020: https://proceedings.mlr.press/v119/bower20a.html From what I can tell, their convergence rate is $O(\sqrt{d \log d / n})$. I think this paper is very much worth discussing in relation to authors' results. A few other papers that could have been in the related work, but are not essential (listing in case the authors were not aware of these and feel that any of them are worth including) - https://proceedings.mlr.press/v97/seshadri19a.html (has sample complexity results for a RUM variant) - https://proceedings.neurips.cc/paper/2020/hash/6affee954d76859baa2800e1c49e2c5d-Abstract.html (above model applied to rankings, in the style of Plackett-Luce; also has sample complexity results) - https://dl.acm.org/doi/10.1145/3447548.3467250 (uses $\ell_1$ regularization in a RUM to learn sparse parameters) - https://doi.org/10.1017/nws.2023.20 (discusses sample complexity benefit of Laplacian regularization for RUMs with preferences correlated over graphs) - https://proceedings.mlr.press/v119/rosenfeld20a.html (another ML-based RUM variant) Other Strengths And Weaknesses: I wish all papers were this well written; thanks to the authors for submitting something so polished. The results and experiments are very nice and I would like to see this paper accepted. It just needs a discussion of Bower & Balzano (2020) and I would like to resolve my question below about the convergence rate from Zhu et al (2023). Strengths: - The paper is extremely clearly written, to a very high standard. 
- The problem is both timely in its applications to RLHF and timeless in its application to choice modeling more broadly - The results appear to be a significant improvement on prior sample complexity results in the sparse setting, which is very encouraging for using $\ell_1$ regularization - The experiments do a great job of succinctly demonstrating the benefit of leveraging sparsity through $\ell_1$ regularization Weaknesses: - People have been using $\ell_1$ regularization when fitting choice models with sparse parameters for many years, so while these new bounds provide some nice theoretical grounding for this approach, it doesn't necessarily impact practice. Perhaps raising this issue is necessary for the RLHF crowd - One important related work on RUMs with sparse features is not discussed Other Comments Or Suggestions: 1. In 1.2, it's worth mentioning in the contribution bullet points that the $\ell_0$-regularized estimator is infeasible in practice, but $\ell_1$-regularization is practical (otherwise from reading this section, it's not clear why we wouldn't always use $\ell_0$ regularization). This point is made very clearly in 3.2.1 and 3.2.2. Questions For Authors: 1. The bounds in Zhu, Jordan, and Jiao 2023 appear to be $O(\sqrt{d / n})$ rather than $O(d / n)$ as stated in Table 1. Is there something I'm missing about their results? 2. How does this model and these sample complexity results compare to those in Bower & Balzano (2020)? Code Of Conduct: Affirmed. Overall Recommendation: 4
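For concreteness, the $\ell_1$-regularized MLE that this review and the paper center on can be sketched for a linear RUM with logistic noise (a Bradley–Terry-style model on feature differences). Everything below, the dimensions, the step size, and the ISTA solver, is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_logistic_mle(X, y, lam, lr=0.1, iters=2000):
    """Proximal gradient (ISTA) for the l1-regularized logistic MLE:
    minimize (1/n) * sum log(1 + exp(-margin)) + lam * ||theta||_1."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ theta))    # P(A > B) under current theta
        grad = X.T @ (p - y) / len(y)           # gradient of the logistic loss
        theta = soft_threshold(theta - lr * grad, lr * lam)
    return theta

rng = np.random.default_rng(0)
n, d, k = 400, 60, 4
theta_star = np.zeros(d)
theta_star[:k] = 1.0                             # k-sparse ground truth
X = rng.normal(size=(n, d))                      # feature differences x_A - x_B
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ theta_star))).astype(float)
theta_hat = l1_logistic_mle(X, y, lam=0.05)
```

With $d = 60$ but only $n = 400$ comparisons, the soft-thresholding step zeroes out most of the 56 irrelevant coordinates, which is exactly the behavior the improved $\Theta(k \log(d/k) / n)$ rate rewards.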
Rebuttal 1: Rebuttal: We appreciate the reviewer’s recognition of the clarity of our presentation, the significance of the theoretical contributions, and the relevance of our work to both RLHF and choice modeling. We also thank the reviewer for taking the time to examine the proofs, and are glad that the experimental results were found solid. We address the reviewer’s comments and questions below. **(1) Comparison with Bower & Balzano (2020).** *"One important related work on RUMs with sparse features is not discussed." "How does this model and these sample complexity results compare to those in Bower & Balzano (2020)?"* We thank the reviewer for pointing out this closely related paper. While both works consider comparisons based on a subset of features, there are two key differences: 1) In our setting, the subset of relevant features, which correspond to nonzero entries in a globally sparse parameter vector, is fixed across all comparisons. In contrast, Bower & Balzano (2020) identify salient features in a context-dependent manner, selecting those with the largest pairwise variance for each comparison. 2) Our subset of features is selected through optimization whereas Bower & Balzano (2020) select feature sets through sample variance of features. As a result, their model is not globally sparse, and the corresponding estimation rate is $O(d \log (d))$, whereas our analysis yields a sharper rate of $O(k \log (d/k))$ under global sparsity. **(2) Clarifying the rate from Zhu et al. (2023).** *"The bounds in Zhu, Jordan, and Jiao 2023 appear to be $O(\sqrt{d / n})$ rather than $O(d / n)$ as stated in Table 1. Is there something I'm missing about their results?"* We thank the reviewer for this observation. We note that our results are stated in terms of the **squared semi-norm** $\lVert \cdot \rVert_\Sigma^2$ as the error metric, whereas Zhu et al. (2023) report bounds in terms of the **semi-norm** $\lVert \cdot \rVert_\Sigma$. We will clarify this in the caption of Table 1 to avoid confusion. 
**(3) Brief review of the listed works.** *“A few other papers that could have been in the related work, but are not essential (listing in case the authors were not aware of these and feel that any of them are worth including)”* We thank the reviewer for highlighting these additional related works and pointing out their connection to our work. We will include the references and discussion in the updated version of the paper. We will incorporate the above discussion, along with the other suggestions mentioned by the reviewer, in our updated version of the paper.
Summary: The paper investigates leveraging sparsity in preference learning to achieve improved sample efficiency. Under the sparse random utility model (RUM), the authors derive minimax optimal estimation rates, emphasizing the theoretical benchmark provided by an $l_0$-constrained estimator. However, recognizing that this estimator is computationally intractable, the authors propose practical $l_1$-regularized estimators and rigorously establish their estimation guarantees. Empirical evaluations substantiate that the $l_1$-regularized estimator, being computationally feasible, achieves significant improvements over standard methods, particularly when sparsity is present. Claims And Evidence: 1. When sparsity is present, *sparse preference learning* reduces the sample complexity from $\Theta(d/n)$ to $\Theta(k \log(d/k) / n)$. The claim is well supported by theoretical derivations. The statements about matching upper and lower bounds are accompanied by detailed proofs. 2. An $l_1$-regularized estimator can be made computationally tractable. Under an additional assumption, it can achieve a near-optimal rate. In particular, a "fast rate" of $O(k \log(d) / n)$ is shown if the Gram matrix satisfies a restricted eigenvalue-type condition. The claims are also well-supported. Methods And Evaluation Criteria: 1. I think the $l_0$-constrained MLE method is mainly for theoretical purposes. It makes sense to develop the $l_1$-regularized MLE method, both theoretically and empirically. 2. The authors use synthetic data to control sparsity and dimension, and they verify that the $l_1$-based estimator outperforms the unregularized baseline in small-sample, large-dimension conditions, which is consistent with the theoretical predictions. Theoretical Claims: I did not check the proofs in detail. Experimental Designs Or Analyses: In Section D.1, they try full fine-tuning (rather than just the last layer) on the same dataset and metrics to see if regularization still helps. 
The setup seems solid, with consistent hyperparameter choices and a straightforward accuracy measure. However, the improvement they report is only about 3%, which isn’t huge. It raises the question of whether the extra effort is worth such a small gain. Even so, I don’t see any major flaws in how they designed or evaluated these experiments. Supplementary Material: I reviewed the experiments section. Relation To Broader Scientific Literature: The paper fits into the broader RLHF literature by addressing sample-efficiency concerns in reward modeling. Essential References Not Discussed: Nothing noteworthy to me. Other Strengths And Weaknesses: Strengths: 1. Their theoretical analysis is thorough and lines up with known compressed sensing results. 2. They give a clear reason for using sparse preference learning. I think the direction is novel and meaningful. Weakness: 1. It is unknown that how often the ground-truth parameter is sparse in real-world application. Hence, it is unclear how effective the approach actually is. 2. The reported performance gains can be modest, which might limit its practical impact. Other Comments Or Suggestions: It might be useful for you to study widely used real-world reward models—perhaps from different RLHF or recommendation pipelines—and check if they already show sparsity in practice. If they do, your approach is strongly reinforced; if not, you could explore alternative ways to handle approximate or partial sparsity. Questions For Authors: 1. Can you provide insights or examples of practical scenarios in which your assumptions are realistically met or potentially violated? 2. Can you share a few insights about the implication of your work on direct preference optimization (DPO)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback and the recognition of our theoretical contributions, the clarity of our motivation, the novelty of applying sparsity to preference learning, and the solid experimental setup. Below, we address the concerns and questions: **(1) On the empirical gains (~3%).** *“The improvement they report is only about 3%, which isn’t huge. It raises the question of whether the extra effort is worth such a small gain”* - **Limited room for improvement due to inherent randomness.** Under RUM, the probability of a pairwise preference is given by $P(A\succ B) = F(\frac{r^*(A) - r^*(B)}{\sigma})$, which captures the inherent randomness in human decision-making. Thus, the same pair (A, B) may receive conflicting annotations, **fundamentally bounding the maximum achievable test accuracy**. For example, if 7 out of 10 data points prefer $A\succ B$, and 3 prefer $B\succ A$, then the maximum achievable accuracy on this pair is 70%. In real-world datasets, [1] reports 19–37% of crowd-labeled preferences are noisy. This implies that even with the ground-truth model, the expected accuracy would be limited to roughly 63–81%. Given that a random guess yields 50% accuracy, a 3% improvement represents a substantial fraction of the remaining room for improvement. - **Consistent gains and interpretability with negligible extra effort.** Adding a regularization term to the training loss does not change the model architecture and incurs marginal overhead. Despite its simplicity, $\ell_1$ regularization consistently improves performance, particularly in low-sample regimes. Moreover, the sparsity induced by $\ell_1$ regularization enhances interpretability by identifying the most relevant features for the preference model of interest. **(2) On the sparsity assumption and its practical validity.** *“It is unknown how often the ground-truth parameter is sparse in real-world application. 
Hence, it is unclear how effective the approach actually is.” “Can you provide insights or examples when your assumptions are realistically met or potentially violated?”* - **When does sparsity arise?** Sparsity commonly emerges in classical preference learning, especially in high-dimensional feature space where human preferences are influenced by only a few factors. For example, a user’s preference for smartphones may depend primarily on price, camera quality, and UI design, whereas many other attributes (e.g., place of manufacture) may have little influence for that user. Similarly, a reader’s preference over articles may depend solely on the presence of a few key words. When the feature vector is a binary indicator over a large vocabulary, the reward parameter is naturally sparse. - **When might the assumption be violated?** Sparsity may not hold when the feature space is low-dimensional (e.g., 3–5 features), or when features have been manually curated. In such cases, the benefits of sparsity-aware estimation are expected to diminish. Nonetheless, in many modern applications, such as LLM alignment or recommendation systems, feature spaces often contain thousands to millions of dimensions. In these settings, identifying relevant features beforehand is infeasible, making sparsity both a practical and a necessary modeling assumption. - **Empirical support in LLM alignment.** Our LLM alignment experiments show that $\ell_1$ regularization induces highly sparse models while outperforming the baseline, even without hyperparameter tuning. For example, in the frozen backbone training setup (Figure 4, Section D.2), where the pre-trained model defines the feature map $\phi$, we observe: - rm-static dataset: $k/d\approx$ 4.5% (n=800) and $k/d\approx$ 7.5% (n=3200). - SHP dataset: $k/d\approx$ 4.2% (n=800) and $k/d\approx$ 7.2% (n=3200). 
These results show that $\ell_1$ regularization selects a small and informative subset of features, reinforcing the practical validity of the sparsity assumption. **(3) Implication on DPO.** *“Can you share a few insights about the implication of your work on DPO?”* DPO ([2]) bypasses explicit reward modeling by directly optimizing the policy, where $\beta \log(\pi/\pi_{\text{ref}})+\beta\log Z$ acts as a proxy reward that depends on the policy $\pi$. This contrasts with reward-based approaches, where reward and policy are decoupled, allowing assumptions (e.g., sparsity) and regularization to be applied directly to the reward model. However, since DPO couples the reward signal with the policy and the policy architecture is fixed, incorporating sparsity-aware regularization is non-trivial. Additionally, even when the reward model $r(x)$ is sparse, the parameter of the optimal policy $\pi^*(x) = \frac{1}{Z}\pi_{\text{ref}}(x)\exp(\frac{1}{\beta}r(x))$ is not necessarily sparse. We thank the reviewer again for the insightful comments. We will incorporate the discussion in the updated version. [1]https://arxiv.org/pdf/2306.05685 [2]https://arxiv.org/pdf/2305.18290
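The accuracy ceiling the rebuttal describes (a pair annotated 7:3 caps accuracy at 70%) follows directly from the RUM likelihood. A small sanity check, assuming a logistic $F$ for illustration (not tied to the paper's experiments):

```python
import numpy as np

def bayes_accuracy(margins, sigma=1.0):
    """Expected accuracy of the *ground-truth* RUM on pairs with reward
    margins r*(A) - r*(B): even the true model can only achieve
    max(p, 1 - p) per pair, where p = F(margin / sigma)."""
    p = 1.0 / (1.0 + np.exp(-np.asarray(margins, float) / sigma))
    return float(np.maximum(p, 1.0 - p).mean())

# A pair preferred 7:3 in favor of A corresponds to p = 0.7, so the
# ceiling is exactly 70%; a zero-margin pair is a coin flip at 50%.
ceiling = bayes_accuracy([np.log(0.7 / 0.3)])
coin_flip = bayes_accuracy([0.0])
```

No learned model can beat `ceiling` in expectation on such pairs, which is why a 3% gain over a strong baseline can represent a large share of the remaining headroom.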
Neural Graph Matching Improves Retrieval Augmented Generation in Molecular Machine Learning
Accept (poster)
Summary: This paper introduces a method, MARASON, for predicting a mass spectrum from molecular graphs. MARASON extends an existing deep learning framework (ICEBERG) by integrating retrieval with neural graph matching. The model retrieves reference molecules with known spectra, then aligns fragments between the target and reference molecules to guide its prediction of the target’s spectrum. Experimental comparisons show that MARASON outperforms current state-of-the-art methods in terms of accuracy and retrieval performance. Claims And Evidence: Most of the paper’s claims are backed by evidence. However, the reference to retrieval augmented generation (RAG) is misleading because the paper does not actually describe a “G (generation)” process. It is unclear whether this approach truly utilizes RAG. Methods And Evaluation Criteria: Please report separately the results on the FT-HCD dataset and the FT-CID dataset. Theoretical Claims: N/A Experimental Designs Or Analyses: Mostly Supplementary Material: Supplementary Notes A, B, and C were reviewed. They all appear appropriate. However, it is recommended that the Supplementary Material be elaborated a bit more to improve completeness. See Weakness #4 Relation To Broader Scientific Literature: N/A Essential References Not Discussed: The paper is motivated by the assumptions that “similar structures tend to have similar fragmentation patterns in chemistry, and similar fragments tend to have similar response factors that relate their abundance to the observed intensity”, citing only (Shahneh et al., 2024). However, there is a lot of literature in computational mass spectrometry and cheminformatics that specifically discusses the correlations between structural similarity and fragmentation outcomes. I recommend citing additional references to support this idea. MoMS-Net and 2DMolMS are missing. Other Strengths And Weaknesses: Strengths: • Clear discussion of background/previous work/motivation. 
• Clear explanation of relevant concepts and description of the modeling approach. • The tables presenting results are well-constructed and straightforward. The ablation studies are comprehensive. • Manuscript is clearly written and easy to follow Weaknesses: 1. The paper presents a framework for mass spectrum simulation but does not clarify which specific type of mass spectrometry it supports. While ICEBERG is trained on NIST’s tandem mass spectral (MS/MS) data, the reference “NIST20” in this paper does not explicitly state whether it is electron ionization (EI) data, tandem MS, or another type of mass spectral library. It would be helpful if the authors clarified the applicability of their method to different mass spectrometry modes (such as MS/MS, EI, or GC-MS) and provided evidence or discussion on its performance across these various platforms. 2. There is a lack of discussion about the architecture of the GNN and other layers used in the framework. It is unclear whether they are identical to the GNN block in ICEBERG. Do all the GNN blocks share the same structure in FIG. 2? Section 4.2.3 suggests that separating these two GNNs leads to better spectrum similarity, implying that they may have distinct architectures. To address this weakness, I recommend including a more detailed description of the GNN modules, MLP blocks and the matching layer blocks in the Supplementary Information and updating FIG. 2 accordingly. Other Comments Or Suggestions: None Questions For Authors: 1. In lines 209–211, you mention “in our experiments, we use the training dataset to mitigate concerns about data leakage.”, but it is unclear how the data leakage is prevented. I suggest rewriting this sentence to elaborate how this procedure prevents leakage. 2. In lines 217–219, you mention retrieving up to three reference spectra with collision energies similar to the target. 
How was the number three selected, and have you investigated how performance or computational efficiency might change if this limit were increased? 3. In Section 3.4, when predicting the intensities, you rely on learned embeddings (H, Hr), reference intensities (Tr), and the matching matrix (X̄), along with a Tanimoto similarity measure. However, in real-world mass spectrometry, many experimental factors (e.g., ionization mode, instrument settings) can affect fragment intensities. Does this framework explicitly account for these variables, or do you assume consistent conditions across the NIST dataset? 4. NIST released an updated tandem mass spectral library (NIST23), which includes around 60% more compounds than NIST20. I understand that the NIST database is not freely available, but, if possible, would you consider re-running the experiments on NIST23 to see how the results generalize with a larger dataset? Code Of Conduct: Affirmed. Overall Recommendation: 3
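Question 3 above mentions the Tanimoto similarity measure. For readers unfamiliar with it, a minimal sketch on binary fingerprint bit vectors (the paper presumably computes it on standard molecular fingerprints; the vectors here are toy examples):

```python
import numpy as np

def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two binary fingerprints:
    |A AND B| / |A OR B|."""
    a = np.asarray(fp_a, dtype=bool)
    b = np.asarray(fp_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    # Convention: two all-zero fingerprints are treated as identical.
    return float(np.logical_and(a, b).sum() / union) if union else 1.0

x = np.array([1, 1, 0, 1, 0, 0])
y = np.array([1, 0, 0, 1, 1, 0])
sim = tanimoto(x, y)   # 2 shared on-bits / 4 total on-bits = 0.5
```

In retrieval-augmented spectrum prediction, a score like this ranks candidate reference molecules by structural similarity to the query.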
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the state-of-the-art accuracy of MARASON and the organization of this paper. We believe there are several misunderstandings and we clarify them as follows. > The paper does not describe a generation process and does not utilize RAG. The problem MARASON aims to tackle is _generating_ MS/MS from a given molecular structure. On the technical side, our first-stage model is an _auto-regressive generator_ that predicts one bond-breaking event at each step. > The paper does not clarify which specific type of mass spectrometry it supports. We will emphasize that MARASON is developed with ESI-MS/MS in the main text. As discussed in section 4.4.1, “We trained our models on the NIST 2020 dataset with 530,640 high-energy collision-induced dissociation (HCD) spectra and 25,541 unique molecular structures.” We believe we made it clear that our experiment setting contains MS/MS spectra (more specifically, Orbitrap spectra) with an instrument type label “HCD” in the NIST database. > Please report separately the results on the FT-HCD dataset and the FT-CID dataset. To clarify, as discussed in section 4.4.1, we only use spectra labeled as “HCD” instrument type in NIST. We do not find any entries with instrument type labels ``CID`` or ``FT-CID`` in NIST. We believe we are working with an open-source training and testing framework that supports most baselines and all methods are trained and tested on the same dataset, ensuring fair comparison. > Questions on model architecture details and GNN layers. We would like to clarify that MARASON adopts the same backbone architecture as ICEBERG, including both the GNN and transformer layers. Regarding the GNN blocks shown in Figure 2: the first and third GNNs operate on molecular graphs and are structurally identical to their counterparts in ICEBERG. The second GNN, discussed in Section 3.3.2 under “DAG Hierarchical Embedding Learning,” serves as the neural graph matching module. 
While all GNN blocks share the same architecture, our strategy involves using separate weights depending on the module's function—i.e., whether it is used for intensity prediction or for graph matching. We will include more detailed discussions and highlight such differences in Figure 2 in future revisions. > I recommend citing additional references for the correlation between spectrum and structural similarities. Thank you for the suggestion, we plan to include the following references on molecular networking. Please also let us know if there are any specific references in your mind. * Aron AT et al. Nature protocols. 2020 * Wang M et al. Nature biotechnology. 2016 > MoMS-Net and 2DMolMS are missing. We do not find any open-source implementation for MoMS-Net and the dataset statistics in Table A1 are different from ours, making it challenging to include MoMS-Net for comparison. We do not find any references to 2DMolMS—if you meant 3DMolMS, it is the first entry in Figure 3 and Table 1. > How to prevent data leakage is unclear. Thanks for the suggestion. One possible concern of running RAG on MS/MS simulation is that the retrieved spectra might be considered as a source of data leakage that simplifies the problem. Therefore, in numerical evaluations, we restrict retrieving spectra only from training data; in real-world cases, there is no doubt about using any reference spectra available for MARASON. We will elaborate in future revisions. > Why retrieve three reference spectra with different collision energies? Since the reference spectra may not have the exact collision energy of interest, MARASON tries to interpolate from three collision energies. We did not try including more references because the more reference spectra we have, the more computational overhead there will be. > Does this framework explicitly account for experimental variables? Our framework handles ionization mode by different adduct types. 
We incorporate collision energy as an important instrument variable. For others, we assume they are the same across Orbitrap instruments. If any important factors are missing, please let us know and we would be more than glad to address them in future work. > Testing on NIST23 for results with a larger dataset We purchased the NIST23 license, but the raw data file (.SDF) we used for NIST20 is not available at least in the NIST23 distribution we purchased. We will certainly explore NIST23 after figuring out how to extract training data from it. Within the timeframe of rebuttal, we try to address this concern from another direction by showing that MARASON achieves state-of-the-art retrieval accuracy on MassSpecGym (please refer to the table in response to reviewer VQAR). > Recommend to elaborate a bit more on Supplementary Material. As we cannot edit the supplementary materials at this time, we will try to include more details in future revisions. If there are any specific points that feel unclear to you, please let us know.
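One point in the rebuttal above, interpolating from reference spectra at three collision energies, admits a rough mental model. The sketch below uses simple inverse-distance weighting purely for illustration; the actual model learns the interpolation with a set transformer, so every name and weighting choice here is an assumption:

```python
import numpy as np

def interpolate_reference(spectra, ref_energies, target_energy, eps=1e-6):
    """Illustrative stand-in for a learned interpolation: combine reference
    spectra from nearby collision energies with inverse-distance weights."""
    spectra = np.asarray(spectra, dtype=float)            # (n_refs, n_bins)
    dists = np.abs(np.asarray(ref_energies, dtype=float) - target_energy)
    w = 1.0 / (dists + eps)                               # closer energy -> larger weight
    w /= w.sum()
    return w @ spectra

refs = [[1.0, 0.0, 0.2],   # spectrum at collision energy 10
        [0.5, 0.5, 0.3],   # spectrum at collision energy 20
        [0.1, 0.9, 0.4]]   # spectrum at collision energy 40
mix = interpolate_reference(refs, ref_energies=[10, 20, 40], target_energy=20)
```

When the target energy exactly matches one reference, the output is dominated by that reference spectrum; between energies, the references are blended.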
Summary: This paper proposes a modification of a method for generating mass spectra from molecular structures. Inspired by recent successes in retrieval augmented generation (RAG), the authors decided to apply this technique for querying similar molecules from the training set and to use them as references for generating spectra with a model that extends the previously proposed ICEBERG. The proposed model, named MARASON, retrieves the nearest reference molecule along with three of its spectra to construct a representation vector used for predicting peak intensities. A neural graph matching algorithm is introduced to align the fragment graphs in the fragmentation DAG of both the reference and target molecules. MARASON surpasses other methods on both random and scaffold splits in predicting mass spectra. Additionally, it demonstrates excellent performance in compound retrieval using mass spectra. ## update after rebuttal The Authors addressed all my comments. I decided to maintain my positive score. Claims And Evidence: The claims made in the paper are supported by experimental evidence. The method can generate mass spectra more accurately, even in the scaffold-based split setup (Figure 3). Moreover, Table 1 shows that retrieval accuracy is also improved over earlier methods. The ablation study confirms the effectiveness of both RAG and neural graph matching. Methods And Evaluation Criteria: The methods and evaluation criteria are adequate to solve the described problem. However, some parts of the method description could be more detailed, especially given that the code is not yet available. I highlighted these parts in the Questions for Authors. Theoretical Claims: There are no theoretical claims that need formal proofs. Experimental Designs Or Analyses: The experiments answer the posed research questions. What would make the claims in the paper stronger would be conducting statistical tests for the results described in Section 4.2.1. 
It would also be interesting to see examples of predicted spectra compared to the original spectra and the spectra of the reference compound. For reference, similar qualitative results were presented in earlier works. Such plots would demonstrate how similar the predicted and reference spectra are. Supplementary Material: I read the whole supplementary material. Relation To Broader Scientific Literature: This paper not only presents a significant contribution to mass spectra prediction, surpassing earlier models, but also demonstrates that RAG can be effectively utilized in the molecular domain. Additionally, neural graph matching is shown to be more effective than classical matching methods for processing fragmentation DAGs. Essential References Not Discussed: All the key references have been discussed. Other Strengths And Weaknesses: Most of the comments have been addressed in the other sections. Furthermore, I appreciate that Figure 2 offers a clear overview of the proposed method. To enhance this paper, an evaluation on a second dataset, such as NPLIB1, is recommended. Other Comments Or Suggestions: N/A Questions For Authors: 1. Section 3.2.2 says: "All reference intensities at the same collision energy are processed by a set transformer, followed by an average pooling layer that merges intensity embeddings per fragment from three collision energies." I understand that there are three different energies for the reference compound, yet here processing intensities at the same collision energy are described. Could you elaborate on this process in more detail? What vectors serve as input to the set transformer? 2. How is this model trained? Are there any new components in the loss function? To train neural graph matching, do you use any additional training steps, or is it trained alongside intensity prediction as part of the vector in Equation 7? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We truly appreciate your recognition of our contribution to the mass spectrometry field and our technical novelty of introducing neural graph matching. We conduct statistical tests and perform preliminary MassSpecGym experiments following your suggestions. We will work actively to complete the new results in future revisions. > Conducting statistical tests for the results described in Section 4.2.1 will make claims stronger. Thank you for the suggestion. We first generate results with 3 random seeds for the scaffold split and perform a t-test on ICEBERG (w/ collision energy) and MARASON. The P-values are 0.005 and 0.002 for random split and scaffold split, respectively, both well below the 0.05 threshold, i.e., statistically significant at the 95% confidence level. > It would also be interesting to see examples of predicted spectra compared to the original spectra and the spectra of the reference compound. For reference, similar qualitative results were presented in earlier works. Such plots would demonstrate how similar the predicted and reference spectra are. Thank you so much for the suggestion; unfortunately, we do not have the option to include visualization results with Openreview. We will add the visualization of reference spectra, predicted spectra, and ground-truth spectra in the appendix in future revisions. > To enhance this paper, an evaluation of a second dataset is recommended. We truly appreciate your suggestion. We have run our method on the recently developed MassSpecGym benchmark [1], which is a publicly accessible library with spectra collected from MoNA, MassBank, and GNPS. We share an initial result of its performance here:

| Top-$k$ accuracy | 1 | 5 | 20 |
| - | - | - | - |
| FraGNNet | 31.93 | 63.20 | 82.70 |
| MARASON | 34.03 | 64.04 | 85.39 |

MARASON outperforms FraGNNet, the state-of-the-art on MassSpecGym, in terms of retrieval accuracy. 
We get the aforementioned result within the tight rebuttal time frame, and we will keep working on this benchmark with a comprehensive study in future revisions. [1] Bushuiev et al. MassSpecGym: A benchmark for the discovery and identification of molecules. NeurIPS 2024 > Section 3.2.2 says: "All reference intensities at the same collision energy are processed by a set transformer, followed by an average pooling layer that merges intensity embeddings per fragment from three collision energies." I understand that there are three different energies for the reference compound, yet here processing intensities at the same collision energy are described. Could you elaborate on this process in more detail? What vectors serve as input to the set transformer? You are right about how we process reference spectra at each collision energy. We use three reference spectra because the target collision energy may not have a close match in the reference database, so multiple reference spectra can be considered together to interpolate. We first concatenate spectral peaks with their corresponding collision energies and the target collision energy. After that, we feed the concatenated spectral vectors into a set transformer and eventually a linear layer, where the linear layer is designed to “move” the peak intensities to the target energy level. Finally, we use average pooling to collect information from the same peaks at different energy levels to generate a reference spectrum embedding that is a learned interpolation from three energies. > How is this model trained? Are there any new components in the loss function? To train neural graph matching, do you use any additional training steps, or is it trained alongside intensity prediction as part of the vector in Equation 7? The first-stage model that generates fragments is trained in the same way as ICEBERG-Generate. 
The second-stage model that predicts intensities is trained end-to-end, where the cosine loss between predicted spectra and ground truth spectra is the only supervision. We are able to make such a design choice because being fully differentiable is one of the major advantages of neural graph matching.
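The cosine loss mentioned above, the sole supervision for the second-stage model, can be sketched in a few lines (an illustrative numpy version, not the authors' code; spectra are represented here as intensity vectors over m/z bins):

```python
import numpy as np

def cosine_loss(pred, target, eps=1e-8):
    """1 - cosine similarity between a predicted and a ground-truth
    spectrum, each a non-negative intensity vector over m/z bins."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    num = float((pred * target).sum())
    den = float(np.linalg.norm(pred) * np.linalg.norm(target)) + eps
    return 1.0 - num / den

spec = np.array([0.0, 0.3, 0.7, 0.0, 1.0])
perfect = cosine_loss(spec, spec)          # near 0: identical spectra
mismatch = cosine_loss(spec, spec[::-1])   # larger: peaks in wrong bins
```

Because every operation here is differentiable, gradients flow from this single scalar back through the intensity predictor (and, in MARASON, through the neural graph matching module), which is the end-to-end training property the rebuttal highlights.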
Summary: This study introduces MARASON, an advanced computational framework that enhances RAG in mass spectrum prediction through neural graph matching. It evolves from the ICEBERG framework through a synergistic integration of graph-based neural architectures and spectral alignment mechanisms. Claims And Evidence: The claims are clearly supported. Methods And Evaluation Criteria: The method is technically sound. Theoretical Claims: The paper focuses on the application of AI methods, and no proof is needed. Experimental Designs Or Analyses: The experimental designs are sound. Supplementary Material: I've checked the appendix, which provides further experimental results. Relation To Broader Scientific Literature: The contributions of this paper are novel and original. Essential References Not Discussed: References are sufficient. Other Strengths And Weaknesses: Strengths: - The integration of RAG and neural graph matching under the ICEBERG framework synergistically enhances both prediction accuracy and generalization capability, establishing a new state-of-the-art performance benchmark. - The paper is clear and easy to follow. Weaknesses: - The model's training and evaluation were exclusively conducted using the NIST dataset, with no validation performed on alternative publicly accessible mass spectral libraries (e.g., MoNA) or laboratory-curated in-house datasets. Other Comments Or Suggestions: See questions. Questions For Authors: 1. Under circumstances where the mass spectral database lacks structural analogs, could the RAG strategy potentially lose efficacy due to failed similarity-based retrieval, thereby reverting to a purely de novo generation mode dependent on deep learning architectures? 2. The MARASON framework demonstrates significant performance enhancement over the baseline (ICEBERG) in mass spectrometric analysis. I wonder whether the authors could elucidate the respective contributions of its dual modules. 
Specifically, does the observed enhancement primarily stem from additional information introduced by spectral retrieval or from the structural alignment capability enabled by neural graph matching? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for recognizing our state-of-the-art performance and our technical soundness. We update with experiments on the recently developed MassSpecGym benchmark, evaluation with non-similar reference structures, and elaboration on the ablation study to address your concerns. We are more than happy to clarify any further questions. > The model's training and evaluation were exclusively conducted using the NIST dataset, with no validation performed on alternative publicly accessible mass spectral libraries (e.g., MoNA). We truly appreciate your suggestion. We have run our method on the recently developed MassSpecGym benchmark [1], which is a publicly accessible library with spectra collected from MoNA, MassBank, and GNPS. We share an initial result of its performance here: | Top-$k$ accuracy | 1 | 5 | 20 | | - | - | - | - | | FraGNNet | 31.93 | 63.20 | 82.70 | | MARASON | 34.03 | 64.04 | 85.39 | MARASON outperforms FraGNNet, the state-of-the-art on MassSpecGym, in terms of retrieval accuracy. We get the aforementioned result within the tight rebuttal time frame, and we will keep working on this benchmark with a comprehensive study in future revisions. A bit more clarification on why we focused on NIST: as discussed under “software and data” on page 9, NIST is the only database where all spectra have collision energy annotations. In most open-source MS/MS libraries, collision energy labels are at least partially missing, making it challenging to develop a RAG model with them. It is still a reasonable assumption to have access to collision energy values for prospective use because it is an experimental variable set on MS/MS instruments. [1] Bushuiev et al. MassSpecGym: A benchmark for the discovery and identification of molecules. NeurIPS 2024 > Under circumstances where the mass spectral database lacks structural analogs, could the RAG strategy potentially lose efficacy due to failed similarity-based retrieval? 
Thank you for bringing up this important point. MARASON benefits from RAG with Tanimoto similarity > 0.3 (90.7% of the testing set); under low Tanimoto similarity (< 0.3) regimes, the performance of MARASON is still close to the non-RAG baseline. We group test instances based on the Tanimoto similarity between the retrieved structure and the target structure. We report cosine similarities (higher is better) as follows: | Tanimoto similarity | (0, 0.1] | (0.1, 0.2] | (0.2, 0.3] | (0.3, 0.4] | (0.4, 0.5] | (0.5, 0.6] | (0.6, 0.7] | (0.7, 0.8] | (0.8, 0.9] | (0.9, 1] | | - | - | - | - | - | - | - | - | - | - | - | | MARASON | N/A | 0.550 | 0.614 | 0.690 | 0.741 | 0.789 | 0.815 | 0.808 | 0.805 | 0.824 | | MARASON (non-RAG) | N/A | 0.566 | 0.611 | 0.682 | 0.727 | 0.768 | 0.791 | 0.780 | 0.759 | 0.784 | All results are from a random split on NIST20. MARASON (non-RAG) is from the first entry in Table 2. This study also highlights a strategy for further performance improvements: use standard MARASON when the retrieved structure has Tanimoto similarity > 0.3 and use the non-RAG version otherwise. There is a trend of decreased performance for non-RAG MARASON when the Tanimoto similarity drops, because a lower Tanimoto similarity means there are fewer similar structures in the training set, making it a more challenging out-of-distribution instance. We will incorporate new results and discussions in future revisions. > Does the observed enhancement primarily stem from additional information introduced by spectral retrieval or from the structural alignment capability enabled by neural graph matching? Thank you for the thought-provoking question. We are happy to elaborate based on our ablation study presented in Table 2. Simply retrieving the reference spectrum and concatenating it with the input to the neural network—a naive RAG-style approach—does not improve performance for MS/MS generation. 
In fact, it slightly degrades it: the resulting cosine similarity is 0.737, representing a 0.3% decrease compared to the non-RAG baseline. Our findings suggest that effective fragment-level matching is crucial for realizing the benefits of retrieval. For instance, applying a simple Hungarian matching algorithm raises the cosine similarity to 0.746, which is a 0.9% improvement over the baseline. Further, introducing a learnable neural graph matching module with a carefully designed architecture yields a cosine similarity of 0.757, amounting to a 2.4% relative improvement over the non-RAG model. While gains in retrieval accuracy are even more substantial—partly because the cosine similarity metric is more saturated—the top-1 retrieval accuracies indicate that there remains considerable room for improvement. In summary, the structural alignment capability enabled by the neural graph matching module is the primary driver of MARASON's performance improvement. --- Rebuttal Comment 1.1: Comment: Most of my concerns have been addressed, and I would like to update my score to 3.
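The Tanimoto-threshold dispatch strategy described in the rebuttal above can be sketched in a few lines (fingerprints are represented here as sets of on-bit indices, an illustrative assumption, not the paper's implementation):

```python
def tanimoto(fp_a, fp_b):
    # Tanimoto (Jaccard) similarity between two binary fingerprints,
    # each given as a set of on-bit indices
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def choose_model(target_fp, retrieved_fp, threshold=0.3):
    # dispatch rule from the rebuttal: use the RAG model when the
    # retrieved analog is similar enough, fall back to non-RAG otherwise
    return "rag" if tanimoto(target_fp, retrieved_fp) > threshold else "non-rag"

assert abs(tanimoto({1, 2, 3}, {2, 3, 4}) - 0.5) < 1e-9  # 2 shared bits / 4 total
assert choose_model({1, 2, 3}, {2, 3, 4}) == "rag"
assert choose_model({1, 2}, {3, 4}) == "non-rag"
```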
Summary: Authors present MARASON, which augments a previously-developed framework ICEBERG by retrieving the most similar molecules in a database to a target structure based on Tanimoto similarity. Both target and reference structures are fragmented using ICEBERG; a GNN is used to construct a matching matrix to predict which reference fragments are matches; these are then used to help predict peak intensities for the original target molecule. Authors evaluate their framework on retrieval accuracy, where the closest spectrum to the generated spectrum is retrieved based on cosine similarity, and show that their method outperforms previous baselines that do not use retrieval-based augmentations. ### update after rebuttal Thank you to the authors for their response. I am glad the authors were able to evaluate their method on MassSpecGym and the performance gain is notable given that the benchmark has harder train/test splits. > We treat spectra with different collision energies as distinct spectra. Based on my understanding, baseline methods first combine spectra from different collision energies into a single spectrum and work with that -- I think it would be good for the authors to disentangle this effect, or at least discuss it if relevant. I am happy with the rebuttal and will increase my score. Claims And Evidence: The claims presented in the paper are well-supported in my opinion. Authors use the ICEBERG framework and test whether or not adding a module that incorporates information from fragments that are predicted to be matching from other molecules in a known database. They show that on the same data splits from NIST, incorporating this module increases the performance on both spectral similarity and retrieval. Methods And Evaluation Criteria: Authors evaluate their method on NIST, which is a standard benchmark in the field. Authors use two data splits for training/eval -- random and scaffold, which is good. 
Authors use 3 random seeds for the random split but it seems like only 1 seed for the scaffold split -- why is this the case? Authors could also evaluate their method on more recent benchmarks such as MassSpecGym [1], which would show the utility of the RAG component in harder split settings. [1] https://arxiv.org/abs/2410.23326 Theoretical Claims: Authors don't make any theoretical claims. Experimental Designs Or Analyses: In Section 3.3.2, authors describe how they build a fragment embedding for a fragment F_i from a molecule M, although they concatenate a lot of information and it's not clear why it's all needed. For instance, why did the authors decide to incorporate the embedding difference between M and F_i as well as the chemical formula difference between M and F_i? Secondly, in the hierarchical embedding, why are both the forward and reverse graphs needed, as opposed to using a bi-directional GNN? Otherwise, I think authors did a good number of ablations for their RAG strategy and also made the direct comparison fairer to the baseline. However, could authors clarify how they incorporated the collision energy information? Were the spectra for each collision point treated as separate entries or were they combined into a single spectrum? The authors improved the ICEBERG baseline, but based on my understanding, it's also possible to condition other models like NEIMS and MassFormer on this information. Supplementary Material: Yes, I reviewed all the supplementary material. Authors write that they pulled FraGNNet values directly from the paper -- are the data splits etc. identical to be able to make this comparison? Relation To Broader Scientific Literature: The problem of MS prediction is an important problem for improving the characterization of molecules, especially given the sparsity of experimental characterizations. The performance metrics indicate that the problem is challenging (the best top-1 performance is 27%). 
While the method cannot be used to predict the structure of an unknown molecule, it can be used for this indirectly by predicting spectra for unannotated molecules and then using cosine similarity. Essential References Not Discussed: I am not 100% familiar with SOTA models/works in this domain, but to my knowledge authors are not missing essential references. Other Strengths And Weaknesses: Strengths: - Incorporating a RAG module into the ICEBERG framework had a positive effect on all metrics considered. - RAG for MS has not been previously explored to my knowledge (but it has been explored in other molecule settings) - Multiple data splits done (random/scaffold) - The paper is well-written and easy to follow for the most part; figure 2 is helpful for clarifying the method - The paper was submitted to the applications track which I think is appropriate; I think it's an interesting application of existing methods and empirically shows a nice improvement. Weaknesses: - Evaluation on 1 dataset - Unclear explanation of how collision energies were incorporated - Only 1 seed for scaffold and RAG ablations Other Comments Or Suggestions: - page 3: "non-bond" → should it be "non-bold"? - Authors write that this represents "neutral losses" but don't explain what that is; it would be helpful to define this concept. - Why did the authors choose 0.1 Da? Is it possible to demonstrate the method on a more fine-grained binning? Questions For Authors: [1] How did you make your design choices for the fragment-level hierarchical embeddings of what information to include? It seems like a lot of information is concatenated but not clear why (see Experimental Designs Or Analyses for other notes) [2] Why is random splitting done on 3 seeds but scaffold/RAG ablations only on 1? [3] Is it possible to compare on a second benchmark like MassSpecGym? 
(it's not necessary to do this for me to change my evaluation of the paper, but I think it would strengthen the paper a lot) [4] Can authors clarify how they incorporated spectra at different collision energies? Also, authors improved the ICEBERG baseline by incorporating multiple collision energies; is it also possible to improve other baselines this way? If authors can clarify their experimental protocol/choices I will be happy to revisit my evaluation. Code Of Conduct: Affirmed. Overall Recommendation: 4
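For reference, the top-k retrieval metric discussed throughout this thread (rank library candidates by cosine similarity to the predicted spectrum, then check whether the true structure appears among the top k) can be sketched as follows; this is an illustrative toy with made-up spectra, not the benchmark's exact protocol:

```python
import math

def cosine(u, v):
    du = math.sqrt(sum(x * x for x in u))
    dv = math.sqrt(sum(x * x for x in v))
    if du == 0.0 or dv == 0.0:
        return 0.0
    return sum(x * y for x, y in zip(u, v)) / (du * dv)

def top_k_hit(true_id, predicted_spectrum, candidates, k):
    # candidates: dict mapping candidate id -> library spectrum
    ranked = sorted(candidates,
                    key=lambda c: cosine(predicted_spectrum, candidates[c]),
                    reverse=True)
    return true_id in ranked[:k]

cands = {"a": [1.0, 0.0, 0.0], "b": [0.0, 1.0, 0.0], "c": [0.5, 0.5, 0.0]}
pred = [0.9, 0.1, 0.0]
assert top_k_hit("a", pred, cands, 1)      # "a" is closest to the prediction
assert not top_k_hit("b", pred, cands, 2)  # "b" ranks last here
```

Top-k accuracy is then the fraction of test spectra for which `top_k_hit` is true.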
Rebuttal 1: Rebuttal: Thank you for agreeing with the novelty and technical solidity of our paper. We provide more random seeds, new MassSpecGym results, and MassFormer baseline with collision energy as per your comments. Please find our reply to your questions as follows. > Why does the scaffold split experiment have only 1 random seed? In the ICEBERG paper, there is only one random seed with scaffold split. Following your suggestion, we run MARASON on 3 random seeds on scaffold split with the following updated results: |cosine similarity|top-1 acc|top-5 acc|top-10 acc| |-|-|-|-| |0.727±0.002|0.284±0.001|0.705±0.006|0.856±0.004| From the results, MARASON’s variance to random seeds is less significant compared to the accuracy improvement (for handy reference, ICEBERG w/ collision energy has a cosine similarity of 0.711) and the trend is consistent with either random or scaffold split. We will update the results in future revisions. > Request to evaluate on MassSpecGym Although the rebuttal time frame was quite short, we retrained our model and were able to get the following retrieval accuracies on MassSpecGym. We demonstrate the superiority of MARASON on MassSpecGym for candidates with the same formula |Top-$k$ accuracy|1|5|20| |-|-|-|-| |FraGNNet|31.93|63.20|82.70| |MARASON|34.03|64.04|85.39| We will perform more benchmarking and update results in future revisions. > Questions on model design choices and the "neutral loss" embedding. We would like to clarify that the model architecture was adopted from ICEBERG and we hypothesized that the GNN architecture of ICEBERG learns the structural information essential for graph matching. To elaborate on encoding the “neutral loss”, since $\mathcal{M}, \mathcal{F}_i, \mathcal{M} - \mathcal{F}_i$ represent precursor, ionized fragment, and neutral loss, respectively, subtracting the embedding of $\mathcal{F}_i$ from $\mathcal{M}$ represents the embedding of the neutral loss. 
We will elaborate on these details in the revised manuscript. > Why are both the forward and reverse graphs needed in DAG embedding? When embedding the fragmentation DAG, the forward graph handles parent-to-children message passing and the reverse graph handles children-to-parent. We believe the update functions of both directions should be distinct to avoid degeneration into an undirected graph. Empirically, this strategy was more powerful than using bi-directional GNNs during our model development. > How are different collision energies incorporated? We treat spectra with different collision energies as distinct spectra. The collision energy value becomes another input dimension that is concatenated to the GNN input. In NIST retrieval experiments, we compute spectra at all collision energies, compare each one with its corresponding real spectrum, and compute the average cosine similarity over all recorded collision energies. The preliminary MassSpecGym experiments follow the official benchmark setting, which we will elaborate upon in the revised version. > Are the data splits identical to FraGNNet? As discussed in the FraGNNet paper (Appendix I), their NIST benchmark follows MassFormer and ICEBERG to ensure fair comparison, which should align with our benchmark. > It's also possible to improve other models with RAG. We absolutely agree with that; we select ICEBERG as the base model because it is the current open-source state-of-the-art. Another conclusion is that neural graph matching is vital for the success of RAG—just concatenating the reference spectrum to the neural network even harms the test performance, as shown in Table 2. Ultimately, we have demonstrated a RAG approach that can be readily adapted to other modeling tasks in the molecular spectroscopy domain. > Is it possible to improve other baselines with collision energy? Yes. 
We applied the same design to MassFormer and validated changes to retrieval accuracies on NIST’20: |Top-$k$ accuracy|1|5|10| |-|-|-|-| |MassFormer|19.1|55.0|71.6| |MassFormer (w/ collision energy)|20.9|59.6|76.4| |MARASON|27.8|68.5|82.7| It shows a marginal improvement but still does not outperform MARASON. > Why did the authors choose 0.1 Da? Is it possible to demonstrate with more fine-grained binning? Since we want to compare all methods under the same metric, every model’s output should be transformed into the same mass resolution. 0.1 Da is the result of balancing between the mass resolution of HCD/Orbitrap spectra and the feasibility of implementing binned-prediction baselines. Empirically, experimentalists also find rounding to 0.1 Da resolution is sufficient in practice. MARASON (and ICEBERG) are adaptable with any mass resolution and can be run with higher resolution; however, selecting a finer-grained resolution will make binned-prediction methods unfeasible as a 0.01 Da resolution with a maximum mass of 1500 Da will require a 150,000-dim output in their neural networks. > Typo on page 3 Thanks for pointing this out. It has been fixed.
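The binning arithmetic in the reply above (0.1 Da over a 1500 Da range gives a 15,000-dim output, while 0.01 Da would require 150,000 dims) can be made concrete with a small sketch; the function and peak values are illustrative assumptions, not the paper's code:

```python
def bin_spectrum(peaks, resolution=0.1, max_mass=1500.0):
    # sum peak intensities onto a fixed m/z grid of width `resolution`
    n_bins = int(round(max_mass / resolution))
    binned = [0.0] * n_bins
    for mz, intensity in peaks:
        idx = int(mz / resolution)
        if 0 <= idx < n_bins:
            binned[idx] += intensity
    return binned

spec = bin_spectrum([(100.04, 1.0), (100.06, 0.5), (250.25, 2.0)])
assert len(spec) == 15000                                # 1500 / 0.1
assert len(bin_spectrum([], resolution=0.01)) == 150000  # the infeasible case
assert spec[1000] == 1.5  # 100.04 and 100.06 share one 0.1 Da bin
```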
Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging
Accept (poster)
Summary: This is an experimental paper. This paper enhances VLMs with reasoning capabilities using Reasoning LLMs. Specifically, this paper integrates the reasoning capability of Reasoning LLMs into VLMs through linear merging. Extensive experiments demonstrate the effectiveness of this simple model merging method on math reasoning tasks. Additionally, this paper experimentally disentangles perceptual and reasoning abilities within the model parameter space. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: They experimentally prove their claim in Sec. 4 and Sec. 5. Experimental Designs Or Analyses: Recently, there have been many reasoning LLMs based on the Qwen series; I think the authors should consider including this series in the experimental designs to further validate their claims. Supplementary Material: I checked the whole supplementary material. Relation To Broader Scientific Literature: The methods in this paper are widely used. But as far as I know, I haven't seen similar findings of this paper in other literature. Essential References Not Discussed: No. Other Strengths And Weaknesses: 1. To validate the effectiveness of model merging for enabling reasoning capability in VLMs, more comprehensive experiments should be conducted, e.g., Qwen-based models and results of InternVL on other benchmarks. 2. The technical contribution is minimal. I would not be inclined to reject a paper based solely on the method. But the workload of experimental analysis and discovery in this paper is not enough to support it to be accepted by a top-tier conference. There is still a lot to do after getting the analysis and discovery. For example, as the authors found that perception ability is primarily located in the early layers and reasoning ability mainly lies in later layers, why not try a dynamic model merging method considering this finding? Other Comments Or Suggestions: No Questions For Authors: 1. 
Can the authors try to include a Qwen-based model to further validate the effectiveness of model merging? 2. Why not report the results of InternVL on other benchmarks, considering that LLaVA and Idefics report them? Can the authors also report them to make the claim more convincing? 3. Based on the finding in Sec. 5, did the authors try to merge only the reasoning LLM in the later layers? Would this allow VLMs to achieve a good balance between perception and reasoning? For multimodal math reasoning benchmarks, there is no doubt that VLMs also need strong perception to understand the image input. Code Of Conduct: Affirmed. Overall Recommendation: 3
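The layer-selective merging the reviewer asks about in Q3 might be sketched as follows; this is a hypothetical illustration (the parameter naming scheme, layer cutoff, and mixing weight are all assumptions), not the paper's method:

```python
def merge_later_layers(vlm_params, llm_params, keep_until=5, alpha=0.5):
    # keep the VLM's early (perception-heavy) layers untouched and
    # linearly interpolate only the later (reasoning-heavy) layers;
    # parameter names are assumed to look like "layers.<i>.<rest>"
    merged = {}
    for name, w_vlm in vlm_params.items():
        layer = int(name.split(".")[1])
        if layer < keep_until:
            merged[name] = list(w_vlm)  # preserve perception layers
        else:
            w_llm = llm_params[name]
            merged[name] = [(1 - alpha) * a + alpha * b
                            for a, b in zip(w_vlm, w_llm)]
    return merged

vlm = {"layers.0.mlp": [1.0], "layers.6.mlp": [1.0]}
llm = {"layers.0.mlp": [9.0], "layers.6.mlp": [3.0]}
out = merge_later_layers(vlm, llm)
assert out["layers.0.mlp"] == [1.0]  # early layer kept from the VLM
assert out["layers.6.mlp"] == [2.0]  # later layer averaged
```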
Rebuttal 1: Rebuttal: Thanks for the comments! We address the concerns below. W1: >The workload of experimental analysis and discovery in this paper is not enough…For examples…why not try a dynamic model merging method considering this finding? We sincerely appreciate your feedback on our work. But we respectfully disagree with the comments on “insufficient workload” for the following reasons: - A Simple Method Does Not Mean Less Work: We actually tried it in our early experiments but it worked similarly to the simple linear merging method [see response to Q3]. Given no clear performance gain, we opted for the simpler solution. - We want to emphasize here again that our goals are: - **To see whether the perception and reasoning abilities can be infused by model merging**. We incorporate three VLMs of varying architectures and sizes, along with six reasoning models across diverse domains, and evaluate on five multimodal reasoning benchmarks. This evaluation process accrued costs of around $1,000. - **To understand how perception and reasoning abilities are distributed in parameter space** within a VLM. To achieve this: In Figures 4 & 5, we conducted masking experiments on each even layer (MLP/Attention) under various configurations (e.g., masking LLaVA to LLaMA or random noise), evaluating each independently—resulting in 128 runs in total. In addition, we conducted 4–5 such preliminary experiments. We use GPT-4 (via VLMEvalKit) to evaluate model outputs, accruing costs of around $3,000. - Although **GPU hours and monetary costs** do not directly equate to the quality of the work, these data points **do illustrate one aspect of the hidden workload** we have put into this work. Besides, we also add results based on a Qwen model [see response to Q1], and hopefully these added results can help recognize the workload of this paper. Q1/2: >include Qwen-based model…, report the results of InternVL on other benchmarks considering that LLaVA and Idefics report them. 
Thank you for the advice! We expand our experiments to a Qwen-based model here. Specifically, we merge Qwen2-VL-7B-Instruct with Qwen2-Math-7B; the results are: |**Model**|**Method**|**MathVista**|||**MathVerse**||||||**MMStar**||**DM**|**MV**| |-|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |||All|General|Math|Overall|T-D|T-L|V-I|V-D|V-O|All|Math||| |Qwen2-VL|Baseline|61.2|69.6|54.1|31.8|35.9|31.4|31.5|33.1|26.9|59.9|59.2|34.4|21.1| ||+Qwen2-Math|60.2|68.0|53.5↓0.6|31.9↑0.1|37.1|31.7|31.5|32.5|26.7|59.5|58.4↓0.8|35.0↑0.6|21.7↑0.6| From the results, we see that merging benefits certain benchmarks like MathVerse, DynaMath and MathVision. However, there is a decline in performance for benchmarks MathVista and MMStar. The unstable increase pattern is similar to what we observe with Idefics and InternVL, indicating that VLMs that have undergone math-related pure text training (as many state-of-the-art models do today [1]) benefit less from merging with reasoning models. As for the large-scale model InternVL-76B, we have already evaluated it on the same benchmarks, as shown in the InternVL row of Figure 3 in our paper. Q3: > Did the author try to only merge reasoning llm in the later layers? Would this allow VLMs to achieve a good balance between perception and reasoning? Thank you for the great suggestion of trying a dynamic model merging method! As we briefly discussed in the W1 response, we did test a dynamic approach based on our layer-wise insights. Inspired by the observation that perceptual abilities lie in early layers, we merge only the later layers to preserve them. 
|**Model**|**Method**|**MathVista**|||**MathVerse**||||||**MMStar**||**DM**|**MV**| |-|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |||All|General|Math|Overall|T-D|T-L|V-I|V-D|V-O|All|Math||| |LLaVA|Baseline|37.4|51.7|25.4|20.1|25.9|20.8|21.1|16.5|16.0|43.8|30.0|22.8|11.8| ||+Dart-prop|38.0|48.7|28.9↑3.5|23.7↑3.6|30.7|24.8|25.5|19.8|17.4|43.6|33.6↑3.6|24.5↑1.7|14.8↑3.0| ||+keep5layers|37.9|48.5|28.9↑3.5|22.0↑1.9|29.6|23.2|23.2|17.8|16.4|43.8|30.8↑0.8|23.7↑0.9|15.1↑3.3| Although the results show clear gains over the baseline, they are only on par with linear merging. Given no clear performance gain, we opted for the simpler solution. One possible reason is that interpretation-based analysis is inherently subjective and only captures coarse-grained trends, often with substantial noise. For instance, while later layers may show stronger reasoning, earlier layers still play a role. Thus, merging only the later layers in reasoning-focused LLMs may not yield performance gains. Decomposing the two abilities may enable successful merging but doesn’t necessarily yield a better merging method. This is also consistent with previous findings from [2] that **different merging methods behave very similarly at larger scales**. [1] What matters when building vision-language models? NeurIPS 2024. [2] What matters for model merging at scale? Prateek et al. 2024
Firstly, we would like to respectfully clarify a **misunderstanding**: - Our paper is **not just a method paper**. Instead, we aim to **use model merging as a lens to understand** *how perception and reasoning ability are encoded and interact* within vision-language models. - The merging method itself is a means to an end—a byproduct of our broader interpretability analysis. We argue that understanding the internal dynamics of perception and reasoning from a mechanistic interpretability perspective is also a valuable and underexplored scientific direction. Secondly, we want to summarize our **core contributions** here, which go beyond simply applying an existing method: 1. **A novel approach for mechanistic interpretability**: We are the first, to our knowledge, to use model merging as a tool to investigate where and how perception and reasoning are encoded in a large vision-language model. 2. **Non-trivial and practical empirical observation**: As pointed out in this feedback, we show that *simple linear merging can successfully merge perception and reasoning abilities across modalities*. However, we want to emphasize that this finding is **not as straightforward and trivial** as it may seem. For example, **prior work [1]** shows that naively interpolating two neural networks with entirely disjoint optimization trajectories can lead to a **catastrophic drop** in accuracy. In fact, this drop is quite common in model merging and is potentially due to conflicting parameter updates [2,3,4]. Thus, our finding that linear merging works across modalities for combining reasoning and perception abilities is of value for both the research and practitioner communities. 3. 
**Successful localization of distinct functional subspaces**: In Section 5, we design experiments that selectively mask parameters (e.g., replacing original Llava parameters with Llama’s weights or random noise) to localize perception and reasoning layers before and after merging, and reveal how the two abilities interact or stay disentangled after merging. Our results suggest that **perception and reasoning lie in largely orthogonal subspaces** in parameter space, **allowing for modular and effective merging**. In short, we are not merely proposing to “apply method A to field B”. Instead, we ask: **Can we infuse perception and reasoning abilities from different modalities? And how does it work?** And we provide both tools and insights to answer that question. We hope the above clarification helps you reconsider the contribution of our work, from both empirical and interpretability perspectives. Thank you again for your time and thoughtful feedback. [1] What is being transferred in transfer learning? Behnam Neyshabur. et al. ICLR 2020. [2] What matters for model merging at scale? Prateek et al. 2024. [3] TIES-Merging: Resolving Interference When Merging Models. Prateek Yadav.NeurIPS 2023. [4] Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch. Le Yu et al. ICML 2024.
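The linear merging recipe at the center of this discussion reduces to parameter-wise interpolation between checkpoints with matching parameter names; a minimal sketch over toy parameter dictionaries (names and list-of-floats weights are illustrative stand-ins for real tensors):

```python
def linear_merge(vlm_params, llm_params, alpha=0.5):
    # merged = (1 - alpha) * VLM weights + alpha * reasoning-LLM weights,
    # applied independently to every shared parameter
    merged = {}
    for name, w_vlm in vlm_params.items():
        w_llm = llm_params[name]
        merged[name] = [(1 - alpha) * a + alpha * b
                        for a, b in zip(w_vlm, w_llm)]
    return merged

vlm = {"layers.0.mlp": [1.0, 2.0], "layers.1.mlp": [0.0, 4.0]}
llm = {"layers.0.mlp": [3.0, 0.0], "layers.1.mlp": [2.0, 0.0]}
out = linear_merge(vlm, llm, alpha=0.5)
assert out["layers.0.mlp"] == [2.0, 1.0]
assert out["layers.1.mlp"] == [1.0, 2.0]
```

With real checkpoints the same loop would run over `state_dict()` tensors, and the tuned coefficient pairs mentioned in the thread generalize this to unequal mixing weights.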
Summary: This paper proposes merging models across modalities to enable the incorporation of the reasoning capabilities of LLMs into VLMs. Effectiveness of this training-free recipe is verified via extensive experiments. It also evaluates VLMs of different sizes to verify the generalization ability of merging. Additionally, this work provides interesting insights about perception and reasoning abilities across different layers by investigating merged VLMs. ## update after rebuttal While I appreciate the authors’ efforts to address most of my concerns, their response to my questions about leveraging the finding—that perception and reasoning abilities lie in different areas of MLLMs—for improved merging performance is not insightful enough. It's claimed that different merging methods behave very similarly at larger scales. Consequently, I maintain my original score. Claims And Evidence: Yes, via extensive experiments and case study of LLaVA. Methods And Evaluation Criteria: Yes, the reasoning ability is measured by math reasoning benchmarks, while perception ability is measured by general VQA benchmarks. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: Yes, I've checked the main experiments and analysis of merged LLaVA. Supplementary Material: Yes, I've checked section C in the appendix, which compares with another merging method. Relation To Broader Scientific Literature: The main contribution of this work is to apply a training-free recipe to merge VLMs with reasoning expert models. It's more of a combinational innovation because the linear merge recipe was proposed by prior work. Essential References Not Discussed: No Other Strengths And Weaknesses: ## Strength: 1. The problem addressed is timely and interesting; it might provide a new perspective for inference time scaling in VLMs. 2. It conducted extensive experiments on VLMs of different sizes, demonstrating noticeable improvements and good generalization ability. 
3. The analysis focusing on merged LLaVA provides insights about perception and reasoning patterns across different layers. ## Weakness: 1. Lack of baselines of other merging methods. Only TIES is provided in the appendix. 2. The investigation about perception and reasoning patterns across layers only considers LLaVA. It would be more interesting to also include other model families or larger models, such as Idefics2-8B, InternVL2-LLaMA3-76B. 3. Lack of significance tests since many of the performance improvements in Table 2 are within a 1% margin. Other Comments Or Suggestions: See Strengths And Weaknesses Questions For Authors: I'm interested in how to leverage the findings that perception and reasoning abilities lie in different areas of VLMs, so that we can achieve better merging performance. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your thorough review and detailed comments! Your suggestions will be helpful in improving the paper. Q1: > Lack of baselines of other merging methods. Only TIES is provided in the appendix. We appreciate the suggestion and have expanded our experiments to evaluate DARE merging [3] (Dare-TIES and Dare-Linear), another SOTA merging method, to explore additional merging techniques. We follow the same hyperparameter search strategy as TIES, parameterized by ($\alpha_1$, $\alpha_2$), where $\alpha_1$ is tuned for the VLM and $\alpha_2$ for the Math LLM, in order to obtain the most comparable checkpoints on benchmarks that emphasize vision and textual reasoning. The results are shown below. ||**MathVision**|**MathVerse-Text-Dominant**| |:-:|:-:|:-:| |Baseline|13.8|25.9| |Linear|14.8|30.7| |TIES(1,0.4)|15.1|29.7| |TIES(1.6,0.2)|14.5|31.4| |Dare-TIES(0.8,0.2)|17.8|22.7| |Dare-Linear(0.8,0.2)|17.4|21.7| The results show that DARE performs comparably to linear merging and TIES, consistent with our finding that different merging methods yield similar performance (line 657), with none significantly outperforming simple averaging. This supports our choice to adopt the linear merging method in our paper. This is also consistent with the previous finding in [2] that different merging methods tend to exhibit similar behaviors at larger scales. Given this, we chose not to focus extensively on exploring alternative merging methods but instead to explore more about the interpretability and composition of the model’s internal abilities. Q2: >It would be more interesting to also include other model families or larger models. We focused on LLaVA for our interpretability experiments mainly due to resource limits. Specifically, we conducted masking-out experiments on each even layer (for MLP/Attention, respectively) across four configurations (e.g., masking LLaVA to LLaMA). 
These experiments, presented in Figures 4 and 5, involved a total of 128 runs. We use GPT-4 to evaluate these runs (via VLMEvalKit), which incurred a substantial cost of approximately $3,000. Due to the high computational cost, our study currently focuses on LLaVA. However, we recognize the value of extending this analysis to other model families and leave it as promising future work.

Q3:
> Lack of significance test…

Thank you for your valuable feedback! We acknowledge that some of the performance improvements in Table 2 fall within a 1% margin. We conducted t-tests to compare the results of the merged model with the base LLaVA. The resulting p-values are presented in the table below, with statistically significant results (p < 0.05) shown in bold. Merging general reasoning models such as Mammoth2 and Magpie does not lead to statistically significant improvements, showing a similar pattern of marginal accuracy gains as in Table 1 of our paper. In contrast, merging Math LLMs, such as Dart-Prop, leads to statistically significant improvements. This aligns with our conclusion in the paper (line 204) that merging with math-related models provides the greatest benefit.

|**Model**|**Method**|**MathVista**|**MathVerse**|||||**MMStar**|**DM**|
|-|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|||Math|T-D|T-L|V-I|V-D|V-O|Math||
|LLaVA|+Dart-Uniform|0.15|**0.00**|**0.00**|**0.00**|**0.02**|0.46|0.56|0.21|
||+MAmmoTH-2|0.77|0.58|0.30|**0.04**|0.70|0.35|1.00|1.00|0.82|1.00|
||+Magpie-v0.3|0.67|1.00|0.06|0.07|0.84|0.62|0.56|0.96|
||+Dart-prop|0.06|**0.00**|**0.01**|**0.00**|**0.01**|**0.02**|**0.05**|**0.00**|

Q4:
> I'm interested in how to leverage the findings that perception and reasoning abilities lie in different areas of VLMs, so that we can achieve better merging performance.

During our exploration, we wondered whether insights from the decomposition of perception and reasoning abilities could be leveraged to develop better merging methods, similar to the approach proposed by [1].
To investigate this, we experimented by keeping the early layers unchanged to preserve perceptual ability while only merging the later layers to enhance reasoning ability (see Q3 to Reviewer xnvF for details). Our results indicate that this approach does not yield any performance gains, so we opted for the simpler solution. Our finding is also consistent with previous findings from [2] that **different merging methods behave very similarly at larger scales**.

[1] Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models. Lucas et al. ICLR 2025.
[2] What Matters for Model Merging at Scale? Yadav et al. arXiv 2024.
[3] Language models are super mario: Absorbing abilities from homologous models as a free lunch. Yu et al. ICML 2024.

---

Rebuttal Comment 1.1:
Comment: Thank you for the detailed explanations. Most of my concerns are addressed. Regarding Q3, could you please provide a revised version of Table 2 to indicate p < 0.05 with an underscore?

---

Reply to Comment 1.1.1:
Comment: Thank you for your response! We present the updated Table 2 with significance tests included below (results with * indicate p < 0.05). Our conclusion remains consistent: merging with math-related models such as Dart yields greater improvements in reasoning ability compared to general reasoning models, as supported by the significance test results. We will update the table in our paper accordingly and thank you for your valuable suggestion!
|**Model**|**Method**|**MathVista**||**MathVerse Benchmarks**||||||**MMStar**||**DM**|**MV**|
|-|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|||All|Math|Overall|T-D|T-L|V-I|V-D|V-O|All|Math|||
|LLaVA|Baseline|37.4|25.4|20.1|25.9|20.8|21.1|16.5|16.0|43.8|30.0|22.7|13.8|
||+Dart-Uniform|**38.2**|28.3 ↑2.9|23.6 ↑3.5|**32.0***|**25.6***|25.4*|19.3*|**17.4**|42.5|31.6 ↑1.2|**24.5 ↑1.8**|15.8 ↑2.0|
||+MAmmoTH-1|36.7|25.4 ↑0.0|21.1 ↑1.0|26.9|23.1*|22.6*|16.6|16.4|**44.1**|30.8 ↑0.8|22.6 ↓0.1|15.8 ↑2.0|
||+MAmmoTH-2|37.4|25.7 ↑0.3|20.6 ↑0.5|26.0|22.0|22.1*|16.4|16.1|43.8|30.0 ↑0.0|22.5 ↓0.2|14.1 ↑0.3|
||+Magpie-v0.3|36.8|25.9 ↑0.5|20.7 ↑0.6|26.8|22.2|22.6|16.2|15.5|**44.1**|30.8 ↑0.8|22.7 ↑0.0|**16.4 ↑2.6**|
||+DeepSeek-R1-Distill|38.1|27.0 ↑1.6|22.1 ↑1.1|28.4|22.7|22.5|17.3|15.1|43.7|33.2 ↑3.2|24.3 ↑1.6*|15.1 ↑1.3|
||+Dart-prop|38.0|**28.9 ↑3.5***|**23.7 ↑3.6***|30.7*|24.8*|**25.5***|**19.5***|**17.4***|43.6|**33.6 ↑3.6***|**24.5 ↑1.8***|14.8 ↑1.0|
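For readers unfamiliar with the merging procedure evaluated throughout this thread, a minimal sketch of linear (task-arithmetic) merging is below. This is our illustrative reading of the $(\alpha_1, \alpha_2)$ weighting used above, written in plain Python on toy parameter dicts; real checkpoints are PyTorch state dicts, and this is not the authors' code.

```python
# Hypothetical sketch of linear model merging: combine a VLM's language
# backbone and a math-specialized LLM, both fine-tuned from a shared base.

def task_vector(finetuned, base):
    """Per-parameter delta between a fine-tuned model and the shared base."""
    return {k: [f - b for f, b in zip(finetuned[k], base[k])] for k in base}

def linear_merge(base, experts, alphas):
    """merged = base + sum_i alpha_i * (expert_i - base)."""
    merged = {k: list(v) for k, v in base.items()}
    for expert, alpha in zip(experts, alphas):
        delta = task_vector(expert, base)
        for k in merged:
            merged[k] = [m + alpha * d for m, d in zip(merged[k], delta[k])]
    return merged

base = {"layer0.w": [1.0, 2.0]}       # shared LLaMA-style base
vlm = {"layer0.w": [1.5, 2.0]}        # visually instruction-tuned model
math_llm = {"layer0.w": [1.0, 3.0]}   # math fine-tuned model

# alpha1 = 1.0 keeps the VLM's delta in full; alpha2 = 0.4 adds a scaled
# portion of the math model's delta on top.
merged = linear_merge(base, [vlm, math_llm], alphas=(1.0, 0.4))
```

Roughly speaking, TIES and DARE differ from this only in how the task vectors are sparsified and rescaled before the weighted sum.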
Summary: This paper investigates the impact of integrating math-specific LLMs into VLMs through model merging. The experimental results demonstrate that this approach effectively transfers reasoning abilities from math-specific LLMs to VLMs in a training-free manner. Furthermore, the authors conduct extensive experiments to analyze the distribution of perception and reasoning abilities within the model’s parameter space by modifying different layers. Their findings indicate that perception capabilities primarily originate from the early layers, while reasoning abilities are predominantly derived from the middle-to-late layers. Additionally, they conclude that merging with reasoning models enhances the mathematical reasoning ability across most layers of the model. Claims And Evidence: Most claims are supported by corresponding empirical evidence. However, the authors assert that "after merging, we observe that all layers begin to contribute to reasoning, whereas the distribution of perception abilities across layers remains largely unchanged." I could not find any text discussing this observation in further detail. Methods And Evaluation Criteria: Model merging is a common and effective approach in the field of vision-language models. Theoretical Claims: This paper is empirically focused, with minimal theoretical equations or claims. Experimental Designs Or Analyses: The experimental results (e.g., Tables 2 and 3) and analysis (e.g., Figures 4 and 5) are detailed and well-presented. I appreciate the thorough analysis of Figures 4 and 5. However, I find the conclusion stated in line 375—"Merging with Reasoning Models Enhances Almost All Layers’ Math Reasoning Ability"—to be inconsistent with Finding 4, which states, "The contribution of the middle-to-late MLP layers and almost all attention layers to math reasoning has increased." The observation that more layers show an impact on accuracy does not necessarily imply an improvement in their reasoning ability. 
Instead, I believe this section presents a more robust analysis rather than definitive evidence of enhanced reasoning ability. Supplementary Material: I have reviewed Appendix C, which presents the results of an alternative merging method TIES. Relation To Broader Scientific Literature: This paper demonstrates that integrating math-specific LLMs can consistently enhance the reasoning capabilities of VLMs, offering valuable insights into the field of multimodal integration. Essential References Not Discussed: As far as I know, most essential references have been appropriately discussed in the manuscript. Other Strengths And Weaknesses: Strengths: The paper provides a comprehensive analysis of integrating the reasoning abilities of large language models (LLMs) into vision-language models (VLMs). It includes detailed performance comparisons and insightful visualization analyses, effectively highlighting the impact of this integration. Weaknesses: A major concern is the experimental setting. Although merging LLMs with extensive mathematical knowledge into VLMs is rational, given that math abilities are crucial for logical reasoning, it is unclear why the authors did not explore the use of other powerful logical LLMs. Other Comments Or Suggestions: Some of the character sizes of figures, particularly in Figures 4 and 5, appear to be too small, which may make them difficult for readers to interpret. Questions For Authors: I am curious about the performance improvements observed in Figures 4 and 5 when modifying or replacing the model parameters. Intuitively, such changes would be expected to either maintain or decrease performance rather than enhance it. Could you clarify this discrepancy? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your thorough review and detailed comments! We address your questions below.

Q1:
> The authors assert that "after merging, we observe that all layers..whereas the distribution of perception abilities across layers remains largely unchanged." I could not find any text discussing this observation.

Thanks for your advice! This observation is briefly discussed in Section 4 (lines 376–383) of the paper. After merging with Dart, nearly all layers exhibit a larger performance drop in math reasoning tasks compared to LLaVA, suggesting that reasoning capabilities are more widely integrated across layers post-merging. The statement that the "distribution of perception abilities across layers remains largely unchanged" is an **implicit deduction**.
- The higher influence of all layers on the math reasoning task (Math subset of MathVista) implies that reasoning is layered on top of perception abilities, rather than replacing them. Since this task requires both perception and reasoning, the observation that early (perception-focused) layers continue to show sensitivity under masking supports this interpretation. If reasoning overrode perception, the influence of early layers would diminish, but this is not observed.
- Besides, in general VQA tasks (see left side of ⑤ and ⑦ vs. ⑥ and ⑧), we observe no substantial shift in the behavior of individual layers after merging. This further supports the idea that perception-related representations remain stable.

We appreciate your feedback and will revise the discussion in Section 4 of the next version to clarify this point.

Q2:
> More layers show an impact on accuracy does not necessarily imply an improvement in their reasoning ability.

Thank you for your thoughtful feedback! We agree that the original phrasing may convey a stronger claim than intended. We will revise it to clarify that our analysis suggests a potential correlation, not conclusive evidence.
Specifically, we hypothesize that the presence of more layers may correlate with improved reasoning performance, as the amount of reasoning-related latent knowledge stored across layers may also increase.

Q3:
> It is unclear why the authors did not explore the use of other powerful logical LLMs.

Thank you for your valuable feedback. We share your interest in exploring the generalization of reasoning capabilities across domains, including logical reasoning.
- Our paper already includes diverse LLMs trained on various reasoning tasks. Specifically, we incorporated Magpie, which covers planning and coding tasks (see [1] for details), and Mammoth2, a model trained on various reasoning domains including Socratic reasoning, a form of logical reasoning. A summary of the training data is shown below.

|Domain|Size|Subjects|
|-|-|-|
|MathStackExchange|1484630|Mathematics|
|ScienceStackExchange|317209|Physics, Biology, Chemistry, Computer Science|
|Socratic|533384|Mathematics, Science, Humanities (Logical)|

Performance results for both are in Table 1 of our paper. They show only minor gains, suggesting that combining general-purpose reasoning models remains insufficient for strong performance on math-heavy benchmarks.
- To explore logical reasoning further, we added an experiment fine-tuning a LLaMA3-8B on LogiCoT [3] (a logical chain-of-thought dataset) and merging it with LLaVA. As the results below show, this purely logic-focused approach helps on the math-reasoning tasks, indicating good generalization ability; we will update our paper with these results.

|**Model**|**Method**|**MathVista**|||**MathVerseBenchmarks**||||||**MV**|
|-|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|||All|General|Math|Overall|T-D|T-L|V-I|V-D|V-O|||
|LLaVA|Baseline|37.4|51.7|25.4|20.1|25.9|20.8|21.1|16.5|16.0|13.8|
||+logic|37.0|49.1|26.7↑1.3|23.5↑3.4|30.5|25.0|26.4|20.3|15.1|11.2↓2.6|

Q4:
> Some of the character sizes of figures appear to be too small.

Thank you for the feedback!
We will enlarge the font sizes in these subfigures to ensure clarity in the revised version.

Q5:
> I am curious about the performance improvements observed in Figures 4 and 5 when modifying or replacing the model parameters.

For instance, in the math-targeted VQA tasks ① and ③ (right panel) shown in Figure 4, performance notably improves after masking by LLaMA, indicating that the image SFT training (designed to equip LLaVA with image embedding comprehension) **may overemphasize learning image perception**, causing forgetting that diminishes LLaMA’s inherent mathematical reasoning skills. Restoring the original parameters recovers this capability, highlighting the need to preserve strong reasoning skills in VLMs to counteract the domain shift introduced by image training.

[1] Magpie: Alignment Data Synthesis From Scratch By Prompting Aligned Llms With Nothing. Zhangchen et al. ICLR 2025.
[2] MAmmoTH2: Scaling Instructions from the Web. Xiang et al. NeurIPS 2024.
[3] LogiCoT: Logical Chain-of-Thought Instruction Tuning. Liu et al. EMNLP 2023.

---

Rebuttal Comment 1.1:
Comment: Thank you for the authors' detailed response. I am still curious, however, about which type of LLM is more effective for improving multi-modal reasoning and perceptual abilities: logic-focused LLMs or math-based LLMs?

---

Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful response! To enhance reasoning capabilities for VLMs, we recommend merging with math-based LLMs for the following reasons:
- Merging with a purely logical LLM is not as effective as a pure-math LLM: To make our conclusions clearer, we put together our additional experiment results for LogiCoT [1] with the results for Dart-Prop in the table below.
| **Model** | **Method** | **MathVista** | | | **MathVerseBenchmarks** | | | | | | **MV** |
|-----------|------------|:-------------:|:---:|:---:|:------------------------:|:---:|:---:|:---:|:---:|:---:|:------:|
| | | All | General | Math | Overall | T-D | T-L | V-I | V-D | V-O | |
| LLaVA | Baseline | 37.4 | 51.7 | 25.4 | 20.1 | 25.9 | 20.8 | 21.1 | 16.5 | 16.0 | 13.8 |
| | +logic | 37.0 | 49.1 | 26.7 ↑1.3 | 23.5 ↑3.4 | 30.5 | 25.0 | 26.4 | 20.3 | 15.1 | 11.2 ↓2.6 |
| | +Dart-prop | 38.0 | 48.7 | 28.9 ↑3.5 | 23.7 ↑3.6 | 30.7 | 24.8 | 25.5 | 19.8 | 17.4 | 14.8 ↑1.0 |

We observe that while merging with a purely logical LLM offers benefits on certain benchmarks, such as MathVerse and the Math subset of MathVista, it still lags behind merging with the math-specific LLM, Dart-Prop, in almost all cases.
- Merging with general-purpose reasoning LLMs, which contain both logical and mathematical domain knowledge, also proves less effective. As shown in Table 2, our experimental results indicate that general-purpose reasoning LLMs such as Magpie and Mammoth2 also show limited improvement compared with Dart. This indicates that merging with an LLM trained on mixed knowledge, including logical, mathematical, and other reasoning domains, does not outperform merging with an LLM specialized in mathematical reasoning.

Regarding perception ability, we believe that merging can preserve core perceptual capabilities but does not improve perception itself, because the injected knowledge from LLMs primarily targets reasoning and does not involve perception-related information.

Thank you again for your time and valuable comments. We will incorporate these discussions into our paper according to your suggestions.

[1] LogiCoT: Logical Chain-of-Thought Instruction Tuning. Liu et al. EMNLP 2023.
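The layer-wise masking analysis referenced in this thread (the Figures 4 and 5 experiments, where a merged model's layer is replaced by the base model's parameters and the benchmark is re-run) can be sketched as follows. The `layers.{i}.{kind}.` key layout and the toy "models" are illustrative assumptions, not the actual LLaVA/LLaMA checkpoints.

```python
# Hypothetical sketch of the masking-out analysis: swap one layer's MLP (or
# attention) parameters back to a reference model's and measure the accuracy
# change attributable to that layer.

def mask_layer(model, reference, layer, kind):
    """Copy `model`, overwriting the parameters of `kind` ('mlp' or 'attn')
    at index `layer` with the matching parameters from `reference`."""
    prefix = f"layers.{layer}.{kind}."
    return {k: (list(reference[k]) if k.startswith(prefix) else list(v))
            for k, v in model.items()}

# Toy two-parameter "models": the image-tuned VLM and its base LLM.
llava = {"layers.0.mlp.w": [1.0], "layers.0.attn.w": [2.0]}
llama = {"layers.0.mlp.w": [9.0], "layers.0.attn.w": [8.0]}

# Mask layer 0's MLP back to the base model; attention weights are kept.
masked = mask_layer(llava, llama, layer=0, kind="mlp")
```

Evaluating `masked` against `llava` on a benchmark then attributes the accuracy change to that layer's MLP, which is how the per-layer perception/reasoning contributions discussed above are estimated.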
TINED: GNNs-to-MLPs by Teacher Injection and Dirichlet Energy Distillation
Accept (poster)
Summary: This paper proposes a layer-wise method with Dirichlet energy imitation for node-level knowledge distillation from a GNN teacher to an MLP student, to reduce inference latency for time-sensitive applications.
Claims And Evidence: The problem is well-defined, and the claims in the introduction section are clear and supported by experimental results.
Methods And Evaluation Criteria: The layer-wise distillation method is straightforward and makes sense, but why the Dirichlet energy ratio is a suitable quantity for distillation needs more justification.
Theoretical Claims: I believe the proof is correct; however, the error bound can be loose since $\lambda_{max}$ can be a large quantity, especially for graphs with skewed node degrees, making the theoretical result not very valuable.
Experimental Designs Or Analyses: The experimental design is detailed and comprehensive; however, it lacks a more recent baseline, VQGraph, which has been mentioned in the related work section.
Supplementary Material: I have read the proof and additional experimental results.
Relation To Broader Scientific Literature: Compared to previous approaches, this paper proposes layer-wise distillation using an energy ratio for knowledge transfer.
Essential References Not Discussed: No.
Other Strengths And Weaknesses:
Strengths: The experiments are comprehensive, and most recent baselines are included. This work proposes layer-wise distillation based on the Dirichlet energy ratio, which quantifies the smoothness of the node embeddings.
Weaknesses:
1. The use of the Dirichlet energy ratio is not well justified, i.e., why it is appropriate for knowledge transfer and what makes it effective enough to outperform previous methods.
2. The improvements compared to previous baselines are not very significant, and some baselines are missing (e.g., VQGraph).
Other Comments Or Suggestions: N/A
Questions For Authors: 1.
The biggest concern is the justification of using the Dirichlet energy ratio as the quantity for knowledge transfer, and the reason why it is more effective than previous SOTA methods.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal:
### Thank you for appreciating the strengths of our work. Below are our responses to address your important comments.

> **C1.** The justification of using the Dirichlet energy ratio as the quantity for knowledge transfer, and the reason why it is more effective than previous SOTA methods.

**Response:** Thank you for the insightful comment. Please find our justifications below.

(i) The goal of distillation is to enable the student MLP to preserve the GNN teacher's knowledge as much as possible. Our analysis reveals a crucial finding: the Feature Transformation (FT) and Graph Propagation (GP) operations in a GNN layer often exhibit opposing smoothing effects. GP tends to be aggressive, while FT is more restrained in smoothing, as shown in Figure 1 of the paper. These distinct smoothing patterns represent important teacher knowledge that the student should preserve. We achieve this by leveraging the proposed Dirichlet Energy Distillation (DED), where we use the well-recognized smoothness measure, Dirichlet Energy, to design the DE ratio. This DED technique is validated to be effective in the experiments.

(ii) Moreover, in our design, the parameters of the FT operations in the teacher GNN are directly injected into the student by the teacher injection technique (TIN), while the GP operations are not compatible with the FC layers in the student MLP; the DED technique with the Dirichlet Energy ratio helps distill the GP operations. Therefore, *our methods, combining TIN and DED, work together* to achieve better performance than existing methods, as validated by extensive experimental results and ablation studies.

> **C2.** I believe the proof is correct. The error bound can be loose on graphs with skewed node degrees.

**Response:** Thank you for the insightful comment. As stated in our paper, the value of our analysis lies in its attempt to establish a theoretical relationship for using simple MLP layers to approximate complex GNNs.
Our theoretical analysis focuses on the general case across all possible graphs, with the error being bounded by the largest eigenvalue. We acknowledge that this bound can be large in scenarios where the node degree distribution is skewed. In experiments, the observed actual error is typically much smaller than the theoretical result, as validated in Table 3 of the paper.

> **C3.** The experiments are comprehensive, and most recent baselines are included. Compare with a recent baseline VQGraph, which has been mentioned in the related work section.

**Response:** Thank you for your feedback on the comprehensiveness of our experiments. As suggested, we have included a comparison with VQGraph. We thoroughly searched all the hyperparameter spaces specified in the original VQGraph paper (Table 12 of VQGraph). Specifically, VQGraph has the following hyperparameter search space for the teacher SAGE with codebook: max_epoch $\in$ \{100, 200, 500\}, hidden_dim $=$ 128, dropout_ratio $\in$ \{0, 0.2, 0.4, 0.6, 0.8\}, learning_rate $\in$ \{0.01, 1e-3, 1e-4\}, weight_decay $\in$ \{1e-3, 5e-4, 0\}, codebook_size $\in$ \{8192, 16384\}, lamb_node $\in$ \{0, 0.01, 0.001, 1e-4\}, and lamb_edge $\in$ \{0, 1e-1, 0.03, 0.01, 1e-3\}. For distillation, the hyperparameter search space of VQGraph is: max_epoch $\in$ \{200, 500\}, norm_type $\in$ \{“batch”, “layer”, “none”\}, hidden_dim $=$ 128, dropout_ratio $\in$ \{0, 0.1, 0.4, 0.5, 0.6\}, learning_rate $\in$ \{0.01, 5e-3, 3e-3, 1e-3\}, weight_decay $\in$ \{5e-3, 1e-3, 1e-4, 0\}, lamb_soft_labels $\in$ \{0.5, 1\}, and lamb_soft_tokens $\in$ \{1e-8, 1e-3, 1e-1, 1\}.

The table below reports the results of VQGraph and our method TINED+. Observe that our method outperforms VQGraph on the datasets. We will include this comparison in the paper.

**Table A** Results of VQGraph and our TINED+ with SAGE as teacher in the transductive setting. The best result is in italics.
||Cora|Citeseer|Pubmed|Computer|Photo|
|-|-|-|-|-|-|
|VQGraph|78.66|74.66|73.02|80.16|92.32|
|TINED+|*83.70*|*75.39*|*77.75*|*84.82*|*94.05*|
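As a toy illustration of the Dirichlet Energy ratio discussed in C1: the sketch below uses the common unnormalized form of Dirichlet Energy and a mean-neighbor aggregation as the GP operation. This is an assumption for illustration; the paper's exact definitions (Definition 4.3, Eq. (4)) may differ in normalization.

```python
def dirichlet_energy(x, edges):
    # Unnormalized Dirichlet Energy: sum of squared distances between
    # endpoint embeddings over an undirected edge list.
    return sum(sum((x[i][d] - x[j][d]) ** 2 for d in range(len(x[i])))
               for i, j in edges)

def mean_aggregate(x, edges):
    # A toy graph-propagation (GP) step: average each node with its neighbors.
    n = len(x)
    nbrs = {v: {v} for v in range(n)}
    for i, j in edges:
        nbrs[i].add(j)
        nbrs[j].add(i)
    return [[sum(x[u][d] for u in nbrs[v]) / len(nbrs[v])
             for d in range(len(x[v]))] for v in range(n)]

x = [[0.0], [1.0], [2.0]]    # toy 1-D node embeddings on a 3-node path graph
edges = [(0, 1), (1, 2)]

# DE ratio of the operation = energy after / energy before.
de_ratio = (dirichlet_energy(mean_aggregate(x, edges), edges)
            / dirichlet_energy(x, edges))
```

A ratio below 1 indicates the operation smooths embeddings, as the rebuttal argues GP typically does, while FT operations can yield ratios near or above 1 (restrained or diversifying).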
Summary: This paper addresses the GNN2MLP distillation task, which aims to transfer knowledge from computationally expensive Graph Neural Networks (GNNs) to more efficient Multilayer Perceptrons (MLPs) for faster inference on graph-structured data. Claims And Evidence: Yes Methods And Evaluation Criteria: Overall, the evaluation makes sense. But the hyperparameter search space is too broad. Theoretical Claims: Overall, the theoretical results look good to me. Experimental Designs Or Analyses: The reviewer has checked Section 5. The comparison with vanilla MLPs is unfair since TINED's MLP has more layers. Supplementary Material: The reviewer has checked the proof and additional experiments. Relation To Broader Scientific Literature: The contributions of this work are related to GNN-to-MLP distillation and Dirichlet energy analysis. Essential References Not Discussed: N/A Other Strengths And Weaknesses: # Strengths: This work effectively categorizes the key components of a GNN layer into two distinct parts based on their operational patterns: feature transformation and message passing. For feature transformation, which shares the same structural knowledge with student MLPs, the authors directly transfer the parameters. For message passing, which constitutes the core functionality of GNNs, the authors identify its distinct smoothing effect compared to feature transformation. # Weaknesses: 1. The term "fine-tuning" (Equation 6) is ambiguously defined. The paper lacks clarity on how the parameter $\eta$ balances between inherited and learned parameters, which requires further elaboration. 2. This work focuses primarily on performance optimization rather than efficiency. The inference time of TINED does not show any improvement compared to other distillation baselines. Other Comments Or Suggestions: There are typos in the manuscript. For example: "nornamlization" and "hyperparamrter". 
Questions For Authors: Can TINED effectively handle dynamic graphs in which node features or graph structure evolve over time? If not, what modifications would be necessary to accommodate such scenarios? The Dirichlet Energy ratio of an operation quantifies smoothing effects but may potentially conflate multiple factors, such as softmax functions and batch/layer normalization between layers. How does TINED ensure that the computed DE ratios accurately reflect the intended smoothing/diversification effects of specific operations? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### Thank you for acknowledging the strengths of our work. Below are our responses addressing your important comments. > **C1.** The hyperparameter search space. **Response:** We clarify that we mainly adopt the conventional hyperparameter search space for the teacher model and the student model, following existing studies. For the hyperparameters of our method, we also employ grid search within a limited space. Detailed information about the hyperparameters is provided in the appendix. > **C2.** Compare with vanilla MLP with the same number of layers as TINED. **Response:** As suggested, in addition to the vanilla MLP with 2 layers, we also compare with a 4-layer MLP, denoted as MLP*. In **Table A** and **B** below, under both transductive and production settings, our method, TINED, outperforms MLP* across all datasets. Moreover, compared to the MLP with 2 layers, MLP* shows degraded performance, indicating that additional layers cause the MLP to overfit on these datasets. **Table A: Compare with MLP\* with 4 layers under transductive setting. 
The results are averaged over 10 runs and reported with standard deviation.**

| | cora | citeseer | pubmed | a-computer | a-photo |
|-|-|-|-|-|-|
| MLP | 60.84±1.08 | 63.41±1.96 | 69.41±2.88 | 70.07±1.77 | 80.19±1.48 |
| MLP\* | 58.20±3.16 | 60.81±3.48 | 67.61±1.51 | 66.84±3.03 | 78.66±2.48 |
|TINED |82.63±1.57| 74.43±1.53 | 77.09±2.14 | 85.18±1.12 | 93.97±0.53|

**Table B: Compare with MLP\* with 4 layers under the *prod* (*ind* & *tran*) setting**

| | setting | cora | citeseer | pubmed | a-computer | a-photo |
|-|-|-|-|-|-|-|
| MLP | *ind* | 61.31±2.16 | 63.95±2.95 | 69.66±2.68 | 70.36±2.48 | 79.76±2.00 |
| | *tran* | 60.88±1.41 | 62.99±2.39 | 69.67±2.61 | 69.92±2.03 | 79.53±2.05 |
| MLP\* | *ind* | 58.67±2.42 | 62.15±3.50 | 67.76±1.88 | 68.09±2.60 | 77.27±2.17 |
| | *tran* | 58.12±1.81 | 61.46±2.46 | 68.07±1.87 | 67.90±2.41 | 77.05±2.48 |
|TINED | *ind* | 74.38±1.28 | 72.68±1.97 | 75.64±3.02 | 82.83±1.45 | 91.96±0.72|
| | *tran* | 80.04±1.50 | 72.20±1.66 | 75.83±2.81 | 84.87±1.38 | 93.74±0.51|

> **C3.** Clarify fine-tuning with $\eta$ (Equation 6).

**Response:** In Eq. (6), the parameter $\eta$ controls the balance between the inherited teacher knowledge and the learned parameters during fine-tuning. A small $\eta$ tends to let the student make fewer changes to the inherited teacher parameters, and as $\eta$ increases, fine-tuning updates the parameters more aggressively. As shown in **the experiments in the response to comment C1 of Reviewer qs85**, when varying $\eta$, a mild $\eta$ setting can strike a good balance, leading to better performance.

> **C4.** This work focuses primarily on performance optimization rather than efficiency.

**Response:** Our goal is to achieve better effectiveness than existing distillation methods, ensuring that the student MLP operates significantly faster than the teacher GNN, while maintaining comparable efficiency to existing methods.
As shown in Figure 3 of the paper, our methods TINED and TINED+ achieve the best trade-off between accuracy and inference time. For example, on CiteSeer, while the teacher requires *153.14 milliseconds (ms)* for inference, our methods, TINED and TINED+, achieve the highest accuracy at the cost of just *1.63 ms*. All the distillation methods maintain the same order of efficiency, around *1-2 ms*. > **C5.** Can TINED effectively handle dynamic graphs? If not, what modifications would be necessary? **Response:** In this work, we focus on static GNNs, but we agree that distillation on dynamic GNNs is a promising direction. We believe that the proposed Teacher Injection (TIN) and Dirichlet Energy Distillation (DED) techniques are compatible with dynamic GNNs, such as DynGNN, EvolveGCN, and TGAT. For dynamic GNNs, TIN can be adapted to periodically inject temporal parameters from the teacher into the student, and DED can be performed adaptively since the DE ratio can be efficiently calculated on new graph snapshots. We leave a detailed investigation of these adaptations for future work. > **C6.** How does TINED ensure that the computed DE ratios accurately reflect the intended smoothing/diversification effects of specific operations? **Response:** We clarify that the operation considered by the DE ratio in Definition 4.3 is the GP operation in Eq. (4), which performs aggregation and concatenation, excluding subsequent softmax and batch normalization. Our formulation precisely controls the computation boundary to prevent conflation with other factors, ensuring that the DE ratio accurately reflects only the smoothing/diversification effect of the GP operation itself. DE is a widely adopted measure in the literature for assessing the smoothing effect of GNNs [1], making it a reasonable choice for designing our DE ratio. [1] T Konstantin Rusch, Michael M Bronstein, and Siddhartha Mishra. A survey on oversmoothing in graph neural networks. 2023.
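Since Eq. (6) is not reproduced in this thread, the sketch below is only one plausible reading of the $\eta$-balanced fine-tuning described in C3: a plain gradient step followed by interpolation back toward the injected teacher parameters. The function name and the interpolation form are our assumptions, not the paper's definition.

```python
def finetune_step(w_teacher, w_current, grad, eta, lr=0.1):
    # Hypothetical sketch: take a gradient step, then pull the result back
    # toward the inherited teacher weights. eta = 0 freezes the injected
    # parameters; eta = 1 lets the learned update dominate.
    w_learned = [wi - lr * gi for wi, gi in zip(w_current, grad)]
    return [(1 - eta) * wt + eta * wl for wt, wl in zip(w_teacher, w_learned)]

w_teacher = [1.0]  # parameter injected from the teacher's FT layer (TIN)
frozen = finetune_step(w_teacher, [1.0], grad=[1.0], eta=0.0)   # stays at teacher value
adapted = finetune_step(w_teacher, [1.0], grad=[1.0], eta=1.0)  # fully follows the update
```

This matches the qualitative behavior described in the response: a mild $\eta$ keeps the student close to the injected teacher parameters while still allowing adaptation.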
Summary: This paper introduces TINED, a method for distilling Graph Neural Networks (GNNs) into Multi-Layer Perceptrons (MLPs) via layer-wise Teacher Injection and Dirichlet Energy Distillation (DED). The key idea is to directly inject parameters from GNN feature transformation layers into MLP layers and use DED to preserve opposing smoothing effects of GNN operations, specifically feature transformation (FT) and graph propagation (GP). Theoretical bounds on approximating GP with MLP layers are also provided. Experiments conducted on seven datasets demonstrate that TINED outperforms both original GNN models and prior distillation methods. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence The authors present the following key claims: 1. "The valuable knowledge of a teacher GNN is preserved in the well-trained parameters of its FT and GP operations." (line 50-53). This is substantiated by the ablation study (Table 6) and main benchmark results (Tables 1 and 2). 2. "The FT and GP operations in a GNN layer often exert opposing smoothing effects: GP aggressively smooths node embeddings, while FT is more restrained and can even diversify embeddings." (line 84-87). This is validated through DE ratio analysis (Figures 1 and 7). Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem at hand. The authors develop their methodology based on a systematic analysis of the GNN2MLP distillation problem. Their approach effectively addresses the identified challenges: preserving knowledge in teacher layers and managing the disparity between the FT and GP operations. The evaluation employs widely recognized benchmarks to assess the performance of the proposed method. The authors distinguish between two inference settings: graph-dependent and graph-free. This distinction corresponds to two practical scenarios: warm start and cold start, respectively. 
In cold start scenarios (where new nodes have no edges), graph information is inaccessible during inference. Conversely, in warm start scenarios (where new test nodes have edges), limited graph information can be utilized during inference, as demonstrated by [1]. While the authors provide a brief discussion of this distinction (line 185-190), a more comprehensive elaboration would be beneficial. [1] Tian, Y., Zhang, C., Guo, Z., Zhang, X., & Chawla, N. (2023). Learning MLPs on graphs: A unified view of effectiveness, robustness, and efficiency. In International Conference on Learning Representations. Theoretical Claims: The theoretical claim regarding the bound on the approximation error of using MLPs to approximate GP (Theorem 4.1) has been verified and found to be mathematically sound. Experimental Designs Or Analyses: Yes, I have checked the soundness of the experimental designs and analyses. The comprehensive evaluation framework incorporates both transductive and inductive/production settings (Tables 1 and 2). The authors present a thorough set of experimental results, including: ablation studies (Table 6), TSNE visualization of learned embeddings (Figure 5), approximation errors of MLPs (Table 3), inference time comparisons (Figure 3), learned DE ratio analysis (Figure 6), parameter sensitivity analysis (Tables 4 and 5), and performance evaluations across different teacher models (Figure 4). Supplementary Material: N.A. Relation To Broader Scientific Literature: The work is appropriately situated within the existing literature: 1. The GNN2MLP pipeline and evaluation benchmarks build upon established research from previous works [1] and [2]. 2. The concept of distilling layer-wise teacher structures has been explored in NLP and CV literature. The authors extend this approach to the GNN2MLP distillation domain. 3. The analysis of smoothing effects using Dirichlet energy was introduced in [3], where it was applied to entire non-decoupled GNN layers. 
This work makes a novel contribution by separately analyzing the FT and GP operations within GNN layers and leveraging their distinct smoothing properties to enhance the distillation process. [1] Zhang, S., Liu, Y., Sun, Y., & Shah, N. (2022). Graph-less Neural Networks: Teaching Old MLPs New Tricks Via Distillation. In International Conference on Learning Representations. [2] Tian, Y., Zhang, C., Guo, Z., Zhang, X., & Chawla, N. (2023). Learning MLPs on graphs: A unified view of effectiveness, robustness, and efficiency. In International Conference on Learning Representations. [3] Rusch, T. K., Bronstein, M. M., & Mishra, S. (2023). A survey on oversmoothing in graph neural networks. arXiv preprint arXiv:2303.10993. Essential References Not Discussed: The research by Winter et al. [1] examines the underlying mechanisms contributing to the effectiveness of distillation methods. Including this work in the discussion would provide additional context and insights. [1] Winter, D., Cohen, N., & Hoshen, Y. (2024). Classifying nodes in graphs without GNNs. arXiv preprint arXiv:2402.05934. Other Strengths And Weaknesses: Strengths: - Originality: The investigation into GNN's internal layer structure represents a novel contribution. The application of Dirichlet energy to analyze the intrinsic properties of different components within GNN layers is particularly innovative. - Significance: The identification of opposing smoothing effects between FT and GP operations in GNN layers constitutes an important finding. This insight not only enhances distillation efficiency but also contributes to the broader understanding of GNN architectures, as most GNN models can be analyzed within this framework. - Clarity: The manuscript is well-written, with the methodology presented in a clear and systematic manner. - Soundness: The authors provide comprehensive experimental evidence to support their claims. Weaknesses: 1.
The approach requires fixed layer correspondence between teacher and student models. TINED appears unable to generalize to scenarios where teacher and student architectures have significantly different layer counts (e.g., distilling a 4-layer GNN into a 2-layer MLP). An extension of TINED to address this limitation would be valuable. 2. The experimental evaluation focuses exclusively on homophilous graphs. The performance of TINED on heterophilous graphs remains unexplored. Other Comments Or Suggestions: N.A. Questions For Authors: The following questions merit consideration: 1. (Teacher Injection Mechanism): The direct injection of FT parameters from GNNs into MLP layers raises concerns about potential overfitting of the student MLP to the teacher's initialization, particularly if the teacher model is suboptimal. How does the proposed approach address this risk? 2. (Distillation of parameterized GP operations): The handling of parameterized GP operations, such as GAT's attention module, warrants further explanation. Can this teacher knowledge be effectively transferred to the student MLP? This represents an important consideration, as GNN knowledge resides not only in FT layers but also in the parameters of GP operations. 3. (DED on large graphs): For large-scale graphs such as Products, subgraph sampling ($\zeta$) is employed to approximate Dirichlet Energy. What is the impact of $\zeta$ on the fidelity of DE ratio calculations? 4. (TINED+): The description of the "with graph structure" version of TINED (TINED+) lacks clarity. Does this implementation simply incorporate NOSMOG techniques on top of the base TINED architecture? Code Of Conduct: Affirmed. Overall Recommendation: 4
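To make the Teacher Injection mechanism raised in Question 1 concrete, here is a minimal hypothetical sketch (an illustration only, not the paper's code): each teacher layer's FT weight is copied verbatim into a student FC layer, while each GP operation is replaced by a freshly initialized square FC layer that would subsequently be trained by distillation. The function names, the dictionary layout, and the near-identity GP initialization are all assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_teacher_layer(d_in, d_out):
    # A teacher GNN layer holding a learned feature-transformation (FT)
    # weight; graph propagation (GP) is treated as parameter-free here.
    return {"ft_weight": rng.normal(size=(d_in, d_out))}

def inject(teacher_layers):
    # Build the student MLP: each teacher layer contributes two FC layers,
    # the FT weight copied verbatim plus a near-identity square FC layer
    # standing in for GP, to be trained by distillation afterwards.
    student = []
    for layer in teacher_layers:
        student.append(layer["ft_weight"].copy())  # injected FT parameters
        d_out = layer["ft_weight"].shape[1]
        student.append(np.eye(d_out) + 0.01 * rng.normal(size=(d_out, d_out)))
    return student

teacher = [make_teacher_layer(16, 32), make_teacher_layer(32, 7)]
student = inject(teacher)
assert len(student) == 4                                 # two FC layers per teacher layer
assert np.allclose(student[0], teacher[0]["ft_weight"])  # FT copied exactly
```

After injection, only the GP-surrogate layers (and, with a small learning rate, the injected FT layers) would be updated by the distillation losses; the sketch stops before that training step.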
Rebuttal 1: Rebuttal: ### Thank you for recognizing the strengths of our work. Here are our responses to your important comments. > **C1.** Elaboration on deployment with and without graph dependency (line 185-190) **Response:** When performing inference on a new node with limited connections to a graph, such as a new user joining a social network or a new product being added to an online shopping platform, the node has little or no access to the existing graph structure. In such cases, the graph structure is unavailable to the node (without graph dependency), so inference relies solely on node features. Conversely, when conducting inference on nodes that have sufficient connections within the graph structure (with graph dependency), it is possible to leverage both node features and graph features. This allows for a graph-dependent inference approach, utilizing the rich information available from the node's connections within the graph. > **C2.** Generalize to student MLPs with various layer counts. **Response:** Thank you for the insightful comment. As shown in Appendix A.3, our approach for decoupled GNNs, such as APPNP in Eq. (17), consolidates all graph propagation (GP) operations into a single FC layer in the MLP, which can be easily extended to multiple layers. For traditional GNNs like SAGE, our design focuses on capturing layer-to-layer GNN knowledge within MLPs through the proposed techniques, Teacher Injection and Dirichlet Energy Distillation. Consequently, the number of layers in our method is related to the teacher model. One potential way to generalize is to explore layer fusion techniques, where multiple teacher layers are compressed into fewer student layers while maintaining their overall behavior. We will include this discussion in the revised paper. > **C3.** The performance of TINED on heterophilous graphs remains unexplored.
**Response:** Please find the experiments on heterophilous graphs (Squirrel and Amazon-ratings) in the **response to comment C3 of Reviewer qs85**. The results show that TINED and TINED+ outperform existing approaches. As future work, we plan to develop dedicated techniques to further improve the effectiveness of heterophilic GNN distillation. > **C4.** (Teacher Injection Mechanism) How to handle when the teacher is suboptimal. **Response:** We clarify that the goal of knowledge distillation is to create a student model that closely mimics the teacher model; whether the teacher model is suboptimal is not the focus. After injecting the teacher parameters of the FT operations into the student, the MLP layers in the student are further fine-tuned in Eq. (6) and trained with Dirichlet Energy Distillation and the overall loss in Eqs. (8, 9). These distillation training steps on the student MLP also mitigate the mentioned risk and improve student performance. > **C5.** (Distillation of parameterized GP operations) Explanation of the handling of the parameterized GP operations. **Response:** As shown in Eq. (4) in Section 4.1 and Eqs. (15, 16, 17) in Appendix A.3, different GNNs have distinct GP operations, such as GAT with attention. Therefore, for these various GP operations, we choose to distill them into fully-connected (FC) layers in the MLP, without directly using these GP parameters, but employing carefully designed techniques including layer-wise Dirichlet Energy distillation. On the other hand, the FT operations in different GNNs share a similar formulation that is compatible with FC layers in MLPs, as shown in Eq. (4) and Appendix A.3. Thus, we can directly inject the parameters of the FT operations into the student model. > **C6.** The impact of $\zeta$ on large datasets to approximate the DE ratio **Response:** The parameter $\zeta$ controls the subgraph sampled to estimate the DE ratio, helping to avoid running out of memory on large datasets.
We vary $\zeta$ and report the results on the large ogbn-arxiv dataset in **Table A** below. Observe that our TINED maintains stable performance as $\zeta$ changes, and values of $\zeta$ larger than 0.8 need not be considered, which reduces computational overhead while maintaining distillation quality.

**Table A: Vary $\zeta$ on large graph ogbn-arxiv**

|$\zeta$|0.1|0.3|0.5|0.7|>0.8|
|-|-|-|-|-|-|
|ogbn-arxiv|64.31±0.71|64.32±0.91|64.38±0.75|*64.44±0.80*|OOM|

> **C7.** The implementation of the "with graph structure" version and NOSMOG.

**Response:** The phrase "with graph structure" refers to conducting inference on nodes that have sufficient connections within the graph, allowing the use of both node features and graph features. In this setting, we use DeepWalk node embeddings for observed nodes, and for unobserved nodes, we use the average positional encodings of their observed neighbors as their encoding, consistent with the approach in NOSMOG. We will include this clarification in the paper. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their rebuttal. Since my concerns have been resolved, I would like to increase my score to 4.
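The DE-ratio estimation discussed in C6 above can be illustrated with a minimal self-contained sketch (our illustration, not the authors' implementation). The unnormalized energy, the ring graph, and the deterministic prefix node sample (a real implementation would sample the $\zeta$-fraction at random) are simplifying assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def dirichlet_energy(X, edges):
    # DE(X) = sum over edges ||x_u - x_v||^2 (unnormalized, for simplicity).
    return sum(float(np.sum((X[u] - X[v]) ** 2)) for u, v in edges)

def de_ratio_sampled(X_in, X_out, edges, zeta, n_nodes):
    # Estimate DE(X_out)/DE(X_in) on the subgraph induced by a zeta-fraction
    # of the nodes; for determinism this sketch keeps the first zeta*n nodes.
    keep = set(range(max(1, int(zeta * n_nodes))))
    sub = [(u, v) for u, v in edges if u in keep and v in keep]
    return dirichlet_energy(X_out, sub) / dirichlet_energy(X_in, sub)

# Toy check: neighborhood averaging (a smoothing GP step) on a ring graph
# should drive the estimated DE ratio well below 1.
n, d = 50, 8
edges = [(i, (i + 1) % n) for i in range(n)]
X = rng.normal(size=(n, d))
X_smooth = np.stack([(X[(i - 1) % n] + X[i] + X[(i + 1) % n]) / 3 for i in range(n)])
ratio = de_ratio_sampled(X, X_smooth, edges, zeta=0.5, n_nodes=n)
assert 0.0 < ratio < 1.0
```

The point of the ablation in Table A is that such subsampled estimates stay close enough to the full-graph ratio for the distillation loss to work, while keeping memory bounded.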
Summary: This work proposes TINED, a method to distill knowledge from teacher GNNs into student MLPs. Extensive experiments show its good performance. Claims And Evidence: No Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: No, but the authors seem to provide the code. I did not check that. Supplementary Material: Appendix, experimental setting. Relation To Broader Scientific Literature: na Essential References Not Discussed: I am not familiar with this field. Other Strengths And Weaknesses: pros: 1. Well-written and easy to follow (figures are clear). 2. The topic (GNN distillation) is promising from a practical perspective. 3. The idea is simple and interesting. cons: 1. I suspect that the method relies heavily on hyperparameters. Could the authors provide sensitivity analyses on the datasets mentioned in this paper? 2. I am not very familiar with this field, but I remember there are many works focusing on distilling GNNs into MLPs. I hope other reviewers check the novelty of this work. 3. The results in this paper seem to rely on homophilic datasets. Could the authors add experiments on heterophilic datasets (e.g., Amazon-Ratings, Squirrel, Pokec, etc.)? Other Comments Or Suggestions: na Questions For Authors: See cons. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### We appreciate your effort in reviewing our paper. We have conducted extensive experiments to address your key comments. Thank you for your consideration. > **C1.** Could the author provide the sensitivity analyses on the datasets mentioned in this paper? **Response:** In our method, the parameter $\eta$ controls the degree of fine-tuning in Eq. (6), while $\beta$ controls the importance of Dirichlet Energy distillation in Eq. (9). We have varied them with results on the A-computer and Cora datasets in **Tables 4** and **5** of the paper. Below, **Tables A** and **B** present the accuracy results of varying $\eta$ and $\beta$ across more datasets. In **Table A**, as $\eta$ increases across all datasets, TINED exhibits a distinct pattern where performance first improves and then declines, with the best results in italics. This trend underscores the trade-off controlled by $\eta$. A similar pattern is observed for $\beta$ in Table B. Note that hyperparameter tuning is essential in machine learning research. Appendix A.7 of the paper details the search space for our parameters. **Table A: Vary $\eta$.
Results are averaged over 10 runs with standard deviation**

| $\eta$ | cora | citeseer | pubmed | a-computer | a-photo |
|-------------|------|----------|--------|------------|---------|
| 1e-09 | 73.80±2.56 | 67.20±1.86 | 73.39±2.10 | 72.53±2.17 | 86.06±2.38 |
| 1e-06 | 73.50±2.48 | 67.19±1.87 | 73.32±1.88 | 72.73±1.99 | 86.35±1.79 |
| 0.001 | 77.12±0.91 | 67.24±1.83 | 73.69±1.83 | 84.13±1.23 | 87.30±2.08 |
| 0.01 | 79.26±1.30 | 69.38±1.70 | 75.12±2.32 | *85.17±1.21* | 90.48±1.38 |
| 0.1 | 81.40±1.69 | 71.52±1.58 | 76.55±2.81 | 83.61±1.67 | *93.97±0.54* |
| 0.5 | 82.01±1.64 | *74.57±1.42* | *77.10±2.15* | 83.23±1.29 | 93.51±0.60 |
| 1.0 | *82.61±1.58* | 73.57±1.39 | 76.65±2.77 | 82.85±0.91 | 93.37±0.51 |
| 10.0 | 78.85±1.63 | 73.56±1.48 | 75.61±2.73 | 70.63±4.87 | 89.53±0.78 |

**Table B: Vary $\beta$**

| $\beta$ | cora | citeseer | pubmed | a-computer | a-photo |
|-------------|------|----------|--------|------------|---------|
| 1e-09 | 81.33±1.49 | 73.39±1.28 | 76.63±2.42 | 84.70±1.15 | 93.41±0.61 |
| 1e-06 | 81.64±1.71 | 73.39±1.30 | 76.28±2.76 | *85.17±1.21* | 93.48±0.65 |
| 0.001 | 81.64±1.57 | 73.39±1.31 | 76.51±2.64 | 84.80±1.21 | 93.65±0.68 |
| 0.1 | 81.71±1.59 | 73.65±1.38 | *77.10±2.15* | 84.70±0.99 | *93.97±0.58* |
| 1.0 | *82.61±1.58* | 73.81±1.29 | 75.84±2.66 | 84.29±1.08 | 86.59±5.66 |
| 10.0 | 80.92±2.15 | *74.57±1.42* | 71.57±2.76 | 71.35±6.30 | 78.00±3.19 |

> **C2.** I am not very familiar with this field but I remember there are many works focusing on distilling GNNs into MLPs. I hope other reviewers check the novelty of this work.

**Response:** In other reviews, the reviewers acknowledged the comprehensive experimental evaluations compared to existing methods, as well as the originality and significance of our methods. Thank you.

> **C3.** Could the authors add experiments on heterophilic datasets.
**Response:** As suggested, we have conducted new experiments on representative heterophilic datasets, Squirrel and Amazon-ratings, under the production setting. In **Tables C(1)** and **C(2)**, with and without graph dependency, our TINED and TINED+ outperform existing approaches under all settings, often with a significant margin. Note that our work, as well as existing methods, does not explicitly focus on graphs with heterophily. As future work, we plan to conduct an in-depth investigation and develop dedicated techniques to further improve the effectiveness of heterophilic GNN distillation.

**Table C(1): *prod* (*ind* & *tran*) setting on heterophilic datasets (without graph dependency)**

| Data | Eval | SAGE | KRD | FFG2M | GLNN | GLNN* | TINED |
|-------|-----|------|-----|-------|------|-------|--------|
| Squirrel | *prod* | 35.47 | 38.34 | 37.16 | 39.90 | 39.70 | **41.95** |
| | *ind* | 41.44±4.66 | 42.00±4.78 | 42.11±4.77 | 44.89±5.67 | 45.00±4.56 | **46.89±5.23** |
| | *tran* | 33.98±1.66 | 37.43±3.34 | 35.93±3.36 | 38.65±1.09 | 38.37±1.02 | **40.72±1.36** |
| Amazon-ratings | *prod* | 47.55 | 50.33 | 49.56 | 49.87 | 49.41 | **50.70** |
| | *ind* | 47.45±1.48 | 47.55±0.97 | 47.68±1.00 | 47.72±1.00 | 47.71±1.14 | **49.02±1.02** |
| | *tran* | 47.58±0.48 | 51.03±1.73 | 50.03±1.50 | 50.41±0.45 | 49.84±0.39 | **51.12±0.44** |

**Table C(2): *prod* (*ind* & *tran*) setting on heterophilic datasets (with graph dependency)**

| Data | Eval | NOSMOG | NOSMOG* | TINED+ |
|-------|-----|---------|--------|-------|
| Squirrel | *prod* | 38.17 | 39.43 | **40.89** |
| | *ind* | 45.44±4.75 | 44.33±3.24 | **46.78±3.80** |
| | *tran* | 36.35±1.69 | 38.20±1.59 | **39.42±1.05** |
| Amazon-ratings | *prod* | 47.86 | 48.80 | **50.42** |
| | *ind* | 47.47±1.45 | 48.46±1.37 | **49.51±1.54** |
| | *tran* | 47.96±0.31 | 48.88±0.56 | **50.65±0.56** |
Online Learning with Unknown Constraints
Accept (poster)
Summary: The paper provides new insights into the problem of online learning with unknown constraints. Lower and upper bounds that connect the difficulty of the problem with the Eluder dimension are derived. Claims And Evidence: Yes Methods And Evaluation Criteria: See Questions for Authors Theoretical Claims: No Experimental Designs Or Analyses: No Supplementary Material: No Relation To Broader Scientific Literature: It is well addressed in Related works, though I have concerns. See Questions for Authors. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: See Questions for Authors. Other Comments Or Suggestions: See Questions for Authors. Questions For Authors: My main doubt is whether the proposed methods are only theoretical constructs with very weak practical implications. The concerns are noted below: 1) It is highly unclear how the problem setup, the proposed modelling of the safety sets, and the associated feedback mechanisms connect to real-life applications. 2) Why is the constraint function assumed to be fixed? Environment changes over time can very well induce changes in the functional form of the constraint set. 3) The feedback mechanism $P_{sig}$ is assumed to be known to select the online regression oracle. How practical is it to obtain such prior knowledge or to even verify such assumptions? 4) The assumed regret minimization online oracle is only shown to be computationally feasible in a simplified setting of linear optimization. In general, the computational feasibility of this assumption is unclear. This adds to my concerns regarding the usefulness of the proposed methodologies in real-life applications. 5) In Assumption 3.2, the functional form of the loss $\ell$ remains fixed. Do we have to know the form of the loss function in advance (for example, squared error, or a loss from the GLM family such as linear or logistic regression type losses)?
Or can the loss be selected adversarially by nature? 6) How computationally feasible is it to do the sampling in Line 8 of Algorithm 1? 7) Though the paper provides theoretical insights, the provided upper and lower bounds do not match. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their questions. If the reviewer wishes, we are more than happy to engage in follow-up discussions through OpenReview! We would like to highlight the fact that our paper is theoretical in nature, and our main focus is to establish information-theoretically when safe learning is possible, and to develop algorithms for safe online learning with general safety function classes $\mathcal{F}$ given access to an online learning oracle and an online regression oracle. > The practical motivation of the problem setup, and how the proposed modelling of the safety sets and the associated feedback mechanisms connect to real-life applications, is highly unclear. We believe there is practical motivation in enforcing a strict per-round safety constraint. The feedback mechanism is general enough to encompass a standard $f^*(a_t,x_t)$ plus sub-Gaussian noise model commonly assumed in the literature, as well as binary feedback following a Boltzmann distribution, which is a well-studied form of feedback in sociology known as the Bradley-Luce-Shepard rule. > Why is the constraint function assumed to be fixed? Environment changes over time can very well induce changes in the functional form of the constraint set. While other works (e.g., Neely et al. 2017, "Online convex optimization with time varying constraints") consider time-varying constraints in the setting of "long term constraints" (a significantly weaker safety ask), in the setting of strict per-timestep constraints, one would need a fixed safety constraint for there to be any hope of learning such a function in an every-round-safe way. Let us further recall that we allow for a context and that the safety function is also a function of this context. Thus one could encode changes in the safety set over time by encoding time as part of the context. > The feedback mechanism $P_{sig}$ is assumed to be known to select the online regression oracle.
How practical is it to obtain such prior knowledge or to even verify such assumptions? If we have prior knowledge about the problem at hand, we can select the appropriate feedback mechanism. For example, in settings with noisy instrument readings, we could adopt a standard $f^*(a_t,x_t)$ plus sub-Gaussian noise model. With human-generated feedback models, we could use the Bradley-Luce-Shepard rule and assume that the feedback is binary following a Boltzmann distribution. > The assumed regret minimization online oracle is only shown to be computationally feasible in a simplified setting of linear optimization. In general, the computational feasibility of this assumption is unclear. This adds to my concerns regarding the usefulness of the proposed methodologies in real-life applications. This is true, we only show a concrete implementation for safe linear optimization. In this paper, our focus is more theoretical in nature, instead opting to show the existence of such online learning oracles, and proving that the performance of our algorithm depends on variations of complexity measures found in the online learning literature. > In Assumption 3.2, the functional form of the loss $\ell$ remains fixed. Do we have to know the form of the loss function in advance (for example, squared error, or a loss from the GLM family such as linear or logistic regression type losses)? Or can the loss be selected adversarially by nature? Yes, we assume we know the form the loss function takes - we believe this is a standard assumption in the online learning literature. > How computationally feasible is it to do the sampling in Line 8 of Algorithm 1? This depends wholly on the choice of the mapping $\mathbf M$ - for the linear optimization example, this is simply sampling from a set of $d+1$ vectors. Discovering other settings where all steps/oracles are computationally efficient is active work in progress.
> Though the paper provides theoretical insights, the provided upper and lower bounds do not match. This is true - getting exactly matching upper and lower bounds is work in progress. The main takeaway of the lower bound was to show that, at least asymptotically, minimizing the proposed complexity measure is necessary for safe learning.
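For intuition on the linear-optimization case discussed above, the pessimistic/optimistic safe-set construction can be sketched as follows (a hypothetical illustration, not the paper's algorithm): with a linear safety function $f(a) = \theta \cdot a$ and safety meaning $f^*(a) \le 0$, an action is pessimistically safe if it is safe under every parameter still in the version space, and optimistically safe if it is safe under at least one. The finite version space, the example vectors, and the threshold 0 are assumptions of this sketch.

```python
import numpy as np

# Hypothetical version space: parameter vectors still consistent with the
# safety feedback observed so far (finite here for illustration).
theta_space = [np.array([1.0, 0.0]), np.array([0.8, 0.3])]
actions = [
    np.array([-1.0, 0.5]),   # safe under every theta   -> pessimistic set
    np.array([0.5, 0.5]),    # unsafe under every theta -> neither set
    np.array([-0.2, -0.4]),  # safe under every theta   -> pessimistic set
    np.array([-0.1, 0.4]),   # safe under one theta     -> optimistic only
]

def pessimistic_safe(actions, theta_space):
    # Actions guaranteed safe: theta.a <= 0 for ALL plausible theta.
    return [a for a in actions if all(float(th @ a) <= 0 for th in theta_space)]

def optimistic_safe(actions, theta_space):
    # Actions that COULD be safe: theta.a <= 0 for SOME plausible theta.
    return [a for a in actions if any(float(th @ a) <= 0 for th in theta_space)]

pess = pessimistic_safe(actions, theta_space)
opt = optimistic_safe(actions, theta_space)
assert len(pess) == 2 and len(opt) == 3  # pessimistic set nested in optimistic
```

The gap between the two sets is exactly what the mapping $\mathbf M$ has to bridge: the learner benchmarks against the optimistic set but may only play from the pessimistic one.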
Summary: The paper addresses the problem of online learning with unknown safety constraints, where constraint satisfaction is required per round. The authors provide a general meta-algorithm based on an Online Learning oracle for the online learning strategy with known constraints, and an Online Regression oracle for constraint estimation. The algorithm is based on constructing Optimistic and Pessimistic safe action sets, and a mapping strategy from the optimistic to the pessimistic set. The paper provides an upper regret bound which also depends on the Eluder dimension, describing the complexity of the constraint function class, and on a new complexity measure $\mathcal V_{\kappa}(\cdot)$. The paper also provides a lower bound demonstrating that if $\lim_{T \rightarrow \infty} \sum_{t = 1}^T \mathcal V_{\kappa}(\tilde p_t, \mathcal F_t) > 0$, then no safe algorithm can ensure sublinear regret. The paper also provides concrete algorithms and bounds in some problem cases, such as linear constraints. ## Update after rebuttal: I thank the authors for the response. I have also read the other reviews, and I decided to keep my initial score. Claims And Evidence: Yes, it seems so, mostly. However, there is some inconsistent notation and there are unclarities regarding the new complexity measure. In particular, the novel complexity measure proposed is recalled as $\mathcal V_t(\kappa)$ in the introduction, and $\mathcal V_{\kappa}(\cdot)$ further in the paper. It would be good to keep them consistent. Also, it is unclear to me why the complexity measure depends on the time step. In particular, it depends on $(\tilde p_t, \mathcal F_t, x_t)$. (Maybe a typo, but in Theorem 4.4 the dependence on $x_t$ is omitted: $(\tilde p_t, \mathcal F_t)$.) Here $x_t$ is a context received at step $t$. How can the complexity measure depend on the contexts? Should it maybe rather be defined in a context-independent way, like a worst case? And as an asymptotic sum?
How can this complexity measure be estimated in advance in practice? Also, at the beginning of Section 4 it is written that a bounded $\mathcal V_{\kappa}(\cdot)$ is asymptotically necessary. However, does it not follow from Theorem 4.4 that one needs not just bounded complexity measures $\mathcal V_{\kappa}(\cdot)$, but their sum growing less than linearly in $T$? Also, the intuition behind the notion $\kappa$ is not very clear. It would be great if it were explained more. Methods And Evaluation Criteria: The proposed method makes sense, and the problem seems to be quite general. Theoretical Claims: I did not check the proofs in the appendix, and not all the derivations in the paper body. In general, they seem correct. Experimental Designs Or Analyses: There are no experimental designs Supplementary Material: No Relation To Broader Scientific Literature: The paper provides new insights about the complexity of per-step online learning, also in comparison to the complexity of cumulative constraint satisfaction. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: The paper is quite clearly written in general and tries to add intuition for every notion and claim stated. However, the amount of notation is quite high and sometimes hard to follow. Also, the complexity notion $\mathcal V_t$ is questionable since it depends on the time step $t$ and on the particular realizations of the context $x_t$ and the estimated version space $\mathcal F_t$. Is there a way to have a more general notion of complexity? Maybe the asymptotic sum directly, under some conditions on the feedback $\{x_t\}$? Other Comments Or Suggestions: There are a few typos I noticed, apart from $\mathcal V$: - line 260, right side: "with respect to with respect to" - line 275, right side: comma Questions For Authors: See Claims and Evidence Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their review and helpful suggestions, which we will be sure to incorporate into our final version. If the reviewer wishes, we are more than happy to engage in follow-up discussions through OpenReview! > In particular, the novel complexity measure proposed is recalled as $\mathcal V_t(\kappa)$ in the introduction, and $\mathcal V_\kappa(\cdot)$ further in the paper. Thank you for the feedback - we had made this notational choice in the "key contributions" section as we wanted to emphasize the dependence on $\kappa$, and avoid cluttering the definition with technical definitions of $\mathcal F_t, \tilde p_t$. We will update the measure as $\mathcal V_\kappa(t)$ in the "key contributions" to keep it more consistent. > Also, it is unclear to me why the complexity measure depends on the time step. In particular, it depends on $(\tilde p_t, \mathcal F_t, x_t)$ This is true; the complexity measure really is a function of just those three parameters. The complexity measure captures the difficulty of matching a target $\tilde p_t$ using distributions from $P_t$ (defined through $\mathcal F_t$ and $x_t$) under worst-case loss while offsetting information gain w.r.t. $\mathcal F_t$, weighted by $\kappa$. However, when applied to our Main Algorithm (Algorithm 1), the instantiations of those three parameters depend on the time step. > Maybe a typo, but in Theorem 4.4 the dependence on $x_t$ is omitted This is not a typo - as stated in the theorem statement, our Theorem 4.4 considers a simplified non-contextual setting (i.e., $\mathcal X = \{\}$). In non-contextual settings, we drop the context argument (mentioned in the line 274 right column footnote). > How can the complexity measure depend on the contexts? Should it maybe rather be defined in a context-independent way, like a worst case? This indeed is a valid alternative way of defining the complexity measure - we could instead have taken worst-case contexts.
However, as the pessimistic and optimistic sets are dependent on contexts, we thought this would make the notation a little less convoluted. > How can this complexity measure be estimated in advance in practice? In practice, we believe the best way to address this complexity measure is to use a choice of $\kappa^*$ that ensures that $\mathcal V_{\kappa^*} < 0$ (various concrete examples of this methodology are described in Section 5). If one wanted to directly estimate the value of the complexity measure, one could deploy the methodology mentioned in the response to reviewer pLb5. > ... it follows that one needs to have not just bounded complexity measures $\mathcal V_t(\kappa)$, but their sum growing less than linearly in $T$ Yes, this suggests we would need the sum of $\mathcal V_t(\kappa)$ to grow sublinearly in $T$ asymptotically (otherwise safe learning is impossible). This is precisely our main takeaway in justifying the necessity of $\mathcal V_t(\kappa)$ as a measure of the complexity of safe learning! > ... the intuition on the notion of $\kappa$ is not clear. Thank you for the feedback. As mentioned in line 57 left column, $\mathcal V_t(\kappa)$ defined in eq 5 ... captures the per-timestep trade-off between loss minimization and information gain w.r.t. the unknown constraint. $\kappa$ is the parameter that controls the weight placed on information gain. While this is evident upon inspection of eq 5, we agree that it would be illuminating to mention this explicitly. We will update this paragraph to make this more clear! > Is there a way to have a more general notion of complexity? Maybe the asymptotic sum directly, under some conditions on the feedback $x_t$? Yes, we could use the asymptotic sum directly as the measure of complexity instead. Getting the upper and lower bounds to match is ongoing work, and we would like to arrive at a unified measure of complexity. > Typos in lines 260 and 275 Thank you for pointing out these typos to us.
We will fix them in the final version.
Summary: This paper studies the online adversarial learning problem of achieving no-regret guarantees while playing actions subject to an unknown constraint at each time-step. At each round $t$, the set of safe actions is a function of the adversarially chosen context and an unknown safety constraint $f^*$; after playing an action the learner receives a loss vector and safety feedback correlated with the unknown constraint value. The goal is to achieve cumulative loss with sublinear regret with respect to the best action in hindsight that satisfies all constraints, while also playing actions that satisfy the constraints with high probability each round. The authors present a generalized algorithm that achieves this, making use of a black-box no-regret algorithm with constrained action sets (used to make predictions that achieve low regret while simultaneously choosing only safe actions), and a black-box online regression model (used to predict the value of the unknown constraint function $f^*$ at each round). They show that a simpler version of this algorithm solves the slightly modified problem of achieving no-regret guarantees while satisfying long-term constraints (as opposed to a constraint at each round). Finally, since the algorithm is non-constructive (assuming the existence of a mapping function satisfying a particular property), they discuss several settings of action sets, safety constraints, and loss functions for which this mapping can be concretely defined, and thus the algorithm itself. The general idea of the algorithm is to use the regression model’s predictions of the value of $f^*$ at each round to determine a subset $\mathcal{F}_t$ of the constraint functions that includes the true safety function with high probability.
Then the constrained no-regret algorithm can be used to predict a distribution over actions by constraining to the set of actions that could be safe (inferred from $\mathcal{F}_t$), which is mapped to a distribution $p$ over the set of actions which are definitely safe using a mapping function $M$. The authors define a complexity measure (defined in part by $M$) that, at a high level, trades off between how much worse your loss regret may be because you need to play a safe action, and how much new information you expect to learn about $f^*$ using the current distribution $p$. By optimizing for $M$, one can keep the sum of this measure over time small, and the overall regret can be bounded by this sum as well as the regrets of the two black-box algorithms used as subroutines. Claims And Evidence: The claims are all proved. Methods And Evaluation Criteria: Yes, the defined setting seems reasonable. The assumption of receiving (possibly noisy) feedback as a function of the safety function’s value generalizes to many safety-critical settings. Theoretical Claims: I looked at the proof for the main Theorem 4.2 (which looked good), though not all of the lemmas preceding it. Experimental Designs Or Analyses: There are no experiments. Supplementary Material: For the most part no, apart from a few early lemmas / proofs. Relation To Broader Scientific Literature: I am not very familiar with the learning with constraints literature – based on the papers cited by the authors, it seems that the main difference between these results and prior ones is that other algorithms either focus on achieving strong safety guarantees without prioritizing achieving low regret, or bound the number of “unsafe” actions that will be taken over a long time, rather than consistently playing safe actions with high probability.
Essential References Not Discussed: Not to my knowledge Other Strengths And Weaknesses: Strengths: Safety-critical settings often coincide with non-distributional / adversarial settings, so this is an interesting area of study, and the algorithm seems novel and applicable. The complexity measure introduced is not only sufficient (in a sense) for bounding the regret, but keeping it small is also necessary for safe learning to be possible, which indicates that it does measure something critical to the safe learning setting. The extension of this work to the long-term constraints setting is interesting because the approach seems quite different from what I have generally seen in the area (usually involving no-regret algorithms using the Lagrangian or some modification as a loss function). I also think the paper is very well-presented, with the structure and intuition behind the ideas / proofs making it fairly easy to follow throughout, even for people not familiar with the area. Weaknesses: In Section 4.4, the authors say that Assumption 5.1 is necessary for safe learning in the given setting, but this seems misleading – they show this by providing a counter-example, but this doesn’t prove the necessary condition – couldn’t there be settings where Assumption 5.1 doesn’t hold and safe learning is still possible (unless the adversary is choosing the safety function, which I don’t think they are doing)? 
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their review and questions. If the reviewer wishes, we are more than happy to engage in follow-up discussions through OpenReview! > In Section 4.4, the authors say that Assumption 5.1 is necessary for safe learning in the given setting, but this seems misleading – they show this by providing a counter-example, but this doesn’t prove the necessary condition – couldn’t there be settings where Assumption 5.1 doesn’t hold and still safe learning is possible (unless the adversary is choosing the safety function, which I don’t think they are doing). We would like to point out that $f^*$ is unknown and fixed - we could indeed interpret it as being chosen by the adversary before the game starts (and hence the adversary could choose $f^*$ in the offending $\mathcal F^*$ mentioned in the counter-example). Alternatively, we could view Assumption 5.1 as being $f^*$-dependent. We could instead require that $\forall \mathcal F \subseteq \mathcal F^{FAS}$ s.t. $f^*\in \mathcal F$ ... the gap-condition in the body of Assumption 5.1 holds. > On line 134 ... is $p_{signal}$ a distribution parameterized by $f^*(a,x)$? Should the mapping be from [-1,1] to a distribution over $\mathcal Z$ Yes, $p_{signal}$ is a distribution parameterized by $f^*(a,x)$. And yes! Thank you for pointing out this typo, we will update this in our final version. > In Section 3.3, performance is compared to some set of policies $\Pi$ which I assume need not be deterministic. Does this mean $\Pi_T$ is the set of policies $\pi$ which always only induce distributions over actions whose support is within the constrained action set, or is it comparing to specific runs of the policy that satisfy the constraints? $\Pi_T$ would be the set of policies which only induce distributions over intersection of constrained action sets $\cap \mathcal A_t$. 
This aligns with the fact that generally we compare against policies that are always safe (i.e., policies that induce distributions over the unknown safe set).
Summary: This paper is on contextual online learning with unknown constraints that are stochastic and roundwise. Let $\ell$ be a given loss function of an action $a \in \mathcal{A},$ a context $x \in \mathcal{X}$ and an "outcome" $y \in \mathcal{Y}$; and let $f_* \in \mathcal{F}$ be an unknown constraint function parametrised by $a$ and $x$. It is demanded that, without knowing $f_*$ a priori, the learner must ensure that the selected action $a_t$ satisfies $f_*(a_t, x_t) \le 0$, serving as a hidden constraint. This is enabled both by the a priori knowledge of, for each possible context $x$, a safe set $\mathcal{A}_0(x) \subset \mathcal{A},$ which allows for initial selection of actions, and noisy feedback of a safety signal $z_t$ (which one can just think of as $f\_*(a\_t, x\_t) + \mathrm{noise},$ although richer structures can be accommodated). This essentially constitutes a stochastic feedback of the unknown constraint, and actions meeting these constraints are termed safe. Of course, while meeting this constraint, the learner attempts to optimise a cost. This is captured through a regret with respect to a policy $\pi$ of the form $\sum \mathbb{E}\_{a_t \sim p_t}[ \ell(a_t, x_t,y_t)] - \ell(\pi(x_t), x_t,y_t)$. Here, this outcome signal $y_t$ is not revealed to the learner until after the action $a_t$ is selected, and $p_t$ is a law over actions that the learner uses to select $a_t$ (and this must be supported on $a : f_*(a, x_t) \le 0$). Thus, this boils down to a mildly structured version of the standard adversarial online learning setup (which is used for reasons that will be seen below). Throughout, the set of competing policies is restricted to the elements of a known class $\Pi$ that are always safe, i.e., $\Pi_T := \{ \pi \in \Pi : \forall t, \pi(x\_t) \in \mathcal{A}\_t \},$ where $\mathcal{A}\_t = \mathcal{A}\_t(x\_t) = \\{a : f\_*(a, x_t)\le 0\\}$. 
At a high level, then, this setup considers online learning with unknown constraints, where the constraint information follows a stochastic bandit framework, while the cost minimisation follows the adversarial online learning framework. As such, the setting mixes aspects of safe stochastic bandits, specifically the roundwise enforcement of constraints using noisy bandit observations of the same, and online learning with unknown static constraints. The paper aims at general characterizations of when sublinear regret and roundwise-safety can be attained in this setting, along with general methods that may attain the same. Towards this, the authors begin with an algorithmic framework that generalises an "optimistic to pessimistic projection" idea of Hutchinson et al. that was proposed for the setting of safe linear bandits. At a high level, let $\mathcal{F}\_t \subset \mathcal{F}$ denote the version space of functions that agree with the constraint feedback, constructed via an online regression oracle (and naturally, under an Eluder dimension constraint on $\mathcal{F}$). This version space induces two natural sets of actions at time $t$ - the optimistically safe actions $O\_t = \\{ a : \exists f \in \mathcal{F}\_t : f(a, x_t) \le 0 \\},$ and - the pessimistically safe actions $P\_t = \\{a : \forall f \in \mathcal{F}\_t : f(a,x_t) \le 0\\}$. Note that $P\_t$ is all of the actions that certifiably are safe (with high probability, etc.) given all of the safety feedback (and the known safe actions a priori). As such, to ensure safety, the learner _must_ be restricted to selecting actions only from $P\_t$. Thus, a crucial challenge of this setup is the expansion of $P\_t$ towards the true set of feasible actions (this also shows up in safe bandits). On the flipside, the competing policies cannot select an action outside of $O\_t$, and so if we could pick actions from this set using a low-regret method, we would ensure good cost regret. 
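To make the version space and the optimistic/pessimistic action sets concrete, here is a minimal Python sketch for a finite function class; the squared-error threshold standing in for the online-regression confidence set, and all names, are illustrative assumptions rather than the paper's construction.

```python
def version_space(F, history, beta):
    """Constraint functions consistent with past safety feedback:
    cumulative squared prediction error at most beta (a stand-in for the
    confidence set an online regression oracle would maintain)."""
    def err(f):
        return sum((f(a, x) - z) ** 2 for (a, x, z) in history)
    return [f for f in F if err(f) <= beta]

def optimistic_safe(F_t, A, x):
    # O_t: actions that at least one plausible constraint deems safe
    return [a for a in A if any(f(a, x) <= 0 for f in F_t)]

def pessimistic_safe(F_t, A, x):
    # P_t: actions that every plausible constraint deems safe
    return [a for a in A if all(f(a, x) <= 0 for f in F_t)]
```

As feedback accumulates, the version space shrinks and $P_t$ expands towards the true feasible set, which is the crucial dynamic the review describes.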
The algorithmic approach taken is to map from a good law over actions in $O\_t$, to a law over $P\_t$. Concretely, let $M$ be a map from laws over $O\_t$ to laws over $P\_t$ (that can further utilise $\mathcal{F}\_t$ and the context $x_t$). The algorithmic framework proposed in the paper is thus: - using an online learning oracle, compute a distribution $\tilde{p}\_t$ over $O\_t$. It is assumed that this is low-regret against policies in $\Pi_T$ - Draw and play $a_t \sim p\_t = M(\tilde{p}\_t , \mathcal{F}\_t, x\_t)$. $\newcommand{\vv}{\mathcal{V}} $ To control the regret of such play, the authors define a complexity measure associated with this map $M$, and parameterised by a number $\kappa \ge 0$, $$ \vv\_{\kappa}^M(\tilde{p}\_t, \mathcal{F}\_t, x\_t) := \sup\_{y} \\{ \mathbb{E}\_{a \sim M(\tilde{p}\_t)}[ \ell(a, x_t, y)] - \mathbb{E}\_{a \sim\tilde{p}\_t}[ \ell(a, x_t, y)] \\} - \kappa \mathbb{E}_{a \sim M(\tilde{p}\_t)}[\Delta\_{\mathcal{F}\_t}(a, x_t)].$$ The first term above is a worst-case bound on how much extra cost is incurred due to playing according to $p_t = M(\tilde{p}\_t),$ while the second term is the expected width, wherein $\Delta\_{\mathcal{F}\_t}(a, x) = \sup_{f,f' \in \mathcal{F}\_t} |f(a,x) - f'(a,x)|$. Note that, just by using the fact that we can bound $\mathbb{E}\_{a \sim M(\tilde{p}\_t)}[ \ell(a, x_t, y_t)] - \mathbb{E}\_{a \sim \tilde{p}\_t}[ \ell(a, x_t, y_t)]$ in terms of $\vv\_\kappa^M + \kappa \cdot \mathrm{width}$, this gives a bound on the regret of the form $$ \mathrm{Reg}(\textrm{Online Learning}) + \kappa ( \mathrm{Reg}(\textrm{Width})) + \sum_t \mathcal{V}^M\_\kappa(\tilde{p}\_t, \mathcal{F}\_t, x_t),$$ where this second term is bounded in terms of a regression cost, the Eluder dimension, and an accumulated approximation level. 
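As one hedged illustration of such a mapping $M$ for finite action sets, the sketch below reroutes the mass that $\tilde{p}_t$ places outside the pessimistically safe set onto the safe action of largest constraint width, so that safe play still reduces uncertainty about $f^*$; this width-greedy choice is our simplification for illustration, not the optimal $M_*$ obtained by minimising $\vv_\kappa^M$.

```python
def map_to_safe(p_tilde, P, width):
    """A naive instance of the mapping M: probability mass on actions
    outside the pessimistically safe set P is rerouted to the safe action
    with the largest constraint width (where the version space disagrees
    most), trading a little loss for information about f*."""
    fallback = max(P, key=width)            # most informative safe action
    p = {a: 0.0 for a in p_tilde}
    for a, w in p_tilde.items():
        p[a if a in P else fallback] += w   # reroute unsafe mass
    return p
```

The resulting law is supported on $P_t$ and preserves total mass, which is exactly what the framework requires of $M$.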
The authors offer the interpretation of $\vv\_{\kappa}^M$ as a tradeoff between the cost of gaining information about the constraint function $f\_{\*}$ and minimising regret (which in turn requires fidelity to the unsafe distribution $\tilde{p}\_t$). Naturally, for any $\kappa,$ we can define an optimal $M_*$ by minimising the objective, which yields a (trajectory-dependent!!) "measure of complexity", $\vv\_\kappa(\tilde{p}\_t, \mathcal{F}\_t, x_t).$ This is presented as an analogue of the decision-estimation coefficient for this problem. One can also, for a fixed $M$, optimise the $\kappa$ to yield the strongest bounds, and various results of this type are stated. Naturally, there are two questions of interest: how pertinent is the measure $\vv\_{\kappa}$ to such regret problems, and whether there are situations with maps $M$ that yield sublinear regret in the above setup. To address the first, the authors show a lower bound, which essentially says that in a context-free situation with a fixed known loss that is optimised at a safe point, and with a good regression oracle and bounded Eluder dimension, if there is a $\kappa$ such that $\sum \vv\_{\kappa}(\tilde{p}\_t, \mathcal{F}\_t) = \Omega(T),$ then one can find a non-expandable subset of safe actions $P\_{\*}$ that are a) all suboptimal by a constant amount, and b) non-expandable (i.e., information about $f\_{\*}$ for actions in $P\_{\*}$ does not leak information about $f\_{\*}$ for actions outside of $P\_{\*}$). This implies a strong result: if the initial safe action set is $P\_{\*},$ then sublinear regret is impossible. For the second point, the authors study the behaviour of linear constraints (recovering extensions of prior work of Hutchinson et al.), and show a compositionality result that, e.g., extends this to generalised linear models. 
Claims And Evidence: The paper is positioned as providing a general complexity measure for this problem, $\vv\_{\kappa}$ and a general methodology driven by this object. The main claims beyond this are that this methodology can yield nontrivial regret bounds in certain scenarios (which is demonstrated for linear losses and constraint), and that without sublinearity of $\sum \vv\_{\kappa}(\tilde{p}\_t, \mathcal{F}\_t),$ there may be situations where no method can obtain sublinear regret. For me, all of the subsequent claims are fine, and the main sticking point, naturally, is whether I buy this $\vv\_{\kappa}$ as a general complexity measure. The obvious issue here is that this is strongly trajectory-dependent. Now, if the lower bound had said (and this would be very strong indeed) that no matter the situation, if $\vv\_{\kappa}$ is not summable, then no method can learn, then I would agree that nothing more can be said, but that's not quite what the lower bound shows: the setup is very particular. The authors also appeal to the DEC formulation, but note that this paper is careful to state things in terms of a static model class. A final issue, of course, is the efficiency of this method: it is not clear to me that in general this viewpoint recovers tight bounds. I do appreciate that this gets the right rates in the linear setting, but is this enough? Certainly, Assumption In other words, I don't quite buy that $\vv\_{\kappa}$ "characterizes the difficulty of safe learning" (line 56, col 1): the evidence is too thin for such a strong claim, both from the viewpoints of upper and lower bounds. Methods And Evaluation Criteria: This is a theoretical paper. The main methodological idea in the proposed framework is to project distributions over an optimistic estimate of the feasible set to that over a pessimistic estimate of the same, which (significantly) generalises an existing approach of Hutchinson et al. in safe bandit problems (and so makes sense). 
Theoretical Claims: I at least skimmed through most of the proofs, and they appear correct to me. I read appendices A and B more closely, and these are correct as far as I can tell. Experimental Designs Or Analyses: N.A. Supplementary Material: I read appendices A, B in detail, and skimmed appendix C enough to get the gist. Relation To Broader Scientific Literature: I think the contributions are novel, and extend certain existing ideas strongly and in an insightful way. The results themselves are interesting. The observation that (although in a restricted setting) if $\vv\_{\kappa}(\tilde{p}\_t, \mathcal{F}\_t)$ is not sublinearly integrable, then _no method_ can learn nontrivially is, to me, surprising and insightful. The general upper bound is also interesting, especially since it can be realised well for linear models, but leaves the question of how generic it really is open. Overall, despite the gaps, I am certain that the results will be of interest to the online learning community at ICML. Essential References Not Discussed: I don't know if this is essential, but work on doubly-optimistic methods is omitted from the discussion of safe bandits (Chen et al., ICML 2022, "Strategies for Safe...", Gangrade et al., COLT 2024, "Safe Linear Bandits over Unknown Polytopes"). Other Strengths And Weaknesses: One important weakness is that the main ideas are presented in a pretty rushed way. Take page 5, column 2, the main motivation and presentation of the underlying idea. The introduction of $M$ would benefit from some explanation as to why this projection idea could possibly get good regret, which would help introduce the idea of $\vv\_{\kappa}^M$. This would also benefit from an explanation of what the terms therein are, and why they are arising/natural. In part this is also true of section 5. 
I also found the order of the presentation to be confusing, and did not really understand the idea of the paper until I went and stared at Algorithm 1, understood the methodological idea, and then went back to the general complexity measure being proposed. It somehow seems like in order to assert the generality of the result, some level of comprehensibility is sacrificed. I think this hurt the paper more than it helps it. Note that this may well be an issue of space: the DEC paper, which the authors compare their contributions to, takes 130+ pages (with 60+ pages in the main text) to explain its ideas and convince readers that they really have something concrete. If this is the reason, then the authors should, IMO, think about whether a conference paper with limited page counts is really the right venue to send their work to, and if it may not instead benefit from a more comprehensive treatment sent to a journal. In any case, right now I think I find the claims of $\vv\_{\kappa}$ being a general complexity measure that captures safe online learning to be somewhat exaggerated, and find the presentation a bit lacking, but I do think that the results are strong and interesting. The former mostly is why my recommendation below is a weak accept rather than an accept. Other Comments Or Suggestions: - There are some jarring variations in the line spacing across the document (compare, e.g., line 127 onwards in columns 1 and 2). Usually people do this for space, but there's enough whitespace floating about the document for this to not be necessary. Please fix this, it just distracts from reading. - IMO the discussion following assumption 3.2 should simply describe, in words, that this property holds true, and move Proposition 3.3 to the appendix --- this is not per se adding much to help explain the main ideas of this paper, and this space can be more valuably spent doing that, or expanding upon the very terse treatment of section 5. 
I think in part I am saying this because for most online-learning algorithms I can think of, one begins with a round-wise control on the loss of the actions, and with this view it is not surprising to me that as long as the action space available to the learner contains the actions of all competing policies, the regret is controlled. There may well be issues with this (haven't thought too much), but IMO a reader would be happy to buy the assumption on the basis of this (and a general proof of achievability in the appendix). - Please be explicit about the independence structure of the feedback (rather than telling me about its generality): in particular, my reading is that it is intended that $z_t$ is conditionally independent of the history given $x_t$ and $a_t$ (which should be stated). Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the careful read and very helpful suggestions, which we will be sure to incorporate into our final version. If the reviewer wishes, we are more than happy to engage in follow-up discussions through OpenReview! > The obvious issue here is that this is strongly trajectory-dependent. Now, if the lower bound had said (and this would be very strong indeed) that no matter the situation, if $\mathcal V_\kappa$ is not summable, then no method can learn, then I would agree that nothing more can be said, but that's not quite what the lower bound shows: the setup is very particular. We believe this point to be important in the reviewer's assessment of the strength of our lower bound. We would like to ask the reviewer **what they meant by "no matter the situation"**. We would like to clarify that $\mathcal V_\kappa(\cdot) := \inf_{M} \mathcal V_\kappa^M(\cdot)$ (line 248 right column), and hence the lower bound states that if even the best mapping is not summable, no method can learn. On the other hand, if this optimal mapping makes $\mathcal V_\kappa$ summable then this mapping can be used in upper bounds. Correspondingly, our lower bound is in a setting where $\mathcal V_\kappa$ is not summable no matter what mapping is used. Furthermore, while this complexity indeed is trajectory-dependent, the lower bound correspondingly also sums across the trajectory of the best mapping. When we use a particular mapping, the trajectory is what is generated by that mapping. As to the setup being very particular, this setup was engineered to show the necessity of the $\mathcal V_\kappa(\cdot)$ terms. The other terms in the upper bound (Eluder and oracle bounds) are well justified in the literature and each one can already individually be shown to be necessary. So we justify the necessity of the novel complexity term we introduced alone. 
> A final issue, of course, is the efficiency of this method: it is not clear to me that in general this viewpoint recovers tight bounds. I do appreciate that this gets the right rates in the linear setting, but is this enough? Certainly, Assumption In other words, I don't quite buy that $\mathcal V_\kappa$ "characterizes the difficulty of safe learning" (line 56, col 1): the evidence is too thin for such a strong claim, both from the viewpoints of upper and lower bounds. To the best of our knowledge at the time of writing, existing works have focused solely on the linear setting. We believe our methodology is a first step towards the analysis of more general settings. > I don't know if this is essential, but work on doubly-optimistic methods is omitted from the discussion of safe bandits (Chen et al., ICML 2022, "Strategies for Safe...", Gangrade et al., COLT 2024, "Safe Linear Bandits over Unknown Polytopes"). While these works do indeed have a "doubly optimistic" flavor to them, these works do not study the same setting of per-round strict safety constraints. Among "doubly optimistic" works, we believe that the referenced work of Hutchinson et al., AISTATS 2024 "Directional Optimism for Safe Linear Bandits" is the most relevant as it considers a per-round strict safety constraint setting. > One important weakness is that the main ideas are presented in a pretty rushed way... It somehow seems like in order to assert the generality of the result, some level of comprehensibility is sacrificed. Thank you for the feedback, and we will work towards expanding the presentation in sections 4 and 5. As the reviewer later suggests, shortening section 3.3 to make space for this expansion seems like a wonderful idea. As noticed by the reviewer, a focus of our paper was to express the generality of the approach (as existing works focus solely on the linear setting) - and we will work to make this presentation more digestible. 
> There are jarring variations in line spacing ... We will fix this in our final version! > The discussion following assumption 3.2 should simply describe, in words, that this property holds true, and move Proposition 3.3 to the appendix ... Thank you for the feedback, we will shorten this section. The reason we provided the proposition was because while the underlying proof technique is standard, we couldn't just cite existing results and claim Sequential Rademacher Complexity upper bounds the online learning regret as we needed to satisfy the given constraints - which requires some nontrivial manipulation. > Please be explicit about the independence structure of the feedback ... We will make a note early on explicitly mentioning independence structure. We had hoped $z_t \sim p_{\text{signal}}(f^*(a_t,x_t))$ would highlight this.
Summary: This paper studies the problem of bandits with constraints. Specifically, the forecaster wants to minimize the regret while keeping the safety constraints satisfied with high probability. To resolve this question, the authors propose a new complexity measure, and they provide upper bounds and lower bounds on the regret: 1. They propose an algorithm with regret bounded in terms of this complexity measure. 2. If the sum of this complexity measure from step one to $T$ scales linearly with $T$, then no algorithm can achieve sublinear regret. Claims And Evidence: Yes, the claims in this submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. I checked the proofs of the main theorems, including Theorem 4.2, Theorem 4.3 and Theorem 4.4. Experimental Designs Or Analyses: No experiments in this paper. Supplementary Material: Yes, I reviewed the proofs of Theorem 4.2, Theorem 4.3 and Theorem 4.4. Relation To Broader Scientific Literature: Previous literature either studies safe bandits with known constraints, or linear bandits with unknown constraints, or convex optimization with unknown constraints. The setting considered here, bandits with function approximation and unknown constraints, is new and has not appeared in previous literature. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: Strengths: 1. This paper is well written. The proofs are correct. 2. The development of the new concept $V_\kappa$ is insightful, as it provides both upper and lower bounds on the regret. Weaknesses: 1. The upper and lower bounds do not match. Other Comments Or Suggestions: I don't have further comments and suggestions. Questions For Authors: I have the following questions regarding this paper: 1. The lower bound only states that when the sum of $V(F_t)$ is at least $\Omega(T)$, then no algorithm can achieve sublinear regret. 
Can you prove a lower bound stating that the regret is lower bounded by the sum of $V(F_t)$? 2. In the lower bound you designed, we have to solve the optimization problem to find the p to minimize $M(p)$. Is there any efficient algorithm that can achieve this? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their review and questions. If the reviewer wishes, we are more than happy to engage in follow-up discussions through OpenReview! > "The upper and lower bounds do not match" and "Can you prove a lower bound stating that the regret is lower bounded by the sum of $\mathcal V(\mathcal F_t)$?" This is true - getting exactly matching upper and lower bounds is a work in progress. The main takeaway of the lower bound was to show that at least asymptotically, minimizing the proposed complexity measure is necessary for safe learning. > "In the lower bound you designed, we have to solve the optimization problem to find p to minimize $M(p)$. Is there any efficient algorithm that can achieve this? We are a little confused why the reviewer is interested in the optimization problem to find $p$ minimizing $M(p)$ in the lower bound. As for solving this optimization in the upper bound (line 243 right column), we notice that this is a saddle-point optimization. This can be solved up to desired accuracy (through standard techniques) by treating it as a two-player game where the min-player chooses $p$ and the max-player chooses $f,f',y$. This can be solved by forming an $\epsilon$-net over the set of actions, and while the optimization time complexity would be exponential in the dimension, it is still optimizable and gives a concrete mapping for a fixed $\kappa$. For finite, linear and generalized linear examples we show computationally efficient (but sub-optimal from the point of view of the optimization in line 243) mappings such as scaling suffice, and we provide upper bounds through our techniques.
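For finite action and outcome sets, the saddle-point optimization described in the response above can be brute-forced; the sketch below restricts the min-player to point masses over safe actions (a crude stand-in for an $\epsilon$-net over distributions), and every name in it is our illustrative assumption rather than the paper's procedure.

```python
def best_safe_map(p_tilde, P, losses, width, kappa):
    """Point-mass minimizer of the V_kappa^M objective: pick the safe
    action a in P minimizing
      max_y [loss(a, y) - E_{p_tilde} loss(., y)] - kappa * width(a).
    `losses` is a dict keyed by (action, outcome)."""
    def base(y):  # expected loss of the unconstrained distribution p_tilde
        return sum(w * losses[(b, y)] for b, w in p_tilde.items())
    outcomes = sorted({y for (_, y) in losses})
    def objective(a):
        return max(losses[(a, y)] - base(y) for y in outcomes) - kappa * width(a)
    return min(P, key=objective)
```

Larger $\kappa$ favors safe actions of large width, i.e. it buys information about $f^*$ at the price of extra loss, which is exactly the tradeoff the complexity measure encodes.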
Controlling Underestimation Bias in Constrained Reinforcement Learning for Safe Exploration
Accept (oral)
Summary: This paper presents Memory-driven Intrinsic Cost Estimation (MICE), an algorithm for reducing constraint violations in constrained RL throughout training, rather than just at the end of training. It does this using a memory buffer that stores a representation of previously seen constraint-violating states, which are compared to new states to form an intrinsic cost term that counteracts the cost-value underestimation leading to repeated visitation of constraint-violating states. Claims And Evidence: The claims of the paper seem to be well-supported by the theoretical and experimental evidence provided. Methods And Evaluation Criteria: The proposed methods and evaluation seem reasonable for the problem. Theoretical Claims: I did not check the proofs, but the theorem claims seem reasonable Experimental Designs Or Analyses: The experimental design seems reasonable; the estimation bias and ablation experiments test specific claims the paper makes about why the proposed method works. Supplementary Material: I reviewed the hyperparameter sensitivity and additional baselines experiments, as well as the algorithm box and the detailed environment descriptions Relation To Broader Scientific Literature: This work seems significant in that reducing the frequency of constraint violations during constrained RL training is valuable for enabling CRL to be used for many tasks which require training where constraint violations have a real cost (for example, training a robot policy on a real robot). To my knowledge, prior work on CRL hasn't attempted to address during-training violations and has mostly focused on the state of the policy after training. That this work also improves the training process in general and finds a better solution is a worthwhile contribution as well. 
Essential References Not Discussed: Conservative Q Learning and similar methods have been used for offline RL with success, while the goal and method are different the general concept of conservative value estimation to ensure good behavior is shared, it might be worth mentioning in the discussion of related work on overestimation in RL. Other Strengths And Weaknesses: Overall, I liked this paper. The problem being addressed is significant and to my knowledge novel. The proposed method makes intuitive sense, and is theoretically justified, with sufficient experimental validation on standard benchmarks to demonstrate efficacy. If there's one weakness to highlight, it's that I wonder about the use of a random network projection for the memory state comparison (I've got a question below along those lines), but overall I think this is a solid paper that is ready for acceptance. Other Comments Or Suggestions: The paper title at the top of each page is not correct Questions For Authors: When using a random network to project into latent space, my understanding is that distance in random latent space is mostly a "these two states are or are not similar" thing, rather than having use as a scalar quantity. As MICE uses a k-NN distance in this space, it makes me wonder if the numerical value matters or if it's just a matter of being moderately distant from previously sampled datapoints. Could the intrinsic rewards be quantized, and would that affect performance? I don't think this is critical to answer but the use of a random latent space seems at odds with the idea of specific distances mattering. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's positive and insightful comments. The following are detailed responses to the points raised by Reviewer kehG. >When using a random network to project into latent space, my understanding is that distance in random latent space is mostly a "these two states are or are not similar" thing, rather than having use as a scalar quantity. As MICE uses a k-NN distance in this space, it makes me wonder if the numerical value matters or if it's just a matter of being moderately distant from previously sampled datapoints. Could the intrinsic rewards be quantized, and would that affect performance? I don't think this is critical to answer but the use of a random latent space seems at odds with the idea of specific distances mattering. **Response:** We appreciate the reviewer’s insightful comments regarding the use of random projection in state comparisons. In our experiments, random projection is implemented using a Gaussian random matrix of shape $(n, m)$, which projects states from the original dimension $n$ to a lower dimension $m$. According to the Johnson-Lindenstrauss lemma [1], random projection approximately preserves Euclidean distances in the original space, which is proven to be valid in similar KNN-based methods in prior works [2][3]. Additionally, we fully agree with the reviewer that relative similarity between states is more important than absolute scalar distance. In our implementation, both extrinsic and intrinsic costs are normalized to ensure training stability. This normalization alters absolute distance values but preserves similarities between states, which is key to maintaining meaningful guidance for intrinsic cost computation and policy learning. To further assess the impact of random projection, we conducted an additional ablation study on it. 
The results in Figure 1 (<https://anonymous.4open.science/r/7532-6C07/experiments3.pdf>) demonstrate that using random projection does not degrade policy performance or increase constraint violations, further supporting the idea that state similarity is likely more critical than absolute numerical values. We appreciate the reviewer’s constructive questions and hope these additional experiments and clarifications address the concern. >The paper title at the top of each page is not correct. **Response:** We sincerely appreciate the reviewer’s careful review. We will correct this in the revised manuscript. [1] Johnson, W. B., Lindenstrauss, J., et al. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26(189-206):1, 1984. [2] Hu, H., Ye, J., Zhu, G., Ren, Z., and Zhang, C. Generalizable episodic memory for deep reinforcement learning. In International Conference on Machine Learning, pp. 4380–4390. PMLR, 2021. [3] Zhu, G., Lin, Z., Yang, G., and Zhang, C. Episodic reinforcement learning with associative memory. In International Conference on Learning Representations.
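As a side note on the Johnson-Lindenstrauss argument above, the effect is easy to check numerically. The sketch below is illustrative only (the dimensions n=60, m=8 and all variable names are assumptions, not the authors' code): it projects random states with a Gaussian matrix whose entries have variance 1/m, so squared norms are preserved in expectation and pairwise distances remain positively correlated after projection.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 60, 8  # hypothetical original and embedding dimensions

# Gaussian random projection: entries ~ N(0, 1/m), so E[||x @ A||^2] = ||x||^2
# and, by the Johnson-Lindenstrauss lemma, pairwise Euclidean distances are
# approximately preserved with high probability.
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(n, m))

X = rng.normal(size=(100, n))   # a batch of "states"
Z = X @ A                       # their low-dimensional embeddings

def pairwise_dists(P):
    """All pairwise Euclidean distances between rows of P."""
    diff = P[:, None, :] - P[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

d_orig = pairwise_dists(X)
d_proj = pairwise_dists(Z)
```

With m this small the per-pair distortion is non-trivial, which is consistent with the reviewer's intuition that only relative similarity, rather than exact distance values, should be relied upon.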
Summary: The paper proposes MICE, which introduces the concept of intrinsic cost to combat the issue of underestimation bias in the cost-value function present in many safe RL algorithms. The paper discusses its flashbulb memory design, which attaches additional intrinsic cost to previously visited risk regions, and shows that this reduces underestimation bias in safe RL. Theoretically, the paper provides a bound on constraint violation and a convergence guarantee. ----- Score raised to 4: Accept after rebuttal ----- Claims And Evidence: Most of the claims in the paper are well supported by theory and experiment. The only doubt I have is that: 1. To compute the similarity between a newly encountered state and the states in the memory, the paper proposed a kernel function which is approximated using k-nearest neighbors. KNN, as a non-parametric model, is known to suffer from the curse of dimensionality. This might cause difficulty for MICE in handling very high-dimensional state representations. The embedding function $f()$ would be key in this regard. The paper might benefit from investigating how this embedding function interacts with the choice of KNN. Methods And Evaluation Criteria: The baselines chosen are well suited for this problem. I'd also highlight that MICE is extended from CPO; thus the baseline comparison with CPO (in Appendix C.3.1) should be moved to the main paper. I do note that CPO and MICE performance are very close, and MICE is relatively safer in some domains (Hopper, PointGoal1). Theoretical Claims: The theoretical claims are well supported; I do have to highlight that I did not do a step-by-step check on the detailed proofs in the Appendix. Experimental Designs Or Analyses: 1. In the 8 chosen tasks, MICE's performance seems rather close to CPO, although it's relatively safer in two domains. I'd think that expanding to more tasks might verify that MICE is reliably safer than CPO. 2.
In the sensitivity analysis (Appendix C.3.2) of the discount factor, the oft-used default of 0.99 seems to produce a policy which oscillates around the constraint threshold of 25. This behavior seems largely similar to other constrained RL algorithms (e.g. PPO-Lagrangian, PID-Lagrangian). The authors might want to consider expanding to other tasks to check how the discount rate of 0.99 behaves. Supplementary Material: Code is provided in the supplement. The supplement also includes detailed maths, proofs, algorithms, additional results, and hyperparameters. I checked the additional results and commented in earlier sections of this review. Relation To Broader Scientific Literature: This is quite relevant to the safe RL literature where cost-value underestimation might be present. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: 1. Is a discount rate of 0.99 used across all baselines for the main result? Questions For Authors: Please refer to earlier sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's positive and insightful comments. The following are the detailed responses to the points raised by Reviewer rbBW. >The paper might benefit from investigating how this embedding function interact with the effect of KNN choice. **Response:** We appreciate the reviewer’s valuable comment regarding investigating the embedding function and the KNN choice. In our paper, we utilize a random projection method as the embedding function to compress states before applying KNN. Specifically, states are projected from their original dimension $n$ to a lower embedding dimension $m$ using a Gaussian random matrix of shape $(n, m)$. This significantly reduces the computational complexity of KNN from $O(N n)$ to $O(N m)$, where $N$ is the number of states in memory. Prior studies [1][2] have validated the effectiveness of random projection in similar KNN-based methods. We further conducted ablation experiments on random projection (Figure 1, <https://anonymous.4open.science/r/7532-6C07/experiments2.pdf>), which confirm that using random projection in MICE does not reduce policy performance or increase constraint violations, while substantially reducing training time. Additionally, we examined the sensitivity of the $N_k$ selection in KNN across multiple environments (Figure 2, <https://anonymous.4open.science/r/7532-6C07/experiments2.pdf>). Results show that a larger $N_k$ enhances policy safety by considering more unsafe states, but it increases computational overhead. Conversely, a smaller $N_k$ may not fully utilize memory information, potentially leading to higher constraint violations. To balance safety, performance, and efficiency, we set $N_k = 10$ in MICE for all tasks. We appreciate the reviewer’s constructive comments and hope that these analyses provide further clarity. >The baseline comparison with CPO (in Appendix C.3.1) should be moved to the main paper.
I'd think that expanding to more tasks might verify that MICE is reliably safer than CPO. **Response:** We appreciate the reviewer’s valuable comments. Following your advice, we will move the comparison between MICE and CPO from the Appendix to the main text for improved clarity. To further verify the safety advantages of MICE, we conducted additional experiments in a broader set of environments. The results in Figure 3 (<https://anonymous.4open.science/r/7532-6C07/experiments2.pdf>) demonstrate that MICE consistently achieves superior constraint satisfaction across all tasks, while CPO exhibits significant constraint violations. Additionally, to provide a more intuitive quantitative comparison, we present the difference in constraint violation rates between CPO and MICE during training. The results in Table 1 (<https://anonymous.4open.science/r/7532-6C07/experiments2.pdf>) show that CPO consistently exceeds MICE in violation rates in all environments, with an average violation rate 34.4% higher than MICE's. This further confirms the safety advantages of MICE. >The authors might want to consider expanding to other tasks to check how the discount rate of 0.99 behaves and comparing with PPO-Lagrangian and PID-Lagrangian. **Response:** We appreciate the reviewer’s valuable comment regarding the behavior of a discount factor of 0.99 on more tasks and its comparison with PPO-Lagrangian and PID-Lagrangian. To address this, we evaluated MICE with a discount factor of 0.99 on additional tasks and compared its performance against PPO-Lagrangian and PID-Lagrangian. As shown in Figure 4 (<https://anonymous.4open.science/r/7532-6C07/experiments2.pdf>), MICE consistently achieves strong constraint satisfaction while maintaining superior policy performance. In contrast, PPO-Lagrangian and PID-Lagrangian exhibit significant oscillations in the constraint during training. In our experiments, we used consistent discount factors across all methods.
[1] Hu, H., Ye, J., Zhu, G., Ren, Z., and Zhang, C. Generalizable episodic memory for deep reinforcement learning. In International Conference on Machine Learning, pp. 4380–4390. PMLR, 2021. [2] Zhu, G., Lin, Z., Yang, G., and Zhang, C. Episodic reinforcement learning with associative memory. In International Conference on Learning Representations. --- Rebuttal Comment 1.1: Comment: I thank the authors for conducting additional experiments to clarify my doubts. I've raised my score as these points have been sufficiently addressed. Good luck! --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your time and effort in reviewing our response and raising the score.
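To make the k-NN discussion in the thread above concrete, here is a minimal, hypothetical sketch of a pseudo-count style intrinsic cost computed over a memory of (projected) unsafe-state embeddings. The kernel form, `n_k = 10`, `eps`, and all numbers are illustrative assumptions, not MICE's actual implementation; the sketch only shows the shape of the idea: states close to many stored unsafe states receive a large intrinsic cost.

```python
import numpy as np

rng = np.random.default_rng(1)

# Flashbulb memory: low-dimensional embeddings of previously observed unsafe states.
memory = rng.normal(size=(500, 8))

def intrinsic_cost(z, n_k=10, eps=1e-3):
    """Hypothetical pseudo-count style intrinsic cost from k-NN distances.

    A query embedding z close to many stored unsafe states gets a large
    soft visit count and hence a large intrinsic cost; far-away states get
    a cost near zero. (The exact kernel in MICE may differ.)
    """
    d = np.sqrt(((memory - z) ** 2).sum(axis=1))   # distances to all memory states
    knn = np.sort(d)[:n_k]                          # the n_k nearest unsafe states
    kernel = eps / (knn + eps)                      # inverse-distance kernel in (0, 1]
    pseudo_count = kernel.sum()                     # soft count of nearby unsafe visits
    return pseudo_count / n_k                       # normalised to [0, 1]
```

A query sitting exactly on a stored unsafe state yields a much larger cost than a query far from every stored state, which is the only property this sketch is meant to demonstrate.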
Summary: The paper tackles an important issue in CRL: the underestimation bias in the cost value function, arising from function approximation error, which often results in unsafe exploration and frequent constraint violations. The paper proposes the MICE algorithm to address this issue. It uses a flashbulb memory mechanism to store unsafe states and then computes an intrinsic cost based on the pseudo-count of state visits to high-risk regions. Claims And Evidence: The paper provides convergence proofs, worst-case constraint violation bounds, and other theoretical results that support its claims regarding bias correction and safe exploration. Methods And Evaluation Criteria: The proposed methods are well-aligned with the underestimation challenge inherent in CRL. Theoretical Claims: I reviewed the proofs. Experimental Designs Or Analyses: The claim that the intrinsic cost mechanism alleviates underestimation without overly conservative behavior is supported by both theoretical analysis (via the adaptive balancing factor) and experimental results. However, the evidence might be less convincing if one considers potential sensitivity to hyperparameters or the handling of outlier states not captured by the flashbulb memory. The use of KNN with random projection is theoretically motivated to reduce computation. However, the experiments do not extensively report on implementation details, computational overhead, or scalability issues, which might be relevant for real-world applications or more complex simulations. Supplementary Material: All parts. Relation To Broader Scientific Literature: The paper integrates and extends ideas from CRL, intrinsic motivation, and memory-based exploration. By combining these concepts, it provides a theoretically grounded and empirically validated framework. Essential References Not Discussed: No. Other Strengths And Weaknesses: 1.
The idea of using a flashbulb memory to derive an intrinsic cost is innovative and draws an interesting parallel with human risk perception. 2. The paper provides solid theoretical underpinnings, including convergence analysis and constraint violation bounds. However, the assumptions (e.g., finite MDP, extensive sampling) might restrict practical applicability in more complex or continuous environments. Other Comments Or Suggestions: 1. Typically, for the KNN part, the choice of N_k is crucial. If N_k is too small, the count might be noisy; if it’s too large, it may smooth out important local variations. This parameter may require careful tuning depending on the environment. I suggest the authors provide a hyperparameter sensitivity experiment on it. 2. Please provide the detailed architectural design, hyperparameter settings, or the exact implementation specifics of the random projection layer. Questions For Authors: I have a concern regarding the flashbulb memory mechanism described in the paper. The flashbulb memory stores only states where the extrinsic cost is greater than zero. This raises the issue that samples truly suffering from underestimation—possibly due to function approximation errors—may not be captured if they do not exhibit extrinsic cost > 0 or if they are outliers. Such outlier states are potentially more prone to underestimation, yet they may not fall within the types of states stored in the memory and therefore might not be adequately corrected by the intrinsic cost mechanism. Has the paper addressed this issue, and if so, what strategies are proposed to handle these cases? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's positive and insightful comments. The following are the detailed responses to the points raised by Reviewer 9DRS. >The assumptions (e.g., finite MDP, extensive sampling) might restrict practical applicability in more complex or continuous environments. **Response:** We thank the reviewer for this valuable comment. To address this concern, we conducted additional experiments comparing MICE with CPO in the SafetyCarButton1-v0 and SafetyPointButton1-v0 environments. These tasks are more complex, requiring agents to navigate to a target button and correctly press it while avoiding Gremlins and Hazards. Results presented in Figure 1 (<https://anonymous.4open.science/r/7532-6C07/experiments1.pdf>) show that MICE achieves superior constraint satisfaction while maintaining policy performance comparable to CPO. Furthermore, we acknowledge potential limitations where direct environmental sampling is infeasible. To address this, offline data can be leveraged to construct the memory, thus enhancing MICE’s applicability in these special cases. >I suggest the authors provide a hyperparameter sensitivity experiment on $N_k$ in KNN. **Response:** We appreciate the reviewer’s valuable suggestion. To address this, we conducted hyperparameter sensitivity experiments for $N_k$ across multiple environments and analyzed the impact, as shown in Figure 2 (<https://anonymous.4open.science/r/7532-6C07/experiments1.pdf>). The results show that increasing $N_k$ enhances the safety of the policy, as it considers more unsafe states in memory, but also raises computational overhead. Conversely, a smaller $N_k$ may lead to insufficient leveraging of unsafe state information, potentially leading to higher constraint violations. In this paper, we selected $N_k = 10$ uniformly across all environments to balance safety, performance, and computational efficiency. >Please provide the detailed implementation specifics of the random projection layer.
**Response:** We appreciate the reviewer's valuable comment regarding the details of random projection. Our random projection layer is implemented using a Gaussian random matrix with shape $(n, m)$, projecting states from the original dimension $n$ to a lower embedding dimension $m$. According to the Johnson-Lindenstrauss lemma [1], this approach approximately preserves relative Euclidean distances in the original space, which has been shown to be valid by prior KNN-based methods [2][3]. In our work, this dimensionality reduction effectively decreases the computational complexity of KNN from $O(N n)$ to $O(N m)$, where $N$ is the number of states in memory. Specifically, in SafetyPointGoal1, the state dimension is reduced from 60 to 8. Furthermore, we conducted an ablation experiment on random projection. As demonstrated in Figure 3 (<https://anonymous.4open.science/r/7532-6C07/experiments1.pdf>), utilizing random projection in MICE does not degrade policy performance or increase constraint violations, while effectively reducing training time. >The flashbulb memory only stores states with extrinsic cost greater than zero. However, states with outlier extrinsic costs may not be stored in the memory and therefore might not be adequately corrected by the intrinsic cost mechanism. How does the paper handle these cases? **Response:** We appreciate the reviewer’s insightful question. In our tasks, the extrinsic cost is derived by the environment based on the agent's and obstacles' coordinates, providing unbiased real data without outliers. Underestimation primarily occurs due to constraint minimization during optimization, which disrupts the zero-mean property of noise, especially in regions with higher extrinsic costs. Therefore, the current memory mechanism adequately captures these underestimated states in our tasks. However, we fully agree with the reviewer that in other special tasks, where extrinsic costs may be biased, outlier states could arise.
To handle such cases, alternative criteria for adding new states into memory could be applied. One possible strategy is using the expected subsequent costs as the criterion for adding states into memory: a state would be stored if its expected future cost exceeds a threshold. This approach effectively mitigates the impact of individual outliers by considering multiple future states collectively. [1] Johnson, W. B., Lindenstrauss, J., et al. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26(189-206):1, 1984. [2] Hu, H., Ye, J., Zhu, G., Ren, Z., and Zhang, C. Generalizable episodic memory for deep reinforcement learning. In International Conference on Machine Learning, pp. 4380–4390. PMLR, 2021. [3] Zhu, G., Lin, Z., Yang, G., and Zhang, C. Episodic reinforcement learning with associative memory. In International Conference on Learning Representations. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I appreciate your discussion regarding the corner case, and I look forward to seeing this discussion included in your revised version. I have raised my score accordingly. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate you taking the time to review our response and raising the score. We will include these additional experiments and discussions in the revised version of the paper.
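The alternative admission criterion described in the response above (store a state when its discounted expected future cost exceeds a threshold, rather than whenever its extrinsic cost is positive) can be sketched as a backward pass over one episode's extrinsic costs. The function name and the values `gamma=0.9`, `threshold=1.9` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def states_to_store(costs, gamma=0.9, threshold=1.9):
    """Illustrative memory-admission rule: keep state s_t if its discounted
    future cost C_t = c_t + gamma * C_{t+1} exceeds a threshold.

    `costs[t]` is the extrinsic cost observed at step t of one episode;
    returns a boolean flag per step.
    """
    T = len(costs)
    future = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):   # backward pass over the episode
        running = costs[t] + gamma * running
        future[t] = running
    return future > threshold

flags = states_to_store([0.0, 0.0, 1.0, 0.0, 2.0])
```

Note that under this rule the early zero-cost states are admitted because they lead into a risky region, while the simple "extrinsic cost > 0" rule would miss them; this is the outlier-robustness effect the response describes.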
Latent Score-Based Reweighting for Robust Classification on Imbalanced Tabular Data
Accept (poster)
Summary: This paper introduces a latent score-based reweighting framework for improving classification robustness on imbalanced tabular datasets. The approach leverages score-based generative models (diffusion models) to estimate the joint distribution P(X,Y), identifying underrepresented data regions and upweighting samples accordingly. Experiments on six real-world tabular datasets show that the method achieves improved worst-case accuracy under distribution shifts while maintaining competitive mean accuracy compared to baseline robustness-enhancing methods. ## update after rebuttal I read the authors' rebuttal carefully and have updated my evaluation accordingly. Claims And Evidence: Yes. Methods And Evaluation Criteria: The idea of balancing the data distribution density in feature space makes sense for more robust classification. Conceptually, it should help models to focus more on the underrepresented groups with lower density in specific feature subspaces. Theoretical Claims: NA Experimental Designs Or Analyses: The experimental designs generally make sense, but I wonder how this method works with tree-based models such as XGBoost and LightGBM, which are generally known to outperform neural networks on tabular data. Supplementary Material: I checked the code base and it seems sound. Relation To Broader Scientific Literature: It may serve as a general technique for more robust learning on tabular datasets. Essential References Not Discussed: The references seem appropriate to the best of my knowledge. Other Strengths And Weaknesses: Strengths: 1. The paper is well-written and easy to follow. 2. The proposed framework does not require pre-defined group labels; this can be an advantage in real-world scenarios where the group labels may not be available due to privacy concerns. 3. I like the visualizations that help illustrate the core ideas. Weaknesses: Please see my questions. Other Comments Or Suggestions: NA Questions For Authors: 1.
It’s not intuitive to me how the method can benefit the learning of the worst group. In my opinion, since the method is upweighting samples in low-density areas, the authors seem to be assuming that the worst group is an under-represented group with fewer samples, distributed in a low-density feature space? I would appreciate clarification on this. 2. It’s unclear how the selected datasets contain distribution shifts; please add a more detailed description in the dataset section or appendix. 3. Also, why do shifts matter for robust classification? The proposed method is simply trying to balance the distribution in feature space via reweighting and helps learning in low-density areas. It should be beneficial even without train-test shift. 4. For tabular data, it is known that tree-based ensembles (like XGBoost or LightGBM) are typically better than neural networks and are more commonly used in practical industry scenarios. I also wonder how well this technique works with those classifiers. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your detailed review and valuable feedback. Below is our concise response: **[A1]** The explanation is as follows: - Your interpretation of our method is correct. We first learn a score function to model the complex data distribution. A subsequent score-based reweighting ensures balanced representation, improving learning for under-represented regions. - Regarding "worst group accuracy", this metric assesses performance based on groups defined by a non-causal sensitive attribute. **The group with the lowest accuracy is termed the "worst group", reflecting model robustness when the chosen attribute does not causally determine the target variable**. For example, when predicting income levels, city of residence could serve as the grouping attribute. It does not directly determine an individual's income, but different cities may correlate with varying income distributions. Therefore, such correlations may induce bias and imbalance, adversely affecting accuracy for certain groups. Specifically, data imbalance significantly impacts model robustness, as samples from low-density regions are naturally under-represented and form the "worst group". Achieving robust model performance requires maintaining high accuracy across all groups. Our method addresses this issue by reweighting samples based on our score proxy, ensuring balanced representation independent of inherent data distribution biases. In summary, worst group accuracy serves as a measure of model robustness but can be negatively affected by data imbalance. Our proposed method mitigates this issue by ensuring balanced learning across all samples, resulting in enhanced worst group accuracy and robust model performance. **[A2]** In our paper, "distribution shift" refers to the differences between imbalanced training data and test data—the latter implicitly treated as balanced under the robustness measure. Please refer to **A3** for detailed explanations.
Regarding how our dataset presents train-test shifts (i.e., imbalance in training data), please see Table 10, which shows the distribution of samples across different groups. **[A3]** This is a valuable insight. Indeed, our method remains beneficial even in scenarios without explicit train-test shifts. The core intuition behind our approach involves modeling data distribution and balancing the original data via our score-based proxy. Train-test shifts do not directly affect either our method or evaluation criteria. In our paper, we emphasize "distribution shifts" primarily to **highlight the distinction between the imbalanced training distribution and the test distribution—the latter being implicitly approximated as balanced under our robustness measure (which computes worst-group accuracies across all sensitive attributes)**. Specifically, the "shift" we discuss stresses the imbalance present in the training data with respect to either covariates $x$ or target class $y$. To measure model robustness, the ideal evaluation criterion is the sum of worst-group accuracies across **all potential non-causal attributes**. Under this criterion, an ideal model should perform well in **every region of the data space**, implicitly corresponding to a balanced test distribution. Therefore, the primary cause of degraded model performance under this testing condition is the imbalance present in training data, creating an implicit train-test shift highlighted in our main text. To mitigate this imbalance, we model the training distribution with our score-based method, thereby effectively enhancing robustness. We will further clarify this point in our final manuscript. **[A4]** We deeply appreciate your insightful question. Indeed, our score-based method is compatible with tree-based ensembles. To demonstrate this, we conducted an additional experiment employing three tree-based models to assess whether our sample weights improve models' robustness beyond neural networks. 
The results are presented [here](https://anonymous.4open.science/r/ICML-rebuttal-9750/tree_as_base_model.md). For a comprehensive comparison, we also evaluated JTT, a boundary-based method discussed in Section 1, using these tree-based models. Results indicate that our method consistently enhances worst-group accuracy across base models. In contrast, JTT performs well only on certain models. This finding supports our earlier observation about boundary-based methods—they rely solely on decision boundaries, which are disconnected from training distributions, leading to unstable robustness improvements. Conversely, our strategy effectively captures distributional imbalance, generating reliable weights to enhance worst-group accuracy. These results further confirm that our method works well with tree-based models. --- We would like to thank you once again for raising these concerns. We believe that these discussions significantly enhance the rigor of our paper. Please feel free to reach out with any questions—we are more than happy to engage further. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply. Most of my concerns are resolved, and I have adjusted my score accordingly. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your valuable suggestions and supportive feedback. Your insightful comments have greatly contributed to improving the quality of our manuscript. We will carefully incorporate the new experiments and related discussions into the final version. We are also pleased to confirm that we have already addressed your previous concerns. Should you have any additional suggestions, we would be more than happy to engage in further discussions and make any necessary refinements to the manuscript. Best Regards, Authors
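As a rough illustration of the reweighting idea discussed in this thread, the sketch below upweights samples whose (latent) log-density is low, so under-represented regions contribute more to training. Taking log-densities as given sidesteps the paper's score-based density proxy entirely; the function name, `tau`, and the toy numbers are assumptions for illustration, not the authors' method.

```python
import numpy as np

def reweight(log_density, tau=1.0):
    """Hypothetical inverse-density reweighting.

    Samples in low-density regions of the training distribution receive
    larger weights, so under-represented regions contribute as much to the
    loss as dense ones. `tau` tempers how aggressive the balancing is;
    the weights are normalised to sum to one.
    """
    w = np.exp(-tau * (log_density - log_density.mean()))  # centre for stability
    return w / w.sum()

log_p = np.array([-1.0, -1.0, -5.0, -0.5])  # third sample is under-represented
w = reweight(log_p)
```

Here the third sample, lying in the lowest-density region, receives by far the largest normalised weight, which is the qualitative behaviour a density-balancing scheme should exhibit.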
Summary: This paper introduces a score-based approach to address data distribution imbalance. First, a variational autoencoder (VAE) is used to transform raw data into a latent representation space, where a diffusion model is applied to learn the joint data distribution. The proposed method estimates relative density using score similarities between noisy data points, mitigating the extreme weight imbalances caused by traditional log-likelihood calculations. A reweighting scheme is then introduced, where each sample is assigned a weight based on its relative density difference, ensuring a more balanced training distribution. Finally, an unbiased classification model is trained using these reweighted samples, and during inference, only the trained encoder and classifier are used for prediction. The proposed method shows its effectiveness on several tabular datasets. Claims And Evidence: The mathematical formulations and logical progression appear well-structured with no major issues. Methods And Evaluation Criteria: The proposed method appears to be appropriately designed for achieving unbiased learning. However, benchmark tabular datasets are highly diverse, with varying characteristics, and evaluations typically involve a large number of datasets. Since the proposed method has not been evaluated on a large number of tabular datasets, additional validation is needed. Theoretical Claims: I did not identify any major issues. Experimental Designs Or Analyses: When measuring performance, only three runs were conducted, and it is unclear why the standard deviation was reported separately in a different table. Additionally, no information is provided on dataset statistics, such as the number of categorical features or the number of samples. Furthermore, there is no discussion of time complexity or scalability. Supplementary Material: Yes, it contains the source code.
Relation To Broader Scientific Literature: If the method is shown to be effective for classification on general tabular datasets, it would have significant potential for broader applications and extensions. Essential References Not Discussed: The paper has omitted literature on tabular classification [1,2,3]. [1] Gorishniy, Yury, et al. "TabR: Tabular Deep Learning Meets Nearest Neighbors." The Twelfth International Conference on Learning Representations.\ [2] Holzmüller, David, Léo Grinsztajn, and Ingo Steinwart. "Better by default: Strong pre-tuned mlps and boosted trees on tabular data." Advances in Neural Information Processing Systems 37 (2024): 26577-26658.\ [3] Prokhorenkova, Liudmila, et al. "CatBoost: unbiased boosting with categorical features." Advances in neural information processing systems 31 (2018). Other Strengths And Weaknesses: **Strengths** S1. This paper is well-written.\ S2. The motivation is clear. **Weaknesses** See above. Other Comments Or Suggestions: No. Questions For Authors: * Does using an autoencoder impose restrictions on the choice of network architecture? * Could you provide experimental results on more datasets? * In tabular data classification, there are many datasets where tree-based models outperform neural networks. Do the authors think it is acceptable not to compare the proposed method with tree-based models? * Do the used datasets not contain categorical features? * In tabular datasets, where each dataset has vastly different characteristics, I believe that, as shown in [2], comparing performance across various benchmark datasets [4, 5] is necessary to validate the effectiveness of the proposed method. What do you think about this opinion? [2] Better by Default: Strong Pre-Tuned MLPs and Boosted Trees on Tabular Data\ [4] Asuncion, Arthur, and David Newman. "UCI machine learning repository." Nov. 2007,\ [5] Gijsbers, Pieter, et al. "Amlb: an automl benchmark." Journal of Machine Learning Research 25.101 (2024): 1-65.
Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We sincerely appreciate your detailed reviews. Below are our responses to your concerns (**R**) and questions (**A**): **[R1]** Due to readability considerations, we report standard deviations in Appendix A.3. For detailed dataset statistics, please refer to Appendix A.5, which lists the attributes selected for evaluation, and Table 10, which provides sample statistics. In addition, Table 7 clarifies how our hyperparameters affect runtime. For all these suggestions, the revisions are currently underway and we believe our next version will meet your requirements. **[R2]** We appreciate your suggestions on the undiscussed studies and will incorporate them in our manuscript. **[A1]** The primary role of the autoencoder in our method is to provide latent representations for subsequent score-based modeling. **It serves solely as a module for semantic compression and its architecture design does not directly affect subsequent score-based modeling**. In our implementation, we follow the practice from TabSyn [1]. Specifically, a tokenizer first converts both numerical and categorical features into an embedding space of dimension $d \times (N_{\text{num}} + \sum_{i=1}^{N_{\text{cat}}} C_i)$, where $N_{\text{num}}, N_{\text{cat}}, C_i$ denote the counts of numerical features, categorical features, and categories within the $i$-th categorical feature, respectively. Then a transformer serves as encoder and decoder, and its output is detokenized to reconstruct the original features. Only the tokenizer and encoder are used for further score-based modeling. **[A2]** Sure. We have conducted additional experiments on three datasets from [3] and [4], as you suggested. The results are listed [here](https://anonymous.4open.science/r/ICML-rebuttal-9750/new_dataset.md). Our method consistently delivers the best worst-group accuracy, clearly enhancing robustness.
While mean accuracy sometimes reflects a trade-off, our method often matches or exceeds baseline performance, confirming that our robustness is improved through effective score-based balancing rather than sacrificing overall accuracy. For more details about datasets, please check **A5**.

**[A3]** We would like to first clarify that our main contribution is a novel score-based method designed to enhance robustness, rather than providing a new model architecture. Therefore, our approach (and those boundary-based baselines) is complementary to model architectures (e.g., neural networks, tree-based models), making direct comparisons inappropriate. However, our reweighting strategy can indeed integrate with tree-based models. To address your concern, we conducted extra experiments using 3 tree-based models on 4 datasets. Details are provided in our **A4** to **Reviewer eedu**. These results confirm that our score-based weights could work well with tree models to enhance robustness.

**[A4]** Nearly all datasets used in our experiments contain categorical features. How we preprocess these features into latent embeddings is detailed in **A1**.

**[A5]** Validating performance across diverse datasets is crucial, and our experiments have already been designed with this principle in mind. Given the primary objective of our paper (to enhance robustness under various data imbalances), we selected datasets reflecting different imbalance scenarios. Specifically, datasets in columns 2–9 of Table 1 primarily exhibit class-label $y$ imbalance. To evaluate robustness under covariate $x$ imbalance, we adopted the ACS dataset with state-level validation (columns 2–7 in Table 2). Thus, we believe the selected datasets comprehensively cover key data imbalance challenges. Regarding your references, we kindly clarify that [3] is the source repository for several datasets we used (e.g., Adult, Bank). :) Given time and resource constraints, evaluating all datasets from [3, 4] is infeasible.
However, our original experiment design has tested our method on datasets used in prior studies [1, 2]. Moreover, we also added extra experiments on 3 new datasets from [3, 4], as reported in **A2**. Thus, we believe our current validation sufficiently reveals the effectiveness of our method across diverse imbalance scenarios.

---

We deeply appreciate your suggestions and will incorporate these clarifications into our revised paper. Please feel free to reach out with any new concerns about our method; we are more than happy to engage further.

[1] Zhang, Hengrui, et al. "Mixed-Type Tabular Data Synthesis with Score-based Diffusion in Latent Space." *The Twelfth International Conference on Learning Representations*.
[2] Liu, Jiashuo, et al. "On the need for a language describing distribution shifts: Illustrations on tabular datasets." *Advances in Neural Information Processing Systems* 36 (2023).
[3] Asuncion, Arthur, and David Newman. "UCI machine learning repository." Nov. 2007.
[4] Gijsbers, Pieter, et al. "AMLB: an AutoML benchmark." *Journal of Machine Learning Research* 25.101 (2024).

---

Rebuttal Comment 1.1: Comment: Thank you for your response. However, my concern regarding the choice of evaluated datasets has grown. Tabular data is highly diverse, and no single method consistently outperforms others across all datasets. Accordingly, recent research on tabular data [1, 2] tends to conduct broad evaluations across multiple datasets within benchmarks (e.g., the AutoML benchmark). For example, RealMLP [1] is evaluated on 90 tabular datasets. In contrast, the proposed method reports performance improvements on all evaluated datasets, which raises a strong suspicion that the authors may have cherry-picked only the favorable results. Furthermore, despite the fact that tree-based methods have demonstrated strong performance on tabular datasets, the authors only include experiments involving tree-based methods on four datasets.
While I agree that the proposed method shows some promise, I believe that its effectiveness should be supported by experiments conducted on a large number of datasets. Therefore, I lowered my score to 1.

[1] Holzmüller, David, Léo Grinsztajn, and Ingo Steinwart. "Better by default: Strong pre-tuned MLPs and boosted trees on tabular data." Advances in Neural Information Processing Systems 37 (2024): 26577-26658.
[2] Gorishniy, Yury, et al. "TabR: Tabular Deep Learning Meets Nearest Neighbors." The Twelfth International Conference on Learning Representations.
Summary: The paper proposes a latent score-based reweighting framework to improve robustness in machine learning models on tabular data, addressing biases from imbalanced distributions. Unlike existing methods that require prior group labels or focus only on P(Y|X), the approach leverages score-based (diffusion) models to estimate the joint distribution P(X, Y). By using directional similarity of score vectors as a proxy for density, it identifies and upweights underrepresented data regions without relying on unstable raw density estimates. Experiments on tabular datasets under distribution shifts demonstrate improved performance and fairness, making the method broadly applicable in scenarios with unknown biases.

## update after rebuttal

I believe the authors have adequately addressed all of my concerns. I have also reviewed the points raised by the other reviewers and continue to find this work solid, novel, and impactful. There are no remaining concerns, minor or major, from my side, so I have adjusted my score accordingly.

Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. The paper provides a well-structured empirical evaluation across six diverse datasets, ensuring a comprehensive validation of its method under different types of distribution shifts. The evaluation metrics focus on worst-group accuracy, aligning with the stated goal of improving robustness. The experimental design follows best practices, including multiple runs with different random seeds and comparisons against strong baseline methods such as ERM, DRO variants, JTT, EIIL, and FAM. The results consistently demonstrate the superiority of the proposed method, with at least a 3% improvement in worst-case accuracy across datasets. The inclusion of standard deviations, detailed dataset preprocessing, and hyperparameter choices further strengthens the credibility of the findings.
Additionally, the generalization experiments on the ACS Income dataset validate the method’s robustness across different environments, reinforcing its ability to handle distribution shifts effectively. Overall, the empirical evidence strongly supports the paper’s key claims.

Methods And Evaluation Criteria: The proposed method is well-suited for unbiased learning, effectively handling covariate and label shifts through score-based diffusion models on latent representations. The probability density proxy via similarity difference offers a practical alternative to exact log-likelihood computation, mitigating extreme weight imbalances. Sample reweighting ensures balanced training without requiring prior knowledge.

Theoretical Claims: Yes, the problem formulation and overall theoretical claims are correct.

Experimental Designs Or Analyses: The experimental design appears methodologically sound in leveraging score-based diffusion models for unbiased learning. The use of latent representations via VAEs ensures computational efficiency while capturing meaningful semantics, reducing reliance on raw feature biases. The probability density proxy (SimDiff) is a novel way to approximate sample importance without explicit likelihood estimation, which helps prevent extreme weighting issues. However, it is unclear whether the choice of noise schedule (σ(t)) and temperature parameter (τ) was systematically validated. Sensitivity analysis on these hyperparameters would strengthen the robustness of the method. Additionally, standard benchmarks with known biases (e.g., Waterbirds, CelebA for fairness, UCI datasets for covariate shift) were not included.

Supplementary Material: Yes, all of it.

Relation To Broader Scientific Literature: The key contributions of this paper build upon and extend several important areas in the broader scientific literature, particularly in score-based generative modeling, bias mitigation, and distributional robustness.
The foundation of the work relies on score-based diffusion models (Song & Ermon, 2019), which have demonstrated state-of-the-art performance in generative modeling by estimating probability densities through iterative noise perturbation and reconstruction. By integrating latent diffusion techniques (Rombach et al., 2022), the proposed method efficiently models data distributions in a lower-dimensional space, aligning with prior research that has shown latent representations to be more effective for capturing meaningful semantic structures.

In terms of bias mitigation and robustness, this work shares objectives with reweighting-based debiasing approaches (e.g., importance sampling in fairness-aware learning) but introduces a novel probability density proxy (SimDiff) to achieve unbiased sample weighting. Unlike traditional methods that rely on explicit likelihood estimation, which can be sensitive to extreme values, the similarity-based density estimation method provides a more stable alternative.

The paper also builds on findings in fair representation learning (Higgins et al., 2017; Karras et al., 2022), where latent space transformations have been used to decorrelate sensitive attributes from predictions. However, its class-conditional score modeling introduces a new perspective by explicitly accounting for label shifts alongside covariate shifts. This distinction enhances its applicability to fairness-sensitive domains, aligning with recent work on distributionally robust optimization (DRO) and adaptive sample weighting for fairness (Xu et al., 2023).

Essential References Not Discussed: Maybe some more recent works on debiasing like SELF (LaBonte et al.) or EVaLS (Ghaznavi et al.) would be nice to mention.

Other Strengths And Weaknesses: The paper demonstrates strong originality by combining score-based generative modeling with unbiased learning techniques, presenting a novel way to address distributional shifts without requiring prior assumptions.
This integration of latent diffusion models with class-conditional probability estimation is a meaningful contribution, as it removes restrictive assumptions found in traditional bias mitigation methods. The approach is particularly significant for fairness-sensitive applications, as it provides a solution that adapts to both covariate and label shifts. Additionally, the visualizations and figures are a notable strength, as they effectively illustrate key concepts such as probability density estimation and reweighting. However, Figure 1’s captions are unclear and difficult to follow, making it harder for readers to interpret the figure’s intended message. Improving clarity in figure descriptions would enhance the overall readability.

Other Comments Or Suggestions: The captions in Figure 1 are mixed and hard to follow. Revising them for clarity would improve the reader’s understanding of the figure’s significance.

Questions For Authors: See previous parts.

Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We would like to thank you for providing helpful comments and positive feedback. Below are our responses to your concerns.

> Lack of details about noise preconditioning factor $\sigma$ and hyper-parameter $\tau$

**[A1]** We appreciate this observation. Regarding the network preconditioning factor $\sigma$, we adopt the methodology proposed in EDM [1] and TabSyn [2] (see Appendix A.2 and Table 4 for details). $\tau$ controls the strength of reweighting based on $\text{SimDiff}(\cdot)$. Its sensitivity analysis is presented in Figure 4 and discussed in Section 4.7.

> Lack of discussion for essential references

**[A2]** We thank the reviewer for highlighting these previous studies. We will incorporate a detailed discussion of these works in the final manuscript.

> Revise the captions in Figure 1

**[A3]** We appreciate this suggestion. Figure 1 displays the results from our synthetic experiment, illustrating how boundary-based methods erroneously upweight samples based on the model's prediction boundary rather than the implicit distribution. In contrast, our score-based proxy effectively models the distribution without requiring prior knowledge, thereby yielding more reliable sample weights for robust classification. We will update the caption in the subsequent version of the manuscript.

---

We would like to express our sincere thanks once again for your efforts in reviewing our paper. Should you have any further comments or queries, we would greatly appreciate the opportunity to address them promptly.

[1] Karras, Tero, et al. "Elucidating the design space of diffusion-based generative models." *Advances in Neural Information Processing Systems* 35 (2022): 26565-26577.
[2] Zhang, Hengrui, et al. "Mixed-type tabular data synthesis with score-based diffusion in latent space." *arXiv preprint arXiv:2310.09656* (2023).

---

Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response.
I believe this is a novel and valuable work in the field. I encourage the authors to include the missing previous studies in their final version. I have adjusted my score accordingly. All the best.

---

Reply to Comment 1.1.1: Comment: We sincerely thank you for your valuable suggestions and supportive feedback. Your previous comments have significantly contributed to improving the quality of our manuscript, and we truly appreciate it.

In addition to the missing studies you highlighted, we also plan to include other experiments requested by the other reviewers. For further details, please refer to **A4** in the response to **Reviewer eedu**. We believe the next version of our manuscript will comprehensively address all of your concerns. :)

Should you have any additional suggestions, we are more than happy to engage in further discussions and make any necessary improvements to the paper.

Best regards, Authors
ASTPrompter: Weakly Supervised Automated Language Model Red-Teaming to Identify Low-Perplexity Toxic Prompts
Reject
Summary: This paper proposes ASTPrompter, an approach to automating the red-teaming of LLMs by generating harmful yet fluent (i.e., low-perplexity) prompts. While the underlying motivation is not new, and the vulnerability of LLMs to such attacks is well known in the community, the main contribution lies in modifying the reinforcement learning from human feedback (RLHF) framework. Specifically, the authors replace the conventional reward model (RM) with a combination of a toxicity RM and a perplexity-based criterion.

Claims And Evidence:

1. What's the main difference between ASTPrompter and [1]?

2. Intro: Pretraining Data Cleaning. The statement, "These models are trained on massive, minimally cleaned datasets primarily consisting of textual data scraped from the Internet," oversimplifies the data curation process. In reality, extensive efforts are made to clean and filter pretraining datasets, as documented in the technical reports of LLaMA 2 and LLaMA 3. These efforts include deduplication, content filtering, and rigorous quality control to mitigate biases and harmful content. The paper should acknowledge these preprocessing steps to provide a more accurate representation of modern LLM training pipelines.

3. Intro: "Empirically, these approaches result in prompts that are highly effective in triggering toxicity but are often nonsensical or unlikely to emerge during natural language model operation." I believe this is the difference between whether the user is adversarial or not.

4. Evaluation: Scientific Rigor. The description of the evaluation, "To do this, we build a dataset of rollouts from adversaries trained using our approach and then optimize a language model against them. We evaluate the model safety-tuned with this strategy and show a lower incidence of toxicity," lacks the necessary scientific rigor. The absence of quantitative results, such as toxicity reduction metrics, benchmark comparisons, or statistical significance tests, weakens the argument.
A more thorough presentation of empirical results, including numerical evidence, would strengthen the paper’s claims and provide a clearer assessment of the method’s effectiveness.

[1] SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks

Methods And Evaluation Criteria:

1. Lack of Explanation for Horizon=1 in Figure 1. The meaning of horizon=1 in Figure 1 is unclear, as there is no explanation provided in the caption. Clarifying this term would improve the reader’s understanding of the figure.

2. Ambiguity in Figure 1 (Left) as a Success Case. It is unclear why the left side of Figure 1 is considered a success. The blue text appears to be a factual statement rather than an explicitly toxic output. Further justification or a clearer example would help support this classification.

3. Measurement of Toxicity. The paper does not clearly define how toxicity is measured. Providing details on the toxicity evaluation method, such as the specific model or criteria used, would enhance clarity.

4. Relocating Background Sections. Sections 3.1.1 and 3.1.2 should be moved to the appendix, as they primarily cover background knowledge applicable to all RLHF methods. This would streamline the main text and improve readability.

5. Computation of Toxicity Score in Equation 2. The method for computing the toxicity score in Equation 2 is not explicitly stated. Is the score a scalar output from another LLM trained for toxicity detection? Clarifying this would help readers understand how the toxicity signal is integrated into the proposed method.

Theoretical Claims:

1. Using IPO sounds correct, as the toxicity model may give incorrect scores for the chosen and rejected pairs.

2. Math and code in the appendix LGTM.

Experimental Designs Or Analyses:

1. The only large SOTA LLM evaluated in the paper is Llama 3.1-8B.
More tests on various 8B and 13B LLMs are necessary to prove the effectiveness, because we know tiny models like GPT-2 are vulnerable to attack as they are not fully aligned.

2. How would you compare your method with gradient-based attacks like GCG? I'm aware that GCG is for harmfulness, but we can also optimize the model's output towards toxicity. Is your RLHF-based method stronger?

3. Is the toxicity score used in training and evaluation produced by the same model? If so, could the results be due to reward hacking?

4. What is the implicit toxicity mentioned in Sec. 5.1? I found the definition of toxicity unclear in the paper, let alone implicit toxicity.

5. For the black-box attack, can you try testing on GPT-4o or Claude? I assume this method may fail, but it would be good to learn what the response looks like.

6. In Sec. 5.2: what are the three dots in "Rewarding defender toxicity is necessary..."?

7. For the only 8B Llama model, did you use the instruct or the pretrained one? If it's pretrained, it's also not safely aligned, because the examples in the appendix read like text completion, which sounds like a task for a pretrained rather than an aligned model.

Supplementary Material:

1. In Sec. B, the authors mentioned they had H100 GPUs with 94GB RAM, so it sounds like they have sufficient hardware for 8B and 13B model experiments, but these results are missing in the current version.
Relation To Broader Scientific Literature: See above.

Essential References Not Discussed: The author missed the relation to other lines of research on safety: jailbreak attacks and harmful fine-tuning attacks.

[1] Universal and Transferable Adversarial Attacks on Aligned Language Models
[2] Jailbreaking Black Box Large Language Models in Twenty Queries
[3] Jailbreak Attacks and Defenses Against Large Language Models: A Survey
[4] Vaccine: Perturbation-aware alignment for large language model against harmful fine-tuning
[5] Representation noising effectively prevents harmful fine-tuning on LLMs
[6] Buckle Up: Robustifying LLMs at Every Customization Stage via Data Curation
[7] Fine-tuning can cripple your foundation model; preserving features may be the solution
[8] Lazy safety alignment for large language models against harmful fine-tuning
[9] Navigating the safety landscape: Measuring risks in finetuning large language models
[10] Safe LoRA: the silver lining of reducing safety risks when fine-tuning large language models
[11] No two devils alike: Unveiling distinct mechanisms of fine-tuning attacks

Other Strengths And Weaknesses: See above.

Other Comments Or Suggestions: N/A

Questions For Authors: See above.

Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal:

## Ours vs. Robey, et al., 2023 vs. GCG

Thank you for your feedback! [1] and our work have significant differences. First, we note that they are fundamentally different in their goals and methods. [1] is a defense method against attacks. We are an attack method.

Concerning attacks mentioned in [1] as well as GCG: prior work inserts random and programmatic perturbations into the prompt to search for failures, resulting in attacks that are neither high probability (which they then have to separately filter for using a perplexity filter) nor human-understandable. Prior work requires the hand-design of perturbations (e.g., insert, swap, patch, etc.), whereas ASTPrompter presents a fully differentiable optimization scheme that requires no human involvement.

[1] SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks

## Quantitative Results

The requested results are already present in Tables 1 and 2. We present quantitative attack success, attack sequence perplexity, toxicity induction metrics (showing our method increases the toxicity of a frozen LM), and benchmarks against baselines and other RL methods.

## Figure 1

The upper-left figure indicates that the attacker and defender had one chance to interact (through one continuation turn). We will modify our caption to clarify this.

The output from the frozen LM in Figure 1, left, is not a factual statement [1], in particular not with the proportional quantifier “vast majority.” Based on available data ([1]), no single group commits such a proportion of terrorist attacks. Using an identity group to motivate a false statement represents attack success. Furthermore, we note that LLMs have been shown to exhibit harmful bias in which groups they associate with violence [2].
- [1] https://www.visionofhumanity.org/maps/global-terrorism-index/#/
- [2] Large Language Models Associate Muslims with Violence (Abid, et al., 2021)

## More Architectures

Since, unlike other models at comparable scales, LLaMA models have extensive and documented pre-training safety mitigations, we strongly believe these results are representative of worst-case attack performance for models of the same size.

## Reward Hacking

To mitigate reward hacking, we observe that our attacks produce fluent and coherent outputs (see Appendices D, E, and G) and test black-box approaches. Additionally, we include zero-shot evaluations against alternative common toxicity scores below, further mitigating concerns about reward hacking.

Against LlamaGuard3 (the toxicity score is the 0-1 normalized P(“unsafe”|prompt) given by LlamaGuard), for a Llama-3.1-8B attacker trained against Detoxify, we have:

| | Defender Toxicity |
|-|-|
|Ours|0.075|
|Baseline|0.023|

## Measuring and Defining Toxicity

As stated in the paper (Section 3.1.4 and Section 4.1), we measure toxicity using Detoxify. As stated in our paper (lines 033-034, right), we use the widely accepted definition of toxicity developed by Perspective API [1] [2]. We define implicit toxicity as text using coded or indirect language to be rude, disrespectful, unreasonable, or otherwise likely to cause a user to leave a conversation [4] [5].

- [1] Measuring and Mitigating Unintended Bias in Text Classification. (Dixon, et al., 2018)
- [2] A new generation of perspective api: Efficient multilingual character-level transformers. (Lees, 2022).
- [4] Latent hatred: A benchmark for understanding implicit hate speech. (ElSherief, et. al., 2021)
- [5] Unveiling the implicit toxicity in large language models. (Wen, et. al., 2023)

## Commercial Models

In fact, our method succeeded in increasing toxicity against Claude 3.5 Sonnet (cutoff 20241022).
In particular, we include here black-box attack results against Claude 3.5 Sonnet 20241022, using our Llama-3.1-8B attacker trained against the Detoxify metric. We score toxicity in this experiment using two classifiers: Detoxify, as used in our work, and LlamaGuard3, a state-of-the-art LLM-as-judge detoxification model.

| |Baseline|Ours|
|-|-|-|
|Detoxify|0.981%|1.999%|
|LlamaGuard|0.336%|2.239%|

We achieve a 2.03 times increase in toxicity using the toxicity metric we trained against (Detoxify), and even generalize zero-shot to a 6.668 times increase on a new toxicity metric (LlamaGuard3), achieving fully black-box attack success.

## Pretraining vs. IFT

We used the pretrained model. Uniquely, the pretrained LLaMA model has extensive safety mitigations with data filtering, content filtering, and quality control. Indeed, this is not true of other pretrained models.

## Extra References

Thank you! We will add these references to our related work. In comparison to this list, our work:

- [1] Provides low-perplexity sequences and doesn’t rely on high-perplexity adversarial suffixes.
- [2] Is effective in a black-box setting without on-policy sampling.
- [3-11] Notably, we are primarily focused on attacks and not mitigations in our work.

We are unsure how reference [7] is relevant.

---

Rebuttal Comment 1.1: Comment: I agree with other reviewers that the rebuttal can help consolidate the paper into a stronger version for the next submission.
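The "0-1 normalized P(“unsafe”|prompt)" score that this thread relies on for LlamaGuard3 can be illustrated with a minimal sketch: assuming the judge model exposes logits for a binary safe/unsafe verdict, the normalized score is simply a two-way softmax over those logits. The function and variable names below are hypothetical, not from the paper or the LlamaGuard implementation.

```python
import math

def normalized_unsafe_prob(logit_safe: float, logit_unsafe: float) -> float:
    """Two-way softmax over a judge's verdict logits: P("unsafe" | prompt) in [0, 1]."""
    m = max(logit_safe, logit_unsafe)  # subtract the max for numerical stability
    e_safe = math.exp(logit_safe - m)
    e_unsafe = math.exp(logit_unsafe - m)
    return e_unsafe / (e_safe + e_unsafe)

# Equal logits give a maximally uncertain score of exactly 0.5;
# a judge that strongly favors "safe" drives the score toward 0.
print(normalized_unsafe_prob(1.0, 1.0))  # 0.5
```

Normalizing this way makes the score comparable across prompts and across judge models, which is what allows the Detoxify and LlamaGuard numbers in the tables above to be reported on the same 0-1 scale.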
Summary: This paper introduces ASTPrompter, a Reinforcement Learning (RL) based approach for automated red-teaming of Large Language Models (LLMs). The method is designed to identify prompts that elicit toxic outputs from a defender LLM while also maintaining low perplexity, ensuring the generated prompts are likely to occur naturally. ASTPrompter formulates red-teaming as an Adaptive Stress Testing (AST) problem and solves it using an online and weakly supervised Identity Preference Optimization (IPO) scheme. The authors evaluate their approach against various baseline models and ablation studies, demonstrating improved toxicity elicitation rates and maintained prompt likelihood across different model scales. They also explore the downstream utility of their method for toxicity mitigation.

Claims And Evidence: The paper claims that ASTPrompter effectively identifies low-perplexity prompts that elicit toxicity from LLMs, outperforming baselines and maintaining prompt likelihood. While the experimental results in Table 1 and Figure 3 show that ASTPrompter achieves higher toxicity rates compared to baselines and ablations, the claim of novelty is questionable given existing RL-based red-teaming approaches such as [1]. The claim that IPO converges faster than PPO is made without direct empirical evidence comparing training time or efficiency. The claim of improved safety through downstream detoxification is supported by initial experiments in Table 3, but further evidence and analysis could strengthen this claim.

[1] Chen, Xuan, et al. "When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search." NeurIPS, 2024.

Methods And Evaluation Criteria: The proposed method uses an RL framework with IPO for optimizing an adversary policy to generate toxic prompts. The reward function incorporates defender toxicity, combined toxicity, and prompt perplexity. Weak supervision using RealToxicityPrompts is introduced to improve convergence.
The evaluation criteria include prompt perplexity, defender toxicity, and combined toxicity, measured using the Detoxify model. While these metrics are relevant for the task, the choice of Detoxify as the toxicity evaluation model is questionable, as it may not represent state-of-the-art toxicity detection and has known biases. The evaluation environments are limited to GPT-2, GPT-2 XL, TinyLlama, and Llama-3.1-8B, and could benefit from including more advanced and recent LLMs.

Theoretical Claims: There are no explicit theoretical claims or proofs presented in the paper. The method is primarily empirically driven and focused on algorithmic design and evaluation.

Experimental Designs Or Analyses: The experimental design includes comparisons against baselines and ablation studies to evaluate the contribution of different reward terms and weak supervision. Both white-box and black-box attack scenarios are considered. The use of the Convokit Reddit corpus as non-toxic prompts and RealToxicityPrompts for weak supervision is described. However, the paper lacks a direct empirical comparison with other existing red-teaming methods, limiting the assessment of ASTPrompter's relative performance. The choice of outdated attack models (GPT-2, GPT-2 XL) raises concerns about the relevance of the evaluation in the context of rapidly evolving LLMs. The lack of time efficiency comparisons for IPO vs. PPO weakens the claim of faster convergence for IPO.

Supplementary Material: I read the supplementary material.

Relation To Broader Scientific Literature: The paper relates to the growing body of literature on automated red-teaming of LLMs and AI safety. It builds upon previous work using RL for red-teaming and incorporates techniques like IPO and weak supervision. However, the paper fails to adequately acknowledge and compare against closely related work, particularly RL-based jailbreaking approaches such as [1].
The positioning within the broader literature could be strengthened by explicitly discussing and contrasting ASTPrompter with other existing automated red-teaming methods and frameworks.

[1] Chen, Xuan, et al. "When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search." NeurIPS, 2024.

Essential References Not Discussed: [1] also utilizes RL for jailbreaking LLMs and is directly relevant to the proposed approach. Failing to cite and compare against this recent and highly relevant work weakens the novelty claim and contextualization of ASTPrompter.

[1] Chen, Xuan, et al. "When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search." NeurIPS, 2024.

Other Strengths And Weaknesses:

Strengths:
- The formulation of red-teaming as an Adaptive Stress Testing problem is interesting.
- The ablation studies provide insights into the contribution of different reward terms.
- The exploration of downstream detoxification is a promising direction.

Weaknesses:
- Lack of Novelty: The use of RL for LLM red-teaming is not entirely novel, and the paper fails to adequately compare with related work like [1].
- Missing Empirical Evidence: The claim of faster IPO convergence compared to PPO is not empirically supported.
- Outdated Attack Models: The evaluation primarily uses older models like GPT-2 and GPT-2 XL; more recent and advanced LLMs should be included.
- Questionable Toxicity Metric: The use of Detoxify (2020) as the toxicity judge is outdated and may not accurately reflect state-of-the-art toxicity detection.
- Lack of Empirical Comparison: The paper fails to empirically compare ASTPrompter against other existing automated red-teaming methods.

[1] Chen, Xuan, et al. "When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search." NeurIPS, 2024.

Other Comments Or Suggestions: N/A

Questions For Authors: See the weaknesses.

Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal:

## Other Models

In addition to GPT-2 and GPT-2 XL, we already report results using Llama-8B models and TinyLlama in the article (Table 1), demonstrating successful attacks with low-perplexity prompts. Furthermore, we include here **black-box attack results against Claude 3.5 Sonnet (cutoff 20241022)**, using our Llama-3.1-8B attacker trained against the Detoxify metric. We score toxicity in this experiment using two classifiers: Detoxify, as used in our work, and LlamaGuard3, a state-of-the-art LLM-as-judge detoxification model. Toxicity is measured by the 0-1 normalized P(“unsafe”|prompt) given by the LlamaGuard model. We use prompts from [1] as our baseline.

| |Baseline|Ours|
|-|-|-|
|Detoxify|0.981%|1.999%|
|LlamaGuard|0.336%|2.239%|

We achieve a 2.03 times increase in toxicity using the toxicity metric we trained against (Detoxify). Additionally, we generalize zero-shot to a 6.668 times increase in toxicity on a new toxicity metric (LlamaGuard3), achieving full black-box attack success.

[1] Bot-Adversarial Dialogue for Safe Conversational Agents (Xu et al., 2021)

## Optimization Method

Our article does not make the claim that IPO converges more stably than PPO, since this has already been shown by [1][2]. Although the optimization method is only a means to validate our formulation and training scheme, we mention in our work that DPO, when available, converges more stably than PPO, citing the results of the DPO authors [1]. However, DPO, unlike IPO and PPO, requires a rational ranking. Hence, IPO was chosen.

- [1] Direct Preference Optimization: Your Language Model is Secretly a Reward Model (Rafailov, et al., 2023)
- [2] A General Theoretical Paradigm to Understand Learning from Human Preferences (Azar et al., 2023)

## Baseline RL for LLM vs. AST

We do not claim that the use of RL for LLM red-teaming is our paper’s primary novelty. Instead, what is novel in our work is the multi-objective reward function that encourages lower-perplexity attacks.
Through AST, our key insight is that likelihood should be considered and optimized to avoid reward hacking that produces failures under unlikely circumstances.
## Chen et al., 2024
We note that the reward function of [1] does not account for likelihood and instead optimizes for the closeness between the target model’s generation and a reference answer to a harmful question. Such approaches reduce the diversity of the outputs and attacks (to be semantically similar to the target output), which is undesirable in red-teaming use cases since such optimizations will narrow down to one specific type of attack which then can be defended against with carefully crafted heuristics. [2]
- [1] When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search (Chen et al., 2024)
- [2] Curiosity-driven Red-teaming for Large Language Models (Hong et al., 2024)

## Toxicity Metric
We choose Detoxify because it is a locally-runnable model with good representation in the literature as a heuristic for unwanted toxic data in pretraining (Henderson et al., NIPS 2022), assistants (Köpf et al., NIPS 2023), to evaluate detoxification success (Korbak et al., ICML 2023), and others. Unlike Detoxify, which is a locally runnable model, Perspective API and OpenAI API—both online APIs commonly used for measuring toxicity—have significant rate limits that render them unsuitable for being called in the training loop. Here, we present evaluations of our approach using Llama-3-8B and zero-shot generalization against a novel, locally runnable, toxicity metric: LlamaGuard3. Against LlamaGuard3, a state-of-the-art llm-as-judge detoxification model, we obtain the following black-box attack success for a llama-3.1-8b adversary trained to elicit toxicity (measured by Detoxify) from a llama-3.1-8b defender. The toxicity score in these results is measured as the 0-1 normalized P(“unsafe”|prompt) given by the LlamaGuard model.
| | Defender Toxicity |
|-|-|
|Ours|0.075|
|Baseline|0.023|

Compared to the baseline (untuned llama-3.1-8b), we achieve 3.5 times higher incidence of toxicity. We note that our main contribution is to demonstrate that optimizing our formulation gives low-perplexity red-teaming attacks. We would expect future work to adapt the toxicity model to be application-specific.

## Empirical Comparisons
We already provide direct empirical comparisons of our method to several popular automated approaches. These results are located in Table 1 and Table 2. In particular, we compare our approach to several other gradient-based methods: supervised fine-tuning on human-written prompts intended to elicit toxicity (Table 1) and reinforcement-learning-based red-teaming without weak supervision and a perplexity reward (Table 2, RL baseline)—which constitutes an adapted version of the reinforcement learning driven method proposed by Perez et al. Furthermore, we test against non-gradient-based automated red-teaming methods including human-written attack prompts.

---

Rebuttal Comment 1.1: Comment: Thank you for your response. I still have a few concerns regarding your rebuttal.
### Other Models
If I understand correctly, the reported numbers represent P(“unsafe”|prompt) given by the LlamaGuard model. I don't think the performance of the proposed method is very impressive given these numbers.
### Optimization Method
I also read the authors' rebuttal to other reviewers. How do you validate your claim that "DPO, when available, converges more stably than PPO" and "DPO, unlike IPO and PPO, requires a rational ranking"? Additionally, as the authors claim that the main contribution of this work is the multi-objective reward design, I would like to bring [1, 2] to the authors' attention, where both PPO and DPO could be extended to the multi-objective scenario. I would appreciate it if the authors could provide further explanation of why IPO was chosen.

[1] Kaiwen Li, Tao Zhang, and Rui Wang.
Deep reinforcement learning for multiobjective optimization. IEEE Transactions on Cybernetics, 2020.
[2] Zhanhui Zhou, Jie Liu, Jing Shao, Xiangyu Yue, Chao Yang, Wanli Ouyang, and Yu Qiao. Beyond one-preference-fits-all alignment: Multi-objective direct preference optimization. In Findings of ACL, 2024.
### Baseline RL for LLM vs. AST
The authors acknowledge that the main contribution of this work is the reward design, which is a common practice in RL work. Simply proposing a customized reward limits the work's novelty and falls short of the bar for ICML.
### Empirical Comparisons
There is much more recent work on automated red-teaming (e.g., GPTFuzz, Chen et al. (2024)). I would be happy to see how these more recent methods perform against your method.

---

Reply to Comment 1.1.1: Comment: We thank you for your timely feedback. We would like to clarify a few points:
## Contributions and Reward Design
It is certainly true that RL research often hinges on how objectives are specified. However, we believe our paper goes beyond proposing a custom reward, offering contributions that are relevant to the ICML community:
- Formulation as AST: We introduce a conceptual framework casting automated red-teaming as an Adaptive Stress Testing problem, which focuses on likely text failures. This distinguishes our work from "jailbreaking" approaches that rely on unnatural, highly adversarial prompts; such prompts are rarely produced by typical users. By contrast, our AST framing emphasizes failure scenarios that might naturally arise in day-to-day interactions.
- Weakly Supervised, Online Preference Method: We combine a multi-objective reward design with online sampling, pairwise preference training, and weak supervision to explore the prompt space. Approaches that simply maximize toxicity risk producing highly unnatural prompts.
In our setting, we also preserve low perplexity, so the model's outputs remain probable under the frozen LM—yet still induce undesired toxic responses.
- Empirical Evidence of "Likely Toxicity": Our results confirm that traditional automated red-teaming can degrade prompt likelihood drastically. By including a perplexity/language-model-likelihood term, we preserve realistic prompts—highlighting how "ordinary" user prompts may still lead to toxic outputs. This issue is highly relevant to real-world LLM deployments.

We would like to observe that, in general, much of machine learning can be viewed as either changing the modeling technique or the objective function of an optimization problem. We contribute to the latter case.
## Choice of Optimization Methods (IPO vs. DPO vs. PPO vs. Others)
Our primary focus is on the novel formulation itself and the effects of optimizing it. We discuss IPO, DPO, and PPO because they are widely used in the LLM post-training domain, well-understood, and represent typical candidates. The specific choice of IPO over DPO or PPO is supported by both the literature cited in our paper and the discussion in the "Optimization Method" section of the rebuttal. Nevertheless, any RL technique that can handle multi-objective and rational-ranking constraints could, in principle, solve our formulation.
## GPTFuzz and Chen et al.
These methods largely represent jailbreaking strategies, in which prompts are high perplexity and require an adversarial mindset from the user. By design, such prompts differ from the more "likely" prompts we target, which are by definition more likely to arise during autoregression. Indeed, in our experiments with a reinforcement-learning baseline (Table 2, "RL baseline" referencing Perez et al.), we show that removing perplexity-related constraints significantly increases perplexity while eliciting toxicity. Because Chen et al.
and similar techniques rely on prompts that would be improbable in normal conversation, they tackle an important but distinct scenario (adversarial "jailbreaking") from the one motivating our AST perspective (high-probability failures). Furthermore, approaches like GPTFuzz require the hand-design of base mutations, whereas our approach requires no human involvement at all during the attack process.
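To make the trade-off discussed in this thread concrete, here is a hypothetical sketch of a likelihood-aware red-teaming reward. The linear combination and the weights `alpha`/`beta` are illustrative assumptions, not the paper's actual reward function; the point is only that the attacker is scored on elicited toxicity while being penalized for unlikely (high log-perplexity) prompts under the frozen LM.

```python
def combined_reward(defender_toxicity, attack_logprob, num_tokens,
                    alpha=1.0, beta=0.1):
    """Hypothetical multi-objective red-teaming reward: reward toxicity
    elicited from the defender while penalizing attack prompts that are
    unlikely under the frozen language model. The weights alpha and beta
    are illustrative, not the paper's actual coefficients."""
    log_perplexity = -attack_logprob / num_tokens  # average surprisal per token
    return alpha * defender_toxicity - beta * log_perplexity
```

Dropping the perplexity term (`beta=0`) recovers the toxicity-only RL baseline that, per the rebuttal, drifts toward improbable prompts.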
Summary: The paper presents a reinforcement learning-based red-teaming method to identify prompts that elicit toxic outputs from language models while maintaining fluency. The approach uses Adaptive Stress Testing (AST) and Identity Preference Optimization (IPO) to generate prompts with high likelihood and increased toxicity. The method outperforms baseline approaches. Claims And Evidence: The paper claims ASTPrompter produces more effective red-teaming prompts than baselines; the paper presents empirical results (Table 1) showing that ASTPrompter elicits higher toxicity from defender models than other approaches, such as human-written attacks (BAD) and fine-tuned models (SFT). The toxicity rate is up to 23 times higher than baselines while maintaining low perplexity. The paper claims ASTPrompter remains effective in both white-box and black-box attack settings, and presents cross-model evaluation results showing that the proposed approach attacks unseen defender models from different families (e.g., GPT-2 attacking Llama-3.1-8b). The approach remains effective, generating 5.4–14x increased toxicity in black-box settings. Methods And Evaluation Criteria: The paper effectively uses AST and IPO for red-teaming, optimizing prompts for both toxicity elicitation and low perplexity. Evaluation metrics, including defender toxicity and combined toxicity, ensure fluency and effectiveness. Theoretical Claims: The paper does not include any theoretical proofs. Experimental Designs Or Analyses: The paper evaluates ASTPrompter’s effectiveness using multiple defender models (e.g., GPT-2, GPT-2 XL, TinyLlama, Llama-3.1-8b), multiple adversary models (e.g., white & black box), and multiple training objectives, where the experiments vary how the reward function weights toxicity elicitation and prompt likelihood (perplexity).
Supplementary Material: yes, the appendix Relation To Broader Scientific Literature: The paper contributes to several areas such as automated red-teaming, adversarial prompting, reinforcement learning, LLM, and trustworthy LLM. Essential References Not Discussed: n/a Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: the paper mainly uses IPO as the optimization method, how does the proposed method perform with other approaches such as DPO, RLHF, which are more classic? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your review! With respect to IPO vs. DPO vs. RLHF-PPO type methods, we do not compare our method to DPO, since we use a multi-objective reward function and DPO assumes rationally ranked responses. Since the LM perplexity/toxicity evaluations may not be exactly rationally ranked, the DPO formulation is less appropriate. RLHF generally includes a broad range of optimization methods applied to rewards/data from human feedback; however, it commonly uses PPO as the optimization method. We note that DPO converges more stably than PPO [1] [3] and that our primary contribution involves the effects of optimizing our novel formulations rather than the optimization method itself. In order to achieve stability in PPO, one has to make many algorithmic changes that are out of the scope of this work [2]. For this reason, many notable models (e.g., the Llama 3 family [4]) are trained with DPO rather than PPO. For the camera-ready draft, we will additionally use PPO to optimize our formulation to highlight its efficacy and illustrate the advantages of the preference-learning approach.
- [1] Direct Preference Optimization: Your Language Model is Secretly a Reward Model (Rafailov et al., 2024)
- [2] Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback (Ivison et al., 2024)
- [3] A General Theoretical Paradigm to Understand Learning from Human Preferences (Azar et al., 2023)
- [4] The Llama 3 Herd of Models (Grattafiori et al., 2024)
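For reference, the per-pair IPO objective under discussion can be sketched as follows. This reflects our reading of Azar et al. (2023), with sequence-level log-probabilities under the trained policy and a frozen reference model assumed as inputs; it is not code from the paper.

```python
def ipo_pair_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, tau=0.1):
    """Per-pair IPO loss as described by Azar et al. (2023): regress the
    policy-vs-reference log-likelihood margin of the preferred response
    (w) over the dispreferred one (l) toward the target 1/(2*tau),
    rather than pushing the margin without bound as DPO's logistic
    loss tends to under deterministic preferences."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return (margin - 1.0 / (2.0 * tau)) ** 2
```

The bounded target is why IPO does not require the rationally ranked (Bradley-Terry) preference assumption that DPO relies on, which is the distinction the rebuttal draws.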
Summary: The paper introduces ASTPrompter, a reinforcement learning-based red-teaming approach that uses Adaptive Stress Testing and online weakly supervised Identity Preference Optimization to find toxic prompts. The method outperforms baselines by eliciting 2-23X more toxicity while maintaining fluency, and works in both white-box and black-box attack settings. The generated adversarial prompts can serve as negative training samples to improve LLM safety tuning. Claims And Evidence: The claims made in the submission are generally well-supported by empirical results. Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper are well-suited for the problem of automated red-teaming and identifying likely toxic prompts. Theoretical Claims: N/A, the paper mainly relies on empirical results. Experimental Designs Or Analyses: The paper’s experiments are generally well-structured. However, the study assumes that low perplexity equates to a natural prompt, which has not been directly tested. Additional metrics should be included to evaluate whether the prompts are truly natural. Supplementary Material: Yes Relation To Broader Scientific Literature: The paper discusses red-teaming, jailbreaking, and toxicity attacks. Its contributions are valuable to both the research community and model owners, helping to reassess red-teaming approaches. This work addresses a gap in current research and offers solutions for improving red-teaming strategies. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths:** - Introduces a novel AST-based formulation for LLM red-teaming. AST is commonly used in safety-critical fields but has not been applied to language model security testing before. Using AST for red-teaming can generate more natural toxic prompts, which can be leveraged to fine-tune models for enhanced safety. - Effective in black-box settings, making it a practical attack method. 
**Weaknesses:** - The paper claims IPO is superior to PPO for multi-objective optimization but does not provide experimental comparisons between the two methods. PPO could also be used for multi-objective optimization. - The paper asserts that gradient-based red-teaming methods generate unnatural prompts but does not offer direct empirical comparisons of perplexity or fluency. - While ASTPrompter generates low-perplexity toxic prompts, it is unclear whether these prompts resemble real-world adversarial prompts encountered in deployed LLMs. Other Comments Or Suggestions: N/A Questions For Authors: Did you evaluate ASTPrompter’s ability to elicit toxicity using multiple toxicity classifiers, such as the Perspective API or OpenAI API? Is there a comparison of perplexity or fluency between ASTPrompter and gradient-based methods? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal:
## Fluency and Naturalness
We do not directly test the relationship between low perplexity and naturalness, as the goal of our paper is to identify low perplexity prompts, independent of their naturalness. However, we do qualitatively observe a relationship, as shown in the rollouts in Appendices F and G. As model scales increase, evidence shows that the LM test-loss (i.e., log-perplexity) of natural text decreases [2]—which means that maximum-likelihood sampling will generate relatively more natural text that is within distribution of the test corpus. However, red-teaming attacks typically have high perplexity since they are slightly out of distribution. For instance, red-teaming approaches such as BAD [1], a dataset of human-written prompts, have high naturalness as they are created by humans, while also having high perplexity according to the language model. Hence, in our work we aim to optimize both the likelihood of the prompts and the amount of toxicity they induce. Naturalness is a consequence of optimizing likelihood, but not a goal of it. Instead, we focus on prompts that are likely to be generated during autoregression, regardless of their naturalness.
- [1] Bot-Adversarial Dialogue for Safe Conversational Agents (Xu et al., 2021)
- [2] Scaling Laws for Neural Language Models (Kaplan et al., 2020)

## IPO vs PPO
Notably, we did not claim in the paper that IPO is superior to PPO for multiple-objective optimization. We argue in our work that DPO, when available, converges more stably than PPO. This is shown by the DPO authors [1]. With respect to multi-objective optimization, we were describing this as a limitation of DPO, not PPO. That is, it is DPO that, unlike IPO and PPO, requires a rational ranking. We will revise the language in the article to clarify this.
[1] Direct Preference Optimization: Your Language Model is Secretly a Reward Model (Rafailov et al., 2023)
## Empirical Perplexity Comparison
We provide the requested direct empirical comparisons of perplexity to several popular gradient-based approaches in Table 1 and Table 2 in the article. In particular, we compare our approach to the following gradient-based methods: supervised fine-tuning on human-written prompts intended to elicit toxicity (Table 1) and reinforcement-learning-based red-teaming without weak supervision or a perplexity reward (Table 2, RL baseline)—an adapted version of the reinforcement learning driven method proposed by Perez et al. Although these results demonstrate that our method achieves the lowest attack perplexity compared to related works, it does not directly consider fluency. However, perplexity defines the inverse surprise of a phrase occurring, which is correlated with the acceptability (i.e., fluency) of text for humans; this has been long established in the cognitive science literature [1] [2] [3] [4]. As perplexity is a measure of generation likelihood, we argue that low-perplexity prompts, no matter how fluent, are more likely to occur during standard autoregression.
- [1] The effect of word predictability on reading time is logarithmic (Smith et al., 2013)
- [2] Expectation-based syntactic comprehension (Levy et al., 2007)
- [3] Data from eye-tracking corpora as evidence for theories of syntactic processing complexity (Demberg et al., 2008)
- [4] A Probabilistic Earley Parser as a Psycholinguistic Model (Hale, 2001)

## Real-World Fluency
Though the prompts generated are typically fluent, we do not claim that these prompts resemble real-world human adversaries. However, we propose that they are more likely to arise from autoregression of a frozen LLM as they are lower in perplexity as scored by that model.
Furthermore, we demonstrate that an LLM trained against the prompts generated by our approach is less toxic against a variety of attack strategies (Section 6).
## Other Classifiers
Both the Perspective API and OpenAI API have significant rate limits that render them unsuitable for being called in the training loop. However, we tested another local, LLM-based toxicity classifier—LlamaGuard—and provide results below. Against LlamaGuard3, a state-of-the-art llm-as-judge detoxification model, we obtain the following black-box attack success for a llama-3.1-8b adversary trained with our method to elicit toxicity (measured by Detoxify) from a llama-3.1-8b defender. The toxicity score here is measured by the 0-1 normalized P(“unsafe”|prompt) given by LlamaGuard.

| | Defender Toxicity |
|-|-|
|Ours|0.075|
|Baseline|0.023|

In comparison to the untuned baseline, we achieve 3.5 times higher incidence of toxicity.
## Comparison to Other Gradient-Based Methods
The paper provides that comparison in Table 1 and Table 2. There is a perplexity comparison provided to supervised fine-tuning on human-written prompts intended to elicit toxicity (Table 1) and reinforcement-learning-based red-teaming without weak supervision and a perplexity reward (Table 2, RL baseline).
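As a concrete reference for the perplexity measure debated throughout this exchange, here is the standard definition computed from per-token log-probabilities (an illustration of the usual formula, not code from the paper):

```python
import math

def perplexity(token_logprobs):
    """Perplexity of a sequence: the exponentiated average negative
    log-likelihood of its tokens under the language model. Lower values
    mean the prompt is more probable under standard autoregression."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))
```

For example, a sequence whose tokens each have probability 0.5 under the model has perplexity 2, regardless of its length.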
Determinant Estimation under Memory Constraints and Neural Scaling Laws
Accept (poster)
Summary: This paper proposes a scalable way to compute Neural Tangent Kernel log-determinants of dense matrices which may arise in training deep neural networks. The empirical Neural Tangent Kernel has been shown as the effective tool to study the behavior of neural networks during both training and inference. In particular, the Gram matrix (i.e. the kernel matrix) can be used in lazy-training and in obtaining uncertainty quantification estimates. Computing log-determinants of such NTK kernels for a large $n$ is extremely hard and sensitive to small eigenvalues. The challenges include: 1) dealing with storing the large kernel matrix; 2) a scalable way to compute functions of the kernel matrix (in this case, the log determinant). The authors use an existing block $LU/LDL^T$ decompositions to efficiently load blocks of pre-computed Gram matrix from disk. However, realizing that this still poses a problem for a large value of $n$, the authors propose regressing over precomputed log determinants up to $p \times p$. For a large $n$, the authors propose an asymptotic approximation. The authors test the proposed approximation on the NTK matrices corresponding to widely used models such as ResNet9, ResNet18, and ResNet50. ## update after rebuttal We thank the authors for their rebuttals. For the most part I am satisfied with the replies. I will maintain weak accept for the submission. In order for the submission to get a higher score (for this conference and for other follow-up work based on this submission), the authors are encouraged to: - demonstrate some empirical comparison in computation time; the submission focuses on approximation quality mainly. - tabulate symbols used in the writeup, as at least one other reviewer also mentioned difficulties in keeping track of the symbols. 
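The block-decomposition idea summarized above can be illustrated with a minimal in-memory sketch for a symmetric positive-definite matrix. This shows only the standard block-factorization identity behind the approach; the authors' MEMDET additionally streams precomputed blocks from disk and supports general $LU$/$LDL^T$ pivoting, which this toy version omits.

```python
import numpy as np

def blockwise_logdet(A, block):
    """Accumulate log|A| block by block for symmetric positive-definite A,
    using log|A| = log|A11| + log|A22 - A21 A11^{-1} A12| recursively.
    A toy in-memory version: an out-of-core variant would load only the
    blocks touched at each step."""
    A = A.astype(float).copy()
    n = A.shape[0]
    logdet = 0.0
    for i in range(0, n, block):
        j = min(i + block, n)
        L = np.linalg.cholesky(A[i:j, i:j])        # factor the pivot block
        logdet += 2.0 * np.log(np.diag(L)).sum()   # log|A11| from its Cholesky
        if j < n:
            W = np.linalg.solve(A[i:j, i:j], A[i:j, j:])
            A[j:, j:] -= A[j:, i:j] @ W            # Schur complement update
    return logdet
```

Each pass factors one pivot block and folds it into the trailing Schur complement, so only a few blocks need to be resident at once.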
Claims And Evidence: The claims made in the paper seem to be sound and based on well-known linear algebraic techniques for computing decompositions which may be used to compute log determinants. A clever way to scale this up to $n$ by an asymptotic approximation is interesting. Methods And Evaluation Criteria: The proposed benchmark datasets are well chosen and suitable for evaluating the problem. The comparison against competing methods (such as SLQ and Pseudo NTK) is also included. Theoretical Claims: I am familiar with the correctness of the MEMDET algorithm since these are based on already-existing block decompositions of large matrices. I did check the appendix for the proof of Proposition 1 and it seems to be correct. Experimental Designs Or Analyses: Experimental designs are more or less sound, although for a non-expert, it is not too clear why the implied value of $m = 10$ is used for all of the experiments (I see this from the fact that $K_{1000}$ results in a $10,000 \times 10,000$ matrix; see Section 4.1). Supplementary Material: I looked at Sections C, F, G, and H. For the most part, these were easy to follow. I am not too sure about the applicability of Section D because for the most part, the evaluation is on the approximation quality of the log determinant (not the empirical time for computing such a quantity). Relation To Broader Scientific Literature: The authors provide broader connections to other machine learning applications such as model selection, quantification of generalization bounds, etc. The considered problem of log-determinant computation and the proposed methodology is quite relevant to many of the interesting problems in machine learning. Essential References Not Discussed: I am not aware of other essential references which should have been included. However, it may be enlightening to include how the proposed method differs algorithmically from the SLQ and Pseudo NTK approaches. Other Strengths And Weaknesses: Strengths: 1.
Considers an important problem of interest in machine learning. Log determinant computation is inherently hard. 2. Scaling law consideration for approximation is interesting. Weaknesses: 1. Some of the plots in the paper are not easy to understand (see below). 2. The difference between the proposed method and competing methods such as SLQ should be highlighted. I could not find any information in the writeup itself. Other Comments Or Suggestions: 1. Table 1: SLQ = Stochastic Lanczos Quadrature, but this is nowhere mentioned in the writeup until this point. 2. Section 3.2: shouldn't the precomputed sequence be $\{ L_1, ... L_p \}$, not up to $L_n$? 3. In Algorithm 1: I was reading the index for all variables used in the regression equation for $y_n$. It was strange to see the index shifted by $+1$, but the formula for $y_n$ in Section 3.2 was shifted by $-1$. This needs to be consistent. Questions For Authors: 1. Why is $m = 10$ assumed for all datasets in the experiment? 2. Is there any way you can use other variable names to help the readers? E.g., remembering the definition of $m$ across the entire writeup is challenging. The definition of $p$ is okay enough though. Code Of Conduct: Affirmed. Overall Recommendation: 3
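Since the review asks for the contrast with SLQ to be made explicit, here is a minimal sketch of standard Stochastic Lanczos Quadrature for the log-determinant (the textbook algorithm, not the paper's implementation). It estimates tr(log A) from matrix-vector products with random Rademacher probes, which is the Monte Carlo flavor the rebuttal contrasts with extrapolation from minors.

```python
import numpy as np

def slq_logdet(A, num_probes=30, steps=30, seed=0):
    """Stochastic Lanczos Quadrature estimate of log|A| = tr(log A) for
    symmetric positive-definite A. Each Rademacher probe is run through
    Lanczos (no reorthogonalization here, for brevity); eigendecomposing
    the small tridiagonal matrix gives a Gauss quadrature rule for
    z^T log(A) z."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        q, q_prev, beta = z / np.sqrt(n), np.zeros(n), 0.0
        alphas, betas = [], []
        for _ in range(min(steps, n)):
            w = A @ q - beta * q_prev
            alpha = q @ w
            w -= alpha * q
            alphas.append(alpha)
            beta = np.linalg.norm(w)
            if beta < 1e-12:
                break
            betas.append(beta)
            q_prev, q = q, w / beta
        T = (np.diag(alphas) + np.diag(betas[: len(alphas) - 1], 1)
             + np.diag(betas[: len(alphas) - 1], -1))
        evals, evecs = np.linalg.eigh(T)
        # Gauss quadrature weights: squared first components of eigenvectors
        total += n * np.sum(evecs[0, :] ** 2 * np.log(evals))
    return total / num_probes
```

The estimate is low-bias but its variance grows as the matrix becomes ill-conditioned, which is the failure mode the rebuttal reports for NTK matrices.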
Rebuttal 1: Rebuttal: We thank the reviewer for their careful review of our paper and positive feedback on our work. We acknowledge that some points need clarification and aim to do so below. Regarding the implicit value of $m=10$: this is the number of classes used in all the classification datasets we used in experiments. We will state this explicitly in Section 4, as we understand the confusion. We further thank the reviewer for their suggestion regarding plot and variable labeling, as well as algorithm naming. We have included these improvements and corrections in the updated version of the paper. Regarding Section D: this is included as algorithmic analysis of MEMDET. Many of these were conducted for our own benefit to choose appropriate block sizes, and to predict how long the computations would take. MEMDET is an exact algorithm (with no approximation occurring) and was required as a baseline for our experiments, since we could not fit the full NTKs into memory on the machines we were using. We believe that exact methods will always have a place, and so hope the information in Appendix D will be useful to others. To clarify the difference between algorithms: Pseudo-NTK summarizes the data by summing over the rows and columns of each $m\times m$ block, then dividing by $m$. This gives a good approximation with respect to the operator norm. However, it should not be expected to give refined determinant estimates. Both SLQ and FLODANCE build their approximations from cheaper quantities. In the case of SLQ, matrix vector products are used in a Monte Carlo approximation. This gives a low bias estimate with high variance, particularly when the condition number of the matrix is large. On the other hand, in FLODANCE we use extrapolation from matrix minors.
This gives a low variance estimate in our experience (see http://anonymous.4open.science/r/memdet-E8C1/notebooks/scale_law_pred_uq.pdf), while the magnitude of the bias is unclear, and subject to the stationarity of the process in the training region (see http://anonymous.4open.science/r/memdet-E8C1/notebooks/stationarity.pdf). We agree that including more details of these methods will improve our work, and have added a short description in the main body, and a more in-depth section in the appendix of the updated version of the document. Please let us know if anything else needs verification or clarification, as we are eager to engage further if necessary to improve our work. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your reply. I have noted the clarifications that you have provided. For the most part I am satisfied with the replies. In order for me to raise the score further, this work needs to demonstrate: - some empirical comparison in computation time; the submission focuses on approximation quality mainly. - At least one other review has mentioned difficulties in keeping track of the symbols that are used. It would be helpful if these are tabulated somewhere in the writeup. At the moment - I will be maintaining the current score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their quick response and further suggestions. First, regarding nomenclature: we are aware that in the submitted document there were some minor nomenclature clashes and inconsistencies. This has since been rectified, and a table containing our current conventions can be found at http://anonymous.4open.science/r/memdet-E8C1/notebooks/nomenclature.pdf. Regarding computation time, we refer the reviewer to the updated version of Table 1 which now includes compute time and cost, found at http://anonymous.4open.science/r/memdet-E8C1/notebooks/comparison.pdf. 
We have also made a companion table to Table 2 to report computation time at http://anonymous.4open.science/r/memdet-E8C1/notebooks/walltime_det.pdf. In order to further clarify the distinction between the different methods, we will also include the following table, outlining the computational complexity of the different methods we consider http://anonymous.4open.science/r/memdet-E8C1/notebooks/complexity.pdf. This shift from $m^3$ to $m_s^3$ complexity (for matrix and submatrix respectively of size $m\times m$ and $m_s\times m_s$) moving from LDL/MEMDET to FLODANCE means that when using say $10$% of the full dataset, roughly a $1000\times$ speedup is realized. While SLQ may seemingly offer an advantage here (when roughly $m_s^3>m^2sl$), this would require $ls<\frac{m}{1000}$ and we were unable to get good accuracy in this scenario. Instead we found that for NTK matrices the accuracy of SLQ was poor, even as these parameters grew such that the runtime was comparable to (and even exceeded) that of MEMDET. We remark that this scaling behavior and the corresponding speed improvement is only one benefit of our method. The fact that the full matrix does not need to be computed means that quantities that would otherwise be intractable can be accurately approximated. To see this, Table B.1 in http://anonymous.4open.science/r/memdet-E8C1/notebooks/memory.pdf shows the storage size required for the NTKs of some standard benchmark datasets used in machine learning. For context, estimates on the time required to form the NTK matrices themselves can be found in Table B.2 at http://anonymous.4open.science/r/memdet-E8C1/notebooks/memory.pdf. Since both storage and formation are quadratic in the number of datapoints, the ability to extrapolate from again $10$% of the data leads to $100\times$ faster NTK formation. Thank you for these suggestions, we are more than happy to include the subsequent improvements in the final version of our paper. 
We hope that this positively impacts your evaluation of our work.
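The extrapolation speedup described above can be illustrated with a toy scaling-law fit. The increment model $a + b\log k$ for successive log-determinants is an assumed functional form chosen for illustration, not necessarily FLODANCE's exact model; the point is that a regression fitted on the log-determinants of small leading minors can be summed out to the full size $n$ without ever forming the full matrix.

```python
import numpy as np

def extrapolate_logdet(logdets, n):
    """Toy scaling-law extrapolation: model the increments
    y_k = logdet_k - logdet_{k-1} as a + b*log(k), fit by least squares
    on the observed leading minors (sizes 1..p), and sum the fitted
    increments out to size n. Illustrative functional form only."""
    p = len(logdets)
    ks = np.arange(2, p + 1)
    y = np.diff(logdets)
    X = np.column_stack([np.ones(len(ks)), np.log(ks)])
    a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    tail = np.arange(p + 1, n + 1)
    return logdets[-1] + np.sum(a + b * np.log(tail))
```

Because only the $p\times p$ leading minor is ever factored, extrapolating from roughly 10% of the data gives the cubic-cost and quadratic-storage savings described in the reply.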
Summary: The paper designs two algorithms for computing the log-determinant of large PSD matrices under memory constraints. The first algorithm is named MEMDET, designed based on block LDL decomposition. The other one is named FLODANCE, designed based on neural scaling law assumptions. Empirical results show that the proposed methods achieve significant speedup while maintaining high accuracy. Claims And Evidence: The authors claim that the proposed methods achieve fast and accurate log-determinant estimation under memory constraints. The claims are well-supported through descriptions of algorithm design, complexity analysis, and empirical studies. Methods And Evaluation Criteria: The proposed methods are evaluated under the NTK estimation scenario for deep learning. The evaluation criteria focus on accuracy, running speed, and memory usage, aligning well with the studied problem. Theoretical Claims: I checked the proofs of the main theoretical results and they are rigorous. However, the proofs rely largely on the scaling-law assumption, which may require further justification. Experimental Designs Or Analyses: The proposed methods are evaluated across various neural networks and sample sizes. They are practically sound but may benefit from more experiments beyond the NTK scenario. Supplementary Material: I reviewed the algorithm designs, proofs, and complexity analysis in the supplementary. Relation To Broader Scientific Literature: The proposed methods can facilitate relevant research on the practical or theoretical behavior of neural tangent kernels. They may also inspire new matrix-based algorithms for, e.g., kernel approximation. Essential References Not Discussed: The paper may benefit from additional discussion of the Lanczos family of matrix trace estimation techniques. Other Strengths And Weaknesses: Strength: The proposed algorithm designs are promising, and the empirical performance gain of the proposed methods is significant.
Weakness: The neural scaling law assumption may need further justification, e.g. empirical results on some ill-conditioned / noisy scenarios. Other Comments Or Suggestions: No comments. Questions For Authors: * From Table 2, the estimation seems very unstable since the results vary a lot across different methods. How is the numerical stability of the proposed MEMDET method? If the ground truths in Table 2 are acquired by MEMDET, it seems not very fair since FLODANCE is using MEMDET as the backbone. How are these results compared to the native method? Also, I believe the absolute value of the log-determinant does not matter a lot, instead we are more interested in its relative value across different settings. From this point, showing the relative behavior of each method (e.g. verifying the neural scaling law) may give a more solid comparison. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and positive review of our work. The reviewer has raised a few points that deserve addressing, and we believe that doing so will improve our work. The first of these is our dependence on the scaling law assumption. In the submitted version of the document, we provided criteria on the kernel which guarantee that the scaling laws hold in Appendix F. However, these are technical conditions, and their satisfaction is non-trivial to check (although we have discussed some possible verification in the response to Reviewer zrLg). This means that, like most of the literature exploring scaling laws, we are relying on empirical behavior to check that the scaling laws are satisfied. To this end, http://anonymous.4open.science/r/memdet-E8C1/notebooks/stationarity.pdf contains a figure that compares the ratio of successive determinants to the fitted scaling law. Note that the log-scale on the $y$-axis turns this into a residual plot. These residuals are approximately normally distributed, which is evidence that Assumptions 1 and 2 are satisfied. This plot has been included in the updated version of our document, along with discussion. Regarding the focus of our numerics on the NTK: this was simply done because these matrices are known to be a particularly difficult class of kernel matrices to work with. For modern neural networks, the corresponding kernels are non-stationary, and the matrices tend to be highly ill-conditioned and very large, often not being feasible to compute explicitly: hence the need for our algorithm. We believe that this makes NTK matrices particularly useful examples to highlight the utility and efficacy of our method. However, we do recognize that other Gram matrices are commonly used in machine learning applications. In particular, we computed the Gram matrix associated to the linear model of coregionalization [1], with 10 outputs and a Matern kernel over 10,000 data points. 
Plots demonstrating the performance of FLODANCE on this dataset, with experimental details, can be found at http://anonymous.4open.science/r/memdet-E8C1/notebooks/matern.pdf . We have included this experiment in the appendix of an updated version of our work. Other reviewers have also drawn attention to the lack of distinction between our methods and their competitors. To this end, we have included a brief section in the appendix outlining the stochastic Lanczos methods, as well as the Pseudo-NTK. We believe this will help our exposition, in addition to a brief higher-level summary in the main document. Thank you for your suggestions regarding Table 2. An updated version of this table containing relative error can be found at http://anonymous.4open.science/r/memdet-E8C1/notebooks/relative.pdf . We remark that MEMDET is an exact method that allows for determinant computation of large matrices that do not fit into memory. It achieves this through block-decompositions, relying on an underlying algorithm (LDL$^\top$, LU, or Cholesky) that is amenable to this type of decomposition. As such, the use of MEMDET in our FLODANCE approximations reduces to LDL$^\top$, since the training sample is small enough to fit in memory. We have emphasised this in the final version of the document. Please let us know if anything else needs verification or clarification, as we are eager to engage further if necessary to improve our work. [1] https://doi.org/10.1007/BF02066732 --- Rebuttal Comment 1.1: Comment: Thanks for the response and the additional experiments. I understand that MEMDET is an exact method, but what I am curious about is its numerical stability. Even for exact methods, the accumulating rounding error in each step can still lead to a large bias in the result, especially for ill-conditioned problems like NTK. 
Given the current results, it seems that there are already non-negligible differences between 32-bit and 64-bit results, indicating that the method may be numerically unstable. Have you compared your method with some native ones like direct eigenvalue decomposition, perhaps for some small $n$ so it could fit into memory? How do their results differ between 32-bit and 64-bit computations? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their engagement and for raising this question. Before addressing it, we would like to clarify what we mean by *precision* in our paper, as it may help disentangle the origin of numerical differences across settings. Our computational pipeline consists of three stages: 1. **Training stage:** We trained all neural networks (e.g., ResNet50, ResNet9) in 32-bit precision, which is the default and standard practice in most deep learning frameworks like PyTorch. 2. **NTK computation stage:** The NTK matrix is computed from the trained model and stored in various precisions (e.g., 16-bit, 32-bit, and 64-bit, from the same pre-trained model). The "precision" of the NTK matrix, as referred to throughout our paper, reflects the compute and storage format at this stage. Due to the large expense of forming these matrices, it is often tempting or necessary to form and store these matrices in lower precisions. Our low-precision experiments highlight the pitfalls of mixed-precision in these cases as per Section 2.1, regardless of the downstream use case. 3. **Log-determinant computation stage:** Regardless of how the NTK matrix was computed and stored (16-bit, 32-bit, or 64-bit), *all* log-determinant computations were performed in 64-bit precision, across all methods: MEMDET, SLQ, FLODANCE, etc. This presents the "best-case" mixed-precision scenario. 
Since MEMDET entirely eliminates memory requirement barriers, it became practical for us to perform high-precision computations (e.g., 64-bit in stage 3) even on large matrices—thus mitigating common concerns about overhead associated with higher-precision formats. **Clarifying Differences Between 32-bit and 64-bit Results.** Log-determinants computed from NTK matrices generated in 32-bit and 64-bit precision can differ, but these differences arise from the input matrices themselves being different (i.e., generated under different floating-point formats upstream, as discussed in Section 2.1). For a fixed input matrix, all exact log-determinant methods—whether based on MEMDET or eigenvalue decomposition—produce nearly identical results. To illustrate this, we performed two evaluations: 1. In this comparison (https://anonymous.4open.science/r/memdet-E8C1/notebooks/eig_small.pdf), we compared MEMDET against both NumPy’s `slogdet` and `eigh` on the same matrices. The results match closely, with relative errors comparable to the differences between `slogdet` and `eigh` themselves. 2. In this figure (https://anonymous.4open.science/r/memdet-E8C1/notebooks/comp_eig_ldl.pdf), we extended this comparison to NTK matrices of increasing size (up to $\sim 82,000$), in both 32-bit and 64-bit forms. For a fixed matrix, all three methods produce log-determinants consistent up to relative errors in the range of $10^{-12}$ to $10^{-7}$. Differences between the 32-bit and 64-bit plots reflect differences in the NTK matrix itself, not instability in MEMDET or any of the other methods. This confirms that any variation in log-determinants across precision settings is attributable to upstream matrix formation, not the methods used for computing log-determinants. **On Actual Stability Challenges and Our Solution.** That said, we do want to highlight one real stability issue we encountered and addressed. 
Although NTK matrices are theoretically positive definite, when computed in lower precision (especially 16-bit or 32-bit), small eigenvalues may flip sign near zero, causing Cholesky decomposition to fail. MEMDET avoids this issue by using LDL decomposition instead—a numerically stable alternative that shares the same computational cost but is more robust in the presence of ill-conditioning (see also the end of Section 2.2 on p.4 of our submitted document). Furthermore, we validated that MEMDET’s block-wise LDL decomposition matches the standard (in-memory) LDL decomposition to machine precision. This is illustrated in this figure (https://anonymous.4open.science/r/memdet-E8C1/notebooks/vary_num_blocks.pdf), where we varied the number of memory blocks used by MEMDET. The results show relative errors consistently around $10^{-14}$, reinforcing that our method maintains high numerical fidelity regardless of memory constraints. As part of our ongoing effort to improve the completeness of experiments, the larger-scale NTK matrices used in Table 1 and Section 4.2 are being updated. Specifically, the NTK matrix for ResNet9 (at full size $500,000$) has now been recomputed in 64-bit precision, and the NTK matrix for ResNet50 will be updated shortly using a full-size matrix of $500,000$ in 64-bit (currently being processed). As a reminder, all log-determinant computations on the large matrices in these experiments were already performed in full 64-bit precision using MEMDET. We hope this clarifies the distinction between differences due to matrix precision and the stability of MEMDET itself, and has had a positive impact on your assessment of our work.
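As a minimal illustration of the block principle behind MEMDET (a toy sketch, not the actual MEMDET implementation), the log-determinant of an SPD matrix can be accumulated one block at a time via the Schur complement identity $\log\det K = \log\det K_{11} + \log\det(K_{22} - K_{21}K_{11}^{-1}K_{12})$, so only one block needs to be resident at a time, and the result can be checked against NumPy's `slogdet` and an eigenvalue decomposition in the spirit of the comparisons described above:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 80))
K = A @ A.T + np.eye(300)  # SPD Gram-like matrix (modest ridge for a stable toy example)

# Block-wise accumulation: log det K = log det K11 + log det(Schur complement).
m = 150
K11, K12 = K[:m, :m], K[:m, m:]
K21, K22 = K[m:, :m], K[m:, m:]
schur = K22 - K21 @ np.linalg.solve(K11, K12)
ld_block = np.linalg.slogdet(K11)[1] + np.linalg.slogdet(schur)[1]

# Reference values: LAPACK-backed slogdet and an eigenvalue decomposition.
ld_full = np.linalg.slogdet(K)[1]
ld_eigh = np.sum(np.log(np.linalg.eigvalsh(K)))
```

In MEMDET this recursion is applied over many blocks, with the block ordering chosen to minimize data transfer and with LDL$^\top$ in place of Cholesky for robustness when small eigenvalues sit near zero.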
Summary: This work addresses the memory and computation bottlenecks in estimating log-determinants of large matrices such as the Neural Tangent Kernel (NTK) for large models and datasets. The paper introduces MEMDET, a memory-constrained algorithm for exact log-determinant calculations for matrices too large to fit in memory by using block LU decompositions with an efficient block ordering to minimize data transfer. The paper then proposes FLODANCE, an accurate algorithm for extrapolating log-determinants computed on a small subset of the dataset to the much larger full dataset by exploiting the scaling behavior of the log-determinants for a class of kernels. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I checked the proofs in Appendix F and G. Experimental Designs Or Analyses: All experiments in Section 4 appear sound. Supplementary Material: I reviewed Sections A, F, G, H and I. Relation To Broader Scientific Literature: In contrast to conventional numerical methods for estimating log-determinants such as SLQ, I find FLODANCE's approach of exploiting the scaling behavior of the log-determinants of a specific class of kernels for efficient estimation quite creative and novel. Analyzing the scaling behaviors under various limits has been shown to be a powerful tool in machine learning broadly, such as predicting optimal hyperparameters at scale [1]. This paper demonstrates that this idea can inspire similar algorithmic progress in numerical linear algebra. [1] Tensor Programs V: Tuning large neural networks via zero-shot hyperparameter transfer. Yang et al., 2022 Essential References Not Discussed: None that I'm aware of. Other Strengths And Weaknesses: Strength: Overall, the paper is very well presented. The conceptual insights and experimental results are both strong. Weakness: Three nontrivial assumptions are required to prove the scaling law (Theorem F.1), but they were not verified in the experiments. 
In this case, it's unclear how the theory presented in Section 3.1 can actually explain the observed scaling behavior of the log-determinants. Other Comments Or Suggestions: While FLODANCE performed strongly in the experiments relative to alternatives, it's unclear how useful the absolute level of accuracy it achieves is for practical applications (e.g. model selection). Can the authors compare FLODANCE with alternatives in some downstream applications that require estimating the log-determinants? Questions For Authors: 1. Can you motivate the form of the non-asymptotic correction to the exponent $\nu$? How much does it affect the accuracy of the estimate? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and positive review of our work. This work is indeed part of a broader active research program aimed at providing computational tools for estimating linear algebraic quantities at scale. We will make this clearer in the final version of our paper, and cite the related work for hyperparameter selection that you suggested. Regarding the assumptions underlying the scaling law behavior, we acknowledge that these technical assumptions have been identified but not verified. To do so for the empirical neural tangent kernel matrix may not be realistic; however, we have come across [1], which examines the spectral decomposition of the analytic NTK (an approximation of the empirical variant we consider) on the sphere. In [1, Theorem 4.8], it appears that our Assumption F.1 is verified over a bounded eigenbasis (Assumption F.2), and Assumption F.3 is automatically satisfied as we consider $f_p^\ast = 0$. If desired, we are happy to include this discussion in the final manuscript. More generally, however, the theory is instead used to establish plausibility of the scaling law, since there is a well-characterized class of kernels for which these assumptions hold. We then use the scaling laws as an empirical theory in much the same way that others do in the ML literature. In terms of this empirical behavior, a plot of the residuals of the successive $\log$-determinant increments after removal of the scaling law can be found at http://anonymous.4open.science/r/memdet-E8C1/notebooks/stationarity.pdf. This figure demonstrates that these values are approximately normally distributed, empirically validating the scaling law in Assumption 1 (as well as Assumption 2). This figure is included in the updated version of our work, and clarification has been added regarding the distinction between this empirical verification and the assumptions stated in Appendix F. 
Regarding downstream applications, we note that the $\log$-determinant is typically only one of multiple terms appearing in quantities of interest. For this reason, we believe it is premature to include studies computing these partial quantities as approximations, until we have developed the toolkit to treat the other terms. The non-asymptotic terms that appear in the exponent model sublinear behavior that occurs in the pre-limit. To be clear, they model the function $\nu = \nu(n)$ by allowing for subsequent terms beyond the constant in its Laurent series. These terms were included for flexibility as we found that they do improve accuracy for matrices at the scale of our experiments. However, they are not strictly necessary: when the exponent is treated as a constant in $n$, we found that our method still outperforms SLQ (see http://anonymous.4open.science/r/memdet-E8C1/notebooks/relative.pdf for a version of Table 2, updated to display relative error for reviewer ikyx). Errors and cost also now appear in an updated version of Table 1 found at http://anonymous.4open.science/r/memdet-E8C1/notebooks/comparison.pdf . Algorithm 1 has also been updated to explicitly state how the terms are computed. Please let us know if anything else needs verification or clarification, as we are eager to engage further if necessary to improve our work. [1] Murray, M., Jin, H., Bowman, B., \& Montufar, G. (2023, February). Characterizing the Spectrum of the NTK via a Power Series Expansion. In International Conference on Learning Representations.
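The fitting-and-extrapolation idea can be illustrated on synthetic data that exactly follows a power-law scaling (a toy sketch under an assumed increment model $d_n = c\,n^{-\nu}$ with a constant exponent, not the authors' FLODANCE estimator with its non-asymptotic corrections):

```python
import numpy as np

# Synthetic "log-determinant" curve whose increments follow an exact power law.
c_true, nu_true = 2.0, 0.3
n_all = np.arange(1, 100_001)
d_all = c_true * n_all ** (-nu_true)   # increments d_n = L_n - L_{n-1}
L_all = np.cumsum(d_all)               # ground-truth curve L_n

# Fit (c, nu) on a small "seed" range of 1,000 samples, in log-log space.
n_seed = n_all[:1000]
slope, intercept = np.polyfit(np.log(n_seed), np.log(d_all[:1000]), deg=1)
nu_hat, c_hat = -slope, np.exp(intercept)

# Extrapolate to N = 100,000 by summing the fitted increments.
L_pred = np.sum(c_hat * n_all ** (-nu_hat))
rel_err = abs(L_pred - L_all[-1]) / abs(L_all[-1])
```

On real kernel matrices the increments are noisy and the exponent acquires the sublinear corrections discussed above, but the workflow is the same: fit on a cheap seed range, then extrapolate to the full dataset.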
Summary: This work proposes a method to scale the calculation of the matrix determinant to extremely large matrices, especially in the context of ill-conditioned matrices such as the empirical NTK matrix. They first develop a memory-constrained algorithm which computes the determinant exactly and can serve as a baseline for their later experiments. The core idea behind their main method is to derive a scaling law for the determinant and then use the obtained fit to extrapolate to determinants of even larger matrices. The approach is quite interesting and, besides helping with the particular problem, might also inspire techniques that utilize extrapolation for computation of spectral quantities pertaining to neural networks. Claims And Evidence: - The experimentation is all specific to NTK. But the method is pitched as a method which is generally applicable. While that's true, I don't think there is strong empirical evidence to support this claim. Especially the scaling law approach may not work in other contexts. Also, see the methods and evaluation criteria section. Methods And Evaluation Criteria: - It would have been more interesting to see the use of the determinant in some specific context, and then observe that a better estimation leads to better improvements. But, you know, often it's not the value of the determinant in isolation that is needed, and rather some other quantities too. And then, in deep learning, things can always turn out a bit weird, so it would have been nice to ground it somehow. - Another thing I am not fully sure about is, if you just want to calculate the determinant, how many samples of the dataset must be used to do so. This is very unclear. It could be that all estimates using 1000 samples are terrible, but you really need like 50K samples to get to the most accurate estimates. Therefore the gold standard is a bit hard to gauge. 
Perhaps the authors can do some testing in an appropriately downscaled setting, where, say, the determinant is exactly computed in 64-bit precision on all datapoints in the training set. Right now it is unclear how much to read into their FLODANCE estimates being closer to the MEMDET baseline. - Finally, it is also unclear how specific the determinant estimates are to the 'seed'-subset used for the scaling laws. See the questions section below for elaboration. Theoretical Claims: -- Experimental Designs Or Analyses: In general, the experimental design seems fine but I have a few questions about it: - What is the floating-point precision used for the baselines such as block diagonal, SLQ, Pseudo-NTK? - Do all the methods under comparison use the same-sized subset? - What is $p$ in Table 2? Also, I suppose the MEMDET baseline is the $L_n$ one? If so, please label it more explicitly; you don't remember all possible symbols in your first few reads :) Supplementary Material: No. Relation To Broader Scientific Literature: I think the use of scaling laws could be an interesting direction for numerical estimation, which I believe is relatively unexplored. Essential References Not Discussed: -- Other Strengths And Weaknesses: Overall the paper is well written, and the idea of using scaling laws is clever. But how accurate and robust these determinant estimates are, and how much downstream applications gain from them, are far from clear. Other Comments Or Suggestions: It's not clear to me how much the details of MEMDET are relevant to an ML audience. Maybe put some of that in the appendix, and describe the scaling law and its theory a bit better. Questions For Authors: - How specific are the determinant estimates to the subset used for deriving the scaling laws? Can you present some experimental evidence which would support that your estimate is robust to this change? On a similar note, how big must the subset be to allow for this robustness? 
- I believe the scaling laws have been shown in the context of LLMs trained in an online manner. Is there evidence that similar scaling laws also hold in the classical vision settings as used here? Otherwise, that should be stated as an assumption as well. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for taking the time to review our paper, and for their constructive feedback. Regarding the general applicability of our methods: we stress that MEMDET computes the exact (64-bit) determinant, and is a general method that works for arbitrary matrices. On the other hand, our neural scaling law approach, FLODANCE, is only suggested to work for Gram matrices satisfying Assumption 1. In our writeup and experiments, we focus on NTK matrices, since they are known to be a particularly difficult class of kernel matrices. Other algorithms (e.g. SLQ) already perform very well on better-behaved, more traditional classes of matrices. For modern neural networks, the corresponding NTK kernels are non-stationary, and the Gram matrices tend to be highly ill-conditioned and very large, often not being feasible to compute explicitly: hence the need for our algorithm. We believe that this makes NTK matrices particularly useful examples to highlight the utility and efficacy of our method. However, we do recognize that other Gram matrices are commonly used in ML applications. In particular, we computed the Gram matrix associated to the linear model of coregionalization [1], with 10 outputs and a Matern kernel over 10,000 data points. Plots demonstrating the performance of FLODANCE on this dataset, along with experimental details, can be found at http://anonymous.4open.science/r/memdet-E8C1/notebooks/matern.pdf . As the reviewer has pointed out, the $\log$-determinant often appears as one term in more elaborate quantities of interest. Examples of this are the quadratic-form term in the $\log$-marginal likelihood of a Gaussian process (along with its gradients for the purposes of training), and the curvature term that appears in the IIC. Estimating these quantities is a clear goal. 
However, the other terms appearing in these quantities are highly challenging to estimate as well, requiring separate novel techniques that we believe will each be of independent interest. This work is part of a broader research program under active development, and the current submission already contains two novel techniques for computing and estimating a crucial term, with corresponding software. We believe this strikes an appropriate balance of utility, impact and digestibility for an outlet such as ICML. To clarify the sensitivity of FLODANCE to subsample size and seed subset, we have conducted the following experiments. First, http://anonymous.4open.science/r/memdet-E8C1/notebooks/param_selection.pdf shows the effect of the number of training samples on prediction accuracy, and describes the experimental setup. As is to be expected, the general trend is that using more samples results in better prediction. There is a natural tradeoff between computation time and accuracy that is up to the practitioner to determine. Second, we tested the sensitivity of the estimate to the training sample. To this end, we sampled an ensemble of 15 submatrices of the NTK for ResNet-50 trained on CIFAR10. Each submatrix contains 10,000 data points with 10 classes, forming a $100,000\times100,000$ NTK matrix. Experimental details and plots can be found at http://anonymous.4open.science/r/memdet-E8C1/notebooks/scale_law_pred_uq.pdf . Third, we upgraded our computations on both architectures to use the full CIFAR-10 dataset ($n = 50,000$), resulting in NTK matrices of size $m = 500,000$. For ResNet9, we also recomputed log-determinants using 64-bit precision via MEMDET. The updated results are provided in http://anonymous.4open.science/r/memdet-E8C1/notebooks/scale_law_fit.pdf (was Figure H.1) and http://anonymous.4open.science/r/memdet-E8C1/notebooks/scale_law_pred.pdf (was Figure 4). 
The 64-bit computation for ResNet50 is underway and will be completed in a few days, with updated results included in the final version. We note that while the new plots are derived from the same NTK kernel and methodology, their appearance differs from earlier versions due to a technical adjustment. To improve quantization in 32-bit precision, we had previously scaled the NTK matrices, giving an additive term proportional to $n$ in the $\log$-determinant. Hence, the curves differed from the current ones by a linear function of $n$. This shift does not affect the fitting behavior or predictive quality, but it alters the shape of the curve. In the updated 64-bit computations, this scaling was no longer necessary, resulting in cleaner and more interpretable visualizations without any transformation applied. As is evident by these additional studies, FLODANCE is quite robust to these factors. We thank the reviewer for suggesting these experiments, as we believe their inclusion in the final version of the document will improve the quality of our work. Please let us know if anything else needs verification or clarification, as we are eager to engage further if necessary to improve our work. [1] https://doi.org/10.1007/BF02066732
One Wave To Explain Them All: A Unifying Perspective On Feature Attribution
Accept (poster)
Summary: The paper explores feature attribution by determining the importance of individual wavelets in prediction tasks. Unlike traditional vision-based attribution methods that assess pixel importance, this approach evaluates how wavelets contribute to model predictions. The key idea is to compute gradients with respect to wavelets to measure their significance. Extensive experiments are conducted to validate the effectiveness of the proposed method. Claims And Evidence: The paper claims to introduce a new method that improves interpretability in feature attribution by identifying the importance of individual wavelets. However, while the method assigns importance scores to wavelets, it is unclear how this directly improves interpretability. Wavelets are not inherently interpretable features, and the paper does not provide sufficient justification for why identifying important wavelets leads to a more interpretable representation. Methods And Evaluation Criteria: The authors select one dataset per modality out of the three considered, which is a reasonable choice given the scope of their study. Additionally, they assess faithfulness using the Insertion and Deletion metrics. While these provide valuable insights, it is worth noting that faithfulness can be defined in multiple ways, and alternative definitions may offer complementary perspectives. For example, in the context of NLP, the survey "Towards Faithful Model Explanation in NLP: A Survey" discusses various definitions of faithfulness, many of which share common ideas with vision-based approaches. Theoretical Claims: There are no theoretical claims in the paper Experimental Designs Or Analyses: -- Supplementary Material: I briefly glimpsed the references section. Relation To Broader Scientific Literature: The paper presents an approach positioned within explainability research, contributing to the broader literature by examining the importance of different wavelets. 
Prior work in explainability has typically emphasized methods that rely on interpretable features, ensuring that the extracted insights align with human understanding. In contrast, this paper leverages non-interpretable features, which, while valuable in their own right, may not directly advance explainability in its conventional sense. Given this, it may be beneficial to consider situating the paper within a broader methodological context—perhaps as an approach that improves model performance or provides alternative forms of analysis—rather than as a direct contribution to explainability. Such a reframing could more accurately reflect its impact within the field. Essential References Not Discussed: The paper's presentation may give the impression that XAI has been primarily focused on vision-based applications, as stated in “While XAI has been predominantly applied in image classification, it is also extending into other fields, such as audio and volume classification.” However, XAI has also been widely explored in other domains, such as NLP, where numerous explainability methods have been developed and studied. Citing relevant works from NLP and other areas could provide a more comprehensive view of the broader landscape of XAI research (e.g., Towards Faithful Model Explanation in NLP: A Survey, A Comparative Study of Faithfulness Metrics for Model Interpretability Methods, Faithfulness Tests for Natural Language Explanations) Other Strengths And Weaknesses: **Strength**: The paper presents a nice idea that is innovative and has potential value in the field. The authors have clearly invested significant time and energy into the experimental aspects of the work, which is evident from the detailed experiments. **Weakness**: A primary weakness of the paper is that wavelets, as used in the proposed method, are not inherently interpretable features. 
This raises concerns about the overall interpretability of the method, as the use of wavelets may undermine the transparency of the approach. Additionally, the presentation of the algorithm could be improved. Specifically, when the authors mention "directly evaluate $\partial f_c(W^{-1}(z))/\partial z$", it is unclear how this is achieved. Providing more clarity on the exact steps of this evaluation would significantly improve the paper's presentation and make the methodology more accessible to readers. Other Comments Or Suggestions: The title suggests a unifying perspective, but it is unclear what exactly is being unified. At first glance, it resembles the SHAP paper, which unifies different explanation methods for various attribution techniques. Clarifying what is being unified—whether it's explanation methods, domains, or something else—would provide better clarity for the reader. Questions For Authors: -- Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We first would like to thank reviewer UvKx for their review of our work and for underlining the potential of our idea, which they regarded as innovative. We would like to address the comments raised by the reviewer. These comments will be taken into account and will help us improve the quality and clarity of our work. ### Interpretability of wavelet coefficients The reviewer challenged our assertion that wavelet coefficients are interpretable features and stated that we did not provide enough justification. We kindly refer the reviewer to **section 2.2, page 3, where we explain why wavelets are interpretable features for feature attribution**. We add that the wavelet transform of a signal provides a more interpretable representation of this signal than raw pixels or Fourier transforms by **capturing spatial and frequency information**. Fourier transforms only encode frequency content whereas wavelets preserve localization, making them particularly effective for analyzing structured data like audio and images. In audio, for instance, wavelets isolate transient components corresponding to phonemes, yielding a more intuitive representation of meaningful patterns. Wavelets **extract high-level features that better convey the overall structure of a signal**, making them a more interpretable alternative to the original data representation. **For images, wavelet coefficients can be intuitively understood by viewing them as capturing details at different scales**, like how our vision works when we look at an image from different distances. **Wavelet coefficients align with the definition of interpretability by Kim et al. (2016, [1]), as the decomposition into different scales enables users to predict how the method will emphasize specific features (e.g. edges or gridded patterns on images)**. We would like to thank the reviewer for pointing this out and will further enrich Section 2.2 with the provided explanation. 
### On the scope and positioning of the paper and our claim on unification The reviewer stated that our paper could be situated among approaches that improve model performance or alternative forms of analysis. While this perspective is interesting, as our work indeed bridges the gap between explainability and model robustness for instance, **we believe that our work aligns well with explainability and feature attribution as we essentially propose to attribute an importance score to input features, taken as the wavelet coefficients of the signal of interest rather than its pixels** as in traditional feature attribution methods. The reviewer mentioned that it was unclear what we aimed to unify, a concern shared by R1. We thank the reviewer for this remark and acknowledge that indeed, we remained vague about what we aimed to unify. **Unlike SHAP, we do not unify attribution methods but rather unify the domain in which attribution is carried out into a single input representation, the wavelet transform of the signal**. We edited our manuscript to reflect the changes. ### Clarifications on the evaluation of $\partial f_c(W^{-1}(z))/\partial z$ The reviewer mentioned that it was unclear how the computation of $\partial f_c(W^{-1}(z))/\partial z$ was achieved. In PyTorch, we **enable gradient tracking on the wavelet transform $z$ of the signal** and reconstruct the original image from its wavelet coefficients, $x=W^{-1}(z)$. We **carry out a forward pass and compute the derivative of the prediction with respect to the wavelet coefficients**. Backpropagation thus yields the gradients of the model with respect to the wavelet coefficients of the input signal. **The novelty lies in expanding existing attribution methods (SmoothGrad and IntegratedGrad) into the wavelet domain**.
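For intuition, the procedure described above can be sketched in a few lines of PyTorch. This is an illustrative toy, not the authors' implementation: it uses a hand-rolled one-level 2D Haar transform (so that both $W$ and $W^{-1}$ stay differentiable) and a random linear classifier standing in for $f$; the names `haar2d` and `ihaar2d` and the chosen class index are made up for this sketch.

```python
import torch

def haar2d(x):
    # One-level 2D Haar DWT of a (B, C, H, W) tensor (H, W even);
    # returns the four sub-bands stacked along the channel axis.
    a, b = x[..., 0::2, 0::2], x[..., 0::2, 1::2]
    c, d = x[..., 1::2, 0::2], x[..., 1::2, 1::2]
    return torch.cat([(a + b + c + d) / 2,   # LL (approximation)
                      (a - b + c - d) / 2,   # LH (detail)
                      (a + b - c - d) / 2,   # HL (detail)
                      (a - b - c + d) / 2],  # HH (detail)
                     dim=1)

def ihaar2d(z):
    # Inverse transform x = W^{-1}(z); also differentiable.
    ll, lh, hl, hh = torch.chunk(z, 4, dim=1)
    a = (ll + lh + hl + hh) / 2
    b = (ll - lh + hl - hh) / 2
    c = (ll + lh - hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    top = torch.stack([a, b], dim=-1).flatten(-2)  # interleave columns
    bot = torch.stack([c, d], dim=-1).flatten(-2)
    return torch.stack([top, bot], dim=-2).flatten(-3, -2)  # interleave rows

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 10))

x = torch.rand(1, 3, 8, 8)
z = haar2d(x).detach().requires_grad_(True)  # attribute in the wavelet domain
score = model(ihaar2d(z))[0, 3]              # f_c(W^{-1}(z)) for some class c
score.backward()
saliency = z.grad.abs()                      # |d f_c(W^{-1}(z)) / d z|
```

SmoothGrad or Integrated Gradients in the wavelet domain then amount to averaging such gradients over noisy or interpolated versions of `z`.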
### Discussion of the faithfulness metric & additional references The reviewer highlighted an additional definition of faithfulness, as discussed in "Towards Faithful Model Explanation in NLP: A Survey". We thank the reviewer for this reference and will add it to our manuscript. **Regarding our definition of faithfulness, we acknowledge that there are several definitions of this notion**. For this work, we chose the definition of Muzellec et al (2023), but as **we believe that one metric is not sufficient to reflect the behavior of a method, we extensively evaluated our method with alternative metrics in the supplementary materials, section B.2, p.16**. The reviewer underlines that we do not discuss references from NLP. We would like to recall that **since our method cannot be theoretically applied to text data, we did not consider works in this field (see Introduction, second column, l47-48), although we acknowledge that there have been many works in the field of XAI for NLP**. We thank the reviewer for the reference and will add it to our manuscript. - [1] Kim et al, 2016. “Examples Are Not Enough, Learn to Criticize! Criticism for Interpretability.” NIPS’16 --- Rebuttal Comment 1.1: Comment: The authors clearly took the time to write a detailed rebuttal, which I appreciate. However, even after rereading Section 2.2, I still do not understand why wavelets are considered interpretable features. Since the authors rely on a feature importance method, it is crucial that the features themselves are interpretable. In tabular data, interpretability typically refers to features like "age" or "height"—concepts that are inherently understandable to humans. How do wavelets fit this criterion? The authors argue that wavelets are more interpretable than the Fourier transform, but being "more interpretable" than Fourier does not necessarily make them interpretable in an absolute sense.
--- Reply to Comment 1.1.1: Comment: We appreciate the prompt response and rebuttal acknowledgment from the reviewer. Individual input features for tabular data are indeed highly interpretable. This **high interpretability, however, is not available for input features of image/audio/3D data (pixels/audio samples/voxels)**. For such domains, concept-based/prototypical explanations fill this gap, where individual units of interpretation (concepts/prototypes) are regarded as human understandable. **We agree that wavelet coefficients are not as understandable as concepts/prototypes.** However, concept or prototypical explanations are extracted through internal representations of the network. They require information about the network architecture and also explicit access to internal layers, which is not possible in many cases (e.g., for proprietary models accessible only via APIs). This is why post-hoc attribution methods and saliency visualization are still important, as they are the only tools to offer insights in such cases. **Our method is compared against post-hoc attribution methods for the aforementioned data domains, which do not require internal model information as prototypical or concept-based approaches do**. Among these methods, as we argue in the rebuttal (to you and reviewer KCeN) and paper (Sec 2.2), **wavelets provide a more suitable and interpretable representation than raw input features, super-pixels or Fourier coefficients to perform attribution**. Moreover, they also provide a clean pathway to unify attribution methods for multiple modalities, i.e. image/audio/3D data. Here is an application that we hope will make the wavelet coefficients more intuitive: the textures isolated by wavelet coefficients can correspond to features in medical images, such as glaucoma [1].
The advantage of classifying glaucoma images using wavelet-based descriptors is that these descriptors are fixed and defined in a closed form, as opposed to features from CNN models, which can be hard to decipher. Our approach leverages the expressivity of wavelets to explain modern—and more accurate—classification models. In summary, while we acknowledge the validity of your underlying point, we believe that WAM should be evaluated in comparison to post-hoc attribution methods for image/audio/3D data, where interpretable concepts either do not exist or require access to the model's inner layers. In contrast, the wavelet transform can capture and disentangle **textures, edges, and other patterns that correspond to intuitive attributes that vary depending on the context** (e.g., glaucoma, as mentioned earlier, or fields and roads in remote sensing images, veins in leaf images, etc.). - [1] Dua, S., Acharya, U. R., Chowriappa, P., & Sree, S. V. (2011). Wavelet-based energy features for glaucomatous image classification. IEEE Transactions on Information Technology in Biomedicine, 16(1), 80-87.
Summary: This paper proposes an explanation method for DNNs by using wavelet coefficients as features for attribution instead of image features. The proposed WAM can be adopted across diverse modalities, including audio, images, and volumes. It unifies and extends existing methods, SmoothGrad and Integrated Gradients, within the wavelet domain by transporting the gradient w.r.t. the input to the transformed wavelet domain. ## update after rebuttal Thank the authors for the detailed rebuttal. My concerns are basically addressed. For the results of localization evaluation (Point Game) and analysis of multi-class cases, the authors haven't updated the rebuttal yet. Still, I strongly recommend adding these results to the final version of the paper. I will keep my rating. Claims And Evidence: - Using wavelets to decompose image information is a novel view for interpreting the model’s decision and obtaining what information the model has seen when making the prediction. The methodology seems reasonable and is claimed clearly. - The key part of the proposed wavelet method is transporting the gradient w.r.t. the original input to the gradient w.r.t. the transformed input (wavelet domain). This means the proposed method cannot exploit the intermediate features generated by the model, which are widely used by other gradient-based explanation methods [A, B, C] [A] Selvaraju R R, Cogswell M, Das A, et al. Grad-cam: Visual explanations from deep networks via gradient-based localization[C]//Proceedings of the IEEE international conference on computer vision. 2017: 618-626. [B] Jiang P T, Zhang C B, Hou Q, et al. Layercam: Exploring hierarchical class activation maps for localization[J]. IEEE Transactions on Image Processing, 2021, 30: 5875-5888. [C] Zhao C, Chan A B. ODAM: Gradient-based Instance-Specific Visual Explanations for Object Detection[C]//ICLR. 2023.
Methods And Evaluation Criteria: - For evaluating explanation methods for image inputs, in addition to faithfulness evaluation, it is also better to provide localization evaluation, Pointing Game [D], and visual explanation accuracy [E]. [D] Zhang J, Bargal S A, Lin Z, et al. Top-down neural attention by excitation backprop[J]. International Journal of Computer Vision, 2018, 126(10): 1084-1102. [E] Oramas J, Wang K, Tuytelaars T. Visual explanation by interpretation: Improving visual feedback capabilities of deep neural networks[J]. arXiv preprint arXiv:1712.06302, 2017. Theoretical Claims: Seems no problem. Experimental Designs Or Analyses: - About the visualization results in A.3, the distinction between dog and cat as target class is less apparent in the wavelet domain. Does that demonstrate that when there is a multi-category in the image, the model cannot be interpreted well in the wavelet domain? Should give more examples and discussion on this kind of case. Supplementary Material: I have reviewed the appendix, and there's a question about the visualization results in A.3. Please see the Experimental Designs Or Analyses part. Relation To Broader Scientific Literature: This paper proposes a way to interpret models with inputs of audio and volumes. Almost all previous explanation methods are designed for models with image inputs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: - duplicated reference for Grad-CAM (Page 11 Line 593-604) Questions For Authors: The questions mentioned above, and is it possible to 1) adopt wavelets in the feature layer; 2) build a wavelet-based explanation method for interpreting Transformers; 3) apply a wavelet-based explanation method to other task models like object detectors and VLMs? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer EDJ8 for the review and the comments on our work. We also thank the reviewer for pointing out the duplicate reference, which we have corrected. ### Regarding the evaluation criteria The reviewer suggested including localization evaluation, Pointing Game, and visual explanation accuracy alongside faithfulness evaluation for assessing explanation methods for image inputs. We thank the reviewer for providing additional benchmarks. **We kindly refer the reviewer to the supplementary material, as we have evaluated our method across a wide range of common metrics from the existing literature**. To complement these results, **we are currently implementing the Pointing Game and Visual Explanation Accuracy, as suggested by the reviewer, and will update the rebuttal if we get results**. ### Regarding the multiclass classification (Cats and Dogs examples) The reviewer outlined the fact that from appendix A3, the distinction between dog and cat as target classes is less apparent in the wavelet domain and suggested providing more examples and discussion on multi-category interpretation challenges. **To be more conclusive regarding the assertion ''Does that demonstrate that when there is a multi-category in the image, the model cannot be interpreted well in the wavelet domain'', we are gathering more examples**. Localization benchmarks should also give us information on the behavior in multi-category settings. **We will try our best to get the results and will update the rebuttal**. ### Questions *1. adopting wavelet in the feature layer;* This is an interesting question and was actually planned for future work. **To the best of our knowledge, no work has adopted wavelets in the feature layers**. Like [A,B] and contrary to the statement of the reviewer, the wavelet decomposition does not prevent us from using the intermediate layers of the model to compute the explanations.
**In principle, by applying wavelet decomposition to intermediate feature maps, we could obtain a "multiscale interpretation" of feature importance**. *2. wavelet-based explanation method for interpreting Transformer;* If the reviewer refers to the interpretation of the attention mechanism using wavelets, it is an interesting suggestion but beyond the scope of this work. **If it is meant applying WAM to Transformer-based architectures, then we kindly refer the reviewer to the supplementary materials B2 where we apply the WAM to a wide variety of topologies (ViT, ConvNext, 3D Former)**. Results remain the same no matter the choice of the classification model. *3. wavelet-based explanation method for other task models like object detectors and VLMs* These are very good suggestions and we would like to thank the reviewer for these remarks. We remained focused on the classification task but **in principle, our method could be expanded to object detectors. The way to follow would be to expand ODAM to the wavelet domain**. Regarding VLMs, since our method unifies the domain for feature attribution it makes sense to look into interpreting multimodal models. However **VLMs handle text data, and this modality is unsuitable for the wavelet transform, so attribution in a suitable latent space for text data should be explored**. We are currently applying WAM to object detectors and will update the rebuttal if we get results.
Summary: Presents a feature attribution method that performs attributions on wavelets derived from input domain. This helps to naturally extend explanations that are outside the image domain, such as an audio input domain. The method essentially constructs a wavelet transform of the input, then applies standard gradient and IG attribution methods to wavelet-domain inputs. Insertion/deletion tests for faithfulness are performed to compare WAM to other popular methods across audio, 3d, and image domains. ## update after rebuttal After reading the rebuttal, I am confident in my original assessment. I encourage the authors to include analysis of choice of $\Lambda$ and various wavelet choices, which I could not find in the original paper. We also encourage the author to address if wavelets have any absolute edge over all other methods and are, in some analytic sense, better than all other possible methods, or if wavelets just have a comparative edge over some current popular methods. Upon further reflection, we also recommend an edit for readability. Claims And Evidence: The method, WAM, does appear to have good performance compared to other popular gradient-based attribution methods over a variety of tests. The paper demonstrates some advantage to using wavelets, i.e. for attributing at different feature scales. Methods And Evaluation Criteria: The faithfulness metric is an appropriate method, and the evaluation sets seem appropriate. Theoretical Claims: I looked over the appropriateness of eq 4. Experimental Designs Or Analyses: Insertion/deletion is an appropriate metric, and at least the visual dataset is standard, to my knowledge. Supplementary Material: No Relation To Broader Scientific Literature: The paper piggybacks off of the gradient-based attribution literature (Sundararajan IG paper), adapting the method to better handle models for volume and audio input domains, by using wavelets. 
Thus it contributes to (the small amount of) literature on wavelet-based attributions. Essential References Not Discussed: None known. Other Strengths And Weaknesses: See questions and suggestions Other Comments Or Suggestions: Pg 4, line 209, left: "Varying pixel values provide no information to what is changing on the image." Please expound. Changing on the image? No information to what? Your paper claims WAM is a unifying perspective of feature attribution. When I read this, I expected the paper to present an analysis that unifies multiple attribution methods under one theory. However, the paper seems to provide a method that is adaptable to multiple input domains. Perhaps some more up-front clarity/renaming would clarify this fact and prevent confusion. I can see how wavelets is another way to attribute to the input domain, so that attributions now incorporate a sense of scale. While this is an advantage, can you provide any argument as to why wavelets are $\textit{the}$ appropriate terms in which to attribute? What about super-pixels? What about Fourier transforms and attributing in k-space? Is this method more appropriate than those, theoretically or experimentally? Questions For Authors: What wavelet functions were chosen for each experiment? What $\Lambda$ was chosen? Why were they chosen? Sorry, I did not catch this on my read-through; perhaps they should be presented more prominently. How does the faithfulness score and the quality of explanations vary with the choice of wavelet function and choice of $\Lambda$? Is it sensitive to these choices? A major component of IG is completeness, i.e., the total change in function value at the baseline vs input value is accounted for by the attributions. This gives meaning to the value of the IG output: i.e., the value of an IG attribution is equivalent to function change. Does WAM_IG satisfy completeness? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We first would like to thank reviewer KCeN for reviewing our manuscript and for the comments on our work. The points raised by the reviewer will help us improve the quality of our work. **Question** *Pg 4, line 209, left: [...] Please expound / I can see how wavelets is another way to attribute to the input domain, so that attributions now incorporate a sense of scale. [...] can you provide any argument as to why wavelets are appropriate terms in which to attribute? What about super-pixels? What about Fourier transforms and attributing in k-space? Is this method more appropriate than those, theoretically or experimentally?* Our formulation "varying pixel values provide no information to what is changing on the image" is unclear and we modified it to "moving from one pixel to the next is only a shift in the spatial domain and does not capture relationships between scales or frequencies". This sentence explains why, in our view, the pixel domain is insufficient to interpret the decision of a model. **The wavelet domain captures both the spatial component (as done by pixels, or superpixels) and the spectral component, as done by the Fourier transform**. Either one of these approaches alone is unsuitable for attribution (see [1,2] for Fourier), as **for attribution a spatial or temporal dimension is required and a spectral dimension desirable** to assess what the model sees at a given location. **Question** *What wavelet functions were chosen for each experiment? What was chosen? Why were they chosen? [...] perhaps they should be presented more prominently. / How does the faithfulness score and the quality of explanations vary with the choice of wavelet function and choice of $\Lambda$? Is it sensitive to these choices?* **The quality and faithfulness of explanations does not change with the choice of the mother wavelet**.
The quality and faithfulness of the explanation stem from the multi-scale decomposition, which is a property of the $\Lambda$, irrespective of the choice of the mother wavelet. The choice of the mother wavelet determines how the input signal is decomposed, influencing whether finer details or broader structures are emphasized in the wavelet coefficients. By default, we considered the Haar wavelet, due to its semantic properties but also because it is fast to compute. **Additional experiments verified that the choice of the wavelet does not change the quantitative results**. We evaluated $WAM_{IG}$ on a ResNet-50 model using Daubechies and Bior wavelets. We updated the supplementary materials to state more clearly our choice of wavelet and discuss what wavelets can be chosen. On the other hand, **the choice of $\Lambda$ remained fixed and we did not explore beyond the dyadic transform because the multi-scale property of our decomposition fundamentally relies on its dyadic structure**. Using non-dyadic decompositions could disrupt the natural hierarchy of scales, leading to a loss of spatial localization and a less structured frequency representation. **Question** *Your paper claims WAM is a unifying perspective of feature attribution. [...]. However, the paper seems to provide a method that is adaptable to multiple input domains. Perhaps some more up-front clarity/renaming would clarify this fact and prevent confusion.* We acknowledge that we remained elusive on what we aimed at unifying, a concern shared by R3. We will edit our manuscript to explicitly state what we aim to unify in this work. **Question** *A major component of IG is completeness. [...] Does WAM_IG satisfy completeness?* For completeness, only **showing that the inverse wavelet transform keeps things differentiable and that the initial and final points are the same is enough (see [3], Prop 1)** to show that $WAM_{IG}$ satisfies completeness.
Formally, IG satisfies completeness if $F$ is differentiable almost everywhere and $$ \sum_{i=1}^n IntegratedGrad_i(x) = F(x) - F(x_0) $$ where $x_0$ is a baseline black image such that $F(x_0)\approx 0$. To ensure that $WAM_{IG}$ satisfies the condition for completeness, we have to ensure that (1) $F(W^{-1}(z))$ is differentiable almost everywhere and that (2) we can set $z_0$ such that $W^{-1}(z_0)$ is a black image. **(1) depends on the choice of the mother wavelet $\psi$** (we need the mother wavelet to be smooth) and **(2) is obtained by setting the wavelet coefficients to 0**, so **$WAM_{IG}$ satisfies completeness if $\psi$ is smooth**, e.g. with Daubechies but not with Haar wavelets. - [1] Yin et al (2019). A Fourier perspective on model robustness in computer vision. NeurIPS - [2] Chen et al (2022). Rethinking and improving robustness of convolutional neural networks: a Shapley value-based approach in frequency domain. NeurIPS - [3] Sundararajan et al (2017). Axiomatic attribution for deep networks. ICML
An Efficient Pruner for Large Language Model with Theoretical Guarantee
Accept (poster)
Summary: The paper proposes a proximal-operator-based approach that allows pruning linear layers in large language models. The paper provides a solid theoretical analysis of the method as well as experiments on a Llama 7B, where it (marginally) improves over previous methods. The proximal operator is the (known) elementwise hard thresholding operator. The key innovations of the paper are the addition of an acceleration step that guarantees descent as well as an adaptive selection of the regularization parameters, depending on how far from the target sparsity level the current matrix is. Claims And Evidence: The theoretical claims are well supported with rigorous statements and proofs. These hold irrespective of the application area (e.g. large language models). The empirical claim "outperforms state of the art pruning techniques" is not supported enough IMO. The experiments are only conducted on Llama-7B, which is an outdated model and also quite small. To profoundly support this claim, I'd expect more results on other and more modern models. Furthermore I strongly recommend always adding the dense model performance as a baseline. From Table 1 it seems that mAIHT improves, but on a very tiny scale compared to the gap to the base model. In the abstract they say that "pruning is an effective solution to reduce model size". I think this is somewhat false. First, the authors only consider unstructured sparsity which cannot be run on GPU. Second, the performance drop on recent models (LLaMA) is quite large. I do not think anyone is actually using those pruned models. The one model that comes closest is https://neuralmagic.com/blog/24-sparse-llama-smaller-models-for-efficient-gpu-inference/ but it also requires fine-tuning after pruning. I thus recommend toning it down a little bit. Methods And Evaluation Criteria: The proposed methodology is very reasonable and fits the use case very well. As mentioned above, the empirical evaluation is not sufficient.
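For readers unfamiliar with it, the hard-thresholding proximal operator referred to in the summary above has a one-line closed form (a standard result, stated here for context; it is not specific to this paper). Minimizing the separable objective coordinate-wise,

$$\operatorname{prox}_{c\|\cdot\|_0}(v)_i = \arg\min_{w}\ \tfrac{1}{2}(w - v_i)^2 + c\,\mathbf{1}[w \neq 0] = \begin{cases} v_i & \text{if } |v_i| > \sqrt{2c}, \\ 0 & \text{otherwise,} \end{cases}$$

since keeping $v_i$ costs $c$ while zeroing it costs $v_i^2/2$. With a gradient step of size $\alpha$ and penalty $\lambda\|W\|_0$, this yields elementwise hard thresholding at level $\sqrt{2\alpha\lambda}$.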
Theoretical Claims: I checked Proposition 2.1 in detail and skimmed the other proofs. Experimental Designs Or Analyses: Beyond the limitation of the evaluation in terms of datasets, I am wondering to what part of their innovation they attribute the improvement. Are they finding a better mask or do they better optimize the remaining weights? [1] reports that applying masked gradient updates also improves Wanda and SparseGPT. Can you add some ablation? Also in section 2.4 you say that you use a projected gradient descent step after freezing the sparsity pattern (line 9 in the algorithm). AFAIK SparseGPT does not do this. Or can you point me to where they do it (either code or paper)? I think this might be a novelty of the present work. Concurrent work [1] also uses it, but does not need to be considered due to recency. What's also missing is a) code to reproduce the results and b) a rough number for how long the method takes in wall-clock time. Whilst it is usually much smaller than training a model, for the research community it matters a lot. [1] A Proximal Operator for Inducing 2:4-Sparsity (https://arxiv.org/abs/2501.18015) Supplementary Material: I checked the Proof of Proposition 2.1 and skimmed the remainder. Relation To Broader Scientific Literature: The relation to the broader literature is appropriately given. The main established baselines are Wanda and SparseGPT and it also considers more recent work. Essential References Not Discussed: N/A Other Strengths And Weaknesses: This is a rather theoretical paper. I think its theoretical contribution is strong and relevant beyond how well it works on the currently hyped LLMs. On the other hand, the empirical improvements on LLM pruning are negligible. Other Comments Or Suggestions: - in Section 3, line 196, I think it is worthwhile adding a sentence on why this is the Lipschitz constant of the gradient. Might not be clear to all readers. - the formatting on page 5 is over the boundaries. Please fix this.
you have enough space left. - Section 4.1 line 307. Notice that this is exactly the change that moves magnitude pruning to Wanda. It might be worth mentioning this to the reader. Questions For Authors: Can you get results on a Llama 3.1 model and also on a 70B model? Will you publish the code? How fast is your code compared to Wanda or SparseGPT? Code Of Conduct: Affirmed. Overall Recommendation: 3
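The Lipschitz constant asked about in the comments above follows directly from the layer-wise objective (a standard derivation, spelled out here for context): for $f(W) = \tfrac{1}{2}\|XW_{\mathrm{orig}} - XW\|_F^2$ one has $\nabla f(W) = X^\top X\,(W - W_{\mathrm{orig}})$, so

$$\|\nabla f(W_1) - \nabla f(W_2)\|_F = \|X^\top X\,(W_1 - W_2)\|_F \le \lambda_{\max}(X^\top X)\,\|W_1 - W_2\|_F,$$

i.e. the gradient is Lipschitz with constant $L = \lambda_{\max}(X^\top X) = \|X\|_2^2$.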
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful feedback. Below, we provide our responses and clarifications. **Can you get results on a LLaMA 3.1 model and also on a 70B model?** We appreciate the reviewer's valuable suggestion. Due to resource constraints, we have conducted additional experiments on the LLaMA-13B model with 0.5 sparsity. The results are summarized below: |Methods|wikitext-2| |---|---| |SparseGPT|6.2535| |mAIHT|**6.1336**| |Methods|BoolQ|RTE|HellaSwag|ARC-e|ARC-c|WinoGrande|OBQA|Mean| |---|---|---|---|---|---|---|---|---| |SparseGPT|**76.06**|**60.28**|74.00|67.55|41.89|**71.98**|44.20|62.28| |mAIHT|75.47|**60.28**|**75.08**|**70.03**|**44.88**|71.42|**44.80**|**63.14**| These results demonstrate that mAIHT achieves better perplexity and mean performance compared to SparseGPT. We will consider additional experiments on newer models, such as LLaMA-3, in future work. --- **Will you publish the code?** **Reply**: If the paper is accepted, we will release the code along with detailed documentation and a usage tutorial to facilitate reproducibility and further research. --- **How fast is your code compared to Wanda or SparseGPT?** We compare the runtime of mAIHT, Wanda, and SparseGPT for pruning the LLaMA-7B model. The results, including the time required to generate input activations, are as follows: | |Time (s)| |---|---| |Wanda|148.55| |SparseGPT|609.04| |mAIHT|1370.79| While mAIHT is slower than other methods due to its advanced gradient-based optimization for layerwise reconstruction, its pruning time remains negligible compared to LLM fine-tuning. For reference, Wanda [1] reports that fine-tuning LLaMA-7B with LoRA takes 24 hours on a V100 GPU, while full fine-tuning requires 24 days. In contrast, mAIHT prunes the model in under 30 minutes. --- **Reply to the Concerns in Claims and Evidence**: We appreciate the reviewer's concern. 
While our method primarily focuses on unstructured pruning, it can be extended to handle structured pruning (e.g., n:m sparsity, hierarchical sparsity, block sparsity, and row sparsity). For further details and experimental results, please refer to our response to Reviewer iY6M. --- **Reply to Experimental Designs or Analyses**: We thank the reviewer for the insightful suggestion regarding our experimental design. Using projected gradient descent (PGD) to refine weights aligns naturally with our approach. The gradient descent step provides a warm start, bringing the solution close to optimality. PGD then fine-tunes the weights efficiently within the sparsity constraints. To study the impact of refinement, we conducted an ablation experiment on the LLaMA-7B model with 0.5 sparsity. The results on Wikitext-2 perplexity are as follows: | |Perplexity| |---|---| |SparseGPT|7.2397| |mAIHT (without refining)|7.0843| |mAIHT|7.0720| Even without refinement, mAIHT achieves better results than SparseGPT. This suggests that proximal gradient descent implicitly refines retained parameters, unlike SparseGPT. The additional refinement step further improves performance, demonstrating its effectiveness. [1] Sun, M., Liu, Z., Bair, A., & Kolter, J. Z. A Simple and Effective Pruning Approach for Large Language Models.
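The two phases discussed in this rebuttal (proximal hard-thresholding gradient iterations on the $\ell_0$-penalized layer-wise objective, then projected gradient descent on the frozen support) can be sketched as a NumPy toy. This is illustrative only, not the authors' mAIHT code: it omits the acceleration step and the adaptive $\lambda$ schedule, and the dimensions, iteration counts, and $\lambda$ value are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 16))      # calibration activations (n samples x d_in)
W_orig = rng.standard_normal((16, 8))  # dense weights of the layer being pruned

lam = 30.0                              # l0 penalty (made up; chosen so sparsity is visible)
L = np.linalg.eigvalsh(X.T @ X).max()   # Lipschitz constant of the gradient
alpha = 1.0 / L
thr = np.sqrt(2 * alpha * lam)          # prox of alpha*lam*||.||_0 hard-thresholds here

def grad(W):
    # Gradient of 0.5 * ||X W_orig - X W||_F^2 with respect to W.
    return X.T @ (X @ W - X @ W_orig)

# Phase 1: plain (unaccelerated) iterative hard thresholding.
W = W_orig.copy()
for _ in range(200):
    V = W - alpha * grad(W)
    W = np.where(np.abs(V) > thr, V, 0.0)

# Phase 2: freeze the support and refine with projected gradient descent.
mask = W != 0
err_before = np.linalg.norm(X @ (W - W_orig))
for _ in range(200):
    W = (W - alpha * grad(W)) * mask
err_after = np.linalg.norm(X @ (W - W_orig))
```

On the frozen support the objective is a convex quadratic, so refinement steps of size $1/L$ can only decrease the reconstruction error, which is consistent with the ablation above where the refinement step improves over the unrefined solution.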
Summary: This paper addresses the challenge of pruning Large Language Models (LLMs) to reduce computational and storage costs without retraining. The authors reformulate the pruning process as an $\ell_0$-penalized optimization problem and propose a “monotone accelerated Iterative Hard Thresholding” (mAIHT) method. They provide detailed theoretical analyses—covering convergence, convergence rates, and risk upper bounds—and demonstrate experimentally that their approach outperforms several state-of-the-art pruning techniques for LLaMA-7B on tasks such as WikiText2 (perplexity) and multiple zero-shot benchmarks. Claims And Evidence: 1. Claim: The paper claims to offer a layer-wise pruning procedure (mAIHT) that achieves better pruning performance than existing one-shot methods like Wanda, SparseGPT, and ADMM-based pruning. - Evidence: Experimental evaluations on LLaMA-7B at various sparsity levels consistently show improved perplexities on WikiText2 and higher zero-shot accuracies across a variety of tasks. 2. Claim: The proposed $\ell_0$-penalized optimization framework has strong theoretical guarantees. - Evidence: The paper derives the convergence of both the IHT and mAIHT methods, provides sub-linear or linear rates under certain assumptions, and proves risk upper bounds in the setting of finite calibration data. 3. Claim: The monotone accelerated variant (mAIHT) converges more quickly and finds solutions of higher quality than naive iterative hard thresholding alone. - Evidence: Single-layer experiments and subsequent ablation-like studies show that mAIHT attains lower reconstruction error (based on $\|XW - XW^{\text{(orig)}}\|_F$) more rapidly. Methods And Evaluation Criteria: - Methods: - $\ell_0$-penalized Formulation: The pruning is cast as minimizing $\frac{1}{2}\|XW_{\text{orig}} - XW\|_F^2$ subject to an $\ell_0$ term. - IHT and mAIHT: Iterative Hard Thresholding is adapted to an accelerated, monotone version, which helps achieve faster convergence. 
- Adaptive Sparsity Scheduling: The regularization parameter $\lambda$ is adjusted to match a desired target sparsity level. - Practical Refinement Step: After determining the support, a projected gradient descent step refines the solution. - Evaluation Criteria: - Perplexity on WikiText2 (lower is better). - Accuracy on zero-shot benchmarks (BoolQ, RTE, HellaSWAG, WinoGrande, ARC-e, ARC-c, OpenbookQA). - Computational Efficiency: Demonstrated by plotting reconstruction error vs. iteration count for IHT vs. mAIHT. Theoretical Claims: 1. Convergence Guarantees: - The paper proves that both IHT and mAIHT converge to a so-called “$\alpha$-fixed point,” ensuring there exists a stable solution that the iterative procedure approaches. 2. Convergence Rate: - IHT has a sub-linear rate of $O\bigl(\tfrac{1}{k}\bigr)$, while mAIHT exhibits a faster asymptotic rate $O\bigl(\tfrac{1}{k^\beta}\bigr)$ for some $\beta>1$ under certain monotonicity conditions. 3. Risk Upper Bounds: - Under assumptions of restricted isometry properties (RIP) and bounded data, the pruned solution’s risk approaches a constant multiple of the optimal risk, indicating good generalization even with finite calibration samples. These theoretical contributions are significant because they strengthen the rigor behind pruning methods, which are often heuristic or lacking formal guarantees. Experimental Designs Or Analyses: - Data and Setup: The authors select a small calibration set from C4 (128 samples) to prune each layer, then evaluate perplexity (WikiText2) and zero-shot accuracy on multiple tasks. - Metrics: They track perplexity, zero-shot accuracy, and reconstruction error. They also compare runtime and iteration counts for single-layer experiments. - Model: LLaMA-2 7B. - Results: - Across moderate sparsity levels (20–50%), mAIHT typically surpasses other baselines in perplexity and zero-shot accuracy. 
- For extremely low sparsity regimes (e.g., 10%), the performance remains comparable, sometimes slightly below ADMM-based methods but still better than standard magnitude pruning or Wanda. Supplementary Material: The supplementary sections include: - Extended Theoretical Proofs: Detailed derivations for convergence, rate of convergence, and risk bounds. - Additional Implementation Details: Such as the step-size selection, initialization strategies, and the role of the final projected gradient descent step. - Ablation Experiments (suggested by partial results): Brief demonstrations of how different hyperparameter settings affect convergence and reconstruction error. Relation To Broader Scientific Literature: - Pruning Methods: Builds on classical magnitude pruning (Han et al., 2015) and more recent single-shot approaches like Wanda and SparseGPT. The paper extends these techniques with a formal $\ell_0$-based optimization perspective. - Optimization for Pruning: Incorporates ideas related to iterative shrinkage/thresholding algorithms from compressed sensing, bridging them with modern large-model pruning. - Large Language Model Compression: Contributes to a growing interest in making LLMs more resource-friendly, aligning with a broader literature on LoRA, low-rank factorization, and other parameter-efficient fine-tuning or pruning methods. Essential References Not Discussed: The core references for single-shot pruning and $\ell_1$/$\ell_0$-based approaches are covered. However, the authors might consider: - Structured Pruning approaches for LLMs (e.g., channel or head pruning) that could benefit from a similar $\ell_0$-reformulation. - Mixed-precision or quantization methods that often co-occur with pruning and share theoretical frameworks about memory reduction and inference latency. Other Strengths And Weaknesses: Strengths: - Clear theoretical foundation, which is relatively uncommon for LLM pruning papers. 
- Practical and easily implemented (layer-wise) method that converges quickly. - Strong empirical gains at moderate sparsity levels. - Helpful ablation-like results on reconstruction error across multiple iterations. Weaknesses: - Although the method performs well at moderate sparsities, there is somewhat less emphasis on extremely high sparsity (e.g., 90%+). - The approach relies on a small calibration set, which can induce overfitting to the nature of the C4 dataset. - Integration with quantization or other compression techniques remains an open question. - Only one model, LLaMA-2 7B, was considered. It is common in pruning papers to include experimental results with more models (e.g., LLaMA-3 from 1B up to 70B). - Practicality of the work: it is not well explained why we need sparsity in LLMs; e.g., quantization saves memory because low-bit representations of the weights are easy to store, whereas unstructured sparsity is not as readily exploitable. Other Comments Or Suggestions: - Support for Structured Pruning: Extending mAIHT to structured or blockwise pruning scenarios could be of practical interest. - Real-World Deployments: Future work could explore how performance and latency improvements manifest on actual hardware beyond just theoretical FLOPs or memory measures. - Hyperparameter Sensitivity: While the paper provides defaults, additional clarity on how sensitive the final results are to $\lambda$, the step size $\alpha$, or the $\ell_2$-regularization parameter $\mu$ would be beneficial for reproducibility. Questions For Authors: 1. High-Sparsity Regime: Have you tested or do you plan to test the method at extremely high sparsities (e.g., 80–90%)? 2. Structured Pruning: Is the proposed method easily adapted to channel-level or head-level pruning? 3. Calibration Data: How robust is mAIHT to smaller or noisier calibration sets? 4. 
Implementation Tips: Could you elaborate on any pitfalls or best practices (beyond step-size selection) that practitioners should keep in mind? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful feedback. Below are our responses and clarifications for the various concerns raised. **High-Sparsity Regime: Have you tested or do you plan to test the method at extremely high sparsities (e.g., 80-90%)?** **Reply**: Thank you for the question. We found that at very high sparsities (80-90%), models pruned by all methods degrade significantly, with performance dropping to near-random guessing. Below, we compare mAIHT and SparseGPT at 70% sparsity on the LLaMA-7B model: |Methods|wikitext-2| |---|---| |SparseGPT|26.6107| |mAIHT|**21.9671**| |Methods|BoolQ|RTE|HellaSwag|ARC-e|ARC-c|WinoGrande|OBQA|Mean| |---|---|---|---|---|---|---|---|---| |SparseGPT|63.18|**53.43**|42.82|40.24|26.37|59.04|**30.20**|62.28| |mAIHT|**65.29**|51.99|**44.21**|**42.76**|**27.73**|**60.30**|**30.20**|**63.14**| These results show that mAIHT performs better than SparseGPT even at high sparsities (up to 70%). --- **Structured Pruning: Is the proposed method easily adapted to channel-level or head-level pruning?** **Reply**: We appreciate this important question. Our proposed method can be adapted to semi-structured and structured pruning, including N:M sparsity, hierarchical sparsity [1], block sparsity [2], and row sparsity [3]. Formally, in problem (2), the term $\lambda\Vert W\Vert_0$ can be replaced with $I_{S}(W)$, where $S$ represents the set of matrices satisfying the desired sparsity pattern, and $I_S$ is the indicator function ($I_S = 0$ if $W \in S$, $I_S = +\infty$ otherwise). For this problem, monotone accelerated proximal methods can be applied by replacing the hard thresholding operator with a projection operator onto $S$. 
Below are results for 2:4 sparsity in LLaMA-7B: |Methods|wikitext-2 perplexity| |---|---| |SparseGPT|7.2933| |mAIHT|**7.2606**| |Methods|BoolQ|RTE|HellaSwag|ARC-e|ARC-c|WinoGrande|OBQA|Mean| |---|---|---|---|---|---|---|---|---| |SparseGPT|**73.79**|54.51|69.28|**66.75**|39.33|**68.42**|38.60|58.67| |mAIHT|72.96|**60.28**|**69.56**|64.89|**40.27**|67.48|**39.40**|**59.26**| However, for channel-level or head-level pruning, our method is not ideally suited, as it evaluates the importance of individual parameters rather than entire channels or attention heads. --- **Calibration Data: How robust is mAIHT to smaller or noisier calibration sets?** **Reply**: We compare mAIHT and SparseGPT across different calibration set sizes for 0.5 sparsity pruning in LLaMA-7B. |Number of calibration data|32|64|128| |---|---|---|---| |SparseGPT|7.4959|7.3281|7.2397| |mAIHT|**7.2305**|**7.1867**|**7.0720**| Results show that mAIHT is significantly more robust to smaller calibration sets compared to SparseGPT. Regarding noisy calibration sets, we typically use high-quality data for optimal results. While we avoid excessive noise, testing robustness against noisy data is an interesting research direction, and we will consider further experiments to evaluate mAIHT in such scenarios. --- **Implementation Tips: Could you elaborate on any pitfalls or best practices (beyond step-size selection) that practitioners should keep in mind?** **Reply**: We appreciate the reviewer's question. Below, we discuss key hyperparameter considerations: 1. **Penalty Selection**: We have designed an adaptive penalty selection where practitioners only need to set an initial penalty coefficient $\lambda$. We recommend setting a small initial $\lambda$ (e.g., to retain 95% of the parameters) for optimal pruning performance. This phenomenon aligns with gradual pruning observed in [4]. 2. 
**Step Size and Number of Iterations**: Based on simulations on the 0-self-attn.q_proj layer of the LLaMA-7B model, we suggest using a relatively large step size, ensuring $\alpha<1/L$, where $L$ is the Lipschitz constant of the gradient. Choosing a step size too large can cause instability, while a smaller value ensures convergence. 3. **Ridge-like term coefficient $\mu$**: The $\mu$ term enhances the stability of the algorithm. As long as $\mu$ remains small, the algorithm is not sensitive to its exact value. [1] Wu, Y. N., Tsai, P. A., Muralidharan, S., Parashar, A., Sze, V., & Emer, J. Highlight: Efficient and flexible DNN acceleration with hierarchical structured sparsity. [2] Gray, S., Radford, A., & Kingma, D. P. GPU kernels for block-sparse weights. [3] Meng, X., Ibrahim, S., Behdin, K., Hazimeh, H., Ponomareva, N., & Mazumder, R. OSSCAR: One-Shot Structured Pruning in Vision and Language Models with Combinatorial Optimization. [4] Benbaki, R., Chen, W., Meng, X., Hazimeh, H., Ponomareva, N., Zhao, Z., and Mazumder, R. F. Fast as Cheetah: Neural Network Pruning with Combinatorial Optimization. In International Conference on Machine Learning.
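The hard-thresholding iteration and the step-size rule $\alpha < 1/L$ discussed in this rebuttal can be sketched as follows. This is a minimal, non-accelerated sketch (no momentum, no adaptive $\lambda$ schedule), with illustrative names; it is not the authors' implementation.

```python
import numpy as np

def iht_prune(X, W_orig, lam, n_iter=100):
    """Basic iterative hard thresholding for the l0-penalized
    layer reconstruction problem
    0.5*||X W_orig - X W||_F^2 + lam*||W||_0."""
    G = X.T @ X
    L = np.linalg.eigvalsh(G)[-1]       # Lipschitz constant of the gradient
    alpha = 0.95 / L                    # step size kept below 1/L
    tau = np.sqrt(2.0 * alpha * lam)    # hard-thresholding level
    W = W_orig.copy()
    for _ in range(n_iter):
        W = W - alpha * (G @ (W - W_orig))      # gradient step
        W = np.where(np.abs(W) >= tau, W, 0.0)  # proximal l0 step
    return W
```

Note that a single iteration here is plain magnitude pruning at level $\sqrt{2\alpha\lambda}$, consistent with the connection to magnitude pruning noted by the authors.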
Summary: The paper formulates LLM pruning as an $\ell_0$-regularized optimization problem and proposes to use proximal gradient descent to solve it. The authors also provide theoretical analysis for their method, which the reviewer, lacking the mathematical background, is unable to verify. Empirical evaluations show that the proposed method can slightly outperform existing pruning methods. Claims And Evidence: The proposed method is reasonable: it adopts standard proximal gradient optimization to solve the problem. The acceleration with iterative hard thresholding is also standard practice in proximal gradient optimization. Methods And Evaluation Criteria: The method and evaluation are generally reasonable. But the main concern is that the authors do not specify whether their pruning method is unstructured or semi-structured (it is obviously not structured pruning). If the method is simply unstructured pruning, the improvement over baselines is too marginal -- most of the baselines are designed for semi-structured pruning such as 2:4 or 4:8 sparsity and can actually be deployed to accelerate LLMs. However, it seems the authors only compare with baselines in the unstructured pruning setting, which is far from sufficient -- the improvement is very marginal, and the proposed method cannot be used to accelerate LLMs due to its unstructured nature. Theoretical Claims: The reviewer is not able to verify the correctness of the proofs and theoretical analysis in Section 3. Experimental Designs Or Analyses: Please refer to Section **Methods And Evaluation Criteria**. The main concern regarding the experimental design is the unstructured pruning setting -- nearly all baselines can be used for semi-structured pruning, while the paper does not show that the proposed method can. Unstructured pruning cannot be used for acceleration of LLMs, which hurts the contribution of this paper. Supplementary Material: No. 
Most of the material is theoretical analysis, which the reviewer cannot assess due to the reviewer's limited math expertise. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: n/a Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: Please refer to Section **Methods And Evaluation Criteria**. If the concerns about efficiency can be resolved, I will be happy to increase my rating. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful comments. Below, we address the concerns and provide clarifications. **Reply to the concerns in Methods and Evaluation Criteria** Our method can be directly extended to n:m sparsity by modifying the term $\lambda\Vert W\Vert_0$ in problem (2) to $I_{S}(W)$, where $S$ represents the set of all matrices satisfying n:m sparsity, and $I_S$ is the indicator function ($I_S=0$ if $W \in S$, $I_S=+\infty$ otherwise). For this problem, monotone accelerated proximal methods can be applied directly. In practice, this means replacing the hard thresholding operator with a projection operator onto the n:m sparsity set $S$ consistent with the extensions proposed in [1] and [2]. The following table presents the results of LLaMA-7B pruning with 2:4 sparsity: |Methods|wikitext-2 perplexity| |---|---| |SparseGPT|7.2933| |mAIHT|**7.2606**| | Methods | BoolQ | RTE | HellaSwag | ARC-e | ARC-c | WinoGrande | OBQA | Mean | | --------- | --------- | --------- | --------- | --------- | --------- | ---------- | --------- | --------- | | SparseGPT | **73.79** | 54.51 | 69.28 | **66.75** | 39.33 | **68.42** | 38.60 | 58.67 | | mAIHT | 72.96 | **60.28** | **69.56** | 64.89 | **40.27** | 67.48 | **39.40** | **59.26** | These results demonstrate that mAIHT achieves competitive or superior performance compared to SparseGPT under n:m sparsity constraints. Our method can also be extended to structured pruning, including Hierarchical sparsity [3], Block sparsity [4], and Row sparsity [5]. In these cases, the $\lambda\Vert W\Vert_0$ term in problem (2) is modified to $I_{S}(W)$, where $S$ represents the corresponding structured sparsity set. The algorithm requires only replacing the hard thresholding operator with a projection operator onto $S$. While our monotonically accelerated algorithm empirically improves performance under n:m sparsity, the theoretical analysis of mAIHT does not directly transfer to structured pruning settings. 
This is because the theoretical guarantees depend on the structural properties of the set $S$. To maintain a rigorous and principled approach to pruning, we did not include these extensions in the current version. However, we recognize that formalizing the theory for structured pruning is an important open problem. In future work, we plan to extend our theoretical framework to structured and semi-structured pruning settings. We will also discuss these extensions in the next version of our paper. Finally, we would like to highlight that a key contribution of our work is the **rigorous theoretical analysis** of the pruning algorithm, including convergence guarantees and generalization properties. Unlike heuristic approaches, our method provides a solid theoretical foundation, ensuring both strong empirical performance and efficiency in real-world applications. By establishing a principled approach to pruning, we believe this work lays a strong foundation for future research and further advancements in model compression. [1] Sun, M., Liu, Z., Bair, A., & Kolter, J. Z. A simple and effective pruning approach for large language models. [2] Meng, X., Behdin, K., Wang, H., & Mazumder, R. ALPS: Improved Optimization for Highly Sparse One-Shot Pruning for Large Language Models. [3] Wu, Y. N., Tsai, P. A., Muralidharan, S., Parashar, A., Sze, V., & Emer, J. Highlight: Efficient and flexible DNN acceleration with hierarchical structured sparsity. [4] Gray, S., Radford, A., & Kingma, D. P. GPU kernels for block-sparse weights. [5] Meng, X., Ibrahim, S., Behdin, K., Hazimeh, H., Ponomareva, N., & Mazumder, R. OSSCAR: One-Shot Structured Pruning in Vision and Language Models with Combinatorial Optimization. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response, which clearly addressed my concerns about inference latency and the application to semi-structured pruning. 
However, I am very sorry that I am not able to judge the theoretical contribution part -- this is fully the fault of the reviewer. I increase my rating from 2 to 3 given the new results. Sorry again and hope the AC or other reviewers can help verify the theoretical part of this paper.
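The extension described in the rebuttal, replacing the hard-thresholding operator with a projection onto the n:m sparsity set, can be sketched in a few lines. A minimal NumPy sketch, assuming groups of m consecutive weights along each row (the function name is illustrative):

```python
import numpy as np

def project_nm(W, n=2, m=4):
    """Project W onto the n:m sparsity set: in every group of m
    consecutive weights along a row, keep only the n entries of
    largest magnitude (the Euclidean projection for this constraint)."""
    rows, cols = W.shape
    assert cols % m == 0, "row length must be divisible by m"
    Wg = W.reshape(rows, cols // m, m)
    # indices of the (m - n) smallest-magnitude entries in each group
    drop = np.argsort(np.abs(Wg), axis=-1)[..., : m - n]
    out = Wg.copy()
    np.put_along_axis(out, drop, 0.0, axis=-1)
    return out.reshape(rows, cols)
```

Swapping this projection in for the hard-thresholding step, as the rebuttal describes, turns the same iteration into a 2:4 semi-structured pruner.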
Summary: The paper introduces monotone accelerated Iterative Hard Thresholding (mAIHT), designed to improve the efficiency and theoretical soundness of pruning large language models (LLMs). The authors reformulate the pruning problem as an $\ell_0$-penalized optimization problem, addressing the limitations of heuristic and retraining-based pruning methods. A notable strength of the paper is its rigorous theoretical analysis, including proofs of convergence, convergence rate, and risk upper bounds, which are often missing in existing pruning techniques. Experimental results demonstrate that mAIHT achieves superior pruning performance on the LLaMA-7B model compared to state-of-the-art approaches like SparseGPT and ADMM-based pruning, particularly in maintaining model performance at moderate sparsity levels. Claims And Evidence: The authors make strong claims about mAIHT’s superiority over existing methods, particularly in terms of theoretical rigor and empirical performance. While they provide extensive theoretical justifications, the empirical results focus mostly on LLaMA-7B, which, though relevant, may not generalize across architectures. Some claims (such as scalability to larger models) could be better supported with additional experimental results. Methods And Evaluation Criteria: The choice of perplexity and zero-shot accuracy as evaluation metrics is reasonable. The evaluation setup follows standard pruning benchmarks (using LLaMA-7B, WikiText2, and zero-shot NLP tasks). Ablation studies on penalty selection, iteration count, and sparsity adaptation strategy would add clarity. Theoretical Claims: The $\ell_0$-penalized optimization reformulation is mathematically rigorous. Convergence proofs for monotone accelerated IHT are included. Theoretical comparisons with prior pruning works (SparseGPT, ADMM) are somewhat limited. A deeper discussion on their theoretical limitations vs. mAIHT's advantages would be beneficial. 
Experimental Designs Or Analyses: The experiments align with pruning literature, comparing against relevant baselines. Results focus only on LLaMA-7B, lacking validation across different model families. Computational cost (e.g., runtime, memory footprint) is not discussed—key for real-world deployment. While the paper focuses on one-shot pruning, post-pruning fine-tuning could show practical performance differences. Supplementary Material: No supplementary material was provided. Relation To Broader Scientific Literature: The proposed method builds on SparseGPT and Wanda but replaces heuristic pruning with a more rigorous L0-penalized optimization framework. Monotone accelerated Iterative Hard Thresholding (mAIHT) is an improvement over Iterative Hard Thresholding (IHT) with adaptive sparsity control and faster convergence. Essential References Not Discussed: Frankle, Jonathan, and Michael Carbin. "The lottery ticket hypothesis: Finding sparse, trainable neural networks." arXiv preprint arXiv:1803.03635 (2018). Other Strengths And Weaknesses: See above comments. Other Comments Or Suggestions: 1. Several issues with citations. for example - citep vs citet. 2. Inconsistencies in mathematical symbols. 3. Table captions are not elaborated enough and contains several typos. Questions For Authors: 1. How well does mAIHT scale to larger models? What are the computational trade-offs? 2. How does the runtime complexity and memory overhead of mAIHT compare to SparseGPT and ADMM-based pruning? 3. How sensitive is the performance to penalty selection, step size, and number of iterations? Would a different hyperparameter schedule improve results? 4. Can fine-tuning after pruning further improve performance? How does mAIHT compare with SparseGPT when fine-tuning is allowed? 5. How does mAIHT compare with Lottery Ticket Hypothesis-based pruning? 6. Could mAIHT be used together with quantization (e.g., GPTQ) to further reduce inference cost? Code Of Conduct: Affirmed. 
Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful feedback. Below, we provide our responses and clarifications. **How well does mAIHT scale to larger models? What are the computational trade-offs?** **Reply**: We thank you and Reviewer PbeQ for the comments. In response, we have added pruning experiments on the LLaMA-13B model. Please refer to our response to Reviewer PbeQ for the experimental results. We address the computational trade-offs in the following question. --- **How does the runtime complexity and memory overhead of mAIHT compare to SparseGPT and ADMM-based pruning?** **Reply**: The table below compares the runtime of mAIHT, Wanda, SparseGPT, and ADMM-based pruning on the LLaMA-7B pruning task, including the time for generating input activations: | |Time (s)| |---|---| |Wanda|148| |ADMM|446| |SparseGPT|609| |mAIHT|1370| mAIHT takes longer due to its advanced gradient-based optimization for layerwise reconstruction. However, its runtime remains negligible compared to LLM fine-tuning. For instance, Wanda [1] reports that fine-tuning LLaMA-7B with LoRA takes 24 hours on a V100 GPU, while full fine-tuning requires 24 days. In contrast, mAIHT completes pruning in under 30 minutes. Memory overhead is minimal since pruning is performed layer by layer, requiring only one layer's parameters to be stored at a time. --- **How sensitive is the performance to penalty selection, step size, and number of iterations? Would a different hyperparameter schedule improve results?** **Reply**: We appreciate the reviewer's insightful question. Below, we analyze the impact of different hyperparameters: 1. **Penalty selection**: Our adaptive penalty selection requires only an initial penalty coefficient $\lambda$. A smaller initial $\lambda$ leads to better pruning results. In our experiments, we set $\lambda$ to retain 95% of parameters initially. This aligns with the gradual pruning effect observed in [2]. 2. 
**Step size and number of iterations**: We conducted experiments on the 0 self-attn.q_proj of the LLaMA-7B model for a fixed $\lambda$. The table below presents the loss $\mathcal{L}$ for different step sizes and iteration counts, where $L$ is the largest singular value of $X^{\top}X$. | |$0.95/L$|$0.1/L$|$10/L$| |---|---|---|---| |0|14.59e+6|14.59e+6|14.59e+6| |10|7.15e+6|11.76e+6|9.37e+21| |50|6.73e+6|11.66e+6|nan| |100|6.61e+6|11.51e+6|nan| |200|6.59e+6|11.47e+6|nan| The results align with our theoretical analysis. Larger step sizes can accelerate convergence, but to ensure stability, we require $\alpha < 1/L$, where $L$ is the Lipschitz constant of the gradient of $\frac12 \Vert X\hat{W}-XW\Vert_F^2$. The experiment with $\alpha=10/L$ confirms divergence. 3. **Ridge-like term coefficient $\mu$**: $\mu$ is introduced for stability. As long as its value remains small, the algorithm is not sensitive to $\mu$. --- **Can fine-tuning after pruning further improve performance? How does mAIHT compare with SparseGPT when fine-tuning is allowed?** **Reply**: Fine-tuning can improve performance but significantly increases computational cost. mAIHT is designed as a one-shot pruning method to minimize retraining while preserving model accuracy. If fine-tuning is allowed, mAIHT provides a stronger starting point than SparseGPT, as it retains more critical weights and offers a more precise initial estimate. This results in higher final accuracy after fine-tuning. --- **How does mAIHT compare with Lottery Ticket Hypothesis-based pruning?** **Reply**: Both mAIHT and LTH-based pruning seek to find sparse subnetworks, but mAIHT offers key advantages: 1. **No retraining needed**: mAIHT achieves one-shot pruning, whereas LTH-based methods require costly iterative retraining. 2. **Theoretical guarantees**: mAIHT provides convergence proofs and risk bounds, while LTH is primarily based on empirical observations without strong theoretical backing. 
--- **Could mAIHT be used together with quantization (e.g., GPTQ) to further reduce inference cost?** **Reply**: We appreciate this insightful question. Yes, mAIHT can be effectively combined with post-training quantization methods like GPTQ. Since mAIHT retains the most critical weights, subsequent quantization benefits from a well-structured model, leading to even greater inference efficiency and cost reduction. [1] Sun, M., Liu, Z., Bair, A., & Kolter, J. Z. A simple and effective pruning approach for large language models. [2] Benbaki, R., Chen, W., Meng, X., Hazimeh, H., Ponomareva, N., Zhao, Z., and Mazumder, R. F. Fast as Cheetah: Neural network pruning with combinatorial optimization. In International Conference on Machine Learning. --- Rebuttal Comment 1.1: Comment: Thank you for providing the additional information. If my understanding is correct, Wanda and SparseGPT perform well even without fine-tuning. Can you show the trade-offs for these methods -- fine-tuning time vs. performance of SparseGPT and Wanda -- and whether the benefit of mAIHT over SparseGPT or Wanda outweighs its higher runtime? --- Reply to Comment 1.1.1: Comment: Thank you for your comment and for raising this important point about the trade-off between fine-tuning time and performance. You are absolutely right: both Wanda and SparseGPT can perform surprisingly well without fine-tuning. In our paper, all comparisons are conducted under a no-fine-tuning setting to ensure fairness across methods. That said, fine-tuning can significantly improve performance. For instance, the Wanda paper reports that fine-tuning LLaMA-7B with LoRA takes about 24 hours on a V100 GPU, and full fine-tuning can take up to 24 days. While this results in substantial performance gains, the computational cost is high and often infeasible for many users. In contrast, mAIHT offers a practical middle ground. 
It significantly outperforms single-step methods like Wanda and SparseGPT while requiring only a small fraction of fine-tuning time and without needing any gradient access to the full model. As discussed in Remark 2.2 of our paper, a single iteration of mAIHT is equivalent to magnitude pruning. With pre-pruning normalization, this reduces to Wanda. Thus, mAIHT can be seen as a natural generalization of these methods. By increasing the number of iterations, mAIHT incrementally improves performance, giving users a controllable trade-off between runtime and quality, something fixed-step methods like Wanda and SparseGPT cannot offer. Below is a table illustrating the trade-off between runtime and performance for mAIHT when pruning LLaMA-7B to 50% sparsity, using different iteration counts: | | Wanda (mAIHT with 1 iteration) | mAIHT with 20 iterations | mAIHT with 50 iterations | mAIHT with 100 iterations | | ---------- | ----------------------------- | ------------------------ | ------------------------ | ------------------------- | | perplexity | 7.2588 | 7.1876 | 7.0720 | 7.0765 | | time (s) | 148 | 860 | 1370 | 2286 | These results demonstrate that performance improves steadily as the number of iterations increases, reaching its best perplexity at 50 iterations. The slight increase in perplexity at 100 iterations is likely due to overfitting to the calibration data. Meanwhile, the runtime grows approximately linearly with the number of iterations, allowing users to flexibly balance speed and accuracy based on their computational budget and performance requirements. Thus, mAIHT provides a flexible and efficient pruning strategy: it offers much of the benefit of fine-tuning at a small computational cost, with clear advantages over static one-shot methods.
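The adaptive penalty selection mentioned in these rebuttals (choosing $\lambda$ so that thresholding hits a target sparsity) can be approximated by a simple bisection. This sketch illustrates the idea under that assumption; it is not the authors' exact schedule, and the function name is hypothetical.

```python
import numpy as np

def lambda_for_sparsity(W, alpha, target, tol=1e-3, iters=100):
    """Bisection on the l0 penalty lambda so that hard thresholding
    at sqrt(2*alpha*lambda) zeroes roughly a `target` fraction of W."""
    lo, hi = 0.0, float(np.abs(W).max()) ** 2 / (2.0 * alpha)
    lam = hi
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        sparsity = float(np.mean(np.abs(W) < np.sqrt(2.0 * alpha * lam)))
        if abs(sparsity - target) < tol:
            break
        if sparsity < target:
            lo = lam          # threshold too low: need to prune more
        else:
            hi = lam          # threshold too high: pruning too much
    return lam
```

Because the achieved sparsity is monotone in $\lambda$, bisection converges quickly, which is consistent with the rebuttal's point that practitioners only need to supply an initial penalty coefficient.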
Larger or Smaller Reward Margins to Select Preferences for LLM Alignment?
Accept (poster)
Summary: This paper identifies that existing metrics for selecting preference data, which rely on either explicit reward margins or implicit reward margins, often yield contradictory evaluations for the same dataset. To address this issue, the authors propose a new metric called the alignment potential ($M_{AP}$), which quantifies the gap between a model’s current implicit reward margin and a target explicit reward margin. Empirical results demonstrate that training on data selected by $M_{AP}$ consistently enhances alignment performance, outperforming existing metrics. Furthermore, the proposed metric can be integrated into self-play data generation frameworks. ## Update After Rebuttal Thanks for the authors' effort during the rebuttal. I still have concerns regarding the computation of the implicit reward margin. As the authors replied, “The implicit margins, $M_\pi = |\hat r_\theta(x,y_w) - \hat r_\theta(x,y_l)|$, are indeed computed based on the initial model (i.e., the SFT model $\pi_\text{SFT}$).” However, based on prior work and empirical observations, an SFT model typically assigns very similar implicit rewards to different responses, indicating a lack of meaningful discrimination. Thus, I remain skeptical about this formulation. While I understand that the implicit reward may gradually become more distinguishable as training progresses, the initial SFT model itself is generally not capable of such fine-grained reward estimation. In fact, upon revisiting the cited paper [1], I found that the implicit reward in that work was not computed using the SFT model, but rather using a trained model, with the SFT model serving only as a reference. A raw SFT model, not yet trained via DPO or other preference optimization methods, lacks the necessary discriminatory ability. This remains my primary concern with the paper. 
I would appreciate a discussion on this issue in future revisions, particularly addressing how the results might differ if a trained model were used to compute the implicit reward instead of an untrained SFT model, especially for both the baseline and the proposed method. [1] Not All Preference Pairs Are Created Equal: A Recipe for Annotation-Efficient Iterative Preference Learning Claims And Evidence: 1. The authors claim that "while existing metrics primarily assess data quality based on either explicit or implicit reward margins, they often provide contradictory evaluations for the same data" as the main motivation of this paper. However, this claim is only illustrated through two specific cases in Figure 1(a). There is a lack of quantitative analysis to determine how frequently such contradictions occur in commonly used preference datasets. Conducting a quantitative study on the proportion of "contradictory" data points in widely used datasets would strengthen the motivation of this work. 2. It is unclear whether the implicit reward margins used in this work are computed from a policy that has already been optimized on the preference dataset, or if they are all derived directly from an SFT model as mentioned on Page 5. If the latter is the case, it raises concerns, as an SFT model is expected to be relatively neutral in its preferences for all the preference data. This leads to the question of how the distribution of implicit reward margins differs before and after optimization? Additionally, in line with the first concern, it would be useful to quantify the proportion of data points where implicit reward margins contradict explicit reward margins. The choice of implicit reward margins is crucial to the paper’s claims—why should smaller implicit reward margins be preferable? Methods And Evaluation Criteria: Is the proposed method limited to SimPO, or can it generalize to other approaches? 
For instance, in standard DPO, the implicit reward margins computed from an SFT model are always zero. How would this affect the applicability of the proposed $M_{AP}$ metric in such settings? Theoretical Claims: Most of the theoretical results appear to be correct upon review. However, the detailed proofs provided in the appendix were not fully verified. Experimental Designs Or Analyses: 1. In Table 1, the primary experiments only compare $M_{AP}$ against baselines that use explicit reward margins $M_r$ for data selection. However, there is no baseline that selects data based on implicit reward margins $M_\pi$. 2. The experiments mention the hyperparameter $\alpha$ for the proposed $M_{AP}$, but do not report the specific values used across different experiments. Providing this information would improve the reproducibility of the results. Supplementary Material: Yes, the supplementary material was reviewed, specifically Appendix A (Related Work) and Appendix B (Additional Experiments). Relation To Broader Scientific Literature: Can you provide a theoretical or intuitive explanation of why a larger explicit reward margin is always better, while a smaller implicit reward margin is preferable? A clear justification for this assumption would clarify the connection to prior work and strengthen the motivation of this paper. Essential References Not Discussed: NA Other Strengths And Weaknesses: Overall, this paper is well-organized and easy to read. However, as noted in the Claims and Evidence and Experimental Designs or Analyses sections, there are still some concerns regarding the lack of quantitative validation for key claims and missing baselines. Other Comments Or Suggestions: NA Questions For Authors: Please see above. I am very happy to discuss with the authors during the rebuttal process and look forward to having the above concerns and questions addressed. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ## Contradictions We have conducted an analysis to measure the contradictions between explicit and implicit margin metrics ($M_r$ and $M_\pi$) by comparing the Jaccard similarity of subsets selected by each metric: - **Subset selection**: Using $M_r$ and $M_\pi$, we select the top-rated k% subsets from existing datasets, denoted as $D_r$ and $D_\pi$. - **Jaccard similarity**: We calculate the similarity between the selected subsets using: $$ J(D_1,D_2) = \frac{|D_1 \cap D_2|}{|D_1 \cup D_2|}. $$ - **Comparison**: We compare the resultant similarity $J(D_r, D_\pi)$ with two uniformly selected subsets $J(D_{u1},D_{u2})$. A lower similarity $J(D_r, D_\pi) < J(D_{u1},D_{u2})$ would indicate contradictions between $M_r$ and $M_\pi$. We measure the similarities on all three datasets in our paper: |Jaccard (%)|Top-5%|Top-10%|Top-20%| |-|:-:|:-:|:-:| |Gemma|1.28|3.16|7.67| |Llama v1|2.12|4.82|10.59| |Llama v2|1.42|3.46|8.25| |_Uniform_|2.62|5.44|11.08| As shown in the table, the similarities $J(D_r, D_\pi)$ are **consistently smaller than those of uniformly selected subsets** across varying top-k ratios and datasets. This observation validates the claim that explicit and implicit margin metrics can **indeed provide contradictory evaluations for preference selection**. ## Smaller Implicit Margins Explanation of implicit reward margins: - **Computation**: The implicit margins, $M_\pi = |\hat r_\theta(x,y_w) - \hat r_\theta(x,y_l)|$, are indeed computed based on the initial model (i.e., the SFT model $\pi_\text{SFT}$). - **Motivation**: **The implicit margin shows the LLM's ability to discern preferences**. A small implicit margin indicates the model initially lacks the ability to distinguish such preference, making the data suitable for alignment training. **Empirical evidence**: We measure the distributions of implicit margins before and after the alignment training on the Llama v2 dataset. 
The results, as shown in Figure I in this [anonymous link](https://anonymous.4open.science/r/tables-11915/bqyD.md), demonstrate a notable increase in $M_\pi$ after alignment training. Specifically, the average $M_\pi$ rises from `0.072` to `0.742`, indicating **improved preference recognition ability** of the model. Given these insights, data with relatively smaller initial implicit margins generally indicates that **the LLM cannot tell the preference, thereby requiring further training** to align on such data. ## Method Implementation We base our implementation of the implicit reward on SimPO because: 1. The implicit reward $\hat r_\theta^{Sim}(x,y) = \frac{\beta}{|y|}\log\pi_\theta(y|x)$ has been demonstrated to be effective in SimPO's paper. 2. The formulation does not require additional components (e.g., a reference model), allowing for its direct application in different optimization methods, e.g., DPO, IPO. Throughout our paper, **the implicit reward in our $M_{AP}$ metric is consistently calculated using the SimPO formulation** $\hat r_\theta^{Sim}(x,y)$, regardless of whether the training method is SimPO or DPO. The empirical results also validate its effectiveness across different training settings. ## More Baselines In response to your suggestions, we conduct additional experiments to include uniform and $M_\pi$ baselines under Table 1's setting, and here are the results of Llama models on Alpaca Eval: |Llama+DPO|Uniform|$M_\pi$|$M_{AP}$| |-|:-:|:-:|:-:| |Alpaca LC|40.86|42.00|**42.82**| |Alpaca WR|42.66|44.71|**45.00**| |Llama+SimPO|Uniform|$M_\pi$|$M_{AP}$| |-|:-:|:-:|:-:| |Alpaca LC|40.69|42.39|**43.76**| |Alpaca WR|38.77|41.62|**42.00**| As shown in the tables, our $M_{AP}$ metric consistently outperforms existing baselines for both optimization methods. ## Hyperparameters The value of hyperparameter $\alpha$ can be found in Line 991 and Line 1008 of Appendix D.1, where we generally set the value within {0.5, 1, 2.5, 5.0}. 
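The subset-overlap analysis at the beginning of this rebuttal can be sketched as follows. This is a minimal illustration, not the authors' code: the per-example metric values are random stand-ins for the real $M_r$ and $M_\pi$ scores, while the top-k% selection and Jaccard computation follow the bullets above.

```python
import random

def top_k_ids(scores, k_frac):
    """Ids of the examples in the top k_frac fraction, ranked by score (descending)."""
    n_keep = max(1, int(len(scores) * k_frac))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(ranked[:n_keep])

def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two id sets."""
    return len(a & b) / len(a | b)

random.seed(0)
n = 1000
# Random stand-ins for per-example M_r and M_pi values (illustration only).
m_r_scores = [random.random() for _ in range(n)]
m_pi_scores = [random.random() for _ in range(n)]

# Subsets selected by each metric, plus two uniformly sampled reference subsets.
d_r = top_k_ids(m_r_scores, 0.10)
d_pi = top_k_ids(m_pi_scores, 0.10)
d_u1 = set(random.sample(range(n), len(d_r)))
d_u2 = set(random.sample(range(n), len(d_r)))

print(f"J(D_r, D_pi)  = {jaccard(d_r, d_pi):.3f}")
print(f"J(D_u1, D_u2) = {jaccard(d_u1, d_u2):.3f}")
```

With the real metric scores in place of the random stand-ins, a value of $J(D_r, D_\pi)$ below the uniform baseline is what the table above reports as a contradiction.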
## Explain Margins & Motivation We would like to clarify that our paper is not centered on the notion that "large explicit reward margin" and "small implicit reward margin" **individually** imply high-quality data. Instead, one key motivation for our work is that **relying solely on either of these margins is insufficient** for assessing data quality. As demonstrated in Figure 1, the two instances with high explicit reward margins or small implicit margins are evaluated as *low-quality* data by our metric (Line 71-74). The core idea of our metric is to quantify the data quality by measuring **the discrepancy between the current model's preference and the aligned optimum** on the data. The alignment optimum is captured through explicit reward margins (Equation 8), and the model's preference is indicated by implicit reward margins. So the proposed metric $M_{AP} = |r(x,y_w) - r(x,y_l)|-|\hat r_\theta(x,y_w) - \hat r_\theta(x,y_l)|$ quantifies the **gap from the current implicit margin to the target explicit margin**, thereby indicating the potential for alignment on the given data.
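As a minimal sketch (not the authors' implementation), the gap computation can be written out directly. All reward values, log-probabilities, and response lengths below are hypothetical; the implicit reward uses the SimPO-style length-normalized form $\frac{\beta}{|y|}\log\pi_\theta(y|x)$.

```python
def simpo_implicit_reward(logprob_sum, length, beta=2.0):
    """Length-normalized implicit reward in the style of SimPO:
    (beta / |y|) * log pi_theta(y | x)."""
    return beta * logprob_sum / length

def alignment_potential(r_w, r_l, rhat_w, rhat_l):
    """M_AP = |explicit reward margin| - |implicit reward margin|."""
    return abs(r_w - r_l) - abs(rhat_w - rhat_l)

# Hypothetical preference pairs: explicit rewards from a reward model, plus
# summed token log-probs and response lengths under the initial policy.
pairs = [
    {"r_w": 0.9, "r_l": 0.1, "lp_w": -40.0, "len_w": 50, "lp_l": -90.0, "len_l": 60},
    {"r_w": 0.6, "r_l": 0.5, "lp_w": -55.0, "len_w": 55, "lp_l": -56.0, "len_l": 56},
]

scores = []
for p in pairs:
    rhat_w = simpo_implicit_reward(p["lp_w"], p["len_w"])
    rhat_l = simpo_implicit_reward(p["lp_l"], p["len_l"])
    scores.append(alignment_potential(p["r_w"], p["r_l"], rhat_w, rhat_l))

# Rank pairs by M_AP (higher = larger gap left to close) and keep the top ones.
ranked = sorted(range(len(pairs)), key=lambda i: scores[i], reverse=True)
print(scores)  # first pair: |0.9 - 0.1| - |-1.6 - (-3.0)| = 0.8 - 1.4
```

Here the first pair has a large explicit margin but an even larger implicit margin (the model already discerns the preference), so its $M_{AP}$ is negative and it ranks below the second pair.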
Summary: This paper proposes the new "alignment potential metric" to evaluate the quality of (and select) data for offline preference optimization. This metric quantifies the gap between the model's implicit reward margin and the target explicit reward margin, and thus aims to estimate the model's potential to align on the preference data by measuring how much the model can improve its preference discrimination. Using standard DPO and SimPO preference learning setups, the paper compares this preference data selection metric against existing baseline metrics (e.g., using only the model's implicit reward margin, and only the explicit reward margin), and shows that this alignment potential metric consistently outperforms. The paper also provides a theoretical justification for why training converges faster when selecting data using the alignment potential metric, versus random data selection. Claims And Evidence: A major claim is that "large explicit reward margin" and "small implicit reward margin" both imply "high quality" datapoints for preference optimization. These claims are substantiated by citations in the introduction, but since they are both very general claims and are central to the motivation for the paper's new "alignment potential metric", it would be more convincing if the results and/or theoretical basis supporting these claims were at least summarized in the paper, in the context of the paper's own setup. Especially the claim that data for preference optimization should be selected via the "small implicit reward margin" metric is not obvious; e.g., perhaps a curriculum learning setup where the allowed implicit reward margin increases over the course of training would outperform. It would be informative to include an ablation to show that selecting datapoints for preference optimization using "small explicit reward margin" and "large implicit reward margin" metrics does not work well. 
Another related and central claim in the paper is that a larger gap between the target explicit reward margin and current implicit reward margin indicates a greater potential for improvement. But the paper does not address the potential limitation that if the gap is too large, learning the correct alignment may be out of reach of the model (label exposure bias problem). Methods And Evaluation Criteria: - The M_{AP} metric crucially depends on (the quality of) an external reward model, and measures the extent to which the model's preference margin matches the ground truth margin as given by this external reward. The effect of the reward model used for data selection is not explored in the main text. - To what extent does the quality of the M_{AP} metric depend on the quality of the explicit reward model? (This model is being considered as a proxy for the gold rewards.) - An assumption in the formulation of the M_{AP} metrics is that all reward margins of equal magnitude imply an equal quality gap. But in practice, the mapping between quality and reward model score is often non-linear, and a large reward margin at the upper versus lower end of the quality spectrum may imply different actual differences in quality. - Can the same LLM generating the responses be used as a prompted LLM-as-a-Judge reward model to get the "explicit rewards"? If not, what are the required characteristics of the external reward model for this method to yield gains? - The "Reward noise regularization" subsection (Section 3.1) is poorly motivated. The regularization simply consists in taking the absolute value of the explicit and of the implicit margins. In the extreme (for a certain explicit reward model), this metric could select examples for which the model always agrees with the external reward about the ranking of the preferred and dispreferred responses, which is intuitively the opposite of what M_{AP} purports to do. 
So the effectiveness of this regularization is likely highly dependent on the characteristics of the external reward being used. - The benchmarks and evaluation metrics used were reasonable, and are standard in the preference optimization literature. Theoretical Claims: Theoretical results (Theorem 3.1, proof in Appendix) show that training converges faster when using M_{AP} for data selection, vs random sampling. This result is novel and insightful. Experimental Designs Or Analyses: - The paper compares using M_{AP} against using only the model's implicit reward margin, and only the explicit reward margin, as data selection metrics. The paper does not address whether the relative magnitude of the implicit versus explicit margins is important, though, or whether it is primarily the ranking (dis)agreement that matters. Another insightful baseline would be to compare using M_{AP} for data selection against using a ranking (rather than margin) metric, which selects all examples for which the explicit and implicit rewards disagree on the ranking of the preferred versus dispreferred response. (Within this subset, examples could be randomly chosen to match the expected dataset size.) - How does data selection with M_{AP} compare to weighting the loss by the implicit margin? And can train-time loss weighting provide incremental improvements on top of offline data selection with M_{AP}? - The proposed M_{AP} metric selects a fixed set of examples offline before preference learning (or during multiple iterations of training, as in Section 5). However, the paper does not explore a curriculum learning setup. How does performance from selecting a fixed set of examples with a large M_{AP} margin compare to (smoothly) varying the reward margin of the examples over the course of training? - How long were the DPO/SimPO models trained for, and how was the checkpoint to use for evaluation chosen? Supplementary Material: There is no supplementary material included in this submission. 
Relation To Broader Scientific Literature: This paper contributes to the body of literature on data selection for post-training/alignment. There has been a lot of recent interest and work in this area, not only for selecting datasets for preference optimization (e.g., DPO), but also for selecting SFT and RL datasets. Essential References Not Discussed: While the paper cites related work in preference optimization dataset selection, it does not cite other works which investigate metric-guided data selection for training LLMs more generally. For example, [this paper](https://arxiv.org/pdf/2311.05350) (among others) shows that using QE metrics for data filtering improves the quality of machine translation models. The proposed M_{AP} method also uses metric-guided filtering, just in the (paired) preference data setting. Other Strengths And Weaknesses: Strengths: - The results look strong. The proposed M_{AP} method for preference data selection consistently outperforms other metrics, across several experiments in which data is selected both from fixed and iteratively evolving datasets. - The paper shows that by selecting data with M_{AP}, training converges faster than with baseline data selection methods. Moreover, the performance gains are consistent across dataset sizes. Weaknesses: - The proposed M_{AP} metric is extremely simple and not very novel (very small incremental contribution on top of existing work). - There is no significance testing to accompany the experimental results (and, for example, the results using M_{pi} versus M_{AP} in Figure 5 look very close). - The Introduction is repetitive and introduces concepts/intuitions which are redundant with those presented in the Methods section (Section 3). - The entire Related Work section is in the Appendix. Other Comments Or Suggestions: - The M_{AP} equation is repeated multiple times throughout the paper. Is this necessary, or can equation (11) just be referenced? 
- Heavily overlapping results between Figure 1b and Figures 4/5 - The Discussion section contains nothing more than the Conclusion + Limitations and future work. Questions For Authors: No additional questions besides those posed in previous sections (especially see the "Methods And Evaluation Criteria" and "Experimental Designs Or Analyses" sections). Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Due to space limits, we put all tables in this anonymous link: https://anonymous.4open.science/r/tables-11915/ZHMc.md. ## Large/Small Margins To reiterate, the primary claim of our paper is **not** that "large explicit reward margin" and "small implicit reward margin" **individually** imply high-quality data. Instead, the key motivation for our work is that **relying solely on either of these margins is insufficient** for assessing data quality. As demonstrated in Figure 1, we evaluate instances with high explicit reward margins or small implicit margins as low-quality data (Line 71-74). The main idea of our proposed metric is to consider the **gap between the model's current implicit margin and the target explicit margin** to measure the potential for alignment learning. However, if assessing data quality using a single margin is *required*, we acknowledge that a higher explicit margin or smaller implicit margin can be beneficial. This notion is supported by recent works (Line 40) and our experiments (Figures 4, 5, and 8). In response to your suggestions, we've conducted additional experiments by following the setup of Figure 4, but **reversely selecting data** via these metrics. As shown in **Table I of the linked page**, reversely selecting training data indeed results in poorer performance, thus verifying these metrics' efficacy from an opposite standpoint. ## Too Large Gaps While a larger gap might require more intensive alignment training, the direction of the update remains correct. Moreover, the common practice in preference dataset construction for alignment training, such as SimPO, involves **sampling responses $y_w$ and $y_l$ using the model that is being trained**. This practice ensures that both the chosen and rejected responses are **within the model's generation capabilities**, thus mitigating the label exposure bias problem. 
## Different Reward Models In our study, we have indeed considered the impact of different reward models by incorporating two RMs: **PairRM (0.4B) and ArmoRM (8B)**, for preference annotation and metrics evaluation. Figures 4 and 8a in the paper illustrate the results based on datasets annotated using ArmoRM and PairRM, respectively. As shown in the figures, **our proposed $M_{AP}$ metric outperforms existing baselines under both RMs**, attesting to the effectiveness of the $M_{AP}$ metric under different reward models. ## Reward Noise & Ranking **The ranking disagreement might not be the key factor determining data quality**, due to the reward noise issue. As suggested, we select 40% data subsets in which the rankings of explicit and implicit rewards are opposite. We compare this strategy with existing metrics on both Llama datasets as described above. As shown in **Table II of the linked page**, although selecting data with **contradictory ranking** between explicit and implicit rewards can improve performance to some extent, it **cannot outperform any of the existing baselines**. Such an observation is consistent with our motivation of "Reward noise regularization". As discussed in Line 212-215, given the LLM's ability to discern certain preferences, **the opposite ranking** between explicit and implicit rewards—suggesting contradictory preference judgments between the RM and the LLM—may indicate **a higher risk of noisy reward model annotations**, and thus results in inferior model performance. ## Other Directions We sincerely appreciate your suggestions for exploring the *non-linear mapping, LLM-as-Judge, weighted loss and curriculum learning* settings. While we believe they hold promise for future research, the limited timeframe for the current study does not permit us to explore all these directions, so we plan to investigate them in future work. 
## Novelty While existing methods focus solely on either explicit or implicit margins and often lack in-depth explanations, our work introduces a key innovation by measuring the data quality with **the gap between the model's aligned optimum and the current model's preferences**. This gap serves as a **theoretical basis** for evaluating data quality in alignment training, distinguishing our approach from prior work. Although the final form of $M_{AP}$ incorporates the two existing metrics ($M_r$ and $M_\pi$), **its core idea of gap quantification and the logical derivation of our approach are novel aspects not explored in existing metrics**, which contributes significantly to the advancement and originality of this work. ## Training Details & Significance Testing We report the training time, standard deviations and confidence intervals of current evaluations in **Table III & IV of the linked page**. ## References & Presentation Thanks for your suggestions; we will make the following revisions: 1. Include relevant references on data selection in general LLM tasks 2. Merge repetitive equations and figures 3. Revise sections by splitting the Discussion section into separate Conclusion and Limitations sections
Summary: This paper examines how reward margins influence preference data selection in LLM alignment. It introduces a novel metric, Alignment Potential (AP), which integrates both explicit reward margins (provided by the reward model) and implicit reward margins (derived from the policy and reference models). Furthermore, it extends AP to self-play data generation. Extensive experiments demonstrate that AP outperforms leading alignment baselines across various settings. Claims And Evidence: The claims are well-supported by both theoretical analysis and experimental results. Methods And Evaluation Criteria: The proposed method is well-aligned with the LLM alignment problem, focusing on contrast strategies in preference data selection—specifically, whether larger or smaller reward margins should be used. AP is a theoretically and empirically grounded metric for assessing preference data quality. To evaluate AP’s effectiveness, the paper employs well-established preference optimization benchmarks (ApacaEval2, Arena-Hard) and diverse model architectures (Llama3-8B, Gemma2-9B). Theoretical Claims: The proofs, including those in the appendix, have been reviewed, and no issues were found. Experimental Designs Or Analyses: The authors first conduct preliminary experiments to assess different metrics (explicit reward margin, implicit reward margin, and AP) in the context of LLM alignment, supported by theoretical analysis. The experiments are then extended to a self-play data generation framework, with clear comparisons against baseline methods (DPO, SimPO with different self-play strategies) across various model architectures (Llama3-8B, Gemma2-9B) and dataset sizes (10k–60k). The experimental design is thorough and convincingly demonstrates AP’s effectiveness. Supplementary Material: Yes, I have examined all sections in the Appendix. Relation To Broader Scientific Literature: 1. 
This paper builds on research in LLM alignment, particularly in direct preference optimization (DPO) and self-play preference generation. 2. It explores a critical problem in LLM alignment—how contrast strategies (explicit and implicit reward margins) in preference data selection impact alignment. AP is proposed as a novel strategy that integrates these reward margins to provide a more effective metric. 3. Beyond empirical validation, the paper provides theoretical analysis to justify AP’s ability to identify high-quality alignment data. Essential References Not Discussed: To my knowledge, this paper has included sufficient references. Other Strengths And Weaknesses: Strengths - The paper is well-written, with clear motivation and an intuitive structure. - It establishes connections between existing data selection strategies (explicit and implicit reward margins). - Preliminary experiments, paired with theoretical analysis, support AP’s effectiveness. - Extensive experiments consistently show AP’s superior performance across LLM architectures, benchmarks, and dataset sizes. Notably, models trained on only AP-selected data (e.g., 30k samples) outperform those trained on full-sized datasets. Weaknesses - AP is tested only with DPO and SimPO. While the experiments are thorough and convincing, evaluating AP with additional alignment approaches would strengthen the results. Other Comments Or Suggestions: See weaknesses. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Additional Alignment Approaches Thanks for your suggestion! Aside from the DPO and SimPO methods, we conduct additional experiments using the IPO method for alignment training. Following the setting in Section 3.2, we use different metrics to select top-40% subsets from existing preference datasets for subsequent IPO training. We use the two Llama-based datasets for these experiments, benchmark the trained models on Alpaca Eval 2.0, and report both Length-Controlled (LC) win rates and raw Win Rates (WR): On the default Llama dataset of SimPO (based on the PairRM reward model): | **Llama+IPO** | Uniform | $M_r$ | $M_\pi$ | $M_{AP}$ (ours) | |---|:---:|:---:|:---:|:---:| | Alpaca LC | 30.49 | 34.15 | 35.04 | **35.44** | | Alpaca WR | 34.15 | 34.08 | 35.40 | **36.08** | On the Llama v2 dataset of SimPO (based on the ArmoRM reward model): | **Llama-v2+IPO** | Uniform | $M_r$ | $M_\pi$ | $M_{AP}$ (ours) | |---|:---:|:---:|:---:|:---:| | Alpaca LC | 33.17 | 33.92 | 38.19 | **40.35** | | Alpaca WR | 30.63 | 30.17 | 35.73 | **38.14** | As shown in the tables, our proposed $M_{AP}$ metric still **outperforms existing baselines under this new alignment method**, further certifying its effectiveness. **Reference** - IPO: Azar, Mohammad Gheshlaghi et al. “A General Theoretical Paradigm to Understand Learning from Human Preferences.” ICML 2024.
Summary: This paper investigates techniques for preference data selection, particularly the explicit reward margin given by reward models versus the implicit reward margin given by the SFT model. It proposes a new preference-pair quality metric (MAP) based on a discussion of these two reward margins, favoring a high explicit reward margin and a low implicit reward margin simultaneously. The experiments show that using MAP as the preference data selector improves alignment quality compared with other selection methods. Claims And Evidence: Yes. Methods And Evaluation Criteria: The proposed MAP data selector is clear and easy to follow. The evaluation designs are convincing. Theoretical Claims: I did not check the correctness of DPO convergence, since I am not an expert in this field. Experimental Designs Or Analyses: 1. The comparison between different preference data selection methods. 2. GPT-4 agreement on the selected preference pairs. 3. The DPO and SimPO alignment results in the self-play setting. Supplementary Material: 1. Related works. 2. Implementation details. 3. Preference data examples. Relation To Broader Scientific Literature: This paper provides an easy yet effective preference data selection method, which is essential for constructing good preference datasets. Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. Although the scope of this paper is quite narrow, it still provides an in-depth discussion of the reward margins, which I think is helpful for readers to understand the literature. 2. I am surprised by the simplicity of the proposed method; perhaps this is why the paper dwells on so many technical details and is a little hard to read. Other Comments Or Suggestions: In L216, what is the M_+ method? It is not clearly stated before this line. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## The $M_+$ Metric The $M_+$ metric is a modified version of the proposed $M_1$ metric designed to adapt to the **unidirectional update nature of existing preference optimization methods** (e.g., DPO, SimPO). As described in Section 3.1, we first quantify the discrepancy between the current model and the aligned optimum on the preference data $(x,y_w,y_l)$ via the $M_1$ metric: $$M_1(x,y_w,y_l;\theta,r^*) = |(r^*(x,y_w) - r^*(x,y_l)) - (\hat r_\theta(x,y_w) - \hat r_\theta(x,y_l))|.$$ To ensure practicality, we then introduce two key adaptations to the $M_1$ metric, and the first one is termed **unidirectional calibration** (Lines 182-202). The rationale behind this adaptation is explained as follows: In the optimization process of DPO, the loss function $-\log\sigma(\hat r_\theta(x,y_w) - \hat r_\theta(x,y_l))$ tends to **unidirectionally increase the implicit margin term** $\hat r_\theta(x,y_w) - \hat r_\theta(x,y_l)$ during optimization. So we would like to select data whose implicit margin term is low and needs to be increased through training. However, the original $M_1$ metric measures a **bidirectional gap** between $\hat r_\theta(x,y_w) - \hat r_\theta(x,y_l)$ and $r^*(x,y_w) - r^*(x,y_l)$, which can lead to the selection of data with either a very high margin or a very low margin. While data with a low margin term, $\hat r_\theta(x,y_w) - \hat r_\theta(x,y_l)$, is desirable, data with a high margin term will be further increased by the DPO training process, which is counterproductive. 
To address this, we opted to remove the absolute value from $M_1$, implementing what we refer to as unidirectional calibration, and formulated a revised metric, $M_+$: $$M_+(x,y_w,y_l;\theta,r^*) = (r^*(x,y_w) - r^*(x,y_l)) - (\hat r_\theta(x,y_w) - \hat r_\theta(x,y_l)).$$ This new metric effectively selects data with a minimal implicit margin term $\hat r_\theta(x,y_w) - \hat r_\theta(x,y_l)$ to increase, **aligning with the unidirectional nature of preference optimization methods**. We hope this elucidation clarifies the intent and application of the $M_+$ metric within our study. ## Readability Thank you for highlighting this issue. We will strive to enhance the readability in our revised version. Specifically, we plan to: (1) merge the two $M_{AP}$ equations (Eq. 11 and Line 236) to reduce redundancy and enhance clarity; (2) include a concise overview of our metric formulation at the beginning of Section 3 to guide readers through our methodology with greater ease. These changes aim to streamline the presentation and make the content more coherent to readers. --- Rebuttal Comment 1.1: Comment: Thanks for the response! My concerns have been addressed.
Summary: The submission addresses the challenge of selecting high-quality preference data for aligning large language models with human values. It introduces the metric of alignment potential. This metric quantifies the gap between a model’s current implicit reward margin and the target explicit reward margin to identify preference pairs with high potential for improving alignment. By integrating this metric into both offline preference learning and self-play data generation, the methodology effectively selects data that accelerates convergence and enhances overall performance, achieving superior alignment and robust performance gains compared to traditional data quality metrics. Claims And Evidence: The primary contribution of the submission is the introduction of the Alignment Potential (MAP) metric as a method to select preferences effectively for model alignment. The authors present a clear and logical derivation of this metric, progressing from the initial metric formulation ($M_1$) through subsequent refinements ($M_+$) until the final $M_{AP}$ formulation. Furthermore, the paper provides empirical evidence demonstrating the superiority of $M_{AP}$ over baseline methods, confirming its practical benefits. Methods And Evaluation Criteria: The proposed $M_{AP}$ metric represents a novel approach to me for selecting high-quality data for LLM alignment. This derivation shows how combining the explicit reward margin with the implicit reward margin results in a unified metric that quantifies the model’s potential for alignment improvement. Furthermore, the empirical evaluation employs benchmark datasets and evaluation metrics that are common practice in the field. Theoretical Claims: I have carefully reviewed the proofs presented in the main text as well as those in Section C.1 of the Appendix and have found no errors. Experimental Designs Or Analyses: I reviewed the experimental design and analyses presented in the paper. 
The methodology is sound, with appropriate evaluation protocols across the relevant benchmarks. The authors provide clear comparisons between their proposed approach and existing baselines in both offline and online settings. Supplementary Material: I have reviewed the whole appendix, including the additional experiments in section B, the proof in section C, and implementation details in section D of the appendix. Relation To Broader Scientific Literature: From my understanding, this paper makes two key contributions to the broader scientific literature: - Advances in Data Selection for Preference Optimization: The $M_{AP}$ metric refines data selection by effectively distinguishing high-quality from lower-quality preference data, which streamlines the alignment process. - Integration of Self-Play Mechanisms: Incorporating self-play supports scalable model refinement through self-generated content, beyond static data selection. The metric enables the effective identification of high-quality preference pairs within self-generated content, supporting continuous model refinement in dynamic training scenarios. Essential References Not Discussed: Not applicable Other Strengths And Weaknesses: The clarity of the writing in this submission makes it an engaging and enjoyable read. The logical progression from $M_1$ to $M_+$ and ultimately to $M_{AP}$ is smooth. Additionally, the authors provide an example for calculating the various metrics. Other Comments Or Suggestions: I strongly recommend that the authors enhance Figure 1 with more insightful examples to better illustrate the potential impact of this work. The current examples fall short in conveying the broader implications, making it challenging to fully appreciate the significance of the proposed approach. To be more specific, Example I demonstrates a case with a clear ground-truth label that can be easily handled by other methods. 
By contrast, Example II features two very similar responses, rendering it difficult to discern which one is superior—even with human evaluation. Questions For Authors: Could you please explain in more detail the potential impact of addressing the issues presented in the two examples in Figure 1? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Suggestions & Questions Regarding Figure 1 Thanks for your valuable suggestion! In Figure 1, we present two examples to compare the existing metrics: - **Explicit reward margin**: $M_r = |r(x,y_w) - r(x,y_l)|$, which quantifies **how much $y_w$ is more preferable than $y_l$**. - **Implicit reward margin**: $M_\pi = |\hat r_\theta(x,y_w) - \hat r_\theta(x,y_l)|$. With $\hat r_\theta(x,y)$ indicating the current model $\pi_\theta$'s preference evaluation on $y|x$, this measures **how well the model discerns the preference between $y_w$ and $y_l$**. While existing works select data with *larger explicit reward margins* or *smaller implicit reward margins* for training, we illustrate how these metrics can yield contradictory evaluations for the same data in Figure 1—demonstrating inaccuracies since at least one evaluation must be incorrect. 1. In **Example I**, the chosen response is clearly correct and more preferable, resulting in a substantial explicit reward margin (indicated by the blue bar), and thus deemed "high-quality" by $M_r$. However, the large implicit reward margin (shown by the orange bar) suggests that the current model $\pi_\theta$ can already distinguish the preferences between $y_w$ and $y_l$, thus being "low-quality" by $M_\pi$. Consequently, despite what $M_r$ indicates, this data cannot further improve preference learning as the model is already well-aligned on this sample—**a scenario where relying on $M_r$ is misleading**. 2. **Example II** presents two nearly identical responses $y_w$ and $y_l$, resulting in a very small explicit reward margin, and thus being "low-quality" by $M_r$. Similarly, the model cannot tell the preference between such similar responses and produces a small implicit reward margin, which will be evaluated as "high-quality" by $M_\pi$. Given the negligible preference between the responses, this data should be considered as low-quality—**highlighting how reliance on $M_\pi$ can also be fallacious**. 
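For concreteness, the two margins can be computed on toy numbers (all reward values below are invented for illustration; in practice the implicit reward $\hat r_\theta$ is derived from the policy and reference model log-probabilities):

```python
def explicit_margin(r_w, r_l):
    """M_r = |r(x, y_w) - r(x, y_l)|: how much the reward model prefers y_w."""
    return abs(r_w - r_l)

def implicit_margin(rhat_w, rhat_l):
    """M_pi = |rhat(x, y_w) - rhat(x, y_l)|: how well the current model
    pi_theta already separates the pair."""
    return abs(rhat_w - rhat_l)

# Example I: clearly better chosen response, but the model already knows it.
m_r = explicit_margin(4.0, 0.5)     # large -> "high quality" under M_r
m_pi = implicit_margin(3.8, 0.4)    # large -> "low quality" under M_pi
assert m_r > 1.0 and m_pi > 1.0     # the two metrics disagree

# Example II: two near-identical responses.
m_r = explicit_margin(1.01, 1.00)   # tiny -> "low quality" under M_r
m_pi = implicit_margin(0.52, 0.50)  # tiny -> "high quality" under M_pi
assert m_r < 0.1 and m_pi < 0.1     # the two metrics disagree again
```

In both examples the gap between the target margin ($M_r$) and the current margin ($M_\pi$) is small, which is the gap-based view that the $M_{AP}$ metric builds on.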
These examples underscore the shortcomings of relying solely on a single margin, and motivate us to derive a more properly designed metric: - Since $M_r$ measures the difference between $y_w$ and $y_l$, it serves as an **alignment target** indicating **how the preference should be** on the specific data. - In contrast, $M_\pi$ corresponds to the model's **current state**, indicating **how the preference of the current model is** on this data. - As evidenced by the two examples, data quality cannot be determined merely by a large target value ($M_r$) or a small current value ($M_\pi$); rather, **it is the gap between the current and target preferences that holds significance**. - From this insight, our proposed $M_{AP}$ metric evaluates preference data quality by quantifying the gap from the current implicit margin to the target reward margin, thereby measuring the potential for further alignment training. We acknowledge that the connection between the contradictions presented in Figure 1 and the core concept of the proposed $M_{AP}$ metric is not sufficiently clear in the current figure. To address this, we will enhance Figure 1 by including more informative text within the figure or title to explicitly highlight that existing metrics fail on these two examples because they solely focus on either the current model or the target value, neglecting the crucial gap between them. Thank you once again for your insightful suggestion! --- Rebuttal Comment 1.1: Comment: Thank the authors for addressing my comments and questions, and most of my concerns have been resolved.
Summary: - This paper proposes a new way to select good preference data jointly using two different reward signals which are captured through the external reward model and training model’s implicit DPO reward, respectively. Starting from the mathematical derivation of DPO, the authors suggest the revised score function to alleviate the problem from the original score. Through two different practical scenarios of preference learning (offline and online), the proposed method is demonstrated compared to the existing baselines. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, section B, C, and D. Relation To Broader Scientific Literature: The key contributions are about new ideas and results. Essential References Not Discussed: Current references are sufficient, but it would be nice if some works were additionally referenced and discussed. Those works are mentioned in the below comments. Other Strengths And Weaknesses: ### Pros 1. **Clarity**. Overall, the writing is clear and easy to follow. In addition, the organization of the main draft is well-established. 2. **Well-motivated problem and intuitive approach.** The selection of good preference data is an interesting problem, and the proposed approach seems to be intuitive and effective. ### Cons - **More ablation study**. While the authors provide some ablation studies to demonstrate the effectiveness of the proposed ideas, more experiments are required for a rigorous ablation. - In Figure 3, the proposed score (Eq. 11) is only compared with the score in Eq. 10. However, as there are two modifications (taking absolute value in external reward gap & taking absolute value in implicit reward gap), it is unclear whether they are really necessary. It would be nice if the authors can provide two additional results for each modification (e.g., absolute external reward gap like Eq.
10 and vice versa). - In section 3.2, all the experiments are conducted across different selection methods under a fixed selection ratio (top-k%, k=40). For a more extensive demonstration of the proposed method, it would be nice if the authors provide additional results by (1) selecting bottom-40% samples and (2) varying k values such as 10% and 70%. - **More baselines for iterative preference learning setup**. In Figure 6, the authors conduct experiments on the iterative preference learning setup. As many prior works for online DPO have been proposed for this problem [1,2,3], it would be nice if the authors can demonstrate the effectiveness of the proposed method compared to them. Also, the baselines in Figure 5 (uniform, $M_r$, $M_{\phi}$) are directly applicable for this experiment, too. [1] Xiong et al., Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint, ICML 2024 [2] Kim et al., Spread Preference Annotation: Direct Preference Judgment for Efficient LLM Alignment, ICLR 2025 [3] Chen et al., Bootstrapping Language Models with DPO Implicit Rewards, ICLR 2025 Other Comments Or Suggestions: Please address the above concerns. Questions For Authors: Please address the above concerns. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Ablation I: absolute values Although Eq.11 incorporates two additional absolute values compared with Eq.10: $|r(x,y_w) - r(x,y_l)|$ and $|\hat r_\theta(x,y_w) - \hat r_\theta(x,y_l)|$, only the latter absolute value (on the implicit rewards) actually changes the metric's value. The reason is that, when constructing the preference dataset $\{(x,y_w,y_l)\}$, two responses $y_1,y_2$ are annotated by the reward model $r$ to determine the winning/losing responses: $y_w,y_l \in \{y_1,y_2\}$, such that $r(x,y_w) > r(x,y_l)$ *(line 182-185)*. Therefore, by definition we have $r(x,y_w) - r(x,y_l) \ge 0$ and $|r(x,y_w)-r(x,y_l)| = r(x,y_w) - r(x,y_l)$. Given that **the absolute value applied to the external reward gap does not alter the outcome**, there is no need to conduct additional ablations. We will explicitly clarify this point in our revised manuscript to prevent potential confusion. Thank you for pointing out this problem! ## Ablation II: Varying-k & Bottom-40% Thanks for your insightful suggestions! We have conducted additional experiments to address your feedback on varying selection ratios and the impact of selecting bottom-40% samples. For varying k values, we expanded our experiments by selecting different top-k on the default SimPO's Llama v1 dataset using various metrics, and evaluated the resulting models using Alpaca Eval 2.0.
We reported both Length-Controlled (LC) win rates, which are preferable as they reduce length bias, and raw Win Rates (WR):

| **Select 20%** | Uniform | $M_r$ | $M_\pi$ | $M_{AP}$ (ours) |
|---|:---:|:---:|:---:|:---:|
| Alpaca LC | 26.02 | 28.95 | 28.46 | **30.66** |
| Alpaca WR | 28.06 | 30.24 | **31.01** | 30.62 |

| **Select 40%** | Uniform | $M_r$ | $M_\pi$ | $M_{AP}$ (ours) |
|---|:---:|:---:|:---:|:---:|
| Alpaca LC | 30.72 | 36.58 | 33.53 | **37.07** |
| Alpaca WR | 32.64 | 36.12 | 34.75 | **36.58** |

| **Select 60%** | Uniform | $M_r$ | $M_\pi$ | $M_{AP}$ (ours) |
|---|:---:|:---:|:---:|:---:|
| Alpaca LC | 33.94 | 40.77 | 38.29 | **42.86** |
| Alpaca WR | 34.72 | 38.14 | 37.28 | **39.60** |

As shown in the tables, our $M_{AP}$ metric consistently outperforms other methods **across different proportions of selected data**. For the bottom-40% selection, data was reversely chosen using the various metrics from the Llama v2 dataset for SimPO training:

| **Reverse 40%** | Uniform | $M_r$ | $M_\pi$ | $M_{AP}$ |
|---|:---:|:---:|:---:|:---:|
| Alpaca LC | 41.61 | 40.66 | 37.07 | _34.64_ |
| Alpaca WR | 36.26 | 33.36 | 32.95 | _29.37_ |

Across all metrics (selecting the bottom 40% based on $M_r$, $-M_\pi$, $M_{AP}$), the performance of the resultant models was poorer than the uniform baseline, indicating lower data quality through reverse selection. Notably, **the lowest performance** observed was with our $M_{AP}$ metric, verifying its efficacy in identifying high-quality datasets **from the opposite standpoint**. These experiments affirm the effectiveness of our proposed method, and we will incorporate the findings in our revised version.

## More Iterative Baselines & References

Thank you for your valuable suggestions! In the iterative preference learning experiments, we augment the default iterative preference learning pipeline by introducing an additional data selection procedure guided by the proposed metric $M_{AP}$.
The role of our proposed metric in this process is to **select high-quality preference data** for subsequent training. Therefore, it serves as an **orthogonal strategy** alongside existing iterative preference optimization techniques [1,2,3]. While integrating our data selection process with these optimization methods offers interesting prospects, we're afraid the current timeframe does not allow us to explore all potential directions. We sincerely thank you for highlighting these relevant works. We will include them in our references and discuss potential integrations of our $M_{AP}$ metric with various iterative preference learning methods in the future work section of our revised version.
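As a side note on the ablations above, the metric-based top-k% (and bottom-k% "reverse") selection can be sketched in a few lines. The scoring function and data layout below are placeholders: the actual pipeline scores $(x, y_w, y_l)$ triples with the respective metric ($M_r$, $M_\pi$, or $M_{AP}$), and the numbers are invented for illustration:

```python
def select_top_fraction(dataset, score, k=0.4):
    """Keep the top-k fraction of preference pairs, ranked by `score` (descending)."""
    ranked = sorted(dataset, key=score, reverse=True)
    n_keep = max(1, int(len(ranked) * k))
    return ranked[:n_keep]

# Five mock preference pairs with precomputed metric scores (invented numbers).
pairs = [{"id": i, "m_ap": s} for i, s in enumerate([0.9, 0.1, 0.5, 0.7, 0.3])]
top40 = select_top_fraction(pairs, score=lambda p: p["m_ap"], k=0.4)
# Bottom-k "reverse selection" (the sanity check above) just negates the score.
bottom40 = select_top_fraction(pairs, score=lambda p: -p["m_ap"], k=0.4)
```

Selecting with the negated score implements the reverse (bottom-k%) experiment, where a good quality metric should yield the *worst* downstream model.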
Concept Reachability in Diffusion Models: Beyond Dataset Constraints
Accept (poster)
Summary: This paper compares the effects of prompting vs. steering on diffusion models, and in particular how well these approaches allow for the control of specific concepts in the generated output. To provide control in the experiments, a synthetic dataset of overlapping shapes is created, and the concepts of interest are the types of shape in the image and the colors of the shapes. The paper then explores the effects of bias (frequent co-occurrence of a shape/color pair), concept scarcity (the degree to which the concept has been observed), and caption specification (how well the caption associated with an image fully captures the concepts in the image). The main finding is that commonly used prompting is not sufficient to realize concepts in the presence of badly formed data, whereas steering allows better generation of scarce concepts and allows disentanglement where spurious correlation/bias exists. Furthermore, prompt steering rather than steering in the latent space of the model is shown to be more robust and reliable. ### Update Following the rebuttal I have maintained my score and recommend accepting the paper. Claims And Evidence: Yes. The use of the synthetic data allows specific control over the presence of concepts, and the degree to which the prompt specifies the concepts. There are examples showing the equivalent effects on real-world samples, albeit limited in number. Methods And Evaluation Criteria: Yes. The use of the synthetic data removes ambiguity and noise present in real-world samples (such as the degree to which a concept is present in an image, or how the presence of other concepts affects the interpretation of a concept). Theoretical Claims: N/A. The paper presents an empirical study. Experimental Designs Or Analyses: I have no concerns about the experiments, but had a minor question regarding using the accuracy of a classifier to measure concept reachability.
Does the underlying accuracy of the classifier not need to serve as the baseline for the reachability measure? For example, if the classifier reports 90% accuracy on the generated data, reachability is reported at this level; however, if the classifier itself is only 90% accurate, then might the concept not be considered fully reachable? Supplementary Material: Yes. I read all of the appendices. These provide additional supplementary support for the experiments described in the main paper. Relation To Broader Scientific Literature: The paper references related work that these experiments build on, and relevant citations are included to support statements made. Essential References Not Discussed: I have no concerns here. The assumption the authors make is that if a concept cannot be reached by steering vectors, then it is not reachable. However, there are other methods for steering the outputs generated by models, and these might perhaps be able to tease out the concept from the model. I do think it is fine to focus only on steering vectors, but this limitation/assumption might be worth spelling out.
Q2: In Figure 4, are the shape/color combinations at the top of the figure only representative examples, or does the figure relate to these exact samples? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work. We are very happy to read your positive review! We have read through the comments and suggestions you made and addressed them below: ___ - **Does the underlying accuracy of the classifier not need to serve as the baseline for the reachability measure?** When evaluating on human-labelled images (5400) that were generated by prompting from two diffusion models trained on balanced data, the classifier obtains an accuracy of 96.63%, hence the results would have to consider the possible error of the classifier. We consider this accuracy to be sufficient to reflect the impact of our experiments on reachability. We will add a clarification about this to the paper. - **Paper focuses on steering, but there are other methods for reaching the concept than steering** This is fully correct: steering falls under the broader category of modifying the latent space and many variations exist on how this latent space modification is performed (which model features are being modified, how this modification is performed). Our focus here was on steering as it has achieved successful results in both diffusion models and autoregressive modelling, while still being relatively simple and efficient to implement (and hence promising also for practical use-cases). - **More complex data** This is addressed in the response to qawW (CelebA). - **Typos in Section 3.4** Thank you for pointing these out! We will correct this in the updated version of the paper. - **Does the degree to which the shapes overlap affect the reachability measure?** Originally, we encountered challenges when classifying the back shape in cases where it was heavily occluded. We modified the neighbourhood within which the front shape was sampled in order to ensure a reasonable portion was visible.
The minimum percentage of each back shape visible in the balanced train set for the diffusion models is the following: circle 52.46%, triangle 48.97%, square 59.38% - this will be added to the Appendix in the dataset details. Overall, approximately half or more of each shape is visible. We did not explore this further; however, after this modification we didn’t notice any significant difference in reachability with the different shapes in the back. - **In Figure 4, are the shape/color combinations at the top of the figure only representative examples, or does the figure relate to these exact samples?** These are the colour-shape combinations for the accuracies displayed in the graph. For example, the first column shows a green triangle behind a red triangle, which changes no concepts with respect to the starting prompt y_s. Note that it does not refer to the exact relative position between the two shapes, only the combination (we will clarify this). --- Rebuttal Comment 1.1: Comment: Thank you for the follow up. I found the paper interesting and look forward to seeing the results on the CelebA data. I will maintain my recommendation to accept the paper.
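As a miniature illustration of the steering-vector optimisation discussed in this thread, one can optimise a vector added to a starting-prompt embedding so that a downstream concept classifier flips to the target concept. Everything below is mocked and illustrative only, not the authors' implementation: a fixed random linear map stands in for the diffusion model plus shape/colour classifier, and the dimensions, learning rate, and step count are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))    # mock pipeline: embedding -> concept logits
e_start = rng.standard_normal(8)   # embedding of the starting prompt y_s
target = 2                         # index of the target concept

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

v = np.zeros(8)                    # steering vector, optimised by gradient descent
lr = 0.05
for _ in range(2000):
    p = softmax(W @ (e_start + v))
    grad_logits = p.copy()
    grad_logits[target] -= 1.0     # gradient of cross-entropy w.r.t. the logits
    v -= lr * (W.T @ grad_logits)  # chain rule through the (fixed) mock pipeline

steered_class = int(np.argmax(W @ (e_start + v)))
```

In the real setting the gradient would flow through the generation process (or a differentiable surrogate), which is why the reported optimisation cost depends on the dataset and learning rate.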
Summary: This paper studies the concept reachability in diffusion models and focuses on the effects of three common constraints in datasets: scarcity of concepts, underspecification of captions, and biases. The work shows that although some concepts are reachable for the model, prompting fails to provide sufficient information to reach them. In addition, this paper proposes steering as a novel controlling mechanism for better concept reachability in the generation of diffusion models. ## update after rebuttal I would like to keep my original rating. This paper presents insightful findings, but my main concerns regarding evaluation on more complex datasets and clarification on general image generation remain insufficiently addressed during the rebuttal. The authors acknowledged the usage of CelebA in the response to Reviewer qawW but failed to provide results during the rebuttal. Claims And Evidence: The claims are successfully supported by experiments on controlled synthetic data. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense under the problem setting. Theoretical Claims: There is no major theoretical claim in this paper. Experimental Designs Or Analyses: The experiment design is convincing overall, except for several concerns: (1) The experiments focus on the analysis of synthetic data and only show three examples for the three challenges in image generation with diffusion models. Is it possible to evaluate the performance on a larger scale for diffusion model generation? (2) How is the efficiency of steering in terms of application? Supplementary Material: Appendix A and B Relation To Broader Scientific Literature: Previous work has proposed to optimize the input for a specific task, or to introduce steering vectors at specific layers to improve fine-grained control in LLM or text-to-image model generation.
This work studies the concept reachability of steering in the textual prompt or in the U-Net bottleneck layer under three scenarios of dataset constraints. Essential References Not Discussed: I didn't identify any such references. Other Strengths And Weaknesses: Strengths: This paper is well-written and the contribution is valid by categorizing and analyzing the common constraints in concept reachability. The steering approach is effective in these settings and in image generation with diffusion models. The experiments on the synthetic data are comprehensive, with settings that are well-controlled for each constraint. Weaknesses: This work mainly focuses on the analysis of synthetic data, lacking broader evaluation for more general image generation with diffusion models. Other Comments Or Suggestions: The notation of Figure 7 is unclear to me, and I didn't fully understand how Figure 7 supports the claims regarding biases. Questions For Authors: Please see the questions in Experimental Designs Or Analyses. Additional question: Will the synthetic dataset be too simplified? CLEVR [1] is a dataset with multiple combinations of attributes with more realistic appearances, and the code to render new images is provided as well. Will CLEVR be a better choice for analysis? [1] Johnson, Justin, et al. "CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reading through our paper! We really appreciate your positive feedback, and have addressed the questions/comments you made below: ___ - **How is the efficiency of steering in terms of application?** Our current work did not focus on analysing the computational efficiency of steering precisely, but overall we found that the optimisation of the steering vectors is dependent on the dataset and learning rate used for optimising, with the number of steps required to implement the optimisation varying. Analysing the efficiency of steering under different conditions in large-scale models would be an interesting extension. In Stable Diffusion, the optimisation of the steering vector required less than 20 steps. - **CLEVR and synthetic data** Although the Clevr dataset contains more complex images with greater detail and 3D structure, we believe that repeating the experiments with Clevr would yield similar results to those already observed within our synthetic framework. The underlying factor structure of both datasets is essentially the same, and as such, we anticipate that the observed outcomes would not vary significantly. In our work, we opted for a simpler dataset to ensure controllability, robustness, and realisable methods for evaluating accuracy, as well as more scalable model training at a lower computational cost. We further address experiments with more complex data in the response to Reviewer qawW (CelebA). - **Figure 7** Figure 7 shows the reachability to images containing either only circles or only blue shapes in the back, under conditions where these two concepts are tied. As the presence of images containing only blue in the back is increased (thus, reducing the bias), the reachability of images containing only blue in the back increases - this is shown in how the light-coloured X's produce a higher accuracy on the horizontal plane. 
However, this also leads to a general trend (with some variability) of increase in the accuracy of images containing only circles in the back - note that generally across all methods, on the vertical axis, the lightest coloured X's achieve the highest accuracy. We will clarify the description of this Figure in the paper. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal from the authors. I would like to keep my original rating with the following concerns. This paper presents interesting findings and I believe this paper can benefit from experiments with more complex data. As reviewer qawW mentioned, including results with CelebA could make this work more convincing. In addition, the rebuttal did not include my concerns about the more general image generation. Clarifying the scope of this work and providing more details on practical usage (such as Fig 8) could be helpful for future work.
Summary: This paper explores the influence of three core dataset issues on concept reachability in text-to-image diffusion models. Through a synthetic setup, the paper constructs dataset variations corresponding to the three dataset issues and tests concept reachability by evaluating the concepts in generated images after training on the dataset variation. The paper introduces novel perspectives into how data issues impact diffusion model generation and offers insights into the benefit of steering in image generation controls. Claims And Evidence: The claims made are clear and supported, with one small issue: maybe I missed this in the paper, but the evidence behind the claim that overly detailed prompts may not help with reachability on line 373 is not available. Methods And Evaluation Criteria: The method works effectively for the proposed synthetic setup. Although Sec 5.5 and Appx F.3 show the success of steering on Stable Diffusion, these are essentially verifying concept customization but not how dataset issues impact the reachability in general in real world distributions. However, the effectiveness on real-world datasets can be readily tested, e.g., CelebA offers labels of attributes for subsampling to construct the dataset issues. Theoretical Claims: No proof or theoretical claims. Experimental Designs Or Analyses: 1. The choice of starting prompt y_s seems rather arbitrary. What is the rationale behind the selected y_s for each experiment (e.g., how much does it deviate from y_e and why)? 2. The quality of concept classifiers is not shown in the paper. How well does it serve as the evaluator for reachability accuracy? Supplementary Material: The supplementary is comprehensive and provides more details to further explain the results and helps the reader to replicate and leverage the method.
Relation To Broader Scientific Literature: This paper is a novel addition to the data-centric reachability study in diffusion models and offers insights and recommendations for future improvement of text-to-image generative models and generation controls. Essential References Not Discussed: It can be interesting and relevant to discuss the connection between (1) the diminishing reachability as the dataset issues worsen and (2) data attribution for the diffusion models trained on the problematic datasets, such as [1]. [1] Wang, Sheng-Yu, et al. "Evaluating data attribution for text-to-image models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. Other Strengths And Weaknesses: The paper is clearly presented with substantial visualizations and graphs. The proposal enables novel perspectives to root-cause reachability issues in the dataset. Other Comments Or Suggestions: Figure 11 in Appx E is missing a legend for dots in the plot, which makes it a bit confusing to read the figure. Questions For Authors: 1. Although Sec 5.5 and Appx F.3 show the success of steering on Stable Diffusion, these are essentially verifying concept customization but not how dataset issues impact the reachability in general in real world distributions. However, the effectiveness on real-world datasets can be readily tested, e.g., CelebA offers labels of attributes for subsampling to construct the dataset issues. The utility of the discovery in this paper can be more impactful if shown in realistic settings. 2. The choice of starting prompt y_s seems rather arbitrary. What is the rationale behind the selected y_s for each experiment (e.g., how much does it deviate from y_e and why)? 3. The quality of concept classifiers is not shown in the paper. How well does it serve as the evaluator for reachability accuracy in the synthetic setting? 4. The image illustrations in Figure 3 and Figure 4 present different positions of shapes and portions of overlapping.
Does the IoU, especially the visible portion of the back shape, have any impact on the result? 5. It is interesting to see in Figure 6(b) that prompt space and h-space optimization lead to different performance on red and blue colors. Is there any hypothesis why this happens? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and thoughtful comments! We address your questions below: ___ - **Evidence behind the claim that overly detailed prompts may not help with reachability** The starting prompt y_s used to implement the steering is as described in Section 3.5; however, the accuracy displayed in Figure 6(a) for prompting is that of the prompt containing the full description of the target concepts. Hence, the graph shows that as the specification of concepts is decreased, the reachability of prompting the full description (which is overspecified with respect to the seen specification level) is severely affected due to the specification of the train set being limited. This will be clarified in the paper. - **Rationale behind the selected y_s for each experiment (e.g., how much does it deviate from y_e and why)?** In Experiment 1 we vary the deviation (number of concepts changed) between y_s and y_e to understand the behaviour from lower to higher numbers of concept changes. In the remaining sections we choose the case y_s = y_e, as it overall showed the most stable performance with respect to steering on the h-space (prompt space is mostly unaffected), and would be the natural choice when trying to reach a concept in a real data model. That is, if generating something on a model via prompting fails, in most cases one would try to additionally steer on the prompt that describes the desired outcome. - **Quality of the classifier** This is addressed in the response to Reviewer saQX. - **Reference suggestion** Thank you for the reference! Data attribution methods are certainly related to our setup. We will discuss data attribution methods in diffusion models and the reference provided in the updated version of the paper. - **Figure 11** We will add a legend labelling the random seed to clarify the figure. - **Portion of visible back shape** This is addressed in the response to Reviewer saQX (Degree of overlap).
Note that the illustrations in Figure 3 and Figure 4 are diagrams - actual samples of the train set are provided in Figure 10 in the Appendix. - **Difference between h-space and prompt space in Figure 6(b)** Figure 6(b) shows an example for one model of the behaviour of prompting and steering when trying to reach red in the back when the label c_1 is not specified in training, or reaching a square in the front when the label s_2 is not specified. We wanted to note that, particularly in the case of prompting and steering on the h-space, the generated output is close to randomly sampling the value of the unspecified label (in the case of c_1, the colour of the front shape in the target combination is green and so there are only two choices for the back colour). Steering on the prompt space, instead, produces higher accuracy on the target combination, leading to a higher value achieved on the top axis (red and square, respectively) than the one achieved by prompting, although remaining close to this value. We will update the explanation of this in the paper. The dimensionality, level of disentanglement, and the dependency of the h-space on the timestep $t$ could potentially impact the observed differences. - **CelebA** We agree that extensions to real-world data will be valuable. We are currently implementing our framework on the CelebA dataset to assess whether its structure aligns with the properties required for our analysis. As part of this, we are constructing approximately 15-20 dataset variations and training multiple models per dataset. Given the compute we have available we hope to be able to present the full results before the end of the author-reviewer discussion period and include them in the paper. We currently have observed similar trends in a balanced dataset as in Experiment 1: reachability on the prompt space is more consistent than reachability on the h-space, which is affected by number of factors changed. 
Moreover, we have observed a decrease in reachability as the level of specification of the train set is decreased. We plan to include this in the paper, as well as an analysis of scarcity and biases, with detailed examples of steered images. We remark that while CelebA does provide attribute labels for selected factors, other latent factors such as background, lighting and pose are uncontrolled, and it is only the use of synthetic data that guarantees a fine-grained cause-and-effect study of the impact of dataset modifications.

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed explanation. I appreciate the clarifications provided and, after careful consideration, I will keep my recommendation of weak accept.

---

Reply to Comment 1.1.1: Comment: Thank you to all reviewers for their positive recommendations! We would like to share the obtained results on CelebA. By looking at two characteristics (male/female and hat/no hat), we defined a gradual modification of the dataset. We train 4 diffusion models on each train set, and evaluate accuracy using two classifiers. Below we summarise the main conclusions obtained on synthetic data and their generalisation to CelebA:

- **Concepts remain reachable when steering from diverse starting prompts** We used a fully balanced dataset where each of the studied combinations was equally seen during training. We fix one starting prompt and measure reachability to different target concept combinations. We observe the different concept combinations to be reachable via steering, and in particular, steering on the h-space is observed to depend on the number of concepts changed, similarly to Figure 4 on synthetic data.
- **Reachability drops sharply as concepts become more scarce** We decrease the number of images containing people wearing hats, and target the generation of men wearing hats.
Similar to the observation on synthetic data (Figure 5), we observe a critical threshold (in terms of the number of images containing the hat concept) below which reachability significantly drops. We also identify that it is possible to reach the concept by steering in settings where prompting does not work effectively.

- **A decrease in specification hinders reachability** A decrease in the specification of captions significantly decreases reachability across all methods. We compare reachability when specifying both concepts, one of the concepts, or neither. When specification of one concept is removed, accuracy is approximately 50%; when specification of both concepts is removed, accuracy is close to 25%.
- **Increasing the presence of an individual concept increases separate reachability to both concepts** We tie the concepts of female and not wearing a hat, and target the generation of females wearing hats and males not wearing hats, thus aiming to break the bias. As we gradually increase the presence of males not wearing hats in the dataset, we observe a rapid increase in reachability to males not wearing hats (expected), but also a general increase in reachability to females wearing hats across all reachability methods. This concept combination is, in particular, most reachable through steering. We will add a section in the Appendix to present these results.
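As a minimal sketch of the generic steering-vector idea discussed in this thread (the arrays and the `scale` parameter below are illustrative placeholders; the paper's steering on the prompt space and h-space is defined in its Section 3.5 and may instead be obtained by optimization against the target images $\mathbf{x}_0$):

```python
import numpy as np

def steering_vector(target_embs: np.ndarray, base_emb: np.ndarray) -> np.ndarray:
    """Mean-difference direction from the base embedding toward the targets."""
    return target_embs.mean(axis=0) - base_emb

def apply_steering(base_emb: np.ndarray, v: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Shift the embedding along the steering direction before generation."""
    return base_emb + scale * v

rng = np.random.default_rng(0)
base = rng.standard_normal(16)                              # stand-in for an embedding of y_s
targets = base + 1.0 + 0.1 * rng.standard_normal((8, 16))   # stand-ins for embeddings of x_0
steered = apply_steering(base, steering_vector(targets, base))
```

With `scale=1.0` the steered embedding lands exactly on the mean of the target embeddings; intermediate scales interpolate between the starting prompt and the targets.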
Summary: This paper focuses on the limitations of prompting for model control. It shows how steering is a more robust mechanism to enhance concept reachability. The authors study three common dataset issues: concept scarcity, underspecification of captions, and biased co-occurrence of concepts. Experiments are evaluated mostly on a controlled synthetic dataset (colored shapes with specific positional relationships). Their findings demonstrate that steering vectors significantly improve concept reachability, particularly in out-of-distribution or underspecified conditions, and they support this with empirical studies including extensions to real data using Stable Diffusion. Claims And Evidence: The main claims are supported by well-designed experiments. Methods And Evaluation Criteria: The methodology is appropriate for analyzing robustness of model control. The synthetic dataset allows for precise control over concepts, facilitating clear analysis of reachability mechanics. Theoretical Claims: The paper is primarily empirical. There are no formal theorems or proofs. Experimental Designs Or Analyses: The experimental design is clean and systematic. Supplementary Material: Yes, the supplementary material was reviewed, particularly: Appendix A (architecture and training details), Appendix B (synthetic dataset creation), Appendix C (classifier design and evaluation) and Appendix E (additional experiments). Relation To Broader Scientific Literature: The connection to steering literature (including LLMs and diffusion models) is well established. Essential References Not Discussed: Essential references are discussed. Other Strengths And Weaknesses:

### Strengths

- The empirical setup is clean and contributes to the steering literature.
- Insightful findings on alternatives to prompting for model control.
- Relevance to practitioners seeking more controllable diffusion model behavior.
- The paper is really well-written and organized.
### Weaknesses

- While results are robust within their synthetic setup, the paper would benefit from further discussion on the generalization of these findings to real-world, more complex datasets.
- See questions below.

Other Comments Or Suggestions:

- If possible, Figure 9 should be moved to the main paper.
- The notation for the steering vectors $\mathbf{c}_p$ and $\mathbf{c}_h$ can be confused with the concept notation $c$. The authors should use, for example, $\mathbf{s}_p$ and $\mathbf{s}_h$.

Questions For Authors:

- Are the notations $[f_{i_1}, f_{i_2}, \ldots, f_{i_j}]_X$ and $[f_1^{(i)}, f_2^{(j)}, f_3^{(k)}]_X$ the same?
- Can you better explain the difference between $y_e$ and $y_s$? Which one is the target combination? And why do you need $\mathbf{x}_0$?
- Figure 4, why steering from the initial prompt "a green triangle behind a red triangle" to a completely different combination such as "a red circle behind a blue square"? Isn't that too "extreme" a steering?
- Can you provide more examples on real datasets?

Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reading through our work and providing your feedback! We have read through your comments and address your questions below: ___

- **Notation of steering vectors** We noticed that the notation $\mathbf{s}$ may also lead to confusion due to the labels $s_1$ and $s_2$ for the factors in our dataset. However, we can change the notation for the steering vectors to $\mathbf{v}$.
- **Notation for $[f_{i_1}, f_{i_2}, \dotsc, f_{i_j}]_X$ and $[f_1^{(i)}, f_2^{(j)}, f_3^{(k)}]_X$** The notation **$[f_1^{(i)}, f_2^{(j)}, f_3^{(k)}]_X$** refers specifically to the diagram. This diagram contains only three factors, so the notation refers to the combination of images that contain one specific value or concept for each factor in the dataset. The notation $[f_{i_1}, f_{i_2}, \dotsc, f_{i_j}]_X$ refers to the $n$-factor case, and only fixes the concept value for a subset of the factors, indexed as $i_1, i_2, \dotsc, i_j$. We will modify the indexing in the diagram to avoid confusion.
- **Difference between $y_e$ and $y_s$** $y_e$ describes the properties of the images we steer to (the end target). $y_s$ is the prompt from which we start, which may (if $y_s = y_e$) or may not describe the desired properties. $\mathbf{x}_0$ is the collection of images containing the target concepts that are used to steer the generation process (they may have been generated by sampling from a model using the prompt $y_e$ or obtained from a test set).
- **Figure 4: why steer to such extreme combinations?** We wanted to explore the behaviour under extreme modifications in balanced conditions in order to understand the different reachability methods. One might expect that as the number of factors changed (with respect to $y_s$) increases, reachability via steering would decrease. However, this is not the case when steering on the prompt space, which highlights structural differences in the spaces where steering is implemented.
Throughout the remaining experiments presented in the main body, the steering is implemented in the case $y_s = y_e$, which we found to produce the most stable results for steering on the $h$-space. We will clarify this in the revised version of the paper. - **Examples on real datasets** This is addressed in the response to Reviewer qawW (CelebA).
Stochastic Regret Guarantees for Online Zeroth- and First-Order Bilevel Optimization
Reject
Summary: This article presents two online bilevel optimization algorithms, SOGD and ZO-SOGD. Among them, SOGD achieves the sota local regret bound without using window-smoothed functions. ZO-SOGD provides a fully-zero-order approach and achieves hypergradient estimation only with function values. The authors present the theoretical analysis of the regret as well as the convergence analysis. Both algorithms perform well in a series of numerical experiments. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The reviewer did a quick check of the theoretical analysis. The theoretical claims are reasonable and no fatal issue was found by the reviewer. Experimental Designs Or Analyses: Yes. The experiments can show the benefit of the proposed methods. However, it could be better if the authors presented the comparison of different algorithms under **wall-clock time**, which may illustrate the practical value of the proposed algorithms. Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: **Strengths** 1. Both algorithms are single-loop. 2. SOGD achieves the sota regret bound without extra computation of the window-smoothed function. 3. ZO-SOGD provides a fully-zero-order approach for online BO, which is especially useful in the training of large models. **Weaknesses** NA Other Comments Or Suggestions: After author rebuttal: Thanks for the response; the main concerns have been solved. The reviewer will update the rating. Questions For Authors: Can the authors present the comparison of different algorithms under wall-clock time? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Response:** Thank you for your review and suggestions. We would like to clarify that the runtime plots presented in both Figure 2 and Figure 3 already show wall-clock time measurements in seconds. Please refer to the left subplot in Figures 2 and 3, where we directly report time elapsed. Specifically,

- In Figure 2 (left panel), we present the actual wall-clock time of our ZO-SOGD method compared to the baseline methods in the online adversarial attack experiment. The recorded times range from approximately $0.5 \times 10^2$ to $2.0 \times 10^2$ seconds at $T = 200$ steps.
- In Figure 3 (left panel), we present wall-clock time comparisons for our SOGD method against OAGD and SOBOW in the parametric loss tuning task. The results show a clear efficiency gain, with SOGD completing under 25 seconds at $T = 400$, compared to SOBOW, which takes roughly 220 seconds due to its reliance on conjugate gradient (CG) iterations.

All wall-clock times were measured using the same hardware platform across all algorithms to ensure fair and consistent comparisons. These empirical results support the practical value of our proposed methods, aligning with our theoretical efficiency claims. In the revised version, we will make this point more explicit by stating "wall-clock time" clearly in the figure captions and experimental section to avoid any ambiguity. Please let us know if we misunderstood your question, and we will be happy to address your concern further.

> **Further Elaboration on Runtime and Accuracy Using CIFAR-10:**

For further elaboration of runtime and how our methods can handle large-scale datasets, we have conducted additional experiments on CIFAR-10 to demonstrate the scalability and effectiveness of our approach beyond MNIST. For our CIFAR-10 experiments, we used a dataset with 10 classes, and further details on the parameters will be added to the appendix in the revised manuscript.
Below we present the performance comparison of our ZO-SOGD method against single-level ZO methods on CIFAR-10:

Table 1: Runtime (seconds)

| Method | t=100 | t=200 | t=300 | t=400 |
|---|---|---|---|---|
| ZO-O-GD | 358±17 | 726±22 | 1089±33 | 1452±44 |
| ZO-O-Adam | 385±17 | 781±28 | 1177±39 | 1573±50 |
| ZO-O-SignGD | 319±13 | 638±22 | 957±31 | 1276±39 |
| ZO-O-ConservGD | 413±20 | 825±28 | 1238±39 | 1650±50 |
| ZO-SOGD (ours) | 1078±33 | 2156±50 | 3234±66 | 4312±83 |
| ZO-SOGD (ours, Adam) | 1155±39 | 2310±55 | 3465±72 | 4620±88 |

Table 2: Testing Accuracy (lower is better for attacks)

| Method | t=100 | t=200 | t=300 | t=400 |
|---|---|---|---|---|
| ZO-O-GD | 0.81±0.07 | 0.78±0.06 | 0.74±0.05 | 0.71±0.05 |
| ZO-O-Adam | 0.59±0.08 | 0.54±0.06 | 0.50±0.05 | 0.47±0.05 |
| ZO-O-SignGD | 0.78±0.10 | 0.71±0.07 | 0.67±0.06 | 0.64±0.06 |
| ZO-O-ConservGD | 0.76±0.06 | 0.68±0.05 | 0.63±0.04 | 0.57±0.04 |
| ZO-SOGD (ours) | 0.73±0.10 | 0.69±0.08 | 0.65±0.07 | 0.56±0.06 |
| ZO-SOGD (ours, Adam) | **0.58±0.08** | **0.49±0.06** | **0.40±0.05** | **0.37±0.04** |

Table 3: Perturbation Magnitude $||y||_\infty$

| Method | t=100 | t=200 | t=300 | t=400 |
|---|---|---|---|---|
| ZO-O-GD | 5.46±0.08 | 5.54±0.10 | 5.62±0.11 | 5.72±0.12 |
| ZO-O-Adam | 4.58±0.08 | 4.68±0.10 | 4.76±0.11 | 4.85±0.12 |
| ZO-O-SignGD | 5.52±0.11 | 5.66±0.13 | 5.76±0.14 | 5.88±0.16 |
| ZO-O-ConservGD | 4.52±0.07 | 4.60±0.08 | 4.66±0.09 | 4.72±0.10 |
| ZO-SOGD (ours) | 7.20±0.24 | 7.56±0.29 | 7.92±0.34 | 8.16±0.36 |
| ZO-SOGD (ours, Adam) | 6.70±0.28 | 7.30±0.35 | 7.70±0.38 | 8.00±0.40 |

The CIFAR-10 results demonstrate that our ZO-SOGD approach scales effectively to higher-dimensional problems, with ZO-SOGD (ours, Adam) reducing model accuracy to ~37% at $t=400$ compared to 47-71% for single-level ZO methods, while requiring larger perturbation magnitudes of 8.00 compared to 4.72-5.88 for baseline methods.
This represents a trade-off where our approach can achieve significantly higher attack success rates at the cost of somewhat larger perturbations, which remain visually acceptable despite the 12 $\times$ increase in dimensionality from MNIST to CIFAR-10. Our ZO-SOGD (ours, Adam) achieves this superior performance with approx. 2.9-3.0 $\times$ computational overhead, representing an excellent trade-off when attack success is the primary objective, and the consistent improvement from $t=100$ to $t=400$ confirms that BO optimization yields further benefits in higher-dimensional spaces. The larger perturbation magnitudes are offset by the substantially improved attack success rates, making our approach particularly valuable in scenarios where robustness evaluation is critical. We will add more detailed analyses and hyper-parameter sensitivity studies in the appendix.
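To make the gradient-free machinery above concrete, here is a minimal sketch of a generic two-point Gaussian-smoothing gradient estimator, the standard construction in zeroth-order optimization; this is our own illustrative simplification, not the paper's exact estimator (Eq. 21):

```python
import numpy as np

def zo_gradient(f, x, rho=1e-3, n_samples=200_000, seed=0):
    """Estimate grad f(x) from function values only.

    Uses E[(f(x + rho*u) - f(x)) / rho * u] ~= grad f(x) for u ~ N(0, I)
    and a small smoothing radius rho; averaging over n_samples draws
    controls the estimator's variance.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((n_samples, x.size))
    diffs = (f(x + rho * u) - f(x)) / rho      # one scalar per random direction u
    return (diffs[:, None] * u).mean(axis=0)

f = lambda z: (z ** 2).sum(axis=-1)            # true gradient is 2x
x = np.array([0.5, -0.3, 0.2])
g_est = zo_gradient(f, x)                      # close to 2*x = [1.0, -0.6, 0.4]
```

The estimator's variance grows with the problem dimension, which is one reason zeroth-order methods trade extra function evaluations for the absence of gradient, Hessian, and Jacobian oracles.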
Summary: This paper proposes a novel approach to online bilevel optimization (OBO) that achieves sublinear stochastic bilevel regret without window smoothing, addressing limitations in existing methods under dynamic conditions. By introducing a new search direction, it improves efficiency through reduced oracle dependence, simultaneous inner-outer updates, and zeroth-order estimation of Hessians, Jacobians, and gradients. Experiments on online parametric loss tuning and black-box adversarial attacks validate its effectiveness. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes Theoretical Claims: I have reviewed the theorem in the main text. The appendix contains extensive theoretical analyses, but I have not gone through all of them. Experimental Designs Or Analyses: Yes. The experimental results lack analyses. I recommend providing more detailed insights into why the proposed method outperforms the baselines. For example, the authors state: "The left panel shows that ZO-SOGD has similar runtime to single-level baselines, despite outer-level optimization on x." However, the specific components of the proposed algorithm that contribute to these performance improvements are not clearly highlighted. A deeper analysis of the key factors driving the observed empirical advantages would significantly strengthen the experimental section. Supplementary Material: N/A Relation To Broader Scientific Literature: This paper is related to bilevel optimization and online optimization. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. This work introduces a novel search direction, enabling first- and zeroth-order OBO methods to achieve sublinear stochastic bilevel regret without relying on window smoothing.
2. Unlike existing methods that depend on gradient, Hessian, and Jacobian oracles, this work estimates these quantities using only function value oracles, improving scalability in black-box settings. 3. By requiring only a single subproblem solver iteration, the proposed algorithms enhance efficiency over existing approaches that rely on multiple iterations for hypergradient approximation. Weaknesses: 1. The motivation for designing a hyper-gradient-based method to solve OBO is not clearly articulated. The authors need to justify why a hyper-gradient-based approach is adopted instead of first-order methods such as the value function method [1] or the cutting plane method [2]. A more in-depth discussion of the advantages of hyper-gradient-based OBO over these alternative approaches would enhance the readability and justification of the proposed method. 2. Given that the proposed algorithm requires extensive computations in each iteration, I suggest that the theoretical analysis not only provide iteration complexity (as in Theorem 3.6 and Theorem 4.2) but also include an overall computational complexity analysis, such as arithmetic complexity. Additionally, a comparison of the overall complexity between the proposed algorithm and the baseline methods would be beneficial in understanding whether the proposed approach offers significant complexity advantages over competing methods. 3. Regarding the theoretical analysis, I appreciate the efforts in establishing rigorous theoretical guarantees. However, since several assumptions are made, it would be beneficial to further discuss their practicality. For instance, in the experimental section, do all these assumptions hold in practice? One particular point of concern is the strong convexity of the lower-level objective function in Assumption 3.2, as this assumption may limit the applicability of the proposed OBO algorithm.
Given that many recent works in bilevel optimization no longer rely on this assumption [3, 4], a discussion on whether and how this assumption might be relaxed would be valuable. 4. The readability of the experimental section can be improved. The authors dedicate a substantial portion of the section to introducing baseline methods and problem settings but overlook critical experimental analysis. I recommend providing more detailed insights into why the proposed method outperforms the baselines. For example, the authors state: "The left panel shows that ZO-SOGD has similar runtime to single-level baselines, despite outer-level optimization on x." However, the specific components of the proposed algorithm that contribute to these performance improvements are not clearly highlighted. A deeper analysis of the key factors driving the observed empirical advantages would significantly strengthen the experimental section. 5. It seems that the proposed algorithm may also be applicable to traditional bilevel optimization or zeroth-order bilevel optimization. I suggest that the authors include comparisons with existing zeroth-order nested optimization methods to further demonstrate the effectiveness of the proposed approach. [1] Bome! bilevel optimization made easy: A simple first-order approach. NeurIPS 2022. [2] Asynchronous Distributed Bilevel Optimization. ICLR 2023. [3] Projection-free methods for stochastic simple bilevel optimization with convex lower-level problem. NeurIPS 2023. [4] An Accelerated Gradient Method for Convex Smooth Simple Bilevel Optimization. NeurIPS 2024. Other Comments Or Suggestions: No Questions For Authors: Please read my comments on weaknesses of the paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Response to W1:** Thank you for the helpful comment. We chose to use a hyper-gradient-based approach due to its robustness and strong theoretical guarantees in the context of **stochastic** OBO problems. In contrast, alternative approaches such as value function methods [1] and asynchronous bilevel optimization methods [2] are primarily designed for **deterministic** settings. These methods rely on Lagrangian reformulations and typically converge to Lagrangian stationary points, rather than true bilevel solutions. They also require careful tuning of penalty parameters, which increases their complexity. We have clarified the motivation and key differences from the deterministic methods in [1,2] in the revision. **Response to W2:** Thank you for suggesting an arithmetic complexity analysis. We have added a detailed arithmetic complexity comparison in our response to Reviewer **x9oa** and do not repeat it here due to response length limits. Briefly, our SOGD method significantly reduces computational cost, while ZO-SOGD suffers from worse dimensional dependence due to its gradient-free nature. **Response to W3:** Thank you for this important point. We provide a detailed response to Reviewer **Xqeg** on this issue. Our experiments include a non-convex imbalanced learning task with a deep neural network inner problem (Section 5), where our algorithms show stable convergence despite the lack of strong convexity. While Refs [3] and [4] use more relaxed inner assumptions (convexity of the inner objective), they focus on **simple bilevel** problems in the **offline** setting. Our OBO setting is more complex and requires stronger assumptions for convergence. Adapting relaxed conditions such as the Polyak-Łojasiewicz (PL) condition to stochastic OBO is a valuable direction for future work but needs careful investigation. **Response to W4:** We thank the reviewer for the insightful feedback.
We have revised the experimental section to provide a clearer analysis of why our proposed methods outperform the baselines and to highlight the components responsible for these improvements. Specifically:

- In Sec 5.1, ZO-SOGD demonstrates stronger attack performance compared to single-level baselines such as ZO-O-Adam. These gains stem from our proposed OBO framework, which allows for continuous optimization of hyper-parameters, in contrast to baselines like ZO-O-Adam that tune such parameters in a discrete and manually selected search space. Additionally, the new search direction defined in Eq. (6), combined with variance control in the finite-difference estimators (Eq. 21) and the projected update on $v_t$ defined in Eq. (8), contributes to a more effective ZO-SOGD over time.
- In Sec 5.2, our SOGD method achieves the lowest runtime (left panel) while maintaining high accuracy across varying degrees of class imbalance (middle and right panels). Although other OBO-based algorithms can also perform a continuous hyper-parameter search, our superior efficiency results from the single-loop simultaneous update scheme (Alg 1). This avoids the high computational cost of multi-loop strategies, such as SOBOW, which rely on window-smoothing and conjugate gradient solvers.

We have explicitly clarified these points in Sections 5.1 and 5.2 of the revised manuscript. **Response to W5:** We appreciate the suggestion to compare with ZO nested methods. To our knowledge, no practical zeroth-order nested methods that operate without access to a gradient oracle (including variance-reduced variants) are available in the literature. We include a comparison (following the setup in Sec. 5) with our offline variant (ZO-S$^2$GD), which highlights a tradeoff: while it achieves stronger attack performance (test accuracy drops to 0.09 at $T=200$), it requires significantly more computation (40$\times$ runtime) and generates highly visible perturbations ($||y||_\infty$ quickly reaches 5.0).
Our online method (ZO-SOGD) maintains an effective balance between computational efficiency, attack performance (testing accuracy 0.56 at $T=200$), and imperceptibility ($||y||_\infty$ 2.7 at $T=200$).

**Runtime (seconds)**

| Method | T=50 | T=100 | T=200 |
|---|---|---|---|
| ZO-SOGD | 48±5 | 80±8 | 190±10 |
| ZO-S²GD | 2800±30 | 8000±50 | 15800±100 |

**Testing Accuracy (lower is better for attacks)**

| Method | T=50 | T=100 | T=200 |
|---|---|---|---|
| ZO-SOGD | 0.75±0.10 | 0.67±0.12 | 0.56±0.12 |
| ZO-S²GD | 0.23±0.05 | 0.10±0.03 | 0.09±0.02 |

**Perturbation Magnitude $||y||_\infty$**

| Method | T=50 | T=100 | T=200 |
|---|---|---|---|
| ZO-SOGD | 1.8±0.3 | 2.2±0.3 | 2.7±0.3 |
| ZO-S²GD | 3.0±0.2 | 5.0±0.0 | 5.0±0.0 |

We also note that, as illustrated in Figure 2 of our paper, ZO-SOGD (ours, Adam) achieves an adversarial testing accuracy of ~0.24 with similar runtime. This highlights the significant computational improvement offered by our online method over offline approaches.
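For intuition on why a single-loop simultaneous scheme is cheap, the following toy sketch illustrates a generic SOBA-style simultaneous update: one inner descent step on $y$, one projected linear-system step on $v$, and one hypergradient step on $x$. The quadratic problem and all step sizes below are our own illustrative choices, not the paper's exact Eqs. (6)-(8):

```python
import numpy as np

# Toy bilevel instance (illustrative only): inner g(x, y) = 0.5*||y - A x||^2 is
# strongly convex in y with minimizer y*(x) = A x, and outer
# f(x, y) = 0.5*(||x||^2 + ||y||^2), so F(x) = f(x, y*(x)) is minimized at x = 0.
rng = np.random.default_rng(0)
d = 3
A = 0.5 * rng.standard_normal((d, d))

def simultaneous_step(x, y, v, alpha=0.01, beta=0.5, gamma=0.5, r=10.0):
    y_new = y - beta * (y - A @ x)              # one inner descent step on grad_y g
    # v tracks [grad2_yy g]^{-1} grad_y f; here grad2_yy g = I and grad_y f = y
    v_new = v - gamma * (v - y)
    v_new *= min(1.0, r / (np.linalg.norm(v_new) + 1e-12))  # project onto ||v|| <= r
    # hypergradient direction grad_x f - (grad2_xy g) v; here grad2_xy g = -A^T
    x_new = x - alpha * (x + A.T @ v)
    return x_new, y_new, v_new

x, y, v = np.ones(d), np.zeros(d), np.zeros(d)
for _ in range(5000):
    x, y, v = simultaneous_step(x, y, v)
# x drifts toward the outer minimizer 0 without any inner loop or CG solve
```

Each iteration touches every variable exactly once, which is what keeps the per-step cost flat compared to window-smoothed or CG-based hypergradient solvers.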
Summary: This paper introduces a novel framework for stochastic online bilevel optimization (OBO) that addresses limitations of existing methods by achieving sublinear stochastic bilevel regret without relying on window smoothing. The authors propose a new search direction and develop two algorithms, Simultaneous Online Gradient Descent (SOGD) for first-order oracles and Zeroth-Order SOGD (ZO-SOGD) for function value oracles, both requiring only a single subproblem solver iteration. Their main findings include theoretical guarantees of sublinear stochastic bilevel regret for both algorithms under mild assumptions, even with rapidly changing objective functions, and improved oracle efficiency in hypergradient estimation. Empirical evaluations on online parametric loss tuning and black-box adversarial attacks demonstrate the practical effectiveness and efficiency of the proposed SOGD and ZO-SOGD algorithms compared to existing OBO methods. Claims And Evidence: The claims regarding sublinear stochastic bilevel regret, improved efficiency, and empirical validation are likely supported by evidence typically presented in submissions of this nature, specifically mathematical theorems in the appendix for the regret guarantees and experimental results sections showcasing performance gains. The paper introduces novel algorithmic and conceptual ideas, including a new search direction and zeroth-order adaptations, and claims theoretical support through derived regret bounds, along with empirical validation on relevant tasks like adversarial attacks and meta-learning loss tuning. While a thorough verification would necessitate a detailed examination of the proofs and experimental setup, the submission, as summarized, suggests the claims are substantiated by the expected forms of evidence within the scope of this research domain. 
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are sensible for the problem of stochastic online bilevel optimization. The SOGD and ZO-SOGD methods are designed to address the specific challenges of OBO, such as non-stationarity and limited access to gradients (especially for black-box scenarios addressed by ZO-SOGD). The use of stochastic bilevel regret as an evaluation criterion is appropriate for online learning settings, as it measures the cumulative performance against a dynamic benchmark. Furthermore, the chosen benchmark datasets, MNIST and potentially ImageNet for adversarial attacks and imbalanced learning tasks, are standard and relevant for evaluating machine learning algorithms, providing a practical context to assess the effectiveness of the proposed OBO methods in representative applications. Comparing against relevant baselines like OAGD, SOBOW, and single-level ZO methods further strengthens the evaluation by contextualizing the performance of the proposed algorithms within the existing landscape of optimization techniques. Theoretical Claims: I have not thoroughly checked the proofs, which are too complex and contained in the appendices. Experimental Designs Or Analyses: The experimental designs and analyses appear generally sound and valid for the stated problem and claims. The choice of black-box adversarial attacks and parametric loss tuning are relevant applications for online bilevel optimization, especially for zeroth-order methods in black-box settings. The use of MNIST data for initial validation and potentially ImageNet (or similar datasets for attacks on DNNs) provides appropriate benchmarks widely recognized in machine learning. 
The evaluation metrics, including runtime, test accuracy, balanced accuracy (for imbalanced data), and perturbation norms, are well-suited to assess the efficiency and effectiveness of the proposed algorithms in these tasks, directly addressing the claims of improved performance and reduced oracle dependence. The comparison against relevant baselines like OGD, OAGD, SOBOW, and single-level ZO methods provides a valid context for evaluating the contributions of SOGD and ZO-SOGD. Reporting mean and standard deviation across multiple runs, as indicated by "mean±std", suggests an attempt to account for variability and improve the reliability of the results, although the presence of formal statistical significance testing is not explicitly mentioned in the summary and would further strengthen the validity. Overall, based on the provided description, the experimental design and analyses seem appropriate and logically connected to the paper's claims. Supplementary Material: No, the proofs are too long and complex. Relation To Broader Scientific Literature: The key contribution of this paper is advancing online bilevel optimization by tackling limitations in existing literature, particularly in stochastic and black-box settings. Related to prior work like OAGD and SOBOW, this paper distinguishes itself by achieving sublinear regret without window smoothing, addressing a key critique of those methods in rapidly changing environments. Unlike these first-order OBO approaches that rely on gradient oracles and often multiple inner loop iterations (like CG in SOBOW), this work introduces a novel search direction enabling both first-order SOGD and zeroth-order ZO-SOGD algorithms. ZO-SOGD directly addresses the gap in OBO for black-box scenarios, building upon the broader zeroth-order optimization literature but extending it to the complexities of bilevel problems, contrasting with offline ZO-BO methods and single-level ZO online optimization like ZOO-ADMM. 
The paper claims to improve upon the regret bounds compared to existing stochastic OBO and even single-level online zeroth-order optimization methods, such as those by Roy et al. and Guan et al., by achieving better variance dependence and dimension dependence in the regret analysis, thus pushing the frontier of both theoretical understanding and algorithmic efficiency in online bilevel learning. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: The paper demonstrates notable strengths in originality and significance by introducing a novel search direction and algorithms that advance the state-of-the-art in online bilevel optimization, particularly by achieving sublinear regret without window smoothing and extending OBO to zeroth-order settings. This is significant as it directly addresses limitations of existing methods in dynamic and black-box scenarios, enhancing both theoretical understanding and practical applicability. The conceptual clarity appears to be a strength, with a well-structured presentation that logically progresses from background to methods, theory, and experiments. However, a potential weakness lies in the complexity of the theoretical proofs, which, while utilizing established techniques, demand meticulous verification to ensure complete correctness. Furthermore, while the experimental evaluations are relevant and support the claims, the scope of datasets and applications, although standard, might be seen as a point for further expansion to fully demonstrate the robustness and generalizability of the proposed algorithms across a wider range of real-world problems. Other Comments Or Suggestions: One suggestion to enhance clarity, particularly for readers less familiar with the intricacies of regret analysis, would be to include more intuitive explanations or visualizations of the key regularity conditions (like path-length and function variation) and how they relate to the dynamic regret bounds. 
Additionally, while the experimental section is well-designed, exploring the sensitivity of ZO-SOGD to the smoothing parameter ρ in the black-box attack experiments, and perhaps including ablation studies on the momentum components in SOGD and ZO-SOGD, could further strengthen the empirical analysis and provide deeper insights into the algorithms' behavior and parameter tuning. Finally, a minor point: a careful proofread for typos, especially in the dense mathematical sections, would be beneficial for the final version. Questions For Authors: N.A. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > **W1:** While the experimental evaluations support the claims ... generalizability of the proposed algorithms. A suggestion to explore the sensitivity of … $\rho$ and ablation studies on the momentum components ... . **Response:** We appreciate the reviewer’s feedback. To address the generalizability of our algorithms, we conducted additional adversarial attack experiments on CIFAR10, beyond MNIST, and demonstrated the effectiveness of our approach on this larger-scale dataset. Due to space constraints, details and discussion are provided in our response to Reviewer **C8i4**. For the smoothing parameter $\rho$, inner/outer stepsizes, and momentum components, we include a hyperparameter sensitivity analysis in our response to Reviewer **x9oa**, also due to space limitations. Our analysis shows that while ZO-SOGD is sensitive to hyperparameter choices, it remains robust within reasonable ranges. Following your suggestion, we have included extensive tuning results under adversarial attacks in the appendix. > **W2:** The theoretical assumptions, particularly strong convexity …? **Response:** We greatly appreciate this thoughtful comment. Our current theoretical guarantees rely on Assumption 3.2, which requires strong convexity of the inner objective $g_t(x, \cdot)$. This assumption enables the application of the implicit function theorem and stability of the inner solution mapping $y^*_t(x)$. While this assumption is common in bilevel optimization (e.g., Ghadimi and Wang, 2018; Ji et al., 2021) and is used in almost all previous OBO literature (as cited in Table 1), we acknowledge that it may not always hold in real-world applications. To demonstrate practical robustness, we have included a highly non-convex imbalanced learning task in our experiments where the inner problem involves training a deep neural network (Section 5). 
Despite the lack of strong convexity, our algorithms perform effectively, showing stable convergence and improved balanced accuracy. We also highlight that our **online bilevel setup is more complex** than standard bilevel settings, as both the inner and outer objectives change over time, making analysis under nonconvex or relaxed conditions significantly more challenging. That said, we are encouraged by recent progress in extending bilevel optimization to Polyak–Łojasiewicz (PL) conditions (e.g., Shen et al., 2023; arXiv:2302.05185). These works suggest that strong convexity may not be strictly necessary, and PL-like structures may suffice for convergence. We believe our theoretical framework can be extended to PL conditions, which represents an exciting direction for future work. However, given the complexity of our online and dynamic bilevel optimization setting, this extension requires careful investigation. In the final version of the paper, we will incorporate this discussion following the strong convexity assumption. > **W3:** The proofs are complex … would help readability. **Response:** Thank you for the helpful suggestion. We have revised the theory section to include intuitive explanations for the path length and function variation metrics, clarifying how they capture nonstationarity in OBO problems. We have also provided an example in the appendix to illustrate these metrics. We have also expanded **Remarks 3.7, 4.3, and 4.4** to explain the intuition behind our main theorems---highlighting improvements in variance dependence, dimensional scaling, and regret under noisy feedback. Our initial submission included detailed **proof roadmaps in Appendix C** (lines 796–806) for first-order methods and **Appendix D** (lines 2185–2196) for zeroth-order methods. In the final version, we plan to relocate these roadmaps and summaries to the main body, as we will have an additional page. 
We will also offer a more intuitive high-level roadmap outlining the overall proof strategy: we decompose regret into bias and approximation errors in gradient and projection steps, control these in a sequential manner through a number of lemmas, and assemble them into our final bounds. These changes, along with the existing comments and appendix, help make the theoretical results clearer and easier to understand.
Summary: This paper introduces new first-order and zeroth-order algorithms for Online Bilevel Optimization (OBO) that achieve sublinear stochastic bilevel regret without using window smoothing. The authors propose a new search direction and develop methods that work with limited feedback, including function value oracles. The theoretical guarantees cover both gradient-based and gradient-free settings, and the approach is validated through experiments on black-box adversarial attacks and parametric loss tuning. Claims And Evidence: The key claims are: 1. Sublinear regret guarantees for OBO without window smoothing. 2. Reduction in oracle dependence using function value-based estimates. 3. Empirical improvements in adversarial attacks and parametric loss tuning. The theoretical results seem well-supported by rigorous proofs, though I did not check all derivations in detail. The experiments provide reasonable validation, but additional large-scale evaluations would strengthen the claims. Methods And Evaluation Criteria: The methods seem well-suited for OBO and are motivated by practical applications. Using function value oracles to estimate Hessians and Jacobians makes sense in black-box settings, but it’s not clear how well this approach scales to more complex problems. The evaluation focuses on specific machine learning tasks, which are relevant, but a broader set of experiments would help assess the method’s general applicability. Theoretical Claims: I did not fully verify the proofs, but the regret bounds seem to follow standard techniques. The convexity assumption in the inner problem is common and often satisfied, as the paper notes. However, in practice, many problems may not have strongly convex inner objectives. It would be useful to discuss whether the approach could still perform well under weaker assumptions, such as PL conditions or approximate inner solutions, and whether the regret guarantees could be adapted accordingly. 
Experimental Designs Or Analyses: The experiments demonstrate the method’s effectiveness in adversarial attacks and parametric loss tuning, with reasonable comparisons to prior work. However: The tasks are relatively small-scale, primarily tested on MNIST and controlled optimization settings. Evaluating on larger or more diverse datasets (e.g., ImageNet for adversarial attacks) would provide stronger validation. The method’s sensitivity to hyperparameters (step sizes, smoothing factors) is not fully explored. Since OBO involves multiple updates, performance may vary significantly, and an ablation study would clarify robustness. Supplementary Material: I mainly reviewed the related work section in the supplementary but did not go through the rest in detail. Relation To Broader Scientific Literature: The paper builds on recent work in bilevel optimization and online learning, particularly methods that use gradient-based and function-value-based approaches. It relates well to previous work on regret minimization in OBO but extends it by removing window smoothing and using zeroth-order optimization. Essential References Not Discussed: The paper covers the main prior work in online bilevel optimization. However, if there are recent works applying similar ideas to meta-learning, reinforcement learning, or robust optimization, discussing them could help position this work in a broader context. Other Strengths And Weaknesses: Strengths: The removal of window smoothing is a notable contribution. The use of function value-based estimation makes the method applicable to black-box settings. Theoretical guarantees are well-structured and seem sound. Weaknesses: The notation and proofs are dense, making it hard to follow for non-experts. The experimental validation is limited in scope; larger-scale benchmarks would be useful. 
Computational efficiency of zeroth-order methods is not discussed in depth. Other Comments Or Suggestions: A more intuitive explanation of the key theoretical results would improve readability. An ablation study on different hyperparameter choices would be helpful. If possible, adding real-world applications beyond adversarial attacks could strengthen the practical impact. Questions For Authors: 1. How does the proposed method compare in computational cost to gradient-based bilevel optimization approaches? 2. Can the framework handle non-convex inner problems, or how does performance degrade in such cases? 3. How sensitive is the method to different step sizes and smoothing parameters? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > **W1:**... experiments on larger datasets ... . **Response:** We appreciate the reviewer’s feedback regarding evaluation on additional datasets. To address this, we conducted additional experiments on CIFAR10 to demonstrate the scalability of our approach beyond MNIST. Results are provided in our response to Reviewer **C8i4** and omitted here due to space constraints.

**Response to Question 1:** We provide the total computational complexity of our methods and compare them with baselines. This comparison will be added to Table 1 in the final version. Our SOGD algorithm offers notable computational benefits.

| Method | Total Complexity |
|----------------|---------------------------------------------------------|
| SOGD | $\mathcal{O}((d_1 + d_2) \cdot \varepsilon^{-3})$ |
| ZO-SOGD | $\mathcal{O}((d_1 + d_2)^{7/4} \cdot \varepsilon^{-3})$ |
| SOBOW/SOBBO | $\mathcal{O}(((d_1 + d_2) + \kappa_g \log(\kappa_g) d_2) \cdot \varepsilon^{-3})$ |

As an example, for **SOGD**, the regret bound $\text{BL-Reg}_T \leq \mathcal{O}(T^{1/3}(\sigma^2 +\Delta_T) + T^{2/3} \Psi_T)$ implies that $T = \mathcal{O}(\varepsilon^{-3})$ suffices for average regret $\leq \varepsilon$, leading to total cost $\mathcal{O}((d_1 + d_2) \cdot \varepsilon^{-3})$. We note that the total cost for the other baselines (SOBOW/SOBBO) is derived using their window size $w = o(T)$ and $\kappa_g \log(\kappa_g)$ for conjugate gradient (CG) steps. The OAGD baseline incurs higher cost due to exact system solves at each iteration. SOGD offers two key advantages: 1) it uses a single system solve per step, which is more efficient than the full solves or CG steps required by some baselines; 2) it sets $w = 1$, avoiding costly average gradient computation, unlike window-based methods. ZO-SOGD has higher complexity but enables gradient-free optimization. Its higher dimensional dependence is consistent with the increased complexity in single-level ZO methods.
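As a worked check of the sample-complexity claim for SOGD (illustrative only, treating $\sigma^2$, $\Delta_T$, and $\Psi_T$ as constants): dividing the quoted regret bound by $T$ gives the average regret

$$
\frac{\text{BL-Reg}_T}{T} \;\leq\; \mathcal{O}\big(T^{-2/3}(\sigma^2 + \Delta_T) + T^{-1/3}\,\Psi_T\big),
$$

and the slower-decaying $T^{-1/3}$ term dominates, so average regret $\leq \varepsilon$ forces $T^{-1/3} \leq \varepsilon$, i.e., $T = \mathcal{O}(\varepsilon^{-3})$; at $\mathcal{O}(d_1 + d_2)$ oracle work per round, this reproduces the total cost $\mathcal{O}((d_1 + d_2) \cdot \varepsilon^{-3})$.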
**Response to Question 2:** We appreciate the reviewer’s concern regarding the handling of non-convex inner problems. To demonstrate practical robustness, the inner problem in Section 5 is non-convex, and our experiments validate that the algorithm performs effectively, showing stable convergence and improved balanced accuracy. This robustness stems from our novel search direction, which controls variance over time without requiring window smoothing. A detailed discussion is provided in our response to Reviewer **Xqeg**; due to space constraints, we omit it here.

**Response to Question 3:** Thank you for the insightful question. As detailed in Section 5 (Experimental Setup), we carefully tuned all hyperparameters to ensure stable and fair comparisons; the selected ranges are listed in Lines 405–420 (second column). Our analysis shows that while ZO-SOGD is sensitive to hyperparameter choices, it remains robust within reasonable ranges. Following your suggestion, we include extensive tuning results under adversarial attacks for ZO-SOGD (ours, Adam), summarized in the tables below.
For inner ($\beta$) and outer ($\alpha$) stepsizes:

| | $\alpha=0.001$ | $\alpha=0.005$ | $\alpha=0.01$ | $\alpha=0.1$ |
|---|---|---|---|---|
| $\beta=0.001$ | $0.68\pm0.05$ | $0.59\pm0.07$ | $0.47\pm0.06$ | $0.53\pm0.08$ |
| $\beta=0.005$ | $0.54\pm0.06$ | $0.41\pm0.05$ | $0.35\pm0.04$ | $0.42\pm0.05$ |
| $\beta=0.01$ | $0.48\pm0.04$ | $0.34\pm0.05$ | $0.57\pm0.07$ | $0.39\pm0.06$ |
| $\beta=0.1$ | $\mathbf{0.26\pm0.03}$ | $0.43\pm0.06$ | $0.33\pm0.04$ | $0.45\pm0.07$ |

For smoothing parameters:

| | $\rho_r=\rho_s=0.001$ | $\rho_r=\rho_s=0.005$ | $\rho_r=\rho_s=0.01$ | $\rho_r=\rho_s=0.05$ |
|---|---|---|---|---|
| $\rho_v=0.001$ | $0.61\pm0.06$ | $0.52\pm0.05$ | $0.48\pm0.04$ | $0.57\pm0.06$ |
| $\rho_v=0.005$ | $0.47\pm0.05$ | $0.39\pm0.04$ | $0.35\pm0.04$ | $0.45\pm0.05$ |
| $\rho_v=0.01$ | $0.41\pm0.04$ | $\mathbf{0.28\pm0.03}$ | $0.31\pm0.03$ | $0.43\pm0.05$ |
| $\rho_v=0.05$ | $0.53\pm0.06$ | $0.44\pm0.05$ | $0.40\pm0.04$ | $0.52\pm0.06$ |

For momentum parameters:

| | $\lambda_t=\eta_t=0.9$ | $\lambda_t=\eta_t=0.99$ | $\lambda_t=\eta_t=0.999$ |
|---|---|---|---|
| $\gamma_t=0.9$ | $0.35\pm0.04$ | $0.29\pm0.03$ | $0.38\pm0.05$ |
| $\gamma_t=0.99$ | $0.31\pm0.03$ | $\mathbf{0.24\pm0.02}$ | $0.33\pm0.04$ |
| $\gamma_t=0.999$ | $0.37\pm0.04$ | $0.32\pm0.03$ | $0.40\pm0.05$ |

Lower test accuracy indicates better attack performance. The algorithm is robust across a broad range of hyperparameters. Optimal performance is achieved with inner stepsize $\beta = 0.1$, outer stepsize $\alpha = 0.001$, smoothing parameters $\rho_v = 0.01$, $\rho_r = \rho_s = 0.005$, and momentum parameters $\gamma_t = 0.99$, $\lambda_t = \eta_t = 0.99$. In our revised version, we will include a detailed sensitivity analysis for SOGD (the first-order variant) in the parametric loss tuning application (Section 5.2) in the appendix.
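The stepsize sweep above has the shape of a standard grid search over seeds; a minimal runnable sketch follows, where `run_attack` is a hypothetical placeholder (a real run would launch the ZO-SOGD attack with the given stepsizes and return final test accuracy):

```python
import itertools
import statistics

def run_attack(alpha, beta, seed):
    # Hypothetical placeholder so the sketch runs; a real implementation
    # would execute the ZO-SOGD attack with these stepsizes and this seed.
    return 0.3 + 0.05 * ((seed + 1) % 3) + 0.1 * abs(alpha - beta)

alphas = [0.001, 0.005, 0.01, 0.1]
betas = [0.001, 0.005, 0.01, 0.1]
seeds = range(5)

results = {}
for alpha, beta in itertools.product(alphas, betas):
    accs = [run_attack(alpha, beta, s) for s in seeds]
    results[(alpha, beta)] = (statistics.mean(accs), statistics.stdev(accs))

# Lower test accuracy means a stronger attack, so pick the minimizing cell.
best_alpha, best_beta = min(results, key=lambda k: results[k][0])
```

Each cell of the tables above corresponds to one `(mean, stdev)` entry of such a `results` dictionary.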
Uncertainty Quantification for LLM-Based Survey Simulations
Accept (poster)
Summary: This study investigates the reliable use of simulated survey responses generated by large language models from the perspective of uncertainty quantification. The proposed approach transforms synthetic data into confidence intervals for human response group parameters, addressing distributional shifts between simulated and real populations. A key innovation is determining the optimal number of simulated responses: too many lead to overly narrow confidence intervals with poor coverage, while too few result in overly loose estimates. To address this, the method adaptively selects the sample size to ensure effective coverage on average. It is broadly applicable to any LLM, regardless of fidelity, and any confidence interval construction process. Additionally, the selected sample size quantifies the discrepancy between the LLM and the target population. The study demonstrates the effectiveness of this approach through experiments on real datasets and LLM-generated responses. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: ### Strengths 1. The paper addresses a highly valuable problem. Given the large number of simulation experiments based on large language models, ensuring their reliability is a fundamental issue. 2. The paper provides a detailed formal discussion, offering a strong reference for future research. 3. The study includes comprehensive experiments. ### Weaknesses 1. I extremely expected to see some discussions on the nature and characteristics of language models for simulation, leading to methodological design and theoretical analysis based on these aspects. However, the paper does not seem to be structured in this way; instead, it treats the model more as a black box. 2.
The paper starts by directly positioning "the number of synthetic samples" as the core research direction, which feels somewhat unnatural. This claim assumes that the distribution simulated by the language model changes with the number of samples and that there exists an optimal quantity range that aligns better with the real-world distribution. This conclusion is non-trivial and requires further justification. Other Comments Or Suggestions: 1. I think the figure 1 does not provide much useful information and may not be suitable for inclusion in the main body of the paper. 2. Adding a conclusion section would help readers quickly grasp the key findings of the study. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review and the encouraging score. Below are our responses. **Weaknesses** > *1. I extremely expected to see some discussions on the nature and characteristics of language models for simulation, leading to methodological design and theoretical analysis based on these aspects. However, the paper does not seem to be structured in this way; instead, it treats the model more as a black box.* The reviewer is correct that our method treats the LLM as a black box. An advantage is that the method works with LLMs with arbitrary fidelities and architectures. This ensures its wide applicability, as many LLMs are not open-source. > *2. The paper starts by directly positioning "the number of synthetic samples" as the core research direction, which feels somewhat unnatural. This claim assumes that the distribution simulated by the language model changes with the number of samples and that there exists an optimal quantity range that aligns better with the real-world distribution. This conclusion is non-trivial and requires further justification.* Thanks for raising the confusion. To perform simulation, one must specify a sample size $k$. Thus, it is an important quantity to determine. We would like to clarify that the distribution of the simulated responses does not change with the simulation sample size $k$. Instead, the sample size $k$ involves a trade-off between the width and the coverage validity of the confidence interval. Simulating too few samples leads to an overly wide confidence interval, while simulating too many samples causes the confidence interval to concentrate around the population parameter of simulated responses rather than the human responses. Our method is designed to find a simulation sample size $\widehat{k}$ that balances this trade-off. **Other Comments Or Suggestions** > *1. 
I think the figure 1 does not provide much useful information and may not be suitable for inclusion in the main body of the paper.* Thanks for the suggestion. We will move the figure to the appendix. > *2. Adding a conclusion section would help readers quickly grasp the key findings of the study.* Thanks for the suggestion. In the first two sentences of Section 5, we have summarized the main goal and methodology of our study. We will add the following sentence to summarize the key findings: "Numerical experiments on real datasets verified the coverage guarantees of our approach, and revealed that LLMs exhibited higher fidelity in simulating opinions to social problems than in simulating student responses to mathematics questions."
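The width/coverage trade-off described in this rebuttal can be illustrated numerically with a toy simulation (all numbers below are assumptions for illustration: the simulator draws Bernoulli(0.55) responses while the human population parameter is 0.50, and Hoeffding intervals are used at level $\alpha = 0.1$):

```python
import math
import random

random.seed(0)
theta_human, theta_sim, alpha = 0.50, 0.55, 0.1  # assumed toy values

def coverage(k, trials=500):
    """Fraction of trials in which a Hoeffding interval built from k
    simulated responses covers the human parameter theta_human."""
    hits = 0
    for _ in range(trials):
        mean = sum(random.random() < theta_sim for _ in range(k)) / k
        half = math.sqrt(math.log(2 / alpha) / (2 * k))
        hits += mean - half <= theta_human <= mean + half
    return hits / trials

small_k_cov = coverage(50)    # wide interval: covers, but loosely
large_k_cov = coverage(2000)  # narrow interval: concentrates near 0.55, misses 0.50
```

With few simulated samples the interval is wide and covers the human parameter comfortably; with many samples it concentrates around the simulated population parameter and loses coverage, which is exactly the trade-off that the selection of $\widehat{k}$ is designed to balance.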
Summary: This study proposed a method to convert LLM simulated results into confidence sets for population parameters of human responses. It provides an estimate for the optimal number of simulations with theoretical proofs. Claims And Evidence: Claims: (1) A general mathematical formulation of uncertainty quantification for LLM-based survey simulations was provided. (2) The proposed method converts simulated responses into confidence sets for population parameters of human responses. It adaptively selects the simulation sample size. This method is applicable to any LLM, irrespective of its fidelity, and any methods for confidence set construction. The first claim is true. I am not sure about the second claim as I do not fully understand the evaluation method. Additionally, the current study does not provide results for other confidence set construction. Methods And Evaluation Criteria: I am not fully following the evaluation of the selected sample size. Figures 2 and 3 show the average $\hat{k}$ against different alphas. I understand that the number $\hat{k}$ is the measure that the proposed study aims to predict. I do not understand why simply showing this value across different alphas demonstrates the validity of the proposed method. Moreover, since this study aims to estimate the optimal number of simulations that can best represent the human response distributions, why not evaluate the method by actually simulating the responses $\hat{k}$ times on the two datasets and compute the distribution differences between the simulated responses and real human responses? Theoretical Claims: Yes, I checked the proof. Based on the assumptions, it seems correct. However, I have doubts about the assumptions. I wonder if the domain of the surveys would influence the simulation results. Experimental Designs Or Analyses: Yes. However, I think this lacks a direct comparison of the simulated responses with $\hat{k}$ and the real human responses.
Supplementary Material: Yes, I briefly went through the codes. Relation To Broader Scientific Literature: The potential contribution would be impactful as this method would provide an effective way to select the number of simulations, saving the inference cost. Essential References Not Discussed: References are satisfactory. Other Strengths And Weaknesses: Strengths (1) The research question is important. The potential contribution could be impactful as it may provide a principle in using LLMs for survey response simulation. (2) This study provides theoretical support for selecting the number of simulations. Weaknesses (1) The main weakness lies in the clarity of the evaluation and lack of a more direct evaluation method. As aforementioned, it would be better if the authors could further elaborate on the evaluation of the selected simulation size. Additionally, it would strengthen the study if the simulated results using $\hat{k}$ can be compared directly with the real human responses. (2) It is not intuitively straightforward why the domains of the survey may not be a big factor when designing the approach. It would be better if this can be further discussed. Other Comments Or Suggestions: I did not find typos. Questions For Authors: Please see my main questions (concerns) for the evaluation method. Other questions: (1) Would different hyperparameter settings of the LLMs influence the final results (e.g., decoding strategy)? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review. Below are our responses. We hope they address your questions and concerns. **Claims And Evidence** > *the current study does not provide results for other confidence set construction.* We clarify that the coverage guarantee (Theorem 3.3) of our general method in Section 3 holds for any confidence set construction procedures, such as inverting hypothesis tests, the bootstrap, and the empirical likelihood ratio function. In our experiments, we constructed confidence intervals based on Hoeffding's concentration inequality, because it is arguably the simplest construction procedure that has a valid coverage guarantee for any finite sample size. **Methods And Evaluation Criteria** > *I do not understand why simply showing this value [$\hat{k}$] across different alphas demonstrates the validity of the proposed method.* The reviewer is correct that the value of $\hat{k}$ does not demonstrate the validity of our method. Instead, we should verify the coverage guarantee of the induced confidence sets as claimed in our theorems. We examine this through the miscoverage proxy $\tilde{L}(\hat{k})$ in Equation (18). According to the explanations there, $\mathbb{E}[\tilde{L}(\hat{k})]\le\alpha$ implies a controlled miscoverage probability $\mathbb{P}(\theta(\psi)\not\in\mathcal{S}^{\mathsf{syn}}(\hat{k}))\le\alpha$. Thus, to check the validity of our method, we will perform a statistical hypothesis test on $H_0:\mathbb{E}[\tilde{L}(\hat{k})]\le\alpha$ against the alternative $H_1:\mathbb{E}[\tilde{L}(\hat{k})]>\alpha$. Due to space constraints, we kindly ask the reviewer to refer to the tables of $p$-values in our response to reviewer xDnM. In short, all $p$-values are reasonably large, supporting the null hypothesis $\mathbb{E}[\tilde{L}(\hat{k})]\le\alpha$ at, say, the 0.05 significance level. This shows that the miscoverage probability is effectively controlled by our method.
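Concretely, the one-sided test just described has the following shape (the proxy values below are invented purely for illustration; they are not the actual numbers behind the $p$-value tables referenced above):

```python
import math
import statistics

# Hypothetical miscoverage proxies L-tilde(k-hat) from repeated runs,
# tested against a target level alpha = 0.1. These numbers are made up
# purely to make the sketch runnable.
proxies = [0.08, 0.11, 0.07, 0.09, 0.12, 0.06, 0.10, 0.08]
alpha_level = 0.1

n = len(proxies)
mean = statistics.mean(proxies)
se = statistics.stdev(proxies) / math.sqrt(n)
t_stat = (mean - alpha_level) / se
# One-sided p-value = P(T_{n-1} > t_stat); here t_stat < 0 (sample mean
# below alpha), so p > 0.5, supporting the null that miscoverage is
# controlled at level alpha.
```

A large $p$-value, as in this toy example, is evidence consistent with $H_0:\mathbb{E}[\tilde{L}(\hat{k})]\le\alpha$.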
We will add these results and discussions to the paper. >*... since this study aims to estimate the optimal number of simulations that can best represent the human response distributions, why not evaluate the method by actually simulating the responses $\hat{k}$ times on the two datasets and compute the distribution differences between the simulated responses and real human responses?* We thank the reviewer for raising this potential connection. Our method aims to select a simulation sample size $\hat{k}$ such that the confidence set $\mathcal{S}^{\text{syn}}(\hat{k})$ constructed from $\hat{k}$ simulated samples can cover the human population parameter with a prescribed probability while being as tight as possible. Therefore, a natural way to evaluate our method is to examine its coverage probability as well as the size of the confidence set. The results are summarized in Appendix C.4.2 and Table 3 in the original manuscript. Please also refer to our response to your comment in "Methods And Evaluation Criteria". Those results indeed reflect the distributional difference between the simulated and real responses: for instance, high coverage probability plus small size would certify small difference. We did not directly simulate $\hat{k}$ samples and directly compute the distributional differences, as it would require defining (additional) discrepancy metrics. **Weaknesses** > *(1) The main weakness lies in the clarity of the evaluation and lack of a more direct evaluation method ...* Please refer to our responses above. > *(2) It is not intuitively straightforward why the domains of the survey may not be a big factor when designing the approach.* It is true that our procedure for selecting the simulation sample size $\hat{k}$ as well as the validity of our approach does not rely on the specific survey domain. However, the survey domain does implicitly play a role in the outcomes of our approach. 
Different domains may have different target populations and formats of survey questions (e.g., the survey responses could be multiple-choice, numerical, or in a Likert scale). Moreover, the simulation power of the LLM can also be very different across different domains. All these factors can cause our method to choose different simulation sample sizes $\hat{k}$ and confidence sets $\mathcal{S}^{\text{syn}}(\hat{k})$, but the coverage guarantee will always hold. **Questions** > *(1) Would different hyperparameter settings of the LLMs influence the final results...?* The coverage guarantee of our method always holds regardless of the choice of LLM. On the other hand, the hyperparameter setting of the LLM does affect the quality of the LLM simulations. This will in turn affect the chosen simulation sample size $\hat{k}$ and the size of confidence set $\mathcal{S}^{\text{syn}}(\hat{k})$. For example, if a certain hyperparameter setting improves the quality of the LLM simulations, then we expect our method to select a larger simulation sample size $\hat{k}$, which gives a tighter confidence set.
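For concreteness, the Hoeffding-based interval construction mentioned in this rebuttal has a simple closed form; a minimal sketch for $[0,1]$-valued responses follows (the specific sample in the example is illustrative only):

```python
import math

def hoeffding_ci(responses, alpha):
    """Two-sided Hoeffding interval for the mean of [0,1]-valued responses,
    valid at level alpha for any finite sample size k, with half-width
    sqrt(log(2/alpha) / (2k))."""
    k = len(responses)
    mean = sum(responses) / k
    half = math.sqrt(math.log(2 / alpha) / (2 * k))
    return max(0.0, mean - half), min(1.0, mean + half)

# e.g. 100 simulated binary responses, 60 of them positive, at alpha = 0.1
lo, hi = hoeffding_ci([1] * 60 + [0] * 40, alpha=0.1)
```

The half-width shrinks as $k$ grows, which is why the choice of the simulation sample size $\hat{k}$ governs the balance between tightness and coverage of the human population parameter.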
Summary: The paper proposes a novel framework for converting synthetic survey responses generated by large language models (LLMs) into statistically valid confidence intervals for population parameters. By focusing on uncertainty quantification, the authors develop a principled method to adaptively select the simulation sample size (denoted as k̂) such that the confidence intervals maintain a prescribed coverage probability. The paper supports its contributions through rigorous theoretical guarantees—including coverage and sharpness results—and extensive experiments on two real-world datasets (EEDI and OpinionQA) using several state-of-the-art LLMs. Claims And Evidence: The paper makes several claims: - It introduces a data-driven method to determine an effective sample size that produces confidence intervals with prescribed coverage probabilities. - The method provides theoretical guarantees on both coverage and sharpness, implying that the selected sample size yields neither overly conservative nor excessively loose intervals. While the theoretical analysis seems sound, the discussions focus on multiple-choice survey responses (with binary or limited choice scenarios), which raises questions about its generality. Given that LLMs excel at generating diverse and open-ended responses, this narrow focus does not fully leverage their potential in simulating human opinions. Methods And Evaluation Criteria: - The scope of the paper is not clearly stated in the introduction or problem statement. The paper seems to focus only on multiple-choice surveys, which is quite limited given the advantage of LLMs in human simulation is to produce opinions on open-ended questions. Therefore, the role of LLMs in this paper is not very clear. The proposed method to estimate the sample size is also not tailored for LLMs. - The method relies entirely on the base LLM to produce synthetic responses.
It is well-known that LLMs can introduce significant bias—especially against minority groups—which is not adequately addressed in the current framework. - A considerable portion of the theoretical and experimental work is devoted to the binary response setting, which may limit the practical applicability of the method to more complex survey scenarios. Theoretical Claims: I haven't verified proofs in the paper. Experimental Designs Or Analyses: The experimental designs for the EEDI and OpinionQA datasets are methodologically sound within the limited scope of multiple-choice surveys. It also relies on some assumptions such as the independence of survey questions and the applicability of CLT approximations; it would be beneficial if the authors could investigate the effect when those assumptions do not hold. Especially the independence of survey questions. Supplementary Material: No Relation To Broader Scientific Literature: The paper focuses on the problem of applying LLM for human simulation. However, the scope is limited, since the authors only focus on determining the number of samples needed. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: My main concerns are the limited scope of the paper with the main emphasis on determining the number of samples needed, neglecting the bias on underlying LLMs. Other Comments Or Suggestions: The paper writing is difficult to follow in general. It would be better if the authors could clarify the scope of the paper earlier (in the introduction, for example). Questions For Authors: Since we’re having access to human responses, can we use it to calibrate LLM responses to reduce bias? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review. Below are our responses. We hope they address your questions and concerns. **Claims And Evidence** >*... the discussions focus on multiple-choice survey responses... which raises questions about its generality.* Our motivating example in Section 2 focuses on binary responses for the purpose of illustration and ease of understanding. However, our general method and theory in Section 3 apply to a much broader range of survey questions. Specifically, they are applicable as long as the raw responses can be converted into a quantifiable metric. This includes the possibilities of text responses as well as a survey consisting of multiple related questions. Moreover, most existing works on assessing the fidelity of LLM simulations (e.g., those cited in "Related Works") focus on multiple-choice questions. **Methods And Evaluation Criteria** >*The proposed method ... is also not tailored for LLMs* The reviewer is correct that our method treats the LLM as a black box. An advantage is that the method works with LLMs with arbitrary fidelities and architectures. This ensures its wide applicability, as many LLMs are not open-source. In addition, our framework also applies to any black-box generative models beyond LLMs, such as image or video generators. >*LLMs can introduce significant bias...* We agree with the reviewer that LLMs can introduce significant bias against certain populations. However, as long as the target population is specified and we have human responses from that population, our method will construct a reliable confidence set for that population regardless of any bias the LLM may have. The intuition is that by comparing synthetic responses with human responses, we choose a suitable effective sample size $\hat{k}$ so that the bias of LLM is dominated by the random error associated with $\hat{k}$ synthetic samples. 
Therefore, the confidence set does take care of the LLM's bias for the given population. In addition, recent works have investigated mitigating LLMs' bias towards certain populations, through fine-tuning, prompting, etc. One may use these techniques to simulate more truthful responses from the LLM, and apply our method over these simulations to obtain tighter and more informative confidence sets. >*A considerable portion of the theoretical and experimental work is devoted to the binary response setting...* Please refer to our response to your comment in "Claims And Evidence". **Experimental Designs Or Analyses** >*It also relies on some assumptions such as the independence of survey questions and the applicability of CLT approximations...* In our experiments, there are at least 100 human responses per question. As a standard practice, we expect the CLT-based confidence intervals $\mathcal{S}_j$ to be accurate. The independence of survey questions is used for theoretical analysis only. It is similar to the assumption of i.i.d. data points in the statistical machine learning literature. We totally agree that it may not hold in practice, including the real-world datasets in our paper. However, our experiments show that even when this assumption may be violated, our method still achieves good coverage empirically. **Weaknesses** >*My main concerns are the limited scope of the paper... neglecting the bias on underlying LLMs.* Please refer to our response to your second comment in "Methods And Evaluation Criteria". **Other Comments Or Suggestions** >*The paper writing is difficult to follow in general. It would be better if the authors could clarify the scope of the paper earlier (in the introduction, for example).* Thanks for the suggestion. 
To clarify the main goal of our paper, we will change the first sentence of the abstract to "We investigate the use of large language models (LLMs) to simulate human responses to survey questions, and perform uncertainty quantification to gain reliable insights." To clarify the survey simulation procedure that our paper is concerned with, we will add the following sentence to the first paragraph of Section 1: "The typical simulation procedure consists in prompting an LLM with a real or fictional persona as well as a survey question, and collecting the LLM's responses."

**Questions**

>*Since we have access to human responses, can we use them to calibrate LLM responses to reduce bias?*

We believe that this is an important consideration in LLM simulation. In Section 5, we have mentioned the possibility of combining our approach with debiasing methods. As an example, one may first use part of the human data for LLM alignment, and then apply our method with the rest of the human data. However, this is beyond the scope of our current work, as our goal is to convert any black-box LLM simulations to reliable confidence intervals, regardless of the bias of the LLM. Thus, we will leave it as future work to use human data to debias LLM simulations.

---

Rebuttal Comment 1.1:

Comment: We thank the authors for their detailed responses. However, my main concerns regarding the scope of the paper still remain. The proposed method is not specifically designed for LLMs, so the discussion focused on LLMs for the proposed method is not really necessary. The paper also neglects the inherent bias from LLM simulations, which I believe is a crucial problem for LLM simulations to be adopted. Therefore, I would like to keep my initial score.

---

Reply to Comment 1.1.1:

Comment: We thank the reviewer for the prompt reply. Below are our responses. We sincerely hope they address your concerns.
> *The proposed method is not specifically designed for LLMs, so the discussion focused on LLMs for the proposed method is not really necessary.* We agree that our method applies to any black-box generative models beyond LLMs. This wide applicability of our approach is an advantage, given that many generative models are not open-sourced. We focus on LLMs due to their vital role in simulating human responses in many fields. Such practical importance makes them a natural motivating example and testbed for our method. > *The paper also neglects the inherent bias from LLM simulations, which I believe is a crucial problem for LLM simulations to be adopted.* As we pointed out in our previous rebuttal, our method is designed to handle *any bias* in the LLM simulations. Specifically, it can convert any imperfect and biased LLM simulations into reliable confidence intervals for population parameters of the human responses. The width of the confidence interval is adaptively chosen to dominate the bias, so that the coverage guarantees always hold. Our main results, Theorems 2.1 and 3.1, do not make any assumption on the bias and thus allow for arbitrary bias. Therefore, we respectfully disagree with the reviewer's claim that we neglect the LLM's bias.
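The width-adapts-to-bias mechanism argued in this thread can be sketched numerically. Everything below is an illustrative assumption (a Hoeffding-style interval, a fixed bias level, a small grid of candidate $k$ values) rather than the paper's actual procedure; the point is only that calibrating against human responses stops the interval from narrowing past the LLM's bias:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, bias, n_questions = 0.1, 0.05, 200

# True per-question human response rates, and a systematically biased "LLM".
mu = rng.uniform(0.2, 0.8, n_questions)
mu_llm = np.clip(mu + bias, 0.0, 1.0)

def hoeffding_ci(p_llm, k, rng):
    """CI centered at the mean of k synthetic binary responses per question.
    Hoeffding: P(|mean - p| >= t) <= 2 exp(-2 k t^2), so the half-width
    sqrt(log(2/alpha) / (2k)) shrinks as k grows."""
    center = rng.binomial(k, p_llm) / k
    half = np.sqrt(np.log(2 / alpha) / (2 * k))
    return center - half, center + half

# Calibration: keep growing k (narrowing the interval) only while the
# intervals still cover the real human means at rate >= 1 - alpha.
k_hat = 5
for k in [5, 10, 20, 50, 100, 200, 500]:
    lo, hi = hoeffding_ci(mu_llm, k, rng)
    if np.mean((lo <= mu) & (mu <= hi)) < 1 - alpha:
        break
    k_hat = k
print(k_hat)
```

With this toy bias of 0.05, the loop settles on an intermediate $k$: a larger $k$ would shrink the interval below the bias and break coverage, which is exactly why the chosen width ends up dominating the bias.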
Summary: **Survey Simulation Problem**: This paper proposes using LLMs to estimate the statistics for each survey question without asking humans to fill out the survey. For example, in an educational test scenario, if we aim to estimate the probability $\mu$ of students correctly answering a test question, we can employ an LLM to sample multiple answers, and then use these answers to construct a confidence interval to estimate this probability.

**Methodology**:
1. Given a range of user profiles (e.g., gender, age), sample $K$ user profiles.
2. For each survey question, combine it with $K=50$ user profiles, and prompt the LLM to get $K$ corresponding answers.
3. Calculate the confidence interval (the statistic we want) based on these $K$ answers (e.g., apply Hoeffding's concentration inequality).
4. Utilize survey questions in the training dataset, and their corresponding real human answers, to identify the optimal hyperparameter $K$.
5. For a new survey question, follow steps (1)-(3) to compute the respective confidence interval.

## update after rebuttal

In general, I think the problem is important and practically useful, and the solution of applying conformal-prediction-style calibration to obtain better-calibrated statistics seems quite fitting in this case. Given these points, I am leaning towards acceptance.

Claims And Evidence: See the following "Methods And Evaluation Criteria" and "Theoretical Claims" for details. In general, the claims in this paper are supported by convincing evidence.

Methods And Evaluation Criteria: **Method** This paper's goal is to estimate a statistic for a given survey question—one that would require data from the real population—by instead simulating responses from an LLM. The challenge is that if we sample extensively from an LLM, the resulting estimate reflects the LLM's internal distribution rather than the true population's.
This paper handles this misalignment by using actual responses to similar questions for calibration. Then it compares each LLM-based confidence set against these real outcomes. If the LLM-based confidence sets repeatedly fail to include the real responses, which indicates a sizable mismatch between the synthetic and real distributions, then the algorithm will stop increasing the synthetic sample size (and hence interval narrowing) early, maintaining adequate coverage. In this way, it ensures, on average, the final confidence interval achieves near $1-\alpha$ coverage of the real statistic. A limitation is that the coverage guarantees hold on average over random draws of survey questions and data, not necessarily for any particular question. In many practical applications, one does care about accuracy for each specific question. However, in the calibration literature, per-question calibration (i.e., sample-wise) is typically infeasible. Overall, the proposed calibration procedure is still effective with a theoretical guarantee. **Evaluation Criteria** - The 2 datasets are representative: 1) use EEDI Dataset to estimate the probability a student answers a math question correctly and produce a confidence interval around that probability; 2) use OpinionQA to estimate the mean sentiment score for a given question and produce a confidence interval. - The evaluation metrics test miscoverage rate is reasonable. Theoretical Claims: I briefly reviewed Theorems 2.6 and 3.3; they seem correct, though I might have overlooked some details. Experimental Designs Or Analyses: Overall, the experimental design (train/test approach, empirical miscoverage checks) is reasonable and aligns with the proposed calibration method. The issue is that most of the experimental results are put in the Appendix and some details are not shown, which quite disrupts the flow of the reasoning and understanding. Supplementary Material: I have roughly reviewed "Appendix B. Proofs" and "C.3. 
Selection of Survey Questions", and checked the results in the "C.4. More Experiment Results". Relation To Broader Scientific Literature: The authors’ work falls into the category of using LLMs as “synthetic subjects” in survey or experimental contexts. While several of those prior studies demonstrate that LLMs can be cost-effective in reproducing or approximating human‐subject responses, most do not provide formal guarantees on how well those simulations match real distributions. This paper’s core contribution lies in formalizing and ensuring coverage—i.e., constructing confidence sets around population estimates that remain valid despite misalignment between real and simulated data. The authors extend ideas from uncertainty quantification (notably in distribution‐free or conformal settings) by adapting them to an LLM scenario, where the synthetic distribution is not guaranteed to reflect the true data‐generating process. Their approach incorporates “calibration data” (a set of questions with real responses) to select the simulation sample size, addressing a gap in earlier LLM-based studies that did not systematically gauge or correct for misalignment. In that sense, their approach parallels conformal inference’s reliance on real data for coverage calibration—except they repurpose it for a new “simulation from an imperfect distribution” challenge. Essential References Not Discussed: To the best of my knowledge, it appears to include the necessary references. Other Strengths And Weaknesses: **Strengths** The paper addresses a relatively new and interesting task—survey response simulation—where large language models are used to estimate the real population's responses for a given questionnaire. More importantly, it provides a framework for estimating the survey’s statistical results using a confidence interval, which gives a quantifiable range with a $1-\alpha$ coverage guarantee. **Weaknesses** 1. 
**Writing and Structure**: The paper’s presentation could be more accessible. Specifically, the abstract is difficult to parse initially; including a brief explanation or definition of the survey simulation task at the beginning would help readers understand the purpose immediately. 2. **Experimental Analyses**: While the test miscoverage rate is quite important for practical validation, it is relegated to the appendix. Similarly, the discussion about selecting the simulation sample size \(k\) (which indicates misalignment between the LLM and real data) might benefit from its own dedicated section or case study, rather than being nested within the "Experiment Results" section. Other Comments Or Suggestions: See the section "Other Strengths And Weaknesses" above. Questions For Authors: 1. When creating \(K\) user profiles based on a given range of features (e.g., gender, age), how exactly are these profiles sampled? From what I understand, it seems they are randomly drawn from the dataset where such profile information already exists. But if we have a survey question without corresponding real-data profiles, do you then sample feature values uniformly from the space of possible profiles? Or use some other method? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review and the encouraging score. Below are our responses. **Experimental Designs Or Analyses** > *... most of the experimental results are put in the Appendix and some details are not shown...* Thanks for raising this point. We will add more experiment details and results in the main text, especially those verifying our guarantees on the coverage probability. In particular, we will examine the miscoverage proxy $\tilde{L}(\hat{k})$ in Equation (18) of the paper. As we explain in the paper, $\mathbb{E}[\tilde{L}(\hat{k})]\le\alpha$ implies a controlled miscoverage probability $\mathbb{P}(\theta(\psi)\not\in\mathcal{S}^{\text{syn}}(\hat{k}))\le\alpha$. Thus, to check the validity of our method, we will perform a statistical hypothesis test on $H_0:\mathbb{E}[\tilde{L}(\hat{k})]\le\alpha$ against the alternative $H_1:\mathbb{E}[\tilde{L}(\hat{k})]>\alpha$. The following tables report the $p$-values computed from the one-sided $z$-test for different LLMs and $\alpha$ over the 100 random splits. We see that all $p$-values are reasonably large, supporting the null hypothesis $\mathbb{E}[\tilde{L}(\hat{k})]\le\alpha$ at, say, the 0.05 significance level. This shows that the miscoverage probability is effectively controlled by our method. 
OpinionQA:

|min|25% quantile|median|75% quantile|max|
|:---:|:---:|:---:|:---:|:---:|
|0.03|0.25|0.51|0.70|0.99|

|$\alpha$|0.05|0.10|0.15|0.20|0.25|0.30|0.35|0.40|0.45|0.50|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|GPT-3.5-turbo|0.06|0.13|0.20|0.22|0.70|0.35|0.34|0.26|0.78|0.84|
|GPT-4o-mini|0.16|0.16|0.07|0.24|0.47|0.44|0.20|0.12|0.57|0.38|
|GPT-4o|0.37|0.32|0.49|0.22|0.58|0.44|0.23|0.34|0.97|0.92|
|Claude-3.5-Haiku|0.39|0.59|0.59|0.46|0.70|0.60|0.74|0.89|1.00|0.98|
|Llama-3-8B|0.76|0.44|0.26|0.61|0.57|0.42|0.70|0.61|0.74|0.52|
|Llama-3.3-70B|0.55|0.65|0.77|0.59|0.71|0.72|0.61|0.35|0.70|0.53|
|Mistral 7B|0.62|0.05|0.05|0.03|0.65|0.60|0.54|0.21|0.34|0.34|
|DeepSeek-V3|0.07|0.10|0.08|0.15|0.51|0.23|0.17|0.26|0.35|0.30|
|Random|0.78|0.62|0.75|0.66|0.96|0.99|0.93|0.80|0.93|0.98|

Only one $p$-value (Mistral with $\alpha=0.2$) is below 0.05. This is not surprising due to the multiple comparison problem. Indeed, even when the null hypothesis is true, one would expect to see one out of 20 $p$-values below 0.05.

EEDI:

|min|25% quantile|median|75% quantile|max|
|:---:|:---:|:---:|:---:|:---:|
|0.18|1.00|1.00|1.00|1.00|

|$\alpha$|0.05|0.10|0.15|0.20|0.25|0.30|0.35|0.40|0.45|0.50|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|GPT-3.5-turbo|1.00|1.00|1.00|1.00|1.00|1.00|1.00|1.00|1.00|1.00|
|GPT-4o-mini|1.00|1.00|1.00|1.00|1.00|1.00|1.00|1.00|1.00|1.00|
|GPT-4o|0.18|0.58|1.00|0.97|1.00|0.98|0.93|0.83|0.98|0.99|
|Claude-3.5-Haiku|0.87|1.00|1.00|1.00|1.00|1.00|1.00|1.00|1.00|1.00|
|Llama-3.3-70B|0.96|1.00|1.00|1.00|1.00|1.00|1.00|1.00|1.00|1.00|
|Mistral 7B|1.00|1.00|1.00|1.00|1.00|1.00|1.00|1.00|1.00|1.00|
|DeepSeek-V3|0.94|0.98|0.98|1.00|1.00|1.00|1.00|1.00|1.00|1.00|
|Random|0.98|1.00|1.00|1.00|1.00|0.99|1.00|1.00|0.99|1.00|

**Weaknesses**

> *1.
...the abstract is difficult to parse initially; including a brief explanation or definition of the survey simulation task at the beginning would help ...*

Thanks for the suggestion. We will change the first sentence of the abstract to "We investigate the use of large language models (LLMs) to simulate human responses to survey questions, and perform uncertainty quantification to gain reliable insights."

> *2. While the test miscoverage rate is quite important for practical validation, it is relegated to the appendix. Similarly, the discussion about selecting the simulation sample size ... might benefit from its own dedicated section ...*

Thanks for the suggestions. We will include results on the miscoverage rate in the main text. Please refer to our response to your comment in "Experimental Designs Or Analyses" for more details. We will also move the results on the chosen simulation sample size $\hat{k}$ to a separate part.

**Questions**

> *When creating $K$ user profiles based on a given range of features (e.g., gender, age), how exactly are these profiles sampled? ...*

The reviewer is correct that in our experiments, the profiles are randomly drawn from the dataset, where we have access to the profile information. When real-data profiles are not available, practitioners may rely on their knowledge of the target population to create fictional profiles. We point out that our method does not require the profile distribution $\mathcal{P}^{\mathsf{syn}}$ used for simulation to be the same as the target population $\mathcal{P}$, and automatically addresses any distribution shift between $\mathcal{P}^{\mathsf{syn}}$ and $\mathcal{P}$. Our Theorems 2.4 and 3.3 on the coverage guarantees do not assume any relation between them. Of course, we expect the confidence sets to be tighter when $\mathcal{P}^{\mathsf{syn}}$ and $\mathcal{P}$ are close.

---

Rebuttal Comment 1.1:

Comment: Thank you for the reply!
In general, I think the problem is important and practically useful, and the solution of applying conformal-prediction-style calibration to obtain better-calibrated statistics seems quite fitting in this case. Given these points, I am leaning towards acceptance. Besides, I am curious what the possible solution/adjustment would be if the test dataset has an out-of-distribution shift from the training dataset, in which case the conformal prediction framework will not work elegantly.

---

Reply to Comment 1.1.1:

Comment: We thank the reviewer for the positive feedback! Below is our response.

> *Besides, I am curious what the possible solution/adjustment would be if the test dataset has an out-of-distribution shift from the training dataset, in which case the conformal prediction framework will not work elegantly.*

The distribution shift problem is indeed an important consideration. In practice, there can be changes in the distributions of the survey questions and the human responses to those questions. The conformal prediction literature has developed a number of approaches to deal with distribution shift, such as reweighting the data [1] and adjusting the miscoverage level [2]. We believe it is possible to adapt those approaches to our framework, and leave it as future work.

**References**

[1] Barber, R. F., Candes, E. J., Ramdas, A., & Tibshirani, R. J. (2023). Conformal prediction beyond exchangeability. *The Annals of Statistics, 51*(2), 816-845.

[2] Gibbs, I., & Candes, E. (2021). Adaptive conformal inference under distribution shift. *Advances in Neural Information Processing Systems, 34*, 1660-1672.
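The "adjusting the miscoverage level" approach of reference [2] admits a short sketch. The score model and the injected shift below are toy assumptions unrelated to the paper; the update $\alpha_{t+1} = \alpha_t + \gamma(\alpha - \mathrm{err}_t)$ widens prediction sets after misses and tightens them after covers, keeping the long-run error near $\alpha$ even across a distribution shift:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, gamma = 0.1, 0.02      # target miscoverage level and step size
alpha_t = alpha               # online, time-varying level
cal = np.abs(rng.normal(size=200))        # toy calibration scores

errs = []
for t in range(3000):
    # Prediction set: scores up to the (1 - alpha_t) calibration quantile.
    q = np.quantile(cal, float(np.clip(1 - alpha_t, 0.0, 1.0)))
    s = abs(rng.normal()) * (1.0 if t < 1500 else 1.5)  # shift at t = 1500
    err = int(s > q)                      # 1 = miscovered
    alpha_t += gamma * (alpha - err)      # Gibbs & Candes style update
    errs.append(err)

print(round(float(np.mean(errs)), 3))     # long-run error rate stays near alpha
```

After the shift at $t = 1500$, the update drives $\alpha_t$ down (and the quantile up) until the error rate returns to $\alpha$; a telescoping argument shows the running error rate deviates from $\alpha$ by at most $|\alpha_1 - \alpha_{T+1}|/(\gamma T)$.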
Rethink GraphODE Generalization within Coupled Dynamical System
Accept (spotlight poster)
Summary: The paper investigates the generalization challenges of GraphODE models when used for coupled dynamical systems. It shows that mixing static attributes with dynamic states during initialization, along with an over-reliance on context-specific coupling patterns, can hurt the model’s performance in new settings. To overcome these issues, the paper introduces the GREAT framework, which features a Dynamic-Static Equilibrium Decoupler to clearly separate static and dynamic components, and a Causal Mediation for Coupled Dynamics module that uses variational inference to reduce misleading correlations. Experiments on systems like SPRING, CHARGED, and PENDULUM demonstrate that GREAT outperforms existing methods in both familiar and new environments. ## Update after rebuttal After reviewing all the Reviewer-Authors discussions, I would like to express my strong support for this paper. Most reviewers have recognized the novelty and contributions of this work. The authors have presented an innovative framework that addresses fundamental challenges in GraphODE generalization for coupled dynamical systems. I initially raised concerns about parameter sensitivity analysis and baseline method discussions. The authors' rebuttal thoroughly addressed these points with clear technical explanations and comprehensive empirical evidence. Their responses not only resolved my concerns but further strengthened my appreciation of their technical contributions. Given the quality of both the original submission and their thorough rebuttal, I am raising my score. Claims And Evidence: All claims are well supported by evidence. Methods And Evaluation Criteria: The paper uses RMSE and MAPE as key evaluation metrics. The proposed model and the evaluation criteria make sense for the problem. Theoretical Claims: Theoretical claims are correct and easy to understand. 
Experimental Designs Or Analyses: Overall, the experimental setup and analyses clearly demonstrate the framework's enhanced performance and generalization capabilities. Supplementary Material: I have read supplementary materials about additional experiments, settings and more necessary theoretical background along with the complete proof process. Relation To Broader Scientific Literature: In terms of machine learning, the paper draws on work related to neural differential equations and graph neural networks. While these techniques have been applied successfully in other domains, their application to epidemic forecasting is a significant innovation. By leveraging these advanced techniques, the paper contributes to the growing body of research on using deep learning to enhance predictive modeling in epidemiology. This approach also connects to broader trends in the literature about the fusion of classical and modern machine learning techniques to address real-world problems. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1) Method is novel. The introduction of the DyStaED provides a novel mechanism to separate these features effectively, enabling better modeling of system evolution. The CMCD module employs variational inference for a nuanced mediation analysis of state evolution, offering deeper insights into how coupling dynamics influence outcomes. 2) The paper innovatively combines causal inference with dynamic modeling to address the entanglement of static and dynamic components. It presents a systematic analysis using causal graphs, particularly in section 3.3, which details the state evolution and clarifies the influence of coupling dynamics. Theorem 3.1 is a standout, offering a rigorous theoretical foundation that significantly bolsters the framework’s claims. 3) The paper is well-organized, with a clear progression from problem definition to theoretical analysis and experimental validation. 
Weaknesses: 1) Although the paper presents a solid experimental design, it would benefit from a more detailed sensitivity analysis regarding the choice of parameters, such as those used in variational inference or coupling patterns. This would help understand how sensitive the model is to variations in these parameters and further enhance its robustness. 2) The paper provides limited discussion on baseline methods. A more detailed introduction of relevant baselines would offer clearer context for the improvements claimed. Other Comments Or Suggestions: Please see weaknesses. Questions For Authors: 1. How is variational inference implemented within the CMCD module? Is it integrated as part of the joint optimization during training, or is it applied as a separate post-processing step after feature extraction? 2. Why does GREAT exhibit the lowest RMSE in both ID and OOD settings? Is there a specific characteristic of this method that makes it more robust in longer prediction lengths? Code Of Conduct: Affirmed. Overall Recommendation: 5
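The reviews above evaluate the paper via RMSE and MAPE. For reference, the two metrics in their generic textbook form (not the paper's actual evaluation code; the toy arrays are made up):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error: penalizes large errors quadratically."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred, eps=1e-8):
    """Mean absolute percentage error; eps guards against division by zero."""
    return float(np.mean(np.abs((y_true - y_pred) / (np.abs(y_true) + eps))) * 100)

y_true = np.array([1.0, 2.0, 4.0])
y_pred = np.array([1.0, 2.5, 3.0])
print(rmse(y_true, y_pred))  # sqrt((0 + 0.25 + 1) / 3) ≈ 0.6455
print(mape(y_true, y_pred))  # (0% + 25% + 25%) / 3 ≈ 16.67
```

RMSE keeps the units of the target, while MAPE gives a scale-free percentage, which is why the two are commonly reported together.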
Rebuttal 1: Rebuttal:

# Response to Reviewer CHTQ

Thank you for your positive review and insightful questions. We are grateful for your recognition of our work's novelty and contributions. We address your comments below:

> `Weakness 1`: Need more parameter sensitivity analysis.

We appreciate this valuable feedback. For example, the orthogonality loss weight $\lambda_o$ is critical for balancing disentanglement and prediction accuracy. Our preliminary analysis shows:

| Metric | $\lambda_o$=0.1 | $\lambda_o$=0.3 | $\lambda_o$=0.5 | $\lambda_o$=1.0 | $\lambda_o$=2.0 |
|--------|--------------|-------------|------------|------------|-------------|
| SPRING ID (RMSE) | 3.945 | 3.803 | **3.687** | 3.714 | 3.892 |
| SPRING OOD (RMSE) | 3.986 | 3.754 | 3.651 | **3.619** | 3.785 |

As shown above, $\lambda_o$=0.5 and $\lambda_o$=1.0 provide the optimal balance for ID and OOD settings respectively. Too small a value ($\lambda_o$<0.3) results in insufficient separation of static and dynamic components, while too large a value ($\lambda_o$>1.0) overly constrains the model's expressiveness. The model maintains strong performance across a reasonable range of values.

> `Weakness 2`: Limited discussion of baseline methods.

Thanks for the friendly reminder. We will provide a more comprehensive discussion of baseline methods in the revised version.

> `Question 1`: Implementation details of variational inference?

Our CMCD module implements variational inference by treating coupling factors $\eta_i(t)$ as confounding variables, parameterized via $q(\eta_i(t)|h_i(t))$ with $\zeta_i(t) = \text{Softmax}(W_\eta h_i(t))$. We employ Gumbel-Softmax sampling for differentiability: $\eta_{i,k}(t) = \frac{\exp((\zeta_{i,k}(t)+g_k)/\tau)}{\sum_j \exp((\zeta_{i,j}(t)+g_j)/\tau)}$. The KL divergence term $\text{KL}(q(\eta_i(t)|h_i(t))\|p(\eta_i(t)))$ regularizes against a context-agnostic prior, reducing spurious correlations.
This approximates the interventional likelihood $\log p(y_i(T)|\text{do}(h_i(t_0)),\mathcal{G})$, enabling our model to learn universal physical dynamics rather than domain-specific patterns. All components are jointly optimized, allowing the coupling inference to leverage both static and dynamic information for robust generalization.

> `Question 2`: Reason for superior performance in longer predictions?

GREAT's superior long-term prediction performance stems from several key mechanisms working in concert: First, our effective disentanglement prevents error accumulation by maintaining consistent static components while allowing dynamic components to evolve naturally. Second, the CMCD module enables coupling-aware evolution that explicitly models inter-node influences, capturing complex interdependencies crucial for extended horizons. Third, the DHPA component captures self-exciting temporal patterns at multiple scales, effectively modeling both short-term fluctuations and long-term trends. Finally, empirical stability analysis (Figure 6) shows that our error growth rate is significantly lower. These advantages make GREAT particularly suitable for applications requiring extended forecasting horizons in complex physical systems.

---

Rebuttal Comment 1.1:

Comment: The authors have solved my concerns. Thus, I vote for the acceptance of this paper.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer CHTQ,

We sincerely appreciate your thoughtful feedback and your willingness to consider the merits of our work.

Best regards,
Authors
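The Gumbel-Softmax step quoted in the rebuttal above, $\eta_{i,k}(t) = \exp((\zeta_{i,k}(t)+g_k)/\tau) / \sum_j \exp((\zeta_{i,j}(t)+g_j)/\tau)$, can be sketched in a few lines. The input values, shapes, and function name below are made up for illustration; only the sampling formula follows the rebuttal's equation:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(zeta, tau, rng):
    """eta_k = exp((zeta_k + g_k)/tau) / sum_j exp((zeta_j + g_j)/tau),
    with g_k ~ Gumbel(0, 1) noise making the categorical sample
    differentiable with respect to zeta."""
    g = -np.log(-np.log(rng.uniform(size=zeta.shape)))  # Gumbel(0, 1) noise
    z = (zeta + g) / tau
    z = z - z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

zeta = np.array([2.0, 0.5, -1.0])   # hypothetical zeta_i(t) for 3 coupling factors
soft = gumbel_softmax(zeta, tau=1.0, rng=rng)   # smooth point on the simplex
hard = gumbel_softmax(zeta, tau=0.05, rng=rng)  # near one-hot as tau -> 0
print(np.isclose(soft.sum(), 1.0))  # True: a valid probability vector
```

Lowering the temperature $\tau$ sharpens the sample toward a one-hot coupling assignment, while a larger $\tau$ keeps gradients informative during training.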
Summary: This paper presents the GREAT framework to improve GraphODE models' generalization in coupled dynamical systems. It tackles two key issues: the entanglement of static and dynamic information during initialization and the reliance on environment-specific coupling patterns. The framework introduces two modules: the DyStaED for separating features via orthogonal projections, and CMCD, which uses variational inference to disentangle latent coupling factors and reduce spurious correlations. Validated through extensive experiments, the approach shows significant improvements in both in-distribution and out-of-distribution scenarios. Claims And Evidence: Based on the description, the claims in the paper appear to be well-supported by evidence. The paper does an excellent job in aligning its problem description with the motivation provided by the SCM causal analysis. It accurately and intuitively identifies the issues in existing methods. Methods And Evaluation Criteria: The paper uses RMSE and MAPE as key evaluation metrics. RMSE captures overall accuracy and penalizes larger errors, while MAPE provides a clear percentage-based evaluation, making it ideal for comparing relative errors. The use of both ID and OOD scenarios is also robust. ID testing ensures good performance in familiar conditions, while OOD evaluation tests the model’s ability to generalize in new, unseen environments. Theoretical Claims: The paper provides a strong, systematic causal analysis using causal graphs to identify and explain the key issues in existing methods. The causal analysis is thorough and offers an insightful understanding of the challenges faced by GraphODE models, leading to constructive Generalizable GraphODE Design Principle. Furthermore, Theorem 3.1 is a key contribution, providing a clear and convincing theoretical result that underpins the effectiveness of the proposed causal mediation approach. 
The theorem articulates a critical relationship between state evolution and coupling dynamics, offering a rigorous justification for the model's ability to generalize across both ID and OOD scenarios. Experimental Designs Or Analyses: The experimental design is robust and comprehensive, incorporating both ID and OOD evaluations across simulated systems like SPRING, CHARGED, and PENDULUM. The comprehensive ablation studies (Figure 5) effectively isolate and highlight the contributions of both the DyStaED and the CMCD modules. Supplementary Material: I reviewed some of the supplementary materials, mainly including detailed data tables, supplementary illustrations, and extended descriptions of the experimental methods. Relation To Broader Scientific Literature: The paper leverages advanced AI techniques to deepen our understanding of physical systems. It builds on earlier work that applied machine learning to nonlinear dynamics, offering fresh interdisciplinary insights that expand the boundaries of AI in physics. Essential References Not Discussed: All related references have been included. Other Strengths And Weaknesses: ### Paper Strength - The GREAT framework rethinks traditional approaches by decoupling static and dynamic attributes using causal inference, which allows the model to effectively mitigate confounding factors and achieve superior generalization. - The paper establishes a robust theoretical framework by integrating causal inference into the coupling patterns within GraphODE models, while its extensive experiments—spanning multiple simulated datasets and testing both ID and OOD conditions—thoroughly validate the proposed approach. - The paper features a clearly articulated problem description and a compelling motivation grounded in Structural Causal Models (SCM), which together provide a great foundation for the proposed method. 
- The paper’s diagrams and figures are exceptionally well-designed, combining aesthetic appeal with clarity to effectively convey complex concepts and experimental results. ### Paper Weakness - Although the paper acknowledges some inherent constraints, it does not thoroughly explore the potential limitations of its approach. Other Comments Or Suggestions: The author should provide clearer details on the baseline in the appendix—specifically, parameter settings, configurations, or any implementation differences. Questions For Authors: 1. In scenarios where the distinction between static and dynamic features is ambiguous, how robust is the decoupling mechanism, and what measures are in place to prevent misclassification? 2. Are there experiments to verify the stability of the GREAT model under different training rounds and initial conditions? If the model exhibits instability or significant fluctuations in certain situations, can it be interpreted as limitations in some assumptions or parameter choices of the model? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: # Response to Reviewer XMqH Thank you for your thorough review and encouraging feedback. We are grateful for your positive assessment of the novelty and importance of our work. We address your questions below: > `Weakness`: Limited exploration of approach limitations. We fully agree that a more thorough discussion of limitations would significantly strengthen the paper. In the revised version, we will add a dedicated limitations section that provides a comprehensive analysis. One potential limitation is training stability: similar to many deep learning approaches utilizing variational inference, our method requires careful tuning to balance multiple loss components (prediction loss, orthogonality loss, and KL divergence term). We will provide a detailed discussion of these potential training challenges along with practical strategies for mitigating them through hyperparameter selection and optimization techniques. > `Comment`: Need clearer baseline implementation details. We sincerely thank you for this suggestion. We will enhance the baseline description and implementation details in the revised version. > `Question 1`: Robustness when static/dynamic distinction is ambiguous? This is an excellent question. Our decoupling mechanism employs several strategies to maintain robustness: 1) Orthogonality regularization ($L_o$): The loss function in Eq.8 enforces strict orthogonality between the static and dynamic subspaces, helping to separate features even when they appear entangled. Our experiments show that without this regularization, performance drops by 12.3% on average in OOD settings. 2) Learnable subspaces: Rather than using predetermined feature splits, our approach learns the separation through end-to-end training. The embedding layers $S_{static}$ and $S_{dynamic}$ adaptively determine the optimal projection for each domain, allowing the model to discover natural separations even in ambiguous cases. 
3) We employ a synergy decoder (Eq.12) that allows information to flow from both static and dynamic components. This ensures that even if some features are misclassified during training, critical information is not lost during reconstruction. 4) We introduce a DHPA that dynamically adjusts the importance of different feature patterns at multiple scales. This helps the model better handle ambiguous cases by learning to focus on the most relevant patterns for dynamic separation at each level of abstraction. Our experiments show this further improves robustness in Figure 5. > `Question 2`: Stability across different training configurations? We have conducted extensive experiments to verify GREAT's stability across different training configurations. The final results reported in all tables are averaged over 5 random seeds. Here, we also tested various learning rates and training epochs on the SPRING dataset. The results demonstrate that GREAT maintains stable performance across a range of configurations: | Metric | LR=1e-3 | LR=5e-4 | LR=1e-4 | LR=5e-5 | LR=1e-5 | |--------|---------|---------|---------|---------|---------| | SPRING ID (RMSE) | 3.854 | 3.792 | 3.736 | 3.705 | **3.687** | | SPRING OOD (RMSE) | 3.814 | 3.769 | 3.726 | 3.683 | **3.619** | | Epochs | 100 | 200 | 300 | 400 | 500 | |--------|-----|-----|-----|-----|-----| | SPRING ID (RMSE) | 4.243 | 3.986 | 3.782 | **3.687** | 3.721 | | SPRING OOD (RMSE) | 4.279 | 3.947 | 3.751 | **3.619** | 3.655 | --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed response. After reading the rebuttal and the other reviewers' questions, most of my concerns have been addressed. I'm happy to increase my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer XMqH, Thank you again for recognizing the importance of our work. Best regards, Authors
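As an illustration of the orthogonality regularization discussed in the Question 1 response above, here is a minimal sketch. The exact form of Eq.8 is not reproduced in this thread, so the Frobenius-norm penalty on the cross-correlation between static and dynamic embeddings below, and the toy embeddings, are assumptions:

```python
import numpy as np

def orthogonality_loss(z_static, z_dynamic):
    # Penalize correlation between static and dynamic embeddings of shape
    # (batch, d). Assumed Frobenius-norm form; Eq.8 of the paper may differ.
    zs = z_static - z_static.mean(axis=0)
    zd = z_dynamic - z_dynamic.mean(axis=0)
    cross = zs.T @ zd / len(zs)  # (d, d) cross-correlation matrix
    return float(np.sum(cross ** 2))

# Identical embeddings are maximally correlated; a constant second embedding
# carries no correlated variation, so the penalty vanishes.
z = np.array([[1.0, 0.0], [-1.0, 0.0]])
entangled = orthogonality_loss(z, z)            # positive: subspaces overlap
decoupled = orthogonality_loss(z, np.ones((2, 2)))  # zero: no shared variation
```

Driving a term of this kind toward zero during training is what the reported $L_o$ trajectories measure.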
Summary: The paper proposes a new graphODE-based method to model coupled dynamical systems. The trajectory is decomposed into dynamic and static parts in the latent space. The experiments demonstrate the effectiveness of the proposed method. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: I'm not sure the OOD split according to the system parameters makes sense. Supplementary Material: No Relation To Broader Scientific Literature: GraphODE is a previous work. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: - The paper is well-written and well organized. - The idea of decomposing the trajectory into static and dynamic parts in the latent space is interesting and technically sound. - This paper also provides many useful insights, detailed discussions, and theoretical support. Weaknesses: - The introduction of background on coupled dynamics is limited. A detailed background on coupled dynamical systems would contribute to a better understanding. - The experiments only contain 3 small datasets. More experiments on other domains like molecules or proteins would make the paper more convincing. Other Comments Or Suggestions: - It would be better to visualize the orthogonality loss L_o, which would help the reader know whether the decomposition is successful. - The test sets all seem to be about small particle systems; those particles don't have strong interactions with each other. Can the method be tested on molecular systems, such as MD17 and QM9 in [1]? [1] Geometric Trajectory Diffusion Models, NeurIPS 2024 Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response to Reviewer qXoi Thank you for your constructive comments and positive assessment of our work. We address your concerns below and hope our responses will help update your score: > `Weakness 1`: Limited background on coupling dynamics. We sincerely appreciate this valuable suggestion. Due to space limitations, we provided only essential background information. We will expand the background section in the revised version to include more examples and fundamental principles of coupled dynamical systems. > `Weakness 2 & Comment 2`: Small datasets and need molecular system testing. We greatly appreciate these insightful suggestions. We respectfully would like to clarify several points regarding the systems in our experiments: 1) While appearing simple, our systems exhibit strong interaction patterns: the CHARGED system demonstrates complex electromagnetic interactions with high coupling strengths, and the PENDULUM system captures chaotic behaviors with highly entangled dynamics. 2) Following standard settings in previous works [1,2], these systems were deliberately chosen as they present the core challenges (static-dynamic entanglement and coupling bias) that our method addresses. These **challenges are fundamental** across coupled systems regardless of complexity level and have been widely recognized in the field. 3) Our OOD experiments already demonstrate generalization across varying interaction strengths by testing with different coupling parameters (e.g., interaction strength γ∈[0.05,0.15] for SPRING and γ∈[0.5,1.5] for CHARGED in OOD settings). 
4) We appreciate your suggestion about molecular systems and have conducted preliminary experiments on MD17: | Method | Aspirin | Benzene | Ethanol | Malonaldehyde | Naphthalene | Salicylic | Toluene | Uracil | |------------|---------|---------|---------|---------------|-------------|-----------|---------|--------| | LatentODE | 0.667 | 0.809 | 0.272 | 0.698 | 0.132 | 0.186 | 0.351 | 0.154 | | LG-ODE | 0.436 | 0.581 | 0.240 | 0.348 | **0.046** | 0.092 | 0.157 | 0.091 | | PGODE | 0.398 | 0.534 | 0.163 | 0.325 | 0.057 | 0.085 | 0.134 | 0.074 | | GREAT | **0.252** | **0.123** | **0.156** | **0.317** | 0.048 | **0.079** | **0.127** | **0.064** | We report RMSE values (Å) on the MD17 dataset with irregular temporal sampling, where only 60% of timesteps were randomly sampled, and follow the other standard settings in our manuscript. GREAT also performs well on molecular systems because these systems naturally exhibit the same challenges our method addresses: varying interaction patterns. Our approach effectively captures both local and global dynamics, achieving the best performance on 7 molecules. We acknowledge the value of testing on diverse systems and will include more comprehensive results, including QM9, in the revised version. > `Comment 1`: Need visualization of orthogonality loss. Thank you for this valuable suggestion. The orthogonality loss trajectory during training on SPRING is as follows: | Epoch: | 1 | 50 | 100 | 150 | 200 | 300 | 400 | |--------|---|----|----|----|----|----|----| | Loss (Lo): | 13.88 | 5.42 | 2.11 | 1.25 | 0.67 | 0.55 | 0.54 | As shown, the orthogonality loss steadily decreases during training, indicating effective disentanglement of static and dynamic components. We have also demonstrated its effectiveness through ablation studies in Sec. 4.4 and Figure 5. We will include more detailed visualizations in the revised version. > `Concern 1 (Experimental Designs Or Analyses)`: OOD setting. 
Our OOD split based on system parameters follows established practices in prior work [1] and directly tests the primary challenge in coupled dynamical systems: generalization across varying initialization conditions. This design has strong physical meaning, as parameter changes fundamentally alter system dynamics. [1]: PGODE: Towards high-quality system dynamics modeling. In ICML, 2024. [2]: Physics-informed regularization for domain-agnostic dynamical system modeling. In NeurIPS, 2024. --- Rebuttal Comment 1.1: Comment: Thanks for the response! The replies address my concern. I maintain my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer qXoi, Thank you again for recognizing the innovation and contribution of our work and for your willingness to support its acceptance! Best regards, Authors
Summary: This paper introduces a new framework named GREAT (Generalizable GraphODE with disEntanglement And regularizaTion) to enhance the generalization capabilities of Graph Ordinary Differential Equation (GraphODE) models for coupled dynamical systems. The key contributions and findings include identifying generalization challenges in GraphODE and devising corresponding modules to overcome them. The results on three datasets show good performance in both in-distribution (ID) and out-of-distribution (OOD) settings. Claims And Evidence: The two main challenges claimed in this paper are appropriately tackled in the method design. However, I am not fully convinced by the presented results since only average performance metrics are included. For example, - For challenge 1, does the proposed method correctly decouple static and dynamic parts? - For challenge 2, does the proposed method correctly capture the coupling factor? Methods And Evaluation Criteria: Most parts of the method design are reasonable. However, - The motivation behind Dynamic Hawkes Process Augmentation is unclear. Is it universal to all kinds of dynamical systems? Theoretical Claims: Checked. Experimental Designs Or Analyses: I think more ablation studies and in-depth analyses are needed to demonstrate that the proposed method successfully overcomes the two challenges. Supplementary Material: Checked. Relation To Broader Scientific Literature: Coupled dynamical systems are ubiquitous across domains. However, the conducted experiments only cover a small range of simplified systems. Essential References Not Discussed: Enough. Other Strengths And Weaknesses: Strengths - Rigorous derivation and method design. - Well-motivated in terms of two main challenges. - Good performance compared with several SOTA methods. Weaknesses (detailed in the comments above) - The covered systems seem too simple. For example, the dimensionality of these systems is no greater than 5. 
- Lack of motivation for Dynamic Hawkes Process Augmentation. - Performance evaluations cannot fully validate the claimed advantages. Other Comments Or Suggestions: Figure 3 is beautiful, but I think there are too many icons, and some of them are not necessary. Questions For Authors: Please see my comments above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: # Response to Reviewer LVHe Thank you for your thorough review and constructive feedback. We address your concerns below and hope these clarifications will help you re-evaluate and update the score: > `Weakness 1`: Systems too simple with low dimensionality (≤5). Thank you for this observation. We would like to offer several considerations that guided our experimental design: 1) These systems exhibit complex nonlinear dynamics and chaotic behaviors. We calculated the Maximum Lyapunov Exponent (MLE) for each system, where positive values indicate chaotic dynamics: |SPRING|CHARGED|PENDULUM| |-|-|-| |0.651|0.875|18.932| As shown, all systems exhibit positive MLEs, with PENDULUM showing strong chaos (MLE≫0). The SPRING system captures elastic interactions with long-range dependencies, the CHARGED system models electromagnetic forces with phase transitions, and the PENDULUM system demonstrates extreme sensitivity to initial conditions—each representing distinct coupling mechanisms common in real-world applications. 2) Our research addresses two challenges in GraphODE generalization: static-dynamic entanglement and coupling pattern bias. These challenges **are intrinsic to coupled systems regardless of dimensionality**. 3) Following established practice [1,2], these systems serve as standard benchmarks enabling fair comparison with prior work. 4) We conducted additional experiments on a higher-dimensional SPRING system (10 particles) and the MD17 molecular dynamics (please refer to response to Reviewer qXoi), where GREAT consistently outperformed baselines: |RMSE|LatentODE|LG-ODE|PGODE|GREAT| |-|-|-|-|-| |ID|7.65|6.27|4.92|4.29| |OOD|8.37|6.81|5.39|5.01| [1]: Neural Relational Inference for Interacting Systems. In ICML, 2018. [2]: Physics-informed regularization for domain-agnostic dynamical system modeling. In NeurIPS, 2024. > `Weakness 2`: Unclear motivation for Dynamic Hawkes Process Augmentation. 
DHPA addresses a fundamental challenge: coupled systems exhibit complex temporal dependencies that basic representations cannot capture. While our disentanglement separates static and dynamic features, the dynamic component needs enhancement to model multi-scale patterns. Following a "divide and conquer" strategy, DHPA strengthens the dynamic representation with weighted historical information: $\hat{o}_i(t) = o_i(t) + \delta\sum_{\tau=1}^s w_{\tau,t}\cdot o_i(t-\tau)$. This approach is **universal across systems** because its adaptive weights $w_{\tau,t}$ automatically adjust to each system's temporal characteristics, with its effectiveness confirmed by our ablation studies (Fig.5). > `Weakness 3`: Insufficient validation of claimed advantages. We provided ablation studies in Figure 5 and Section 4.4 demonstrating each component's contribution. Additional evidence validates our key contributions: Challenge 1: Static-Dynamic Disentanglement: The orthogonality loss decreases during training, showing successful separation: |Epoch|1|100|200|400| |-|-|-|-|-| |SPRING|13.88|2.11|0.67|0.54| |CHARGED|18.94|5.63|2.15|1.62| |PENDULUM|8.76|1.58|0.63|0.64| Feature separation testing: Mutual information (MI) analysis between static and dynamic representations with permutation tests: https://anonymous.4open.science/r/Response_to_Reviewer_LVHe-F29E/table-MI.md. MI was estimated using the Kraskov estimator across 1000 time points, with permutation tests (10,000 shuffles). Decreasing p-values confirm statistically significant separation. The steady MI decrease demonstrates our model eliminates information leakage between representation spaces. 
Temporal consistency: Cosine similarity between representations at t=0 and later points confirms static representations maintain higher consistency: https://anonymous.4open.science/r/Response_to_Reviewer_LVHe-F29E/table-time-consistency.md Challenge 2: Coupling Pattern Capture: Our experiments and ablation studies demonstrate GREAT's superior performance in OOD scenarios. Our theoretical analysis in Section 3.3 provides rigorous justification by formulating the problem through a causal lens - modeling coupling factors as confounding variables and deriving an interventional likelihood $p(y_i(T)|do(h_i(t_0)), \mathcal{G})$ rather than the observational likelihood. Theorem 3.1 proves this formulation captures the invariant intrinsic physical dynamics, going beyond empirical validation to provide theoretical guarantees. We will include more detailed analyses in the revised version. > `Other Comment`: Excessive icons in Figure 3. We agree and will simplify Figure 3 while maintaining the figure's informational content.
EFDTR: Learnable Elliptical Fourier Descriptor Transformer for Instance Segmentation
Accept (poster)
Summary: The paper presents EFDTR as an innovative solution for instance segmentation, combining the strengths of polygon-based representations with advanced deep learning techniques. The proposed framework not only enhances segmentation accuracy but also paves the way for future advancements in contour learning and applications in computer vision. ## update after rebuttal The rebuttal addressed my concerns, and I updated my score accordingly. Claims And Evidence: Most claims are supported. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: The authors provide no supplementary material. Relation To Broader Scientific Literature: No Essential References Not Discussed: No Other Strengths And Weaknesses: Pros: + The idea is interesting. + The paper is well-written and well-structured. Cons: - Hole Case: The proposed method demonstrates suboptimal performance in scenarios where holes are present in the mask. This is illustrated by the zebra example in Figure 5, where the foot of the right zebra appears to be merged. - Details: The reviewer noticed a few missed details where the boundary changes sharply, a case in which some SAM-based methods can perform well (e.g., the connection point between the motorcycle wheel and the rod). But the reviewer also acknowledges that this is a common failure case even for mask-based methods. So, is there some balance or trade-off between sharp boundaries and other concerns? - Reconstruction performance: For the elliptical representation, the reviewer would be pleased to see the reconstruction error introduced by the downsampling, along with the resulting speed. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Response to Reviewer jXM9 Thank you for your constructive comments and insightful suggestions, which have helped us improve the quality and clarity of our manuscript. We address your points in detail below. ### Response to Hole Case The COCO dataset primarily uses polygon annotations for instance masks, with only 1.2% of annotations using RLE. Polygon annotations in COCO do not explicitly account for holes during the annotation process, meaning that hole structures are generally not represented in the ground truth masks. Therefore, during training on the original dataset, the model has limited exposure to true hole instances. To further investigate this, we conducted experiments on the SBD dataset, which includes explicit annotations for hole regions. Upon qualitative analysis, we found that the current 1st-order EFD formulation struggles to accurately capture hole structures. However, using higher-order EFDs—while helpful for capturing fine-grained shapes—led to a drop in overall performance. This trade-off suggests a limitation in the current design, and we consider hole representation as a promising direction for future improvement. ### Response to Balance between Sharp and Other Regions This is an excellent point. Our current approach samples points uniformly from the EFD-predicted contours in the first stage. While this ensures consistent coverage, it may under-sample sharp corners or highly curved regions. A more adaptive sampling strategy—allocating denser points around such regions—could yield more accurate reconstructions. However, this would require an additional module to learn the sampling distribution dynamically. We greatly appreciate this suggestion and plan to explore it in future work. ### Response to Reconstruction Performance To evaluate the balance between reconstruction accuracy and inference speed, we tested different vertex group settings on the SBD dataset. 
All experiments were conducted at an input resolution of 512×512 using an NVIDIA RTX 3090 GPU, with CUDA 11.8, PyTorch backend, and FP32 precision. The results are as follows: | Model | Vertex Group | FPS (mean ± std) | AP$_{vol}$ | |-------------|--------------|------------------|--------------| | Ours (R50) | 16 | 22.26 ± 0.30 | 67.3 | | Ours (R50) | 8 | 21.29 ± 0.28 | 69.8 | | Ours (R50) | 4 | 18.19 ± 0.16 | 70.2 | | Ours (R50) | 2 | 13.76 ± 0.08 | 70.9 | | MaskDINO | - | 6.45 ± 0.03 | - | | E2EC$^*$ | - | 36 | 59.2 | $^*$ E2EC is tested on an NVIDIA A6000. These results demonstrate that our method maintains a favorable trade-off between accuracy and inference speed, especially compared to pixel-based methods like MaskDINO.
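To make the reviewer's reconstruction-error question concrete, below is a minimal sketch of fitting a first-order elliptical Fourier descriptor (the standard first Fourier harmonic, which traces an ellipse) to a uniformly sampled closed contour and measuring the residual. This is an illustrative implementation, not the paper's code:

```python
import numpy as np

def fit_first_order_efd(contour):
    # contour: (N, 2) uniformly sampled closed contour.
    # Returns the DC term (centroid) and the first-harmonic cosine/sine
    # coefficients, i.e. the parameters of the best-fitting ellipse.
    n = len(contour)
    t = 2.0 * np.pi * np.arange(n) / n
    dc = contour.mean(axis=0)
    cos_c = (2.0 / n) * contour.T @ np.cos(t)
    sin_c = (2.0 / n) * contour.T @ np.sin(t)
    return dc, cos_c, sin_c

def reconstruct(dc, cos_c, sin_c, n_points):
    # Trace the fitted ellipse with n_points samples.
    t = 2.0 * np.pi * np.arange(n_points) / n_points
    return dc + np.outer(np.cos(t), cos_c) + np.outer(np.sin(t), sin_c)

# An exact ellipse is recovered (near machine precision); any non-elliptical
# contour leaves a residual, which is one way to quantify the reconstruction
# error that downsampling or a low descriptor order introduces.
theta = 2.0 * np.pi * np.arange(256) / 256
ellipse = np.stack([3.0 * np.cos(theta), 2.0 * np.sin(theta)], axis=1)
dc, cc, sc = fit_first_order_efd(ellipse)
err = np.abs(reconstruct(dc, cc, sc, 256) - ellipse).max()
```

Sweeping the number of sampled contour points in such a sketch would give the downsampling-versus-error curve the reviewer asks about, alongside the FPS figures in the table above.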
Summary: This paper proposes EFDTR, an instance segmentation framework that leverages Elliptical Fourier Descriptors (EFDs) to represent object contours. The approach employs a two-stage Transformer decoder: the first stage predicts low-order (particularly first-order) elliptical Fourier coefficients to capture global shape information, and the second stage refines the polygon with many vertices aligned by phase matching. The authors argue that EFDTR retains the flexibility of polygon-based methods while using frequency-domain matching to resolve ambiguities in vertex assignment, achieving strong segmentation performance in a variety of challenging cases. ## update after rebuttal My concerns have been fully addressed. Claims And Evidence: The paper presents quantitative comparisons with prior polygon-based methods (e.g., DeepSnake, PolarMask, DANCE, BoundaryFormer) and some mask-based methods (e.g., Mask R-CNN, Mask DINO). The reported AP values show an improvement in boundary precision. While the results on COCO appear convincing, the experiments are predominantly limited to this single dataset. Additional evaluation on other datasets would further strengthen the claims regarding robustness to multi-polygon or highly complex shapes. Methods And Evaluation Criteria: The approach is methodologically sound, however, broader testing on additional datasets or tasks would better illustrate general applicability. Theoretical Claims: There are no formal proofs of new theorems. Experimental Designs Or Analyses: The main experiments are on COCO, comparing the proposed method to polygon-based approaches and selected mask-based baselines, focusing on standard AP metrics. Supplementary Material: No Relation To Broader Scientific Literature: The paper compares to standard polygon-based methods like DeepSnake, PolarMask, PolyTransform, BoundaryFormer, as well as to conventional mask-based methods like Mask R-CNN and Mask DINO. 
Essential References Not Discussed: One key contribution is a Multiple Polygon Connection strategy (via minimum spanning tree) that merges all polygons of an instance into a single closed contour. Research on multi-polygon representation learning (PolygonGNN: Representation Learning for Polygonal Geometries with Heterogeneous Visibility Graph [https://dl.acm.org/doi/abs/10.1145/3637528.3671738]) and work where multiple surfaces are combined into a closed 3D manifold (PolyhedronNet: Representation Learning for Polyhedra with Surface-attributed Graph) are relevant. Other Strengths And Weaknesses: Strengths 1. Introducing novel elliptical Fourier descriptors to segment polygons addresses the vertex-matching issue via the frequency-domain phase. 2. On COCO, it substantially outperforms prior polygon-based methods and approaches certain mask-based baselines. Weaknesses 1. Experiments are primarily on COCO; more challenging or domain-specific datasets are not tested. 2. While the paper shows first-order EFD is most beneficial, there is limited discussion on whether certain shapes might need higher orders. 3. The two-stage design, along with multi-scale feature fusion, may increase computational load; speed comparisons to baseline mask-based methods are not thoroughly explored. Other Comments Or Suggestions: 1. Evaluating on additional datasets would better demonstrate the method’s robustness. 2. Extending the discussion around the MST-based multi-polygon connection could strengthen the justification and highlight potential edge cases. 3. Including more analyses of the performance/accuracy trade-off with higher-order Fourier terms might clarify when they can be helpful. Questions For Authors: 1. When connecting multiple polygons to form a single loop, is there any risk of self-intersection or artifacts that degrade IoU? How do you handle such scenarios? 2. How well do higher-order EFDs cope with extreme non-convex boundaries or shapes containing holes? 
Could second or higher orders be beneficial in such situations? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Response to Reviewer GvJY Thank you very much for your valuable and constructive feedback. We truly appreciate your time and effort. Below, we provide point-by-point responses to the concerns and suggestions you raised. ### Response to More Datasets We have conducted additional experiments on the **SBD** and **Cityscapes** datasets. The results are as follows: **SBD Results:** | Method | Venue | AP$_{vol}$ | AP$_{50}$ | |--------------|-----------|--------------|-------------| | E2EC[1] | CVPR 2022 | 59.2 | 65.8 | | PolySnake[2] | TCSVT 2024| 60.0 | 66.8 | | Ours (R50) | - | **70.2** | **78.2** | **Cityscapes Results (Val):** | Method | Venue | AP (Val) | |--------------|-----------|--------------| | E2EC[1] | CVPR 2022 | 39.0 | | PolySnake[2] | TCSVT 2024| 40.2 | | Ours (R50) | - | **43.4** | Our method outperforms all other polygon-based instance segmentation methods on both datasets, achieving the highest AP. Notably, PolySnake, which is recognized as the best contour prediction model according to Papers with Code, is surpassed by our approach by 10 AP$_{vol}$. ### Response to EFD with Higher Order We evaluated the impact of different EFD orders on SBD and Cityscapes: | Dataset | EFD Order | AP$_{vol}$ / AP (Val) | |-------------|-----------|--------------------------| | SBD | 1 | 70.2 | | SBD | 2 | 66.2 | | SBD | 4 | 58.6 | | Cityscapes | 1 | 43.4 | | Cityscapes | 4 | 32.6 | We observe that the first-order EFD consistently delivers the best performance. We hypothesize that, within the current model architecture, higher-order EFDs introduce optimization challenges, resulting in unstable point sampling during the second stage. In contrast, the first-order EFD acts as a stable and learnable geometric representation, analogous to rotated bounding boxes. Although higher-order EFDs are capable of capturing more detailed shape distributions, they tend to be unstable, which adversely affects the performance of the downstream polygon decoder. 
Below, we present samples of the fitting results using first- and fourth-order EFDs. [img](https://upload.cc/i1/2025/04/01/XyUFoC.png) ### Response to Self-intersection and Degraded IoU On the COCO dataset, we apply a minimum spanning tree (MST) algorithm to connect multiple polygons into a single contour. Among the COCO instances annotated with multiple polygons (9.71% of the total), we sample up to four polygons per instance, which accounts for 99.67% of all multi-polygon annotations. With a maximum of four polygons, the likelihood of self-intersection is extremely low. Given the small proportion of instances with more than four polygons, we believe the impact on training is minimal. | polygon number | instance number | proportion | |----------------|-----------------|------------| |1 | 767315 | 90.28% | |[2, 5) | 79842 | 9.39% | |[5, 10) | 2589 | 0.30% | |[10, ∞) | 203 | 0.02% | ### Response to Holes and EFD Order In COCO, instance masks are annotated using simple polygons without regard to orientation, and thus do not include holes. However, the SBD dataset does contain instances with holes. As discussed in the **Response to EFD with Higher Order**, we observed that higher-order EFDs do not lead to better performance. Visualization shows that the 1st-order model captures the outer contour more robustly. Although some hole-containing instances exist in SBD, they represent a small portion of the dataset. Our analysis suggests that the model’s strength lies in accurate outer contour detection rather than specific handling of holes. This could be a limitation of the current model architecture, which we aim to address in future work. [1] Zhang T, Wei S, Ji S. E2ec: An end-to-end contour-based method for high-quality high-speed instance segmentation[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 4443-4452. [2] Feng H, Zhou K, Zhou W, et al. 
Recurrent generic contour-based instance segmentation with progressive learning[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024. --- Rebuttal Comment 1.1: Comment: Most of my concerns have been addressed. I recommend the authors include a discussion on the works mentioned in "Essential References Not Discussed." --- Reply to Comment 1.1.1: Comment: We greatly appreciate the two papers you recommended. In the paper “PolygonGNN: Representation Learning for Polygonal Geometries with Heterogeneous Visibility Graph,” the authors propose a two-hop method to build a five-tuple heterogeneous geometric representation. PolygonGNN is validated through extensive experiments on both synthetic and real-world datasets, showcasing its robust performance across a variety of scenarios. Notably, the heterogeneous spanning tree sampling strategy in PolygonGNN bears similarities to our approach of using a minimum spanning tree (MST) to connect multiple polygons. This data augmentation technique has inspired us and provides valuable insights into improving performance. The second paper, “PolyhedronNet: Representation Learning for Polyhedra with Surface-attributed Graph,” introduces the innovative Surface-Attributed Graph (SAG) and employs local rigid representations for accurate polyhedron representation and reconstruction. This 3D data representation work serves as an important reference for our future extensions, particularly when considering the application of surface-based approaches in geometric modeling. Both papers suggested by the reviewer have greatly contributed to enhancing the depth of our work. **We promise that we will include discussions of these works and cite them in our paper.** Our Learnable Elliptical Fourier Descriptor Transformer (EFDTR) method, by employing a two-stage model structure with the EFD decoder and polygon decoder, leverages a Fourier-phase-based regression target assignment strategy. 
This approach has achieved state-of-the-art results for polygon-based methods across multiple datasets. On an NVIDIA RTX 3090, our method achieved an inference speed of 22.26 ± 0.30 FPS (as referenced in the response to Reviewer jXM9). Finally, we kindly ask you to consider revising your evaluation score if our response has addressed your concerns. Once again, we sincerely thank you for your time and effort in reviewing our work. Your feedback has been instrumental in improving the quality of our paper.
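For illustration, the MST-based polygon connection discussed in this thread can be sketched as follows. The centroid distance used here is an assumption (the paper's exact inter-polygon distance, e.g. closest vertices, may differ), and `polygon_mst_edges` is a hypothetical helper, not the authors' implementation:

```python
import numpy as np

def polygon_mst_edges(polygons):
    # polygons: list of (N_i, 2) vertex arrays belonging to one instance.
    # Build a minimum spanning tree over pairwise centroid distances with
    # Prim's algorithm; the returned (i, j) edges indicate which polygons
    # should be stitched together into a single closed contour.
    cents = np.array([p.mean(axis=0) for p in polygons])
    n = len(cents)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        best = min(
            ((np.linalg.norm(cents[i] - cents[j]), i, j)
             for i in in_tree for j in range(n) if j not in in_tree),
            key=lambda e: e[0],
        )
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

# Three collinear unit squares: the MST chains nearest neighbors and never
# links the far pair, keeping the connecting bridges short.
squares = [np.array([[x, 0.0], [x + 1, 0.0], [x + 1, 1.0], [x, 1.0]])
           for x in (0.0, 10.0, 20.0)]
edges = polygon_mst_edges(squares)
```

Short bridges are what keeps the merged contour close to the original union of polygons, which is consistent with the rebuttal's observation that self-intersections are rare when at most four polygons are connected.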
Summary: This paper devises a method for regressing vertex positions using Elliptic Fourier Descriptors (EFDs). Furthermore, it proposes a learnable transformer architecture to incorporate these EFDs. The transformer pipeline consists of two stages: 1) a transformer predicting EFDs to get coarse instance regions and 2) a transformer to decode the EFDs into more precise polygon instance segmentations. In order to represent multiple polygons as a single EFD, this approach connects polygons together. To determine the connectivity, this method computes distances between polygons and uses a minimum spanning tree of the resulting graph. To supervise the EFD regression, an L1 loss is used, whereas a smooth-L1 loss is used to supervise the polygons. The overall approach is compared with both polygon- and pixel-based methods, showing improved performance compared to polygon methods and comparable performance with pixel-based methods. Key components of the method (number of decoder layers, EFD prediction order, and others) are thoroughly ablated, justifying their importance. ## update after rebuttal I have read the rebuttal and it addresses my main concerns. I am maintaining my score of accept. Claims And Evidence: The claims made are supported by sufficient evidence. Methods And Evaluation Criteria: Yes. The authors compare to both polygon-based methods, which are most similar, and pixel-based methods, which exhibit state-of-the-art performance. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design appears sound. Supplementary Material: Supplementary material not provided. Relation To Broader Scientific Literature: The findings in this paper advance the state of polygon-based instance segmentation, which provides a more lightweight representation than pixel-based methods. The authors demonstrate this advancement by showing that their method achieves superior performance as compared to other polygon-based methods such as SharpContour and BoundaryFormer. 
Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: Strengths: - This paper presents a new method for polygon-based segmentation using EFDs. - The proposed method is evaluated against multiple baselines. Results show that this method leads to improved performance over existing polygon-based methods and comparable performance against pixel-based methods. - Paper is well written and the method is easy to follow. Weaknesses: - The main weakness would be that this method still performs worse than some pixel-based approaches. While these polygon methods might be more efficient, it seems likely that for many applications, performance would be more important and pixel-based methods would be used instead. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Response to Reviewer 3kZq Thanks a lot for the time and effort you invested in providing the detailed reviews. Regarding the current weaknesses you pointed out, we are glad to give our responses. ### Summary and Strengths We are pleased that you found our method well motivated, the design sound, and the paper clearly written. We're also glad that you appreciated our comprehensive ablation studies and performance improvements over existing polygon-based methods. ### Weakness: Comparison to Pixel-Based Methods > *"The main weakness would be that this method still performs worse than some pixel-based approaches. While these polygon methods might be more efficient, it seems likely that for many applications, performance would be more important and pixel-based methods would be used instead."* Thank you for this valuable observation. We agree that in high-accuracy applications, pixel-based methods often remain the preferred choice. However, our goal is to provide a segmentation model that outputs structured data, which is particularly useful in tasks such as bird's-eye view vector map construction and remote sensing vector map generation. We believe our method can contribute to these areas by offering a novel solution. Additionally, we provide inference performance comparisons in our submission, which highlight the efficiency of our method. For further context and elaboration, we also encourage referring to our responses to the other reviewers. Once again, thank you for your valuable comments and support.
Summary: The paper proposes an instance segmentation method for images, based on the prediction of a polygon as a series of connected points instead of a more commonly used pixel mask. The contour of a polygon capturing an instance is decomposed with a Fourier decomposition. The authors propose a transformer-like architecture that extracts multi-resolution features from input images, predicts coefficients of the Fourier decomposition, samples the obtained contour, and refines the sampled point (polygon vertices) with several additional transformer-like layers to predict the final positions of the vertices. The approach is evaluated on COCO dataset and compared to the other mask-based and polygon-based instance segmentation baselines. A series of ablation experiments empirically justifies the choice of some important hyperparameters. Claims And Evidence: 1. The paper claims to obtain the best instance segmentation results among polygon-based methods. To support this claim they compare their method to a series of polygon-based baselines (the latest are from CVPR’22). 2. The paper claims that their approach is a competitive alternative to pixel mask-based methods. To verify, it is compared to some mask-based methods. Methods And Evaluation Criteria: The proposed method that decomposes the contour of the polygon using Fourier decomposition and uses a transformer-based architecture to infer the coefficients of the decomposition looks valid from a theoretical point of view. However, I suspect the method does not work well in practice. In theory, high-order decompositions are desirable because the instance contours have non-trivial topology, and low-order approximations are not able to distinguish the contour point samples with the same phase but different amplitudes. However, the ablation in Table 3 shows that the method works best with the first-order approximation (which approximates the contour with an ellipsoid). 
That fact raises the question of whether the decomposition is required at all. Given that, the method could have just sampled the initial points on a circle and just used the polygon decoder with more layers to infer the final positions of the vertices. Theoretical Claims: There are no theoretical claims in the paper. Experimental Designs Or Analyses: Overall, the experiments are designed well; the ablations make sense. However, the authors only used relatively old ResNet-50/101 backbones for their method, which makes the approach significantly weaker compared to the overall state-of-the-art methods. Supplementary Material: There are no supplementary materials. Relation To Broader Scientific Literature: I am not sure about the subfield of polygon-based instance segmentation, but overall the authors do not include several important prior works. For example, even on the simplest ResNet-50 backbone, Co-DINO-Deformable-DETR++ [1] achieves AP of 52.1 (vs 43.6 reported for the presented method) when trained for the same number of epochs. The corresponding paper also contains multiple references to other methods that produce better results. Essential References Not Discussed: [1] Z. Zong, G. Song, Y. Liu. DETRs with Collaborative Hybrid Assignments Training. In ICCV’23. Other Strengths And Weaknesses: The authors try to make a case for their method by carving a niche for polygon-based instance segmentation methods and trying to push their results, but it is hard to justify the niche if the difference between the state-of-the-art and the proposed methods is that large (AP = 45.1 vs 65.9 best for [1] with ViT-L backbone). The fact that the high-order decompositions do not work well does not help the case. Other Comments Or Suggestions: I suggest the authors consider using other backbones and try to make the high-order approximations work properly. ## Update after rebuttal First of all, I would like to apologize for my initial misunderstanding of the proposed method. 
Indeed, the segmentation contour is projected on the approximated decomposition in a bijective manner, so I withdraw this concern and, as a result, am willing to improve my rating. After the rebuttal, my concerns mostly stayed the same: 1. The proposed method does provide a way to project segmentation contours on ellipsoids (1st order approximations), which is of some value and likely helps the method, but the fact that the use of any high-order approximations lowers the performance is concerning and might indicate that the method is either severely limited in its potential or is technically not correct. 2. The authors operate in a subfield of the polygon-based segmentation and only choose very weak mask-based baselines for comparison. As I mentioned in the review, there are mask-based instance segmentation methods that outperform the proposed method using the same backbones by a large margin (Co-DINO-Deformable-DETR++ achieves AP of 52.1 vs 43.6 reported for the presented method on ResNet50). The gap further widens with the use of the more up-to-date backbones. I agree that a direct comparison of these classes of methods is not entirely fair, but it is hard to ignore the existence of these related methods and not diminish the importance of the presented method. At the very least, the paper should contain a baseline that takes modern superior mask-based segmentations and converts them to the polygon-based format. This seems like a much fairer competition compared to outdated mask-based methods with worse performance. If the masks can be converted to polygons without a significant loss in quality, the presented method loses its significance. Overall, it is still hard for me to recommend acceptance due to these concerns. There are some merits in the presented work but to me, the approach seems to be too far from the current state of the art (or at the very least it is hard to properly assess that).
Questions For Authors: Have the authors tried to remove the EFD decoder and just use a polygon decoder with more layers? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ## Response to Reviewer tj3E We sincerely appreciate your time and effort in reviewing our paper and are glad to provide detailed responses to your insightful questions and suggestions. ### Clarification on Method While higher-order EFDs can offer finer contour approximations, the concern regarding "same phase but different amplitudes" may stem from a misunderstanding of the concept of phase in the EFD frequency domain versus angular coordinates in polar representations. In our approach, Elliptic Fourier Descriptors establish a **bijective mapping** between each contour point \( p(\theta) \) and a unique phase value \( \theta \in [0, 2\pi) \). This bijection guarantees a one-to-one correspondence between contour points and their associated phase values in the frequency domain. Our point regression strategy exploits this property to avoid ambiguities present in Cartesian and polar systems. For example, in polar coordinates, a ray from the centroid may intersect the contour at multiple points sharing the same angle but differing in radius. In contrast, under EFD, such points are associated with distinct phase values, as illustrated in the figure below. [img](https://upload.cc/i1/2025/04/01/rE6OP8.png) ### Response to Vanilla Ellipse To further validate the effectiveness of our two-stage architecture and the role of the first-stage EFD prediction, we conducted additional experiments on the SBD dataset in response to the reviewer’s concerns: - **Type 1**: Identical to the original model, except the first stage predicts a naive inscribed ellipse. - **Type 2**: Based on Type 1, the number of layers in the first stage is reduced to 2, while the second-stage polygon decoder is deepened to 6 layers. 
| Model | 1st Stage Layers | 2nd Stage Layers | AP\(_{vol}\) | |-----------|------------------|------------------|--------------| | Original | 6 | 3 | **70.2** | | Type 1 | 6 | 3 | 59.5 | | Type 2 | 2 | 6 | 62.7 | The results demonstrate that the EFD-based first-stage prediction significantly outperforms the naive ellipse initialization. Moreover, increasing the depth of the second-stage decoder alone does not compensate for the lack of EFD-based guidance. We attribute the superior performance of our design to the following: 1. Although both are ellipses, the 1st-order EFD behaves more like a PCA-based initialization, better capturing the object's principal orientation and shape distribution. 2. EFD provides a more principled framework for assigning regression targets than heuristic elliptical approximations. ### Response to Larger Backbone You raised an important point regarding the potential benefit of a stronger backbone in enhancing the learning of high-order EFDs. We conducted experiments with a Swin-L backbone using a 1× training schedule on the COCO *val2017* set: | EFD Order | Schedule | Backbone | mAP(val2017) | |-----------|----------|-----------|---------------| | 1 | 1× | Swin-L | 44.1 | | 4 | 1× | Swin-L | 33.2 | | 1 | 1× | ResNet-50 | 40.6 | Interestingly, even with a more powerful backbone, the 1st-order EFD still achieves the highest mAP. We believe that, under the current architecture, high-order EFD parameters are difficult to optimize effectively, introducing instability in the second-stage point sampling. Conversely, the 1st-order EFD serves as a robust and easily learnable target, similar to rotated object detection, which helps stabilize the overall training. ### Response to Model Design Question > *Remove the EFD decoder and just use a polygon decoder with more layers.* This experiment is in the **Response to Vanilla Ellipse** section. 
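The phase-to-point bijection discussed above can be made concrete with a minimal sketch of the elliptic Fourier series, where each phase value t in [0, 1) maps to exactly one contour point. The function name and the sample coefficients below are illustrative assumptions, not the submission's implementation:

```python
import math

def efd_point(t, A0, C0, coeffs):
    """Evaluate an elliptic Fourier contour at phase t in [0, 1).

    coeffs is a list of (a_k, b_k, c_k, d_k) harmonic coefficients. Every
    phase value yields a single (x, y) point, so phase -> point is one-to-one
    along the traversal, unlike the angle in a polar parameterization, where
    one ray may intersect the contour at several radii.
    """
    x, y = A0, C0
    for k, (a, b, c, d) in enumerate(coeffs, start=1):
        x += a * math.cos(2 * math.pi * k * t) + b * math.sin(2 * math.pi * k * t)
        y += c * math.cos(2 * math.pi * k * t) + d * math.sin(2 * math.pi * k * t)
    return x, y
```

A first-order descriptor such as `coeffs = [(2, 0, 0, 1)]` traces a single axis-aligned ellipse, consistent with the observation above that the 1st-order EFD behaves like an oriented elliptical initialization.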
### Additional Remarks Our method is polygon-based, where both the output and the supervision signals are sequences of points. Within the domain of polygonal contour prediction, our method surpasses all prior work. Although in terms of AP, pixel-based methods outperform ours, we emphasize that our vector-based formulation offers a unified and end-to-end framework that holds practical value in applications such as vectorized HD map construction and remote sensing vectorization. We believe our contribution provides an effective approach for tasks requiring structured, vectorized outputs. For further comparisons on additional datasets and inference efficiency, please refer to our responses to other reviewers. Once again, we sincerely thank you for your valuable feedback.
Explaining, Fast and Slow: Abstraction and Refinement of Provable Explanations
Accept (poster)
Summary: The paper tackles the issue of computing verifiably sufficient input-level explanations of neural network predictions. The proposed algorithm speeds up the computation, which involves several invocations of an exact solver, by constructing a smaller-scale abstraction of the target neural network. The abstraction retains sufficiency but may break minimality. An incremental version of the algorithm thus constructs a sequence of smaller-to-larger abstractions until the corresponding explanation is both sufficient and minimal. The experiments primarily focus on evaluating efficiency and size gains. Claims And Evidence: Evidence is quite convincing. The main claims are that the explanations output by the proposed algorithm are sufficient by construction and small, while the algorithm itself is claimed to be faster than the competition. - Sufficiency: this is supported both by the analysis of Algorithm 2 on page 6 and by the results in Table 2: the explanations always achieve 100% sufficiency; the metric used for this assessment appears to be in line with the general notion of sufficiency in the formal explainability literature. - Size: there appear to be sizeable (!) gains in terms of explanation size in Table 1. - Speed: the results in Figure 2 indicate a reasonable speed-up on all data sets. More generally, the algorithm is tested on three data sets and different neural net architectures. While the networks are quite small, this is due to scalability issues of current NN verification tools, not directly the proposed algorithm. Methods And Evaluation Criteria: Yes. The setup is very much in line with what's expected from an NN verification paper (which slightly differs from what XAI researchers might expect, but this difference is also expected). Theoretical Claims: I did not check the correctness of the claims. They all seem to be intuitively reasonable. Experimental Designs Or Analyses: Yes, the design is clean.
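The two procedures summarized above (the greedy feature-removal loop and its incremental abstraction-refinement variant) can be sketched as follows. This is a high-level illustration under assumed interfaces: `is_sufficient`, `greedy_explain`, and `is_minimal` stand in for verifier queries and are not the paper's actual implementation:

```python
def greedy_sufficient_explanation(features, is_sufficient):
    # Try to drop each feature in turn; a drop is kept only if the remaining
    # set is still certified sufficient, so the invariant that the retained
    # set is a sufficient explanation holds after every iteration.
    explanation = list(features)
    for f in features:
        candidate = [g for g in explanation if g != f]
        if is_sufficient(candidate):
            explanation = candidate
    return explanation


def explain_with_refinement(surrogates, greedy_explain, is_minimal):
    # surrogates are ordered coarse -> fine, with the original network last.
    # Sufficiency certified on a surrogate transfers to the original network,
    # but minimality may not, so refine until a minimal explanation is found.
    explanation = None
    for net in surrogates:
        explanation = greedy_explain(net)
        if is_minimal(explanation):
            break
    return explanation
```

Each dropped-feature check issues one solver query, which is why answering most of those queries on a smaller abstract network reduces the overall cost.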
Supplementary Material: I did not thoroughly review the appendices, I merely skimmed through them to check for obvious issues. Relation To Broader Scientific Literature: To the best of my knowledge, the contribution is positioned appropriately against existing literature. Essential References Not Discussed: No to my knowledge. Other Strengths And Weaknesses: #STRENGTHS - Clearly written and structured - The motivation is likewise clear - Presents a sensible and technically non-trivial solution - The experiments are well designed - The results provide evidence that the claims hold #WEAKNESSES - Even when tackled using nice strategies -- like the proposed algorithm -- finding provably sufficient explanations is still computationally challenging, limiting (at least at present) applicability and significance. Other Comments Or Suggestions: - Algorithm 2, line 8: "obtain counterexample" - how is this done? - Section 5: it is not immediately obvious what the "standard algorithm" is or whether it is reflective of the SOTA. Please be clearer about it. - Table 1: explanation size is not monotonic across $\rho$'s - this is a bit counterintuitive when compared with the discussion in Section 3. Could you please clarify what is happening here? - Figures and Tables: how is the std deviation computed? - Proofs: it's not necessary to mention that proofs are in Appendix A four times. - Prop 4: min-suff is technically undefined. - Prop 2: the notation f \in f and f \subset f is not defined. - It is really unfortunate that the construction by Ladner & Althoff is not explained in more detail in the main text. - Minimal sufficient explanations are not unique. I strongly suggest that the authors discuss the uniqueness of the computed explanations in the Limitations paragraph. Questions For Authors: - Could your algorithm help *enumeration* of minimal sufficient explanations too?
If so, it may be worth pointing this out, as a number of papers on formal explainability are concerned with this very problem. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive feedback and their acknowledgment of the significance of our work. **Enumeration of minimal sufficient explanations** We thank the reviewer for the insightful comment and agree that this is a highly relevant direction, particularly in the context of formal explainable AI. We touch on a related aspect in Appendix C, where we explore how varying feature orderings can influence the explanations generated by our algorithm. Intuitively, running the algorithm over different orderings yields different explanations. A more advanced strategy could start with an initial explanation and iteratively perturb specific features - e.g., replacing features in the explanation with those from its complement - to produce alternative explanations. Another possible direction is to explicitly maximize the divergence between explanations to encourage diversity, for instance by using reverse orderings. While enumerating all possible explanations is computationally intensive, one could potentially leverage the duality with the minimum hitting set (MHS) problem [1] to generate many contrastive explanations and iterate over diverse MHSs. However, such an approach may not scale well to large input domains due to the sheer number of possible contrastive explanations. We appreciate the importance of this point and will incorporate a discussion of it into both the main text and appendix of the final version, and highlight it as an interesting direction for future research. **Additional weaknesses, comments and suggestions** We first acknowledge the weakness pointed out by the reviewer regarding the scalability challenges in producing provably sufficient and minimal explanations for neural networks. 
However, as both the reviewer and other reviewers have noted, our method represents a significant improvement over prior approaches tackling the same task - offering better computation time, support for larger model sizes, and more concise explanations. We also refer the reviewer to Appendix D, where we demonstrate the generalizability of our method across additional settings. This includes evaluation on a language benchmark derived from a medical NLP dataset, where certification is conducted using meaning-preserving perturbations within an embedded input space, as well as in a regression setting. In the latter, we use the real-world, safety-critical Taxi-Net benchmark to show that fixing certain input features keeps predictions within a target range. Moreover, we highly appreciate the reviewer’s many thoughtful comments and suggestions, which we will incorporate into the final version and will help us improve the clarity of our work. Regarding Algorithm 2, the counterexample is obtained directly from the verifier but can also be extracted through an external adversarial attack, and we will clarify this accordingly. By a standard algorithm, we refer to a typical greedy approach, which does not incorporate abstraction-refinement, such as the one presented in Algorithm 1 and used in prior works (e.g., the method by Wu et al., NeurIPS 2023 [2]). Importantly, we emphasize that our abstraction-refinement approach is agnostic to the specific technique used to generate explanations and is applicable to any method that derives explanations through multiple certification queries. We agree that this point should be more clearly articulated in the final version. We agree with the reviewer’s comment on the importance of improving the discussion of the abstraction methodology in the main text. To enhance clarity, we will relocate portions of the detailed exposition of the abstraction technique from Appendix A to Section 3 and refine the overall discussion accordingly.
Additionally, the standard deviation is the population standard deviation: the square root of the mean of the squared deviations from the average, taken over the total number of data points used in each separate experiment. We will also address the notation issues raised and emphasize the non-uniqueness of the generated subsets. While this aspect is partially covered in Appendix C, which discusses different feature orderings, we will ensure it is explicitly highlighted and discussed in the main text as well. Thank you for bringing these valuable points to our attention! Lastly, we appreciate the reviewer’s interesting point regarding the non-monotonicity that is observed in Table 1. This effect is due to the experiment's timeout: without it, the behavior remains monotonic (see Fig. 6). However, with the timeout, larger abstraction configurations can get stuck earlier in the algorithm, causing them to produce larger explanations within the time limit, which explains the observed slight non-monotonic behavior. We will clarify this interesting point in our final version. [1] From Contrastive to Abductive Explanations and Back Again (Ignatiev et al., KR 2021) [2] Verix: Towards Verified Explainability of Deep Neural Networks (Wu et al., NeurIPS 2023)
Summary: This paper aims to improve the scalability of verification algorithms for computing minimal sufficient explanations of neural network predictions. \ The traditional approach iteratively removes input features while preserving the invariance property that the retained set must remain a sufficient explanation. However, this strategy is computationally expensive due to the large number of evaluations required for the neural network. \ The proposed approach introduces an abstraction-refinement technique that enables querying a surrogate, smaller network, thereby reducing the computational cost of verification. The approach progressively increases the network size to ensure convergence to the minimal sufficient explanation set. Notably, the ordering of network sizes corresponds to an ordering of the explanation sets (under the subset relation), allowing formal guarantees such as minimality and sufficiency. \ Experiments are conducted on three datasets, MNIST, CIFAR-10 and GTSRB, comparing the proposed method against the traditional formal approach and demonstrating improved efficiency. Additionally, comparisons with heuristic-based methods, including Anchors and SIS, highlight the ability to reduce the computational time gap between formal and heuristic-based approaches while ensuring formal guarantees. ## update after rebuttal The authors have addressed my concerns. I think the additional effort put into the rebuttal have enhanced the completeness of the work, in particular with respect to the additional computational analysis of querying surrogate oracles, the analysis of the scalability trend (scaling the size of the original backbone) and the additional experiments to make the analysis consistent across different network architectures (in terms of activation functions). All these experiments should be included in the paper or in the supplementary material. 
Additionally, the illustrative examples are an interesting addition to improve the clarity of the presentation and increase the accessibility and readability of the paper. These examples should be included in the main paper. Overall, I'm happy to increase my score. Congratulations to the authors ! Claims And Evidence: Overall, the claims of the paper are clear and reasonable. The idea of leveraging a continuum of progressively larger models to reduce the computation time for verification and improve the scalability is novel and significant. Methods And Evaluation Criteria: Experiments are in my opinion overall convincing, but should be strengthened in terms of their scope. Please refer to the Experimental Designs Or Analyses Section for further details. Theoretical Claims: I haven’t checked the proofs for the theory, but all results about sufficiency and minimality are reasonable. There is however a major issue with the clarity of the presentation. Please refer to the Other Strength and Weaknesses section for further details. Experimental Designs Or Analyses: Some of the design choices for the experimental analysis are questionable. For instance the backbone network for evaluation uses sigmoid activations for MNIST and GTSRB, and ReLU for CIFAR-10. In order to improve the completeness of the analysis, experiments with both sigmoid and ReLU should be provided in all datasets. Additionally, it would be good to perform an analysis with networks using residual connections, otherwise the scope of validity of the results is quite limited. In addition to Figure 4, it would be good to provide a more fine-grained analysis of the computation time, including the number of call evaluations to the neural network surrogates with the corresponding cost in terms of time per call (over iteration/feature). Additionally, the analysis doesn’t provide any insight about the “trend” for the scalability, as the size of the original backbones is kept fixed. 
It would be good to repeat the analysis for smaller models (or larger backbones) and compare with the already provided results. Supplementary Material: None Relation To Broader Scientific Literature: In my opinion, there is good discussion about the related work. Essential References Not Discussed: No Other Strengths And Weaknesses: The paper can be improved in terms of Clarity. Specifically: 1. It would be good to provide a running example (simple yet non-trivial) to highlight the notion of abstraction and refinement of a network and the corresponding consequence on the explanation set. A lot of definitions and propositions are provided without giving the reader the possibility to gain an intuition. 2. No sufficient detail about the proving of sufficiency and the way in which networks are abstracted and refined is provided. This hinders the clarity and reproducibility of the work. Overall, the paper makes an interesting and original contribution. However, the scope of the experimental analysis is rather limited and the paper can improve in terms of clarity. Therefore, I make an initial and cautious judgement, but I'm willing to increase my score. Other Comments Or Suggestions: None Questions For Authors: Please refer to the Sections: - Experimental Designs Or Analyses - Other Strengths And Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive feedback and their acknowledgment of the significance of our work. **Expanding the scope** We thank the reviewer for bringing up this point and will improve its discussion in the final version. We note that while our primary experiments focus on vision classification models, our approach is domain-agnostic and can be applied to other domains. In Appendix D, we provide example experiments in additional domains. For language tasks, we utilized the only language-specific benchmark from the annual neural network verification competition (VNN-COMP [1]), which is based on a medical safety NLP dataset. Certification was achieved over an embedded representation of the input space, ensuring meaning-preserving perturbations. Another example of such an expansion is regression tasks, not just classification. In this case, the provable guarantee would ensure that fixing a subset of features keeps the output prediction within a specified range. We demonstrate this in Appendix D using the real-world, safety-critical Taxi-Net benchmark from Wu et al. (NeurIPS 2023 [2]). We will enhance our discussion of these extensions, as well as other potential applications, in the body of the final draft. **Design choices for experimental analysis** We selected these specific configurations because they are utilized in the annual neural network verification competition (VNN-COMP [1]). However, in response to the reviewer’s feedback, we will include the additional requested configurations in our final version, with distinct activation functions for each setting. Our method is also compatible with networks that include residual connections, and we will include an experiment in the final version to highlight this. **Performance trend and additional quantitative measures** We thank the reviewer for these valuable suggestions.
We will indeed incorporate additional quantitative measures in the final version, specifically detailing both time per query and time per iteration across our various settings. Due to space limitations in the rebuttal, we provide here only representative results - namely, the average query times in seconds for our different settings in Section E.5. | Abstraction| MNIST | CIFAR | GTSRB | |-------|-------|-------|-------| | $\rho$=0.1 | 0.08 | 0.35 | 1.89 | | $\rho$=0.2 | 0.11 | 0.42 | 3.05 | | $\rho$=0.3 | 0.13 | 0.55 | 4.73 | | $\rho$=0.4 | 0.16 | 0.75 | 7.05 | | $\rho$=0.5 | 0.18 | 0.90 | 9.27 | | $\rho$=0.6 | 0.20 | 0.92 | 11.28 | | $\rho$=0.7 | 0.22 | 0.93 | 12.97 | | $\rho$=0.8 | 0.23 | 0.92 | 14.38 | | $\rho$=0.9 | 0.25 | 0.93 | 15.68 | We appreciate the reviewer’s suggestion to examine the scalability trend in our results and will include dedicated experiments in the final version to explore this aspect more thoroughly. Our current findings indicate that increasing model size leads to an increase in overall generation time. At the same time, this also amplifies the relative benefits of the abstraction-refinement strategy, as larger models tend to produce more significant improvements when using coarser abstractions. To illustrate this effect more clearly, we performed an additional experiment based on the “Marabou” benchmark from VNN-COMP [1]. While our original setup focused solely on the “cifar10_large” model, we now include the “cifar10_medium” and “cifar10_small” variants as well. In this experiment, the abstraction-refinement approach showed relative improvements over the standard method, reducing computation time by 136.46 seconds for the small model, 264.18 seconds for the medium model, and 570.39 seconds for the large model. In the final version, we will include additional experiments that vary model sizes while keeping all other parameters constant, to more clearly illustrate this trend. 
**Paper clarity improvements** We thank the reviewer for the helpful feedback on these points. In the final version, we will include a running example to help clarify our definitions. Due to space limitations, our current detailed explanation of the sufficiency proofs and abstraction appears primarily in Appendix A, with only a concise summary in the main text. As recommended, we will integrate portions of this discussion into the main body and improve the overall presentation of these topics. [1] First three years of the international verification of neural networks competition (VNN-COMP) (Brix et al., STTT 2023) [2] Verix: Towards Verified Explainability of Deep Neural Networks (Wu et al., Neurips 2023) --- Rebuttal Comment 1.1: Comment: Thank you for the answers. I appreciate the additional computational analysis of querying the surrogate oracles and the analysis of the scalability trend, which contribute to strengthen the approach. There are still some aspects that are not adequately addressed. Specifically, the experimental analysis on the network choices should be made consistent (both in terms of activation and architecture) in order to assess the scope of validity of the proposed solution. Moreover, an intuitive example would be appreciated to enhance the clarity of the paper. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response. We are glad to hear that some points have been addressed and agree that they will help strengthen our work. We aim to clarify the remaining open points below: **Additional experimental analysis for consistency** We agree with the reviewer that including alternative configurations with different activation functions is important for consistency, even though our initial choices were based on architectures used in VNN-COMP. In response, we added the requested experiments with both ReLU and Sigmoid activations in Section 5.1. 
The results from the complementary variants are as follows:

- For MNIST with ReLU, our method achieved an average runtime of 46.88s vs. 52.68s for the standard method. With a 20s timeout, our explanations averaged 380.36 in size, compared to 439.74, showing improvements in both speed and explanation size.
- For CIFAR with Sigmoid, our method ran in 411.29s on average, versus 999.14s for the standard method, and yielded significantly smaller explanations (309.13 vs. 576.50).
- On GTSRB with ReLU, due to the time constraints of the rebuttal phase, we were only able to run a partial experiment (with a 2000s timeout). This yielded explanation sizes of 701.0 (ours) compared to 849.4 (standard). Though partial, these results already point to a significant improvement in explanation size, and we expect a corresponding runtime improvement in the full experiment.

In the final version, we will include the full experiment with a thorough analysis of all results, presenting them in both detailed tables and through the relevant visualizations and ablations, similar to those provided for the other benchmarks. We thank the reviewer for raising this important point.

**Incorporating running examples** We thank the reviewer for this valuable suggestion. We will add a running example to the main text and move parts of Appendix A into Sections 3 and 4. We also share a simple [illustrative example](https://postimg.cc/yDjjKCfJ), which we will expand upon in the final version. It demonstrates several concepts using a toy ReLU network with positive weights and biases, allowing for exact bounds via simplified interval-bound propagation. While intentionally simplified, the example serves to illustrate the procedure.

Figure (a) shows the original model and the interpreted input (0,1,1). All biases are set to 0 except for the lower output neuron, which has a bias of 10. Propagating (0,1,1) gives outputs of 15 (class 1) and 46 (class 2), so class 2 is predicted.
Figure (b.1) illustrates an explanation that includes features 2 and 3 (in orange): we fix these features to their original values of 1, restricting their domains to [1,1]. Feature 1 is not in the explanation and hence is allowed to vary freely in the range [0,1]. We then compute bounds via interval propagation. For example, the top hidden neuron gets an input range of [3, 5], from a lower bound of 0×2 + 1×2 + 1×1 = 3 and an upper bound of 1×2 + 1×2 + 1×1 = 5. These bounds are propagated to the output layer using the weights. For example, the top output neuron's range is [15, 22], calculated as: lower bound = 3×2 + 3×1 + 6×1 = 15 and upper bound = 5×2 + 5×1 + 7×1 = 22. Overall, the output range for class 2 ([46,55]) is strictly above that of class 1 ([15,22]). Therefore, fixing features 2 and 3 is *sufficient* to guarantee that class 2 remains the predicted class, making it a valid explanation.

In Figure (b.2), we illustrate how three hidden neurons are merged by unifying their intervals and computing a weighted sum. While in practice we use the Minkowski sum to obtain a tighter bound, we simplify the process here for clarity. For example, the top neuron has bounds of 3 × (2+1+1) = 12 and 7 × (2+1+1) = 28, giving the interval [12, 28]. Since this lies strictly below [31, 59], features 2 and 3 form an “abstract explanation” per Definition 3.

Moreover, figures (c.1) and (c.2) show that feature 3 alone is also an explanation. While it's a *minimal* explanation for the original model, it isn't one for the abstract model, since [4, 28] and [17, 59] overlap, violating Definition 3. This shows that while every abstract explanation is valid for the original model, minimal explanations may not be.

Figures (d.1) and (d.2) show a refinement where only the first two neurons are merged, resulting in two merged neurons: one for the first and second, and one for the third.
The output interval [37, 55] remains strictly above [8, 22], confirming that fixing only feature 3 is a *minimal* explanation for both the *refined and original model*. We emphasize that the most significant improvements in our method appear in much more complex models with tighter bounds and harder certification. The examples shown were deliberately simple to clarify the general methodology, which we agree is important to illustrate intuitively. Once again, we thank the reviewer for these very important remarks and for helping us improve our work!
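The interval-propagation and sufficiency checks walked through above can be sketched in a few lines of code. The weight matrices below are our own reconstruction, chosen so that the quoted numbers (clean outputs 15/46, bounds [15,22] vs. [46,55] when features 2 and 3 are fixed, [6,22] vs. [27,55] for feature 3 alone) come out as in the example; the figure's actual weights are an assumption on our part:

```python
import numpy as np

def ibp(layers, lo, hi):
    """Propagate the input box [lo, hi] through a ReLU network,
    computing interval bounds layer by layer."""
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:               # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

def sufficient(layers, x, keep, dom=(0.0, 1.0)):
    """Does fixing the features flagged in `keep` (and freeing the rest
    to the domain `dom`) certify that the predicted class cannot change?"""
    out, _ = ibp(layers, x, x)                # a point box = exact forward pass
    pred = int(np.argmax(out))
    lo = np.where(keep, x, dom[0])
    hi = np.where(keep, x, dom[1])
    olo, ohi = ibp(layers, lo, hi)
    return all(olo[pred] > ohi[c] for c in range(len(olo)) if c != pred)

# Reconstructed toy network: 3 inputs, 3 hidden ReLU neurons, 2 outputs;
# all biases 0 except the second output's bias of 10 (an assumption).
layers = [(np.array([[2., 2., 1.], [2., 2., 1.], [1., 3., 3.]]), np.zeros(3)),
          (np.array([[2., 1., 1.], [1., 1., 5.]]), np.array([0., 10.]))]
x = np.array([0., 1., 1.])                    # the interpreted input
```

With this reconstruction, `sufficient(layers, x, [False, True, True])` certifies the prediction with features 2–3 fixed, `[False, False, True]` shows feature 3 alone also suffices, and the empty set fails, so {3} is minimal for this toy model. The abstraction step (neuron merging) would coarsen the intervals before the output comparison, exactly as in figures (b.2)–(d.2).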
Summary: This paper introduces a novel abstraction-refinement technique to efficiently compute provably sufficient explanations of neural network predictions, defined as a subset of input features that are sufficient to determine that the prediction remains the same. The method constructs an abstract neural network, which is significantly smaller than the original model, by merging similar neurons, showing that a sufficient explanation for the abstract model is also provably sufficient for the original. Since an explanation that is minimal for the abstract network may not be minimal for the original, the authors introduce an approach that iteratively refines the abstract network by gradually increasing its size until a provably minimal sufficient explanation is found for the refined network that is also provably minimal for the original. This method substantially improves the efficiency of computing sufficient explanations compared to the existing verification-based baseline, while also outperforming (in terms of explanation size and time) heuristic-based methods, which fail to provide sufficient explanations and lack formal guarantees. The results demonstrate that the approach enhances scalability, interpretability, and flexibility, offering a more fine-grained understanding of neural network decisions. Claims And Evidence: The main claim that the proposed method outperforms existing verification-based methods by producing smaller explanations more efficiently is supported by the evaluation setup. Also, the comparison against heuristic-based approaches shows that they do not provide sufficient explanations while lacking theoretical guarantees. However, I still find the following claims unsupported: 1. Claim: “A smaller explanation provides a better interpretation, and for this reason, the minimality of the explanation is also a desired property” in L019-023. Issue: this claim is not well-supported by the authors throughout the paper.
Since this is the basis behind finding provably minimal sufficient explanations (the main contribution of the paper), I think it would be more convincing to show how this improves interpretability (e.g., visually, quantitatively) and compare it to other non-minimal explanations. Methods And Evaluation Criteria: The proposed method for providing provably sufficient explanations efficiently is well-motivated, especially for safety-critical domains, where having reliable explanations is crucial. However, there are quite a few limitations in the evaluation criteria:
- Limited diversity in benchmark datasets and architectures: the evaluation focuses on MNIST, CIFAR-10 and GTSRB, but lacks a broader range of complex real-world datasets that are more relevant for downstream tasks.
- Lack of qualitative comparison of explanations: the paper only evaluates its method against heuristic-based methods quantitatively in Table 2 (e.g., size and efficiency), but does not provide a qualitative comparison. It is unclear whether the generated provably sufficient and minimal explanations are semantically more meaningful or more interpretable than heuristic-based explanations.
- Quantitative evaluation metrics are not enough: the metrics used (e.g., explanation size and time) are not enough from an explainability point of view. Perhaps it would be useful to include quantitative interpretability metrics (e.g., the Grid Pointing Game), or any other similar metric, to see how well provably sufficient explanations can localize class-discriminative features compared to heuristic-based approaches.

Theoretical Claims: Claim: Alg. 2 is claimed to produce a provably sufficient and minimal explanation, as stated in Proposition 5. Issue: Going through the proof, I did not find any detail on whether the final explanation is always the globally minimal one, or whether there can be multiple minimal explanations (different subsets of input features of the same minimal size).
If so, this needs to be explicitly clarified. Experimental Designs Or Analyses: Yes. I checked the experimental design described in Section 5 (Experimental results) and Appendix B, C and D. One issue is the choice of the perturbation radius $\epsilon_p$. There is no justification as to why the values were chosen to be 0.01 and 0.001 (L734-L735). Supplementary Material: Yes. Parts of Appendix A and Appendix B, C and D. Relation To Broader Scientific Literature: This paper contributes to the field of formal explainable artificial intelligence (formal XAI) by leveraging formal verification techniques (abstraction refinement) to provide provably sufficient explanations more efficiently. Unlike heuristic-based methods such as Anchors and SIS, which lack guarantees, and prior verification techniques that are computationally expensive, this method merges neurons to reduce verification complexity while ensuring explanation sufficiency. The paper extends work in neural network verification (e.g., Reluplex, SMT solvers) by refining the abstraction iteratively to find a minimal sufficient explanation. This improves over the verification baseline method (Alg. 1), making formal guarantees less computationally expensive with applications in trustworthy AI and safety-critical domains. By bridging verification, explainability, and scalability, this work contributes to making formal XAI more practical. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: Strengths - This paper introduces a novel abstraction-refinement approach to efficiently generate provably sufficient explanations for neural networks. 
The idea is interesting and reasonable: showing that a certain property of a reduced-size model (e.g., an abstract sufficient explanation) provably holds for the original model is valuable, especially in the verification literature, which is often limited by scalability.
- The paper is well-organized, and the description of the algorithm and evaluation is clear.

Weaknesses
- Certified radius: there is no evaluation of the effect of changing the perturbation radius on the produced explanations. Clearly a larger radius would be more desirable, but there is no elaboration on this part.
- Halting condition: there are no clear evaluation criteria or recommendations for which halting condition (e.g., network size $\rho$) a user should choose to improve both computation time and interpretability. (L090-092, right)

Other Comments Or Suggestions: no further suggestions Questions For Authors: 1. How does the proposed method perform w.r.t. different perturbation radii $\epsilon_p$? I think this is a crucial point to show, also to justify the choice of the current radii. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive feedback and their acknowledgment of the significance of our work.

**Extension to different $\epsilon_p$ perturbations** We thank the reviewer for raising this important point. As correctly noted, larger $\epsilon_p$ perturbations yield stronger sufficiency guarantees but may result in larger explanations. We chose the perturbation levels in this paper based on those commonly used in the annual Neural Network Verification Competition (VNN-COMP [1]), but our method is general and can be applied to any perturbation. We already include an ablation study in *Appendix C*, which we’ll reference more clearly. Per the reviewer’s suggestion, we’ll expand it with additional benchmarks, perturbation levels, and visualizations.

**Qualitative analysis and the importance of minimality** We agree that this is an important point, and will include further qualitative assessments in the final version. As a preliminary example, we share one [illustrative case](https://postimg.cc/Mcg8tp1y). First, image (a) highlights the importance of minimality. For example, the interior of the forward sign (red frame in image (c)) is not included in our explanation, and the edges alone are sufficient for classification. This shows *the interior pixels are irrelevant* and can be excluded without affecting the prediction. In contrast, non-minimal sufficient subsets (e.g., the other examples) include unnecessary features. To better grasp the significance of small or minimal explanations, one can consider the extreme case where the entire image is chosen as the explanation: although it is clearly sufficient, it is neither minimal nor informative. Second, image (b) shows that fixing the provably sufficient subset ensures robustness: any change in the complement $\bar{S}$ within the domain doesn't alter the classification. This contrasts with heuristic subsets, where changes in the complement can flip the prediction - revealing their limitations.
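The contrast drawn here (a certified subset whose complement can never flip the prediction, vs. a heuristic subset whose complement can) can be probed empirically by resampling the complement, although sampling can only falsify sufficiency, never certify it. A minimal sketch with a hypothetical stand-in model, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def flips(predict, x, subset, n=1000, lo=0.0, hi=1.0):
    """Count how often resampling the complement of `subset` changes the
    prediction. Zero flips only *suggests* sufficiency; certifying it
    requires a verifier, as in the paper."""
    base = predict(x)
    count = 0
    for _ in range(n):
        z = rng.uniform(lo, hi, size=x.shape)
        z[subset] = x[subset]          # keep the explanation features fixed
        count += predict(z) != base
    return count

# Hypothetical stand-in model: only the first two features matter.
w = np.array([3.0, -2.0, 0.0, 0.0])
predict = lambda v: int(w @ v > 0)
x = np.array([1.0, 0.2, 0.5, 0.5])

keep = np.array([True, True, False, False])   # covers the relevant features
weak = np.array([False, False, True, True])   # a "heuristic" subset
```

Here `flips(predict, x, keep)` finds no prediction changes, while the heuristic subset `weak` is flipped by roughly a third of the resampled completions, mirroring the failure mode described for heuristic explanations.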
The explanation also matches human intuition, as the highlighted region (the triangular forward sign) alone justifies the predicted class, unlike the much less clear heuristic explanations.

**Additional quantitative analysis and scope** Since our work focuses on *sufficient explanations* rather than the more widely studied *additive attributions*, certain commonly used evaluation metrics - such as infidelity [2], which are specifically designed for additive forms - are not directly applicable. Adapting such metrics for sufficiency-based explanations would require significant modifications and, on its own, constitutes an interesting direction for future work. Therefore, we followed established conventions in the literature [e.g., 3–5] for evaluating sufficiency-based explanations, relying on the three widely accepted metrics in this area: generation time, sufficiency, and conciseness. That said, we appreciate the reviewer’s interesting idea of assessing how well sufficient explanations localize class-discriminative features. We tested this using object-detection ground truth from GTSRB and found that indeed 93.33% of our method’s explanation pixels aligned with annotated regions, outperforming Anchors (60.36%). We thank the reviewer for this idea and will include a detailed experiment on this point in the final version.

Lastly, while our main experiments focus on vision classification models, our approach is domain-agnostic and can extend to other domains. Appendix D includes examples of both language and regression tasks. For language, we use the only VNN-COMP [1] language benchmark, based on a medical NLP dataset, certifying meaning-preserving perturbations via an embedded input space. For regression, we use the real-world safety-critical Taxi-Net benchmark used in Wu et al. (NeurIPS 2023, [3]), showing that fixing input features keeps predictions within a target range. We will expand on these extensions and additional applications in the final draft.
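The localization measurement described above (the fraction of explanation pixels falling inside annotated regions) boils down to a mask intersection; the 4×4 masks below are hypothetical stand-ins for the GTSRB annotations:

```python
import numpy as np

def localization_score(expl_mask, region_mask):
    """Fraction of explanation pixels lying inside the annotated
    class-discriminative region (both boolean HxW masks)."""
    expl = np.asarray(expl_mask, dtype=bool)
    region = np.asarray(region_mask, dtype=bool)
    if expl.sum() == 0:
        return 0.0
    return float((expl & region).sum() / expl.sum())

# Hypothetical 4x4 masks: 3 of the 4 explanation pixels fall inside
# the annotated region, so the score is 0.75.
expl = np.zeros((4, 4), dtype=bool); expl[1:3, 1:3] = True
region = np.ones((4, 4), dtype=bool); region[2, 2] = False
```

A score of 1.0 means every explanation pixel lies inside the annotation; the 93.33% vs. 60.36% comparison above corresponds to averaging such scores over the test images.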
[1] First Three Years of the International Verification of Neural Networks Competition (VNN-COMP) (Brix et al., STTT 2023)
[2] On the (In)Fidelity and Sensitivity of Explanations (Yeh et al., NeurIPS 2019)
[3] VeriX: Towards Verified Explainability of Deep Neural Networks (Wu et al., NeurIPS 2023)
[4] On Guaranteed Optimal Robust Explanations for NLP Models (La Malfa et al., IJCAI 2021)
[5] Abduction-Based Explanations for Machine Learning Models (Ignatiev et al., AAAI 2019)
Summary: The paper seeks to use an abstraction-refinement approach to generate "provably sufficient" explanations for neural networks. == The rebuttal has satisfactorily addressed my major concerns. Claims And Evidence: 1. The paper is motivated by the need for proofs in high-assurance systems. However, the investigations are on relatively simple benchmarks and simple models. 2. The notion of a provably correct AI system is clear. However, provably sufficient explanations need not carry the same sense of high assurance. 3. Explanations are often meant for human users. It is not clear that a provably sufficient explanation will be desired by a human end user or will enhance assurance arguments for complex systems. Methods And Evaluation Criteria: 1. The method is evaluated on relatively simple benchmarks using simple models. 2. There is no evaluation to show these provably sufficient explanations lead to high assurance, which is the central motivation of the paper. Theoretical Claims: 1. The theoretical claims primarily build upon earlier work on abstraction refinement and neural network verification. 2. The connection between explainable AI for high-assurance applications and provably sufficient explanations is tenuous. Experimental Designs Or Analyses: 1. The experiments are on really small models and low-resolution data sets. Supplementary Material: I have gone through the supplementary material. While the application of abstraction refinement to explanations is new, the connection is weak -- it is not clear that provably sufficient explanations lead to high-assurance systems and the paper does not provide evidence to support this. Relation To Broader Scientific Literature: The paper introduces ideas from formal verification such as abstraction refinement to explainable AI. Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable and constructive feedback.

**Improving the motivation behind the work** We agree that the motivation for obtaining explanations with provable guarantees, as well as the importance of the specific guarantees we provide, can be better articulated. We will make sure to better clarify this in the final version. Please see our detailed response on this point below:

**General motivation.** Explanations for black-box models are often sought to *enhance user trust* in the model. For instance, consider a scenario where a medical professional must trust a black-box model to diagnose a disease from MRI scans. An explanation can help them decide whether to rely on the prediction - especially if it highlights medically relevant image regions. However, if the explanation is untrustworthy and lacks guarantees - such as presenting a subset that isn’t truly sufficient - it can misrepresent the model’s reasoning, leading the medical professional to place misplaced trust in it, potentially resulting in harmful outcomes. Such scenarios have sparked increasing interest in the literature in developing explanations that come with provable guarantees [see, e.g., 1–8]. In this context, *explanations that are both provably minimal and sufficient* have gained significant attention as a highly sought-after form of provable explanation [see, e.g., 1-8, among others]. The key idea is to identify small subsets of features that, even when all other features take arbitrary values, still preserve the model’s prediction. This ensures the subset’s certified sufficiency, helping users understand the reasoning behind the prediction and disregard irrelevant features.
**Scope.** While we agree that the guarantees our method provides inherently face scalability challenges - due to their reliance on verification, as is the case with all approaches tackling this task - we emphasize the significant relative improvements our approach achieves over previous methods. As several reviewers have also positively noted, our contributions are particularly notable in terms of reduced computation time, the generation of smaller explanations, and the ability to handle larger models. As the field of neural network verification continues to progress rapidly [9-11], our method - offering a significant, orthogonal enhancement for generating explanations using these tools - will similarly evolve.

Moreover, to further show the generality of our results, we present experiments on two additional benchmarks in Appendix D: (1) the only language task from the annual neural network verification competition (VNN-COMP, [11]), based on a medical safety NLP dataset, which uses meaning-preserving perturbations in an embedded input space; and (2) a regression task, where our guarantees ensure output stability, evaluated on the *real-world, safety-critical Taxi-Net benchmark* used in Wu et al. (NeurIPS 2023, [2]).

Finally, in response to the reviewer’s comment and reviewer pUMo’s feedback, we will include additional qualitative examples in the final version to better motivate our work. As part of our response to reviewer pUMo, we have added a small illustrative visualization, which we also plan to incorporate - along with other visualizations - into the final version. We thank the reviewer for highlighting this important point and will significantly enhance our discussion of these aspects in the final version.
[1] Delivering Trustworthy AI Through Formal XAI (Marques-Silva et al., AAAI 2022)
[2] VeriX: Towards Verified Explainability of Deep Neural Networks (Wu et al., NeurIPS 2023)
[3] Abduction-Based Explanations for Machine Learning Models (Ignatiev et al., AAAI 2019)
[4] On Guaranteed Optimal Robust Explanations for NLP Models (La Malfa et al., IJCAI 2021)
[5] Model Interpretability Through the Lens of Computational Complexity (Barceló et al., NeurIPS 2020)
[6] Computing Abductive Explanations for Boosted Trees (Audemard et al., AISTATS 2023)
[7] Explanations for Monotonic Classifiers (Marques-Silva et al., ICML 2020)
[8] Foundations of Symbolic Languages for Model Interpretability (Arenas et al., NeurIPS 2021)
[9] Beta-CROWN: Efficient Bound Propagation with Per-Neuron Split Constraints for Neural Network Robustness Verification (Wang et al., NeurIPS 2021)
[10] Scalable Neural Network Verification with Branch-and-Bound Inferred Cutting Planes (Zhou et al., NeurIPS 2024)
[11] First Three Years of the International Verification of Neural Networks Competition (VNN-COMP) (Brix et al., STTT 2023)
You Always Recognize Me (YARM): Robust Texture Synthesis Against Multi-View Corruption
Accept (poster)
Summary: This paper addresses real-world unknown image degradation that affects deep learning model performance. The authors propose a novel data-centric approach that optimizes the textures of 3D objects to enhance their robustness against corruption. The methodology is based on 3D voxel grid representations reconstructed from multi-view images. A classifier is used as a surrogate model to optimize textures, ensuring robustness against corruption during imaging. ## update after rebuttal After reviewing the authors' response and the other reviewers' comments, I maintain my score. I appreciate the authors' efforts. No higher score because the contribution doesn't reach the bar of ICML. Claims And Evidence: moderate, please see details in weaknesses. Methods And Evaluation Criteria: yes, they make sense in the proposed setting. Theoretical Claims: limited theoretical analysis Experimental Designs Or Analyses: I think the experiments are not solid or extensive enough. Supplementary Material: n/a Relation To Broader Scientific Literature: see problems Essential References Not Discussed: see problems Other Strengths And Weaknesses: Strengths: 1. Novel data-centric perspective: instead of modifying models or preprocessing images, object texture optimization is a new angle for improving robustness. 2. Transferability: while transferability is a core concern of the article, the paper shows how the output of a single model can be generalized to other architectures. 3. Application of 3D reconstruction: using voxel-based NeRF for texture optimization aligns well with modern 3D deep learning approaches. Weaknesses: 1. Unpolished writing: lines 162-163 and 199-200 are repetitive. Formula 3 is a cross-entropy loss but is labeled as MSE. Figure captions are 2. Lack of real-world experiments: the experiments are performed on synthetic datasets rather than real-world scenarios.
But whether the method is applicable to real-world data is a key question mentioned in the intro. 3. Scalability: 3D reconstruction makes this method difficult to scale up to larger images. Other Comments Or Suggestions: see problems Questions For Authors: 1. The article discussed the existence of a universal robust texture. However, the experimental results did not support the importance and necessity of this feature. Why do we need this additional computation if the single robust texture is already transferable? It is mentioned that hopefully this facilitates the classification of unseen data, but this is not verified. 2. Why do some textures generated from a stronger backbone like res152 lead to a significant drop in performance for simpler architectures like res18 and res34? 3. How does the object become robust as the training progresses? Figure 4 is a bit confusing in displaying it. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We greatly appreciate the reviewers’ thoughtful feedback and insightful suggestions. We have carefully reviewed all the comments and provide our detailed responses below.

**A1.Regarding writing issues:** Thank you for pointing out these issues! We will correct these errors in the revised version of the paper.

**A2.Regarding real-world experiments:** Thank you for your question! We fully understand the importance of real-world experiments. However, compared to previous methods (such as Unadv), we have already conducted extensive evaluations of our proposed method on a large dataset containing 40 categories and 400 object instances (whereas Unadv evaluated only 5 objects). We believe this is sufficient to demonstrate the generalization capability of our method. Nevertheless, to further address your concern, we have additionally conducted an experiment on a real-world scene. Specifically, we selected a real scene used in [1]. Since our method requires separating the foreground object, we manually segmented the foreground object from the multi-view images and performed 3D reconstruction and texture optimization accordingly. The experimental results based on ResNet-18 are presented in the table below. The visualization results can be found here: https://p.sda1.dev/23/b6df0ef9d63f80fba42e36ce11994fed/real_scene.png

|Method|none|1|2|3|4|5|random|m.CE|R.mCE|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Clean|0.7155|0.7016|0.6373|0.5424|0.4390|0.3382|0.4959|-|-|
|Ours|1.0000|0.9957|0.9786|0.8932|0.7009|0.5342|0.8417|0.2395|0.3720|

**A3.Regarding scalability:** Thank you for your question! We believe that large-size images can be downscaled to fit our 3D reconstruction pipeline, and therefore, scalability to larger images is not a major concern in our method. In contrast, images with excessively low resolution may significantly degrade the quality of 3D reconstruction, which could in turn hinder the effectiveness of the optimized textures.
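For context, one standard way to summarize such severity tables is the (relative) mean corruption error of the ImageNet-C protocol (Hendrycks & Dietterich); the paper's exact m.CE/R.mCE definitions live in its supplementary and may differ, so the sketch below only illustrates the common convention:

```python
import numpy as np

def mce(acc_method, acc_baseline):
    """Mean corruption error: the method's summed error over severities,
    normalized by the baseline classifier's summed error."""
    e_m = 1.0 - np.asarray(acc_method)
    e_b = 1.0 - np.asarray(acc_baseline)
    return float(e_m.sum() / e_b.sum())

def relative_mce(acc_method, clean_method, acc_baseline, clean_baseline):
    """Relative mCE: degradation from each model's own clean accuracy,
    normalized by the baseline's degradation."""
    d_m = (clean_method - np.asarray(acc_method)).sum()
    d_b = (clean_baseline - np.asarray(acc_baseline)).sum()
    return float(d_m / d_b)

# Severity-1..5 accuracies from the real-scene table above.
ours = [0.9957, 0.9786, 0.8932, 0.7009, 0.5342]
clean_model = [0.7016, 0.6373, 0.5424, 0.4390, 0.3382]  # "Clean" baseline row
```

On these numbers this convention yields mCE ≈ 0.38 and relative mCE ≈ 0.98, which differ from the reported 0.2395/0.3720, so the paper evidently normalizes differently (e.g., averaging per corruption type); treat the sketch as illustrative only.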
**A4.Regarding universal robust textures:** Thank you for your question! Indeed, a single object-specific robust texture can only be transferred across different classifiers but cannot be directly transferred to other objects with different shapes, even within the same category. We proposed the concept of universal robust textures because object-specific texture synthesis is an inefficient process. Therefore, our goal is to generate a universal texture for each category, independent of the specific appearance and shape of individual instances. We have validated the feasibility of this idea through preliminary experiments. It is worth noting that the performance of universal robust textures reported in Tables 1 and 2 was evaluated on objects that were not used during training. This demonstrates that these textures indeed provide protection for previously unseen objects within the same category. However, the primary focus of this paper is on object-specific robust textures. As such, we did not devote significant effort to further improve the performance of category-level robust textures, which may explain why their current performance appears unsatisfactory. Nevertheless, we believe that the direction of category-level robust textures is worthy of further exploration. We have also discussed possible ways to improve their performance (see ***A6 to Reviewer rM6D***).

**A5.Regarding selection of proxy model:** Please refer to ***A5 to Reviewer rM6D***.

**A6.Regarding the question about Figure 4:** Thank you for your question! We apologize for the lack of clarity in the presentation of Figure 4. Specifically, the first row of Figure 4 shows the original appearance of the objects; the second row shows the appearance after applying object-specific robust texture optimization for each object; and the third row shows the appearance when applying the universal robust texture, generated for the "airplane" category, to objects of different shapes within the same category.
As illustrated in Figure 1, we selected 8 objects from the "airplane" category as the training set. We reconstructed voxel grid representations for each of these 8 objects based on their multi-view images. Then, we initialized a random perturbation $\delta$ and, during each iteration of the optimization process, randomly selected one voxel grid to combine with $\delta$ and rendered the corresponding view images. In this way, the resulting category-level robust texture can be directly applied to unseen voxel grid representations of other objects belonging to the "airplane" category, enabling the rendering of corruption-robust views. We hope this explanation helps clarify your concerns regarding the universal robust texture and the content of Figure 4.

---

Rebuttal Comment 1.1:

Comment: Given that my major concerns haven't been fully addressed, I insist on my initial score, recommending a weak reject to this paper.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer mDnf, We sincerely appreciate your response. If possible, we would be grateful if you could kindly specify which concerns you believe remain insufficiently addressed and, if feasible, offer constructive feedback that could help us further refine and strengthen our work.
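The category-level optimization loop described in this exchange (sample one reconstructed object per iteration, add the shared perturbation $\delta$, render a corrupted view, and minimize the classification loss) can be caricatured with a linear surrogate in place of the renderer and classifier. Everything here is a hypothetical stand-in, not the paper's voxel-grid pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (all hypothetical): four training "objects" of one
# category, each a flattened 16-dim rendering, and a fixed linear
# surrogate classifier over 3 classes.
objects = rng.normal(size=(4, 16))
W = rng.normal(size=(3, 16)) * 0.5
target = 0                       # the category's ground-truth label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def corrupt(x):
    # additive noise stands in for imaging corruptions (blur, weather, ...)
    return x + rng.normal(scale=0.5, size=x.shape)

delta = np.zeros(16)             # the shared "universal" perturbation
for _ in range(1000):
    x = objects[rng.integers(len(objects))]    # pick one training object
    view = corrupt(x + delta)                  # "render" a corrupted view
    p = softmax(W @ view)
    grad = W.T @ (p - np.eye(3)[target])       # dCE/dview == dCE/ddelta here
    delta -= 0.5 * grad                        # push toward the target class
```

With the shared `delta` applied, corrupted views of the training objects are classified as the target category far more reliably than without it; the linear toy obviously does not demonstrate the transfer across 3D geometries claimed for the real voxel-grid textures.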
Summary: This paper proposes a data-centric approach to enhance the performance of deep learning models in the presence of image degradation by utilizing multi-view 3D reconstruction and optimizing for a robust texture. The authors show that a generalized robust texture exists that can transfer across objects of the same category but with different geometries. From the experiments, they conclude that choosing a surrogate model with weaker performance can better accommodate a broader range of transfer scenarios. They show relevant experiments demonstrating the performance of all the subparts. They use the IM3D dataset to show the effectiveness of their method. The dataset contains 40 different classes, where each class contains 10 different objects and there are around 100 images for each object. ## Update after rebuttal The authors have addressed my concerns. I keep my original rating 'weak accept'. Claims And Evidence: 1. In lines 139-140, it is stated that some previous methods have explored robustness in 3D objects, but the stated differences (they were not tested on large-scale datasets and other models, ...) are not convincing. The concepts look similar. Can you please specify the exact differences between your method and those methods? Is the difference only that you utilize 3D reconstruction in your optimization? 2. Is it possible that, in Eq. (3), the optimized $\delta$ becomes zero, and the classifier always predicts the correct class? Methods And Evaluation Criteria: Did you check the effectiveness of your method on real-world data, other than the IM3D dataset you have used? I feel there should be some results on real data as well. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Are the experiments fair? Can you please specify the training settings of the methods compared in Tables 1 and 2? Have all methods been retrained/finetuned on the same dataset? Supplementary Material: Yes.
Details on the 15 types of degradations, evaluation metrics, and more qualitative results.

Relation To Broader Scientific Literature: From the current version of the paper, I feel the contribution to the scientific literature is good. However, please specify the exact differences from the previous literature, as raised in my earlier question.

Essential References Not Discussed: NA

Other Strengths And Weaknesses:
Strengths:
1. The paper is well-written and interesting to read.
2. Proper experiments are shown.
Weaknesses:
1. Figures 1, 2, and 4 are not referenced in the text and stand apart from the paper; readers may overlook them if they are not cited.
2. In Figure 4, what is shown by the universal texture? This is not clear.
3. Line 209, column 2: why have you replaced the SoftPlus activation function with ReLU?
4. Which paper does the IM3D dataset come from? Is it cited in the paper?
5. It would be better to include more visual results for more objects, and for every compared method (if you can show what they have learned).

Other Comments Or Suggestions: NA

Questions For Authors: Questions are given against each of the sections above. Please try to address my concerns to help me make a more favorable decision.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We appreciate the reviewers' thoughtful feedback and provide our detailed responses below.

**A1. Regarding the specific differences from prior methods:** Thanks for your question! We acknowledge that our work is inspired by Unadv; however, there are clear differences that distinguish our approach from Unadv:
1. Optimization space: Unadv directly optimizes the pixel values of texture images in the 2D image space, while our method performs optimization in the voxel grid space. We believe the high dimensionality of the voxel grid allows for better optimization outcomes.
2. Visual appearance preservation: Directly optimizing texture images in pixel space often results in highly discrete pixel values, which can manifest as noise-like visual artifacts on the object surface. In contrast, the textures generated by our method exhibit a certain degree of local continuity. This advantage is particularly evident when applied to objects without initial textures (see ***A3***).
3. Efficiency: Our method is approximately 100 times faster than Unadv. Unadv's optimization is based on 3D mesh representations combined with a differentiable rendering framework, which typically requires 6 to 7 hours to optimize a single object. In comparison, our approach takes only 4 to 5 minutes to complete the optimization.

**A2. Regarding Eq. (3):** Thanks for your question! We acknowledge that such a situation is theoretically possible; however, the probability is very low. Specifically, for this to happen, the gradient of the $\delta$ term obtained through backpropagation during training would need to remain 0 throughout the entire process. Additionally, it would require that the multi-view images of the original object exhibit strong robustness against all types of corruptions. Furthermore, commonly used optimizers incorporate momentum terms, which help prevent the optimization from getting stuck in such a local optimum.
Therefore, we believe this situation is unlikely to occur in practice.

**A3. Regarding real-world data and other datasets:** The reason we only used the IM3D dataset in our experiments is that our method requires datasets with specific characteristics: (1) multi-view images along with camera poses to enable 3D reconstruction; and (2) category labels aligned with ImageNet classes to support the classification task. Therefore, datasets commonly used in NeRF-related tasks are not directly applicable to our pipeline. Nevertheless, to demonstrate the generalization ability of our method, we selected 4 objects from the ModelNet40 dataset (originally designed for point cloud classification) and rendered multi-view images of them using Blender for evaluation. Experimental results based on VGG16 are shown below. Visualization results can be found here: https://p.sda1.dev/23/b660fbea4c3a6d471effde89e4b4110f/modelnet.png. For real-scene results, please see ***A2 to Reviewer mDnf***.

|Method|none|1|2|3|4|5|random|m.CE|R.mCE|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Clean|0.7229|0.6296|0.6014|0.5693|0.5228|0.4596|0.5381|-|-|
|URIE|0.3708|0.3235|0.3301|0.3067|0.2844|0.2326|0.3125|1.7991|1.9015|
|VQSA|0.3854|0.4032|0.4081|0.3949|0.3838|0.3757|0.3851|1.4768|1.5398|
|DCP|0.2819|0.2652|0.2138|0.1677|0.1429|0.1158|0.1443|2.3419|2.2930|
|Unadv|0.1794|0.1620|0.1566|0.1283|0.1139|0.1031|0.1123|3.4739|4.6710|
|Ours|0.9646|0.9334|0.8875|0.8599|0.7971|0.6982|0.8347|0.3463|0.3799|

**A4. Regarding the fairness of the experiments:** Thanks for your question! For the baseline methods, we first evaluated their publicly released weights (most of which were pretrained on ImageNet) on the IM3D dataset. However, due to distribution differences, most of these methods exhibited suboptimal performance on IM3D. Therefore, we additionally fine-tuned URIE, VQSA, and DCP on the IM3D dataset to ensure a fair comparison.
After fine-tuning, VQSA showed significant performance improvement, URIE exhibited a slight improvement, while DCP showed no noticeable improvement.

**A5. Regarding Figure 4:** Thanks for your question! In Figure 4, we illustrate the appearance of the universal robust texture when applied to multiple instances with different shapes within the same category (airplane). We will add further clarification to the caption of Figure 4 in the revised version and ensure that Figures 1, 2, and 4 are explicitly referenced in the main text.

**A6. Regarding the use of ReLU:** This design choice was proposed by [1], where the authors claimed that replacing SoftPlus with ReLU better preserves the discontinuities present in real-world signals.

**A7. Regarding the IM3D dataset:** The IM3D dataset is sourced from [2]. We apologize for the oversight of not citing it in the original submission. We will add the appropriate citation in the revised version.

**A8. Regarding the visualization results:** Please refer to ***A3***.

[1] ReLU Fields: The Little Non-linearity That Could.
[2] Towards Viewpoint-Invariant Visual Recognition via Adversarial Training.

---

Rebuttal Comment 1.1:

Comment: Authors have addressed my concerns. I keep my original rating 'weak accept'.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer eS8T,

Thank you very much for acknowledging both the significance of our work and the efforts we have made in the rebuttal. If our response has adequately addressed your concerns, we would greatly appreciate it if you would consider raising your score. Your support would be very meaningful to us.

Best regards,
Authors of Paper 11436
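The m.CE and R.mCE columns in the result tables of this rebuttal follow the corruption-error convention popularized by the ImageNet-C benchmark: per-corruption error is summed over severities and normalized by a reference model's error, with lower values being better. The sketch below illustrates that convention only; the error rates are hypothetical, and treating the 'Clean' model as the normalizing baseline (consistent with its "-" entries in the tables) is an assumption.

```python
# Hypothetical per-severity top-1 error rates (1 - accuracy) for ONE
# corruption type; a real evaluation averages over all 15 corruptions.
baseline_err = {1: 0.37, 2: 0.40, 3: 0.43, 4: 0.48, 5: 0.54}  # 'Clean' model
method_err = {1: 0.07, 2: 0.11, 3: 0.14, 4: 0.20, 5: 0.30}
baseline_clean_err, method_clean_err = 0.28, 0.04  # errors on uncorrupted views

def corruption_error(err, base):
    """CE for one corruption: error summed over severities 1-5, divided
    by the baseline model's summed error (< 1 means the method beats
    the baseline)."""
    return sum(err[s] for s in range(1, 6)) / sum(base[s] for s in range(1, 6))

def relative_corruption_error(err, err_clean, base, base_clean):
    """Relative CE: degradation beyond each model's own clean error,
    normalized the same way."""
    return (sum(err[s] - err_clean for s in range(1, 6))
            / sum(base[s] - base_clean for s in range(1, 6)))

ce = corruption_error(method_err, baseline_err)
rce = relative_corruption_error(method_err, method_clean_err,
                                baseline_err, baseline_clean_err)
# m.CE / R.mCE are then the means of these quantities over all corruption types.
```

Under this convention, a single CE below 1 (as for "Ours" in the tables) indicates the evaluated model degrades less under corruption than the reference.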
Summary: The paper focuses on the reconstruction of a 3D object from a set of low-quality 2D images. The method applies 15 different corruption techniques to the training images, much like data augmentation, so that classification becomes robust to the introduced noise. Furthermore, this research evaluates the classification performance of the reconstructed objects by transferring knowledge from the surrogate model to a larger architecture.

## Update after rebuttal

I decided to keep my original score (weak accept).

Claims And Evidence: The claims mentioned in the paper are supported by various studies that have explored similar approaches. The paper discusses the use of NeRF methods to justify the adoption of a voxel representation for generating the corresponding reconstruction. Additionally, the method introduces universal adversarial perturbations (UAP) as a guiding mechanism for their universal robust texture, developed from a set of objects. The authors reference literature that examines how small perturbations can significantly affect classification outcomes. However, the discussion on transfer learning in relation to the surrogate models is presented but not well supported by prior evidence. While it is well known that transfer learning can enhance the performance of a larger model by training a distilled version, the cited literature focuses on alternative approaches (segmentation models) that are not directly related to this specific method.

Methods And Evaluation Criteria:
- The experiments for this method were conducted downstream using the IM3D dataset, which has been widely utilized in various applications, including mesh generation and image classification. However, the method is not tested on other datasets, such as ShapeNet3D, Objaverse, or ModelNet40, to demonstrate its generalization to different objects.
- The paper does not provide visual results for other methods, only the respective metrics.
In 3D reconstruction, it is particularly important to present at least one example from other approaches to allow for a proper comparison of the results.

Theoretical Claims: All the theoretical claims are correct, including those in the appendix about the formulas used for each corruption.

Experimental Designs Or Analyses: The paper includes an ablation study evaluating the performance of the textures obtained with different hyperparameter combinations and voxel grid resolutions. The experimental design is valid for providing insights into the impact of these specific hyperparameters, but a more comprehensive ablation study could have strengthened the paper's conclusions about optimal parameter settings across different scenarios. In addition, it would be important to mention how many objects were run in these experiments and to provide figures where the differences can be visualized.

Supplementary Material: This paper has a complete supplementary material section. The authors specify all the formulas and procedures used for each of the 15 corruption techniques. In addition, all the validation metrics are well explained.

Relation To Broader Scientific Literature: The paper leverages recent advances in neural radiance fields (NeRF) and their accelerated variants using voxel grid representations. The authors adapt these techniques for texture optimization rather than novel view synthesis. This paper is related to NeRF-based editing research, showing how voxel grid-based 3D representations can be used for specific downstream objectives.

Essential References Not Discussed:
**Transfer Learning**
The paper by Gatys et al. (2016), "Image Style Transfer Using Convolutional Neural Networks," is an influential approach for transferring artistic styles between images. This work showed how CNNs previously optimized for object recognition could be repurposed to separate and recombine content and style representations of images.
By defining content and style losses based on feature activations and feature correlations in different CNN layers, their method transfers the stylistic elements of artworks like Van Gogh's "The Starry Night," Munch's "The Scream," and Picasso's "Seated Nude" onto images.

**Image Restoration**
Several key image restoration papers are missing from the references. Zhang et al. (2017) showed how CNNs could effectively denoise images using residual learning. Dong et al. (2015) was the first major work applying deep learning to super-resolution. Isola et al. (2017) created pix2pix, which handles many image-to-image translation tasks including restoration. Wang et al. (2018) improved GAN-based restoration with ESRGAN, while Ulyanov et al. (2018) found that network architectures inherently contain useful priors for restoration. Including these works would give better context for comparing the paper's approach with restoration-based methods for handling corrupted images.

Other Strengths And Weaknesses:
Strengths:
1. The key innovation is making object appearances inherently robust to corruption, rather than trying to repair corrupted images or complicate models. This directly benefits safety-critical systems like autonomous vehicles that must reliably identify objects in varying conditions.
2. This approach addresses real-world problems where vision systems must function in harsh conditions. Unlike NeRF and Gaussian Splatting, which assume perfect images, this method maintains performance despite weather effects, sensor noise, and motion blur.

Weaknesses:
1. **Missing ablation studies**: There is limited exploration of how different components of the pipeline contribute to the final performance, making it difficult to understand which aspects are most critical.
2. The performance seems highly dependent on the choice of proxy model during optimization, but the paper does not provide clear guidelines for selecting optimal proxy models for different scenarios.
There are recent CNNs that could be studied, such as ConvNeXt.

Other Comments Or Suggestions: N/A

Questions For Authors:
1. The results in Figure 3 suggest that smaller models tend to be better surrogates for transferable texture optimization. Could you share insights into why this might be the case?
2. Table 1 shows a performance gap between universal robust textures and object-specific textures. Do you see pathways to improving universal texture performance, or are there fundamental limitations to what universal textures can achieve compared to object-specific ones?
3. Could your approach be extended to jointly optimize both textures and classification models in an end-to-end manner?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We greatly appreciate the reviewers' thoughtful feedback and insightful suggestions. We have carefully reviewed all the comments and provide our detailed responses below.

**A1. Regarding experiments on the ModelNet40 dataset:** Thanks for your question! Please refer to ***A3 to Reviewer eS8T***.

**A2. Regarding visualization results:** Thanks for your question! Since VQSA and DCP are finetuning-based methods, we provided visualization results of URIE, Unadv, and our method. Please refer to ***A3 to Reviewer eS8T*** for results on ModelNet40 and ***A2 to Reviewer mDnf*** for the real scene.

**A3. Regarding the ablation study:** Thanks for your question! In fact, our method does not involve many detachable modules that allow for extensive ablation studies. Therefore, in the ablation study section of the paper, we primarily focused on analyzing the impact of voxel grid resolution and the boundary of texture perturbation on the final results. To further illustrate the superiority of voxel grids as a 3D representation in our framework, we additionally compared the performance of robust textures when using NeRF's MLP as the 3D representation. In this experiment, we selected one object from each category for evaluation. The results on ResNet-18 are shown below:

|Method|none|1|2|3|4|5|random|m.CE|R.mCE|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Clean|0.3611|0.3289|0.2793|0.1855|0.1325|0.1156|0.1602|-|-|
|MLP|0.4527|0.4078|0.3629|0.2774|0.2168|0.1825|0.2484|0.6873|0.7540|
|Voxel(Ours)|0.8714|0.8598|0.7994|0.7182|0.6530|0.5796|0.6901|0.2617|0.2984|

**A4. Regarding references:** Thank you for your question! However, we would like to clarify that our work is not closely related to transfer learning or style transfer. The reason the robust textures can be transferred across different classifiers is similar to the phenomenon observed in adversarial examples: different classifiers tend to learn feature spaces with certain similarities.
Additionally, image restoration works have a different focus compared to ours. Traditional image restoration aims to recover as much visual detail of the original image as possible, whereas our objective is to enhance the performance of downstream models when encountering corrupted images. Image restoration is merely one possible approach that can be incorporated into our framework, but it is not the primary focus of our work.

**A5. Regarding the selection of the proxy model:** Thanks for your question! As we explained in Section 3.5, employing a proxy model with relatively weaker generalization ability can lead to better transferability to other model architectures. To further clarify this design choice, we consider our task as the reverse of an adversarial attack task. In prior work on adversarial attacks, it has been observed that when the proxy model itself has stronger generalization capability, the generated adversarial examples tend to exhibit higher transferability. Intuitively, this can be likened to a teacher-student scenario: if a question can challenge a very capable student, it is more likely to challenge a less capable one as well. In contrast, our objective is to enable the classifier to correctly recognize the input, which can be viewed as the opposite of an adversarial attack. In this analogy, it is akin to designing questions that students can answer correctly: if a question can be easily answered by a less capable student, it is naturally more likely to be correctly answered by a more capable one as well.

**A6. Regarding universal robust textures:** Thanks for your question! We proposed universal robust textures because synthesizing robust textures for each individual instance is an inefficient process. However, since the main focus of this paper is on object-specific robust texture synthesis, we only conducted preliminary experiments to validate the feasibility of universal robust textures.
We believe that the performance of universal robust textures can be further improved in the following ways:
1. A larger, category-organized multi-view 3D reconstruction dataset. In our current IM3D dataset, each category contains only 10 different objects, and we used only 8 of them to train the robust textures. This clearly limits their generalization capability.
2. A more efficient 3D representation. In our current setting, we adopt voxel grids as the 3D representation. However, training universal robust textures on a larger set of objects inevitably requires increasing the capacity of the voxel grids, which would significantly raise the GPU cost. A NeRF-based representation may be better suited for scaling up category-level robust texture training.

**A7. Regarding end-to-end training:** Thanks for your question! Our proposed data-centric approach for enhancing classifier performance is based on the assumption that, in industrial scenarios, deployed classifiers cannot be easily modified or retrained. However, we think it is possible to jointly optimize both the textures and the classifier to obtain better performance.

---

Rebuttal Comment 1.1:

Comment: Many thanks for the rebuttal. I decided to keep my acceptance score.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer rM6D,

Thank you very much for acknowledging both the practical relevance of our work and the efforts we have made in the rebuttal. If our response has adequately addressed your concerns, we would greatly appreciate it if you would consider raising your score. Your support would be very meaningful to us.

Best regards,
Authors of Paper 11436.
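The severity columns 1-5 in the result tables correspond to increasingly strong corruptions, as described for the 15 degradation types in the paper's appendix. The sketch below illustrates the general pattern for one such corruption (Gaussian noise); the per-severity noise scales and the toy image are assumptions, not the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed per-severity noise scales; the paper's appendix defines the
# exact parameters for each of its 15 corruption types.
SIGMA = {1: 0.04, 2: 0.06, 3: 0.08, 4: 0.09, 5: 0.10}

def gaussian_noise(img, severity):
    """Apply Gaussian noise corruption at a given severity (1-5).
    `img` is a float array with values in [0, 1]."""
    noisy = img + rng.normal(scale=SIGMA[severity], size=img.shape)
    return np.clip(noisy, 0.0, 1.0)

img = rng.uniform(size=(8, 8, 3))   # toy stand-in for a rendered view
corrupted = [gaussian_noise(img, s) for s in range(1, 6)]
```

Evaluating a classifier on such corrupted renderings at each severity, plus the clean images ("none") and a random severity per image ("random"), yields exactly the columns reported in the tables above.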