Title: Erwin: A Tree-based Hierarchical Transformer for Large-scale Physical Systems
Paper Decision: Accept (poster)

Review 1:
Summary: **Review updates after rebuttal**
The authors responded to many of my comments well. As my initial score is already on the positive side and I find it fair, I will keep my score unchanged.
**End Review updates after rebuttal**
The paper introduces Erwin, a hierarchical transformer that combines ball tree partitioning with attention mechanisms to efficiently process large-scale physical systems defined on irregular grids. Erwin achieves linear-time attention by organizing computation hierarchically, enabling it to capture both fine-grained local details and global features. The method is validated across multiple domains (cosmology, molecular dynamics, and turbulent fluid dynamics), demonstrating state-of-the-art performance in both accuracy and computational efficiency.
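The linear-cost mechanism can be illustrated with a minimal NumPy sketch (an illustration only, not the authors' implementation): once a ball tree orders the points so that contiguous chunks are spatially close, attention is computed independently within each fixed-size ball.

```python
import numpy as np

def ball_attention(x, ball_size):
    """Single-head self-attention restricted to contiguous balls of
    `ball_size` points: cost O(N * ball_size) instead of the O(N^2)
    of full attention. Assumes the points were already ordered by a
    ball tree so that contiguous chunks are spatially close."""
    n, d = x.shape
    assert n % ball_size == 0, "pad to a multiple of the ball size first"
    balls = x.reshape(n // ball_size, ball_size, d)
    # softmax attention with Q = K = V = x, computed per ball
    scores = np.einsum("bid,bjd->bij", balls, balls) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = np.einsum("bij,bjd->bid", weights, balls)
    return out.reshape(n, d)

x = np.random.default_rng(0).normal(size=(16, 4))
y = ball_attention(x, ball_size=4)
```

On its own such a restriction would be purely local; Erwin recovers global context by coarsening and refining along the tree hierarchy.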
However, the paper has several areas that need improvement (see details in the sections below):
- No comprehensive reports for baselines;
- Incomplete dataset descriptions;
- Lack of literature review for multiscale GNNs for Irregular Grids;
- Rushed presentation;
Overall, I give weak accept at current phase; upon resolving the above bullet points, I will improve my score.
Claims And Evidence: Claim: Erwin achieves linear-time attention and outperforms baseline methods in accuracy and computational efficiency.
Evidence:
Figure 5~7: Shows linear scaling of runtime, lower MSE, and faster runtime compared to baselines
Table 3, 4: lower RMSE and MSE on two datasets.
Claim: Erwin captures long-range interactions and multi-scale phenomena effectively.
Evidence:
Figure 6~8
Methods And Evaluation Criteria: Methods:
- Ball tree partitioning to organize hierarchy.
- UNet-like structure, in the spirit of PointNet++, Swin Transformer, etc.
Evaluation Criteria:
Accuracy: Measured by MSE (cosmology, airflow pressure), NLL (molecular dynamics), and RMSE (fluid dynamics).
Efficiency: Measured by runtime and memory usage.
Scalability: Demonstrated through linear scaling of runtime with the number of nodes.
Theoretical Claims: NA; no concern: The method builds on mature techniques (ball trees, attention mechanisms) and provides solid incremental improvements.
Experimental Designs Or Analyses: The paper includes experiments across multiple domains (cosmology, molecular dynamics, fluid dynamics, airflow pressure), demonstrating the generalizability of Erwin.
The results show consistent improvements in both accuracy and computational efficiency.
Weaknesses:
Not all baselines are applied to all datasets. For example, PointTransformer v3 is not included in the fluid dynamics experiments, and EAGLE is not included in the cosmology experiments.
Suggestion: Include a comprehensive table listing all models and baselines for each dataset, with explanations for missing entries (e.g., due to incompatibility or heavy modifications required to the source).
Supplementary Material: The supplementary material includes additional experiments (e.g., airflow pressure modeling) and visualizations (e.g., rollout trajectories for fluid dynamics).
Weaknesses:
Only two datasets (cosmology, molecular dynamics) are introduced in detail. The fluid dynamics and airflow pressure datasets are mentioned briefly.
Suggestion: Provide a coherent and complete introduction to all datasets, organized in a structured layout. Ensure all components (e.g., dataset statistics, preprocessing steps) are included.
Relation To Broader Scientific Literature: The paper builds on hierarchical attention methods and tree-based algorithms from computational physics. It also draws inspiration from vision transformers (e.g., SwinTransformer) and point cloud transformers (e.g., PointTransformer v3).
The authors cover regular grids, particles, and transformers, but miss some related literature on multiscale structures for irregular meshes; see the section below for suggested references to add.
Essential References Not Discussed: Multiscale learning on irregular mesh
- Multi-scale rotation-equivariant graph neural networks for unsteady Eulerian fluid dynamics
- Efficient Learning of Mesh-Based Physical Simulation with Bi-Stride Multi-Scale Graph Neural Network
- Learning Distributions of Complex Fluid Simulations with Diffusion Graph Networks
Other Strengths And Weaknesses: Weaknesses:
- Incomplete sections: Dataset introduction, experimental designs, baseline comparisons, and supplementary material need refinement.
- Rushed presentation: The paper feels slightly rushed, with some sections (e.g., dataset descriptions) lacking detail. Many tables in the appendix lack accompanying text.
- Suggestion: Revise the paper to address these issues, ensuring completeness and coherence.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We thank the reviewer for their encouraging feedback and hope to address their concerns with the rebuttal.
## Experimental Designs Or Analyses
> Not all baselines are applied to all datasets. For example, PointTransformer v3 is not included in the fluid dynamics experiments, and EAGLE is not included in the cosmology experiments. Suggestion: Include a comprehensive table listing all models and baselines for each dataset, with explanations for missing entries (e.g., due to incompatibility or heavy modifications required to the source).
In our experiments we did not aim to evaluate every baseline on every dataset. For each benchmark, except for MD (as it is not an established benchmark), we report results for baselines that have been previously evaluated on the benchmark. That being said, we do compare extensively against PointTransformer v3 whenever possible, as it is the state-of-the-art model from computer vision and is very close to Erwin in spirit: it also suggests a way to linearize attention. Unfortunately, as the reviewer pointed out, we did not evaluate it on the fluid dynamics dataset, as PTv3 is only implemented for 3D, while the benchmark is 2D. Doing so would require heavy modifications from us, particularly in serialization (https://github.com/Pointcept/PointTransformerV3/blob/main/serialization/z_order.py), which is the cornerstone of the approach. If there are specific experiments the reviewer would like us to run for the rebuttal, we are happy to do so.
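For context, the serialization in question is a Morton (z-order) code, which interleaves the bits of three quantized coordinates. A minimal sketch (an illustration, not PTv3's actual implementation) makes clear why a 2D dataset would need a different interleaving scheme:

```python
def z_order_3d(x, y, z, bits=10):
    """Morton (z-order) code: interleave the bits of three quantized
    integer coordinates to map 3D positions onto a 1D sequence. A 2D
    dataset needs a different, 2-coordinate interleaving, which is why
    porting the serialization is non-trivial."""
    code = 0
    for i in range(bits):
        code |= (((x >> i) & 1) << (3 * i)) \
              | (((y >> i) & 1) << (3 * i + 1)) \
              | (((z >> i) & 1) << (3 * i + 2))
    return code
```

For example, `z_order_3d(0, 0, 1)` places the z-bit in position 2, giving code 4.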
**Action taken**: In the final version, we will further explain our logic in baseline selection in the experimental section for each benchmark, highlighting the comparison against PTv3. We also conducted additional experiments on ShapeNet-Car, which now include a comparison to Transolver, a state-of-the-art model for PDE modelling.
## Supplementary Material
> Only two datasets (cosmology, molecular dynamics) are introduced in detail. The fluid dynamics and airflow pressure datasets are mentioned briefly. Suggestion: Provide a coherent and complete introduction to all datasets, organized in a structured layout. Ensure all components (e.g., dataset statistics, preprocessing steps) are included.
**Action taken**: We appreciate the reviewer's suggestion and will use the increased space in the final version to describe the datasets in detail, particularly statistics and preprocessing steps. For ease of comprehension, for each dataset, we will compile a table describing
- dataset statistics
- train/validation/test split
- input type (mesh/point cloud), feature type
- target type
- preprocessing (normalization, connectivity type, etc.)
If there are any other descriptors that reviewers would like to be included, we would be happy to do so.
## Essential References Not Discussed
**Action taken**: We appreciate the reviewer's suggestions and will use the increased page count to expand the related works section and include multiscale frameworks.
## Conclusion
We are thankful to the reviewer for the valuable suggestions, which we believe will improve the presentation of our framework. If there are any questions, we would be happy to answer them.
---
Rebuttal Comment 1.1:
Comment: Dear authors and reviewers,
I have read all the reviews and author comments. I appreciate the authors' willingness to take action to a) explain why some baselines were selected and not others, b) improve reproducibility, and c) add missing related works.
However, as a) is my major concern (also shared by other reviewers), and the authors currently did not provide a thorough explanation (they promise to do so in the final version; perhaps a better idea is preparing some intermediate markdown tables or figures in an anonymous repo to explain), I believe my current rating is rather fair.
I will keep an eye out for any further updates.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's response to our rebuttal and would like to address concern a) explicitly with a comprehensive table, as the reviewer suggested:
| Model Type | Model | Cosmology¹ | MD | EAGLE¹ | ShapeNet² |
|------------|-------|-----------|-----|-------|----------|
| **Message-Passing Based** | MPNN | ✓ | ✓ | ✗ | ✗ |
| | SEGNN | ✓ | ✗ | ✗ | ✗ |
| | NequIP | ✓ | ✗ | ✗ | ✗ |
| | MGN | ✗ | ✗ | ✓ | ✗ |
| | GAT | ✗ | ✗ | ✓ | ✗ |
| **Transformer-Based** | PointTransformer v3 | ✓ | ✓ | - | ✓ |
| | EAGLE | ✗ | ✗ | ✓ | ✗ |
| | OctFormer | ✗ | ✓ | ✗ | ✗ |
| | **Erwin (Ours)** | ✓ | ✓ | ✓ | ✓ |
| **Other Hierarchical** | PointNet++ | ✗ | ✓ | ✗ | ✗ |
| **Neural Operators** | U-Net | - | - | - | ✓ |
| | FNO | - | - | - | ✓ |
| | GINO | ✗ | ✗ | ✗ | ✓ |
| | UPT | ✗ | ✗ | ✓ | ✓ |
| | DRN | - | - | ✓ | - |
| | Transolver | ✗ | ✗ | ✗ | ✓ |
**Legend:**
- ✓: Model evaluated on this dataset
- ✗: Model not evaluated on this dataset
- -: Not applicable
**Notes:**
- ¹ An established benchmark with established baselines.
- ² The benchmark is meant to compare neural operators against each other, hence only SOTA NOs are selected.
**Rationale for Baseline Selection:**
**Cosmology:**
- We focus on comparing message-passing models (MPNN, SEGNN, NequIP) with sub-quadratic transformer models (PointTransformer v3, Erwin).
- These baselines were selected to evaluate performance on large-scale systems with long-range interactions.
**Molecular Dynamics:**
- Comparison aimed at exploring the Pareto frontier (performance vs. runtime).
- We selected hierarchical models that are scalable to large-scale systems, which is our primary focus.
**Fluid Dynamics (EAGLE):**
- We used the established benchmark with its native baselines (MGN, GAT, DRN, EAGLE).
- PointTransformer v3 was not included as it's designed for 3D data, while this dataset is 2D.
**ShapeNet-Car:**
- Compared against existing neural operators and physics-based models (U-Net, FNO, GINO, UPT).
- Added PointTransformer v3 as it's SOTA for large-scale point clouds.
- Added Transolver as a SOTA model for large-scale physical simulations.
We are grateful for the reviewer's feedback and we hope that this response clarifies the baseline selection.

---
Review 2:
Summary: The authors propose a tree-based hierarchical transformer model to capture large-scale and small-scale interactions in extremely large datasets. The motivation is that traditional transformers are not scalable and efficient, so the authors propose a hierarchical model allowing coarse graining to capture large-scale interactions that scales ~linearly with the data size. They test the proposed model on three empirical datasets.
Claims And Evidence: The claims around computational efficiency, linear scalability, and performance are evaluated empirically and backed by different experiments.
Methods And Evaluation Criteria: The methods and evaluation criteria, including the benchmarks, are suitable for the problem and demonstrate the superiority of the proposed solution.
Theoretical Claims: N/A. No theoretical claims are made in this paper.
Experimental Designs Or Analyses: The cosmology experiment has some logical gaps. This paper takes the location of galaxies (halos) and wants to predict their velocity. Location and velocity, even though correlated, are two independent random variables. Their correlation is due to the initial condition, therefore the model is not generalizable to another initial condition or universe with different properties. I am not sure what the motivation is behind setting up the problem in this way.
My intuition is that writing the loss function as the MSE of log Y for the turbulent fluid dynamics problem is more appropriate. The fractional error is important, not the actual value. The actual value has a unit and two quantities with different units (strictly speaking) should not be added together. However, fractional error does not have a unit and the fractional error of two quantities can be added.
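The reviewer's suggestion corresponds to a loss of roughly this form (a hypothetical sketch; `log_mse` is not a function from the paper), which is dimensionless and penalizes fractional rather than absolute error, assuming strictly positive values:

```python
import numpy as np

def log_mse(pred, target, eps=1e-12):
    """MSE computed on log-values: a constant fractional error incurs
    the same penalty regardless of magnitude or unit, so errors on
    quantities with different units can be summed meaningfully.
    Assumes strictly positive values; eps guards against log(0)."""
    return float(np.mean((np.log(pred + eps) - np.log(target + eps)) ** 2))

p = np.array([1.0, 10.0, 100.0])
t = np.array([2.0, 20.0, 200.0])
# a uniform factor-of-2 error gives the same penalty, (ln 2)^2, at every scale
```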
Supplementary Material: I reviewed supplementary materials.
Relation To Broader Scientific Literature: The proposed solution is appropriate for the target problems. The authors consider a set of scientific problems in which the traditional transformer-based models are not scalable and limited. Therefore, they propose a hierarchical solution that is suitable for the class of problems studied in this work and can be readily applied to those problems.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The main target audience is people who use AI for science applications, and I feel ICML is an appropriate venue for this kind of publication.
Other Comments Or Suggestions: - "halos form local clusters through gravity while maintaining long-range correlations originated from interactions in the early universe before cosmic expansion" -> the universe has been expanding since the big bang. Also, the long range correlations are vanishing (unless a non-standard cosmology is assumed).
Questions For Authors: - "The input is a point cloud $X \in \mathbb{R}^{5000\times3}$" -> I assume 3 refers to the 3 spatial dimensions. Can the authors elaborate on at what time the positions and velocity vectors are measured?
- "The total dataset includes 1184 simulations with 990 time steps per simulation. The dataset is split with 80% for training and 10% each for validation and testing." -> Can you describe how the training, validation, and test are selected?
- I assume the proposed solution has application mainly to problems where data are in d-dimensional spatial domain. Are there any other applications that the author can think of where the unit balls are not defined in the spatial domain?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback. We shall address their questions below. When discussing the questions regarding the cosmology dataset, we will refer to the original dataset paper by Balla et al.
## Questions
> "The input is a point cloud $X \in \mathbb{R}^{5000 \times 3}$" -> I assume 3 refers to the 3 spatial dimensions. Can the authors elaborate on at what time the positions and velocity vectors are measured?
Yes, that is correct - 3 stands for the spatial dimensions. Regarding the measurement time, according to Appendix A.1 (Details of simulation) of Balla et al., it is the last step of a Quijote simulation (*present time*): "Each simulation models the evolution of the large-scale structure of the Universe by following the dynamics of 512³ cold dark matter particles in a cubic comoving volume of side ∼ 1 Gigaparsec from redshift z = 127 to z = 0 (present time)." The simulations are defined in Villaescusa-Navarro et al.
> "The total dataset includes 1184 simulations with 990 time steps per simulation. The dataset is split with 80% for training and 10% each for validation and testing." -> Can you describe how the training, validation, and test are selected?
Since we use the benchmark, we use the predefined validation and test partitions as defined in Balla et al. (See https://zenodo.org/records/11479419 and https://github.com/smsharma/eqnn-jax/blob/main/benchmarks/galaxies/dataset.py for details). Training samples are taken randomly from the total number of 11,200.
> I assume the proposed solution has application mainly to problems where data are in d-dimensional spatial domain. Are there any other applications that the author can think of where the unit balls are not defined in the spatial domain?
1) One application that comes to mind is the relativistic case where particles live in spacetime, and one would need to adjust the algorithm to account for the pseudo-Euclidean structure.
2) Related to spacetime, if one axis represents time, unit balls will not make much sense. In this scenario, however, one can simply build a ball tree based on the spatial coordinates and omit time during the construction. After all, a ball tree can be seen just as a means of organizing computation to linearize attention.
## Other comments
> "halos form local clusters through gravity while maintaining long-range correlations originated from interactions in the early universe before cosmic expansion" -> the universe has been expanding since the big bang. Also, the long range correlations are vanishing (unless a non-standard cosmology is assumed).
Regarding the universe expansion, we thank the reviewer for their careful reading and for identifying this textual error. Indeed, what we meant to say was that the structures were initially in causal contact. We will remove "before cosmic expansion" in the revised manuscript. Regarding long-range correlations, although vanishing, they are still present according to the benchmark description (see Fig. 1 and "Introduction, Information across scales" in Balla et al.).
## Experimental Designs Or Analyses
We appreciate the reviewer's critical assessment of the benchmarks used in our evaluation. We note that both cosmology and EAGLE datasets are benchmarks with established tasks/preprocessing/partitioning/losses. That is the reason we predict velocity from positions - it is the original task of the cosmology benchmark (see Balla et al., Section 3.3, Node-level prediction). For the same reason, we stick to the losses as proposed in the EAGLE benchmark. Overall, we used the original settings defined in the benchmarks for consistent comparison with other baselines validated on each benchmark.
## Conclusion
We thank the reviewer for the valuable feedback and are hopeful that the reviewer will continue to support the paper. If there are any questions, we would be happy to answer them.
## References
- Balla, J., Mishra-Sharma, S., Cuesta-Lázaro, C., Jaakkola, T., & Smidt, T.E. (2024). A Cosmic-Scale Benchmark for Symmetry-Preserving Data Processing. Learning on Graphs Conference, 2024.
- Villaescusa-Navarro, F., Hahn, C., Massara, E., Banerjee, A., Delgado, A.M., Ramanah, D.K., Charnock, T., Giusarma, E., Li, Y., Allys, E., Brochard, A., Chiang, C., He, S., Pisani, A., Obuljen, A., Feng, Y., Castorina, E., Contardo, G., Kreisch, C.D., Nicola, A., Scoccimarro, R., Verde, L., Viel, M., Ho, S., Mallat, S., Wandelt, B.D., & Spergel, D.N. (2019). The Quijote Simulations. The Astrophysical Journal Supplement Series, 250.

---
Review 3:
Summary: The paper introduces Erwin, a hierarchical transformer model designed for large-scale physical systems. The model employs a ball tree partitioning strategy to group nodes, enabling linear-time attention by processing local neighborhoods in parallel. This hierarchical approach allows progressive coarsening and refinement, capturing both fine-grained local details and global interactions. The paper demonstrates Erwin’s effectiveness across multiple domains, including cosmology, molecular dynamics, and particle fluid dynamics.
Claims And Evidence: Most claims are supported.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes.
While the paper provides training information for Erwin, it lacks precise training details of compared methods.
Supplementary Material: Yes. I have read the Appendix.
Relation To Broader Scientific Literature: Erwin builds on transformer-based approaches, but introduces hierarchical ball tree partitioning to improve efficiency.
Essential References Not Discussed: The paper includes most of the relevant prior work and provides thorough comparisons. However, since one of Erwin’s key contributions is hierarchical modeling, it might be beneficial to discuss and compare with existing hierarchical models in physical systems, such as BSMS-GNN [1] and HCMT [2]. These models incorporate multi-scale representations in physics simulations and could provide additional context for understanding Erwin’s approach.
[1] Efficient Learning of Mesh-Based Physical Simulation with BSMS-GNN.
[2] Learning Flexible Body Collision Dynamics with Hierarchical Contact Mesh Transformer.
Other Strengths And Weaknesses: **Strengths**
- The paper presents a creative combination of ball tree partitioning and transformer-based attention, leading to an efficient and expressive hierarchical model for large-scale physical systems.
- By leveraging ball tree partitioning, the model reduces computation cost, which is important for large-scale physics systems.
**Weakness**
- The paper discusses OctFormer, another tree-based hierarchical attention model, but does not include a direct comparison. Since both methods aim to improve efficiency through hierarchical partitioning, an empirical evaluation would better highlight Erwin’s advantages.
- While the paper compares Erwin to UPT and EAGLE, which use two-level hierarchical pooling, it does not consider more flexible multi-level approaches like BSMS-GNN[1] and HCMT[2], which incorporate multi-scale message passing.
- Erwin performs worse than some baselines when trained on small datasets. This suggests that the model may rely more heavily on large-scale data to learn effective representations, whereas certain baselines generalize better with limited training samples.
- Table 2 shows that most improvements come from MPNN in the encoder, while Erwin Blocks (ball attention) contribute less. This raises the question of whether hierarchical attention is essential, or if similar gains could be achieved with a strong graph-based encoder and standard attention.
- The paper directly adopts baseline results from EAGLE for turbulent flow experiments, but it is unclear whether Erwin’s model depth is aligned with the baselines. Additionally, the appendix lacks details on Erwin’s architecture for this experiment, making it difficult to assess the fairness of the comparison.
- Erwin’s MPNN and the compared MPNN baseline rely on k-nearest neighbors, while a more general and physically meaningful approach is ball neighborhoods, as used in MGN paper.
- Some important methodological and experimental details are missing (see questions), which significantly affect my recommendation.
Other Comments Or Suggestions: Where is the anonymous link?
Questions For Authors: 1. In Figure 9, the ball tree partitions appear to vary significantly across different datasets. Did you analyze how the choice of tree depth or splitting strategy affects performance? For example, in molecular dynamics, does the partitioning align well with physical structures like polymer chains, or are there cases where the tree creates unnatural groupings?
2. The partitions in ShapeNet and EAGLE seem mostly aligned with coordinate axes. Given this, how does ball tree partitioning compare to KD-tree partitioning for these datasets? Would KD-trees provide similar efficiency, and what advantages does the ball tree offer in these cases?
3. Erwin pads the ball tree to form a perfect binary tree, but the paper does not specify how virtual nodes are positioned or initialized. Are they assigned based on nearby real nodes, or are they purely placeholders? Could learnable pooling or adaptive padding reduce computational overhead?
4. Table 1 ablates ball size for cosmology dataset, but how was ball size chosen for each datasets?
5. How many layers does the MPNN in the embedding stage have?
6. The paper introduces cross-ball connections by rotating the point cloud before constructing a second ball tree. What specific rotation matrix is used? Is it a fixed transformation, randomly sampled, or learned? How does the choice of rotation affect performance?
7. The results suggest that MPNN performance plateaus as training data increases, but my experience, especially MGN models, is that they usually continue improving. Can you give me a suitable explanation?
8. Did you ensure that all baseline models have comparable depth and capacity with Erwin?
A clear response to these questions would help clarify my key concerns and could positively influence my evaluation.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We thank the reviewer for the actionable feedback. We address their concerns and questions in the following.
## Questions For Authors
### Q1
That is an insightful question. In general, ball trees are geometry-adaptive but do not guarantee that balls will cover points in a perfectly natural way, and certain irregularities can indeed appear in practice. To address this, we introduce the distance-based attention bias (Eq. 10), which penalizes the strength of interaction between distant tokens and provides a soft mechanism for creating more “natural” partitions. In our opinion, such partitioning irregularities are a byproduct of *any* attempt to impose a rigid structure onto irregular data, such as converting a point cloud into a sequence (e.g. space-filling curves in PTv3 produce discontinuities).
Regarding tree depth, we clarify that it is fixed, as the tree is always constructed until leaf size 2 and is not a variable design parameter. For the splitting strategy, we employ Algorithm 1, selected primarily for its speed (also used by scikit-learn).
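The construction rule can be sketched as follows (a simplified illustration assuming the scikit-learn-style split along the dimension of greatest spread; Algorithm 1 of the paper is not reproduced verbatim):

```python
import numpy as np

def build_tree(idx, points, leaf_size=2):
    """Recursive ball tree construction sketch: split the index set
    along the dimension of greatest spread at the median, until leaves
    hold at most `leaf_size` points. Internal nodes are (left, right)
    tuples; leaves are index arrays."""
    if len(idx) <= leaf_size:
        return idx
    spread = points[idx].max(axis=0) - points[idx].min(axis=0)
    dim = int(np.argmax(spread))
    order = idx[np.argsort(points[idx, dim], kind="stable")]
    mid = len(order) // 2
    return (build_tree(order[:mid], points, leaf_size),
            build_tree(order[mid:], points, leaf_size))

pts = np.random.default_rng(0).uniform(size=(8, 3))
tree = build_tree(np.arange(8), pts)
```

With a power-of-two point count, every leaf holds exactly two points, which is the fixed depth-to-leaf-size-2 construction described above.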
### Q2
For both of these datasets, ball tree partitioning aligns closely with KD-tree partitioning, and a KD-tree would be a viable alternative. However, one advantage of ball trees is that they offer a rotation-invariant partitioning, which might be critical for certain physical applications.
### Q3
We are thankful to the reviewer for bringing our attention to this and apologize for overlooking it in the original submission. We will include the details in the final version. Tree construction creates leaf nodes with either 1 point (incomplete) or 2 points (complete). For single-point leaves, we duplicate the point as padding, creating virtual nodes that are copies of original points. This simplifies the implementation, allowing pooling to work without edge cases and with minimal computational overhead. As for learnable pooling, this is one of the directions for future work, as it would make Erwin applicable to gigantic-scale industrial applications, and we aim to explore it.
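The padding rule described above amounts to the following (a minimal sketch, assuming leaves of size 1 or 2):

```python
def pad_leaves(leaves):
    """Pad incomplete leaves: a leaf with a single point gets that point
    duplicated as a virtual node, so every leaf holds exactly two
    entries and the tree becomes a perfect binary tree."""
    return [leaf if len(leaf) == 2 else leaf + leaf for leaf in leaves]

print(pad_leaves([[0, 1], [2]]))  # -> [[0, 1], [2, 2]]
```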
### Q4
Ball size is chosen according to computational constraints, i.e. available GPU memory. In our experiments, it was a hyperparameter which we tuned for the best performance, choosing between 128 and 256. Generally, it is a trade-off as 256 provides better coverage but 128 allows for using larger batch size.
### Q5
We specify the number of message-passing steps in Tables 6 and 7 for ShapeNet ($1$ step) and MD ($2$ steps). We will also add the detail for EAGLE and cosmology, where the number of steps was $3$.
### Q6
For 2D cases, we specify degrees of rotation, and for 3D, we specify Euler angles. In all experiments we used a fixed value of 45 degrees, which we empirically found to generate sufficiently different partitions. The difference between partitionings influences performance significantly, as it is the mechanism that allows points within different original balls to interact. We avoided using small angles, but found no difference between sufficiently large angles.
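A hypothetical 2D sketch of the mechanism (simplified: ball assignment is mimicked here by sorting along one axis and chunking, rather than building a real ball tree):

```python
import numpy as np

def ball_ids(points, ball_size):
    # crude 1D surrogate for ball assignment: sort along the x-axis and
    # chunk into consecutive groups of `ball_size` points
    ids = np.empty(len(points), dtype=int)
    order = np.argsort(points[:, 0], kind="stable")
    ids[order] = np.arange(len(points)) // ball_size
    return ids

def cross_ball_ids(points, angle_deg=45.0, ball_size=4):
    """Second partition built on the rotated cloud: points near an
    original ball boundary land in a different ball of the rotated
    tree, which is what lets information cross ball boundaries."""
    t = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return ball_ids(points, ball_size), ball_ids(points @ rot.T, ball_size)

pts = np.random.default_rng(1).uniform(size=(16, 2))
orig, rotated = cross_ball_ids(pts)
```

Alternating attention between the two assignments gives each point two different neighborhoods.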
### Q7
The reviewer refers to the cosmology experiment, where the performance of MP models plateaus with an increasing number of training points. This is consistent with the results of the original benchmark paper (Balla et al., Fig. 3). In the task, each data point is a point cloud with 5000 points, so even for small training sizes there is enough signal for message-passing models to capture local interactions. We note, however, that there are also long-range dependencies in the data (Balla et al., Fig. 1), which MP models are not able to capture: regardless of how much data they are given, they cannot recover that part of the signal due to their short-range receptive field, unlike Erwin or PointTransformer v3, which both have large receptive fields.
### Q8
We explicitly controlled, in *all* tasks, that both Erwin and the state-of-the-art PTv3 have the same depth and a similar number of parameters, as these models are comparable in their architecture.
- cosmology: we use the same hyperparameters for each model as in the original paper (Balla et al.), as those are the best performing models according to the hyperparameter tuning from the authors (see Appendix C, Balla et al.).
- EAGLE: we take results from the original paper where authors do hyperparameter sweep and thus report best-performing configurations. Parameter-wise, Erwin has 18M params and the closest competitor, EAGLE, has 10M params.
- ShapeNet: all models have a similar parameter count, between 4M and 5M parameters.
## Conclusion
Unfortunately, we could only use 5k characters for the rebuttal and do not have space to address the weaknesses one-by-one despite having prepared answers. For additional experiments that address W1 and W4, we kindly refer to another rebuttal: https://openreview.net/forum?id=MrphqqwnKv&noteId=BfX9JC9Pxg. We would be happy to engage in further discussion if the reviewer would like us to clarify further.
---
Rebuttal Comment 1.1:
Comment: I have read all the reviews and author comments. Thank you for your response and for providing the missing details, which address most of my concerns. I would appreciate it if these additional discussions and details could be incorporated into the revised manuscript, as they would greatly help in understanding the method. If you commit to doing so, I will consider increasing my rating.
Regarding Q6, there is still a small issue: Could you clarify what "sufficiently different" means in this context? How do the partitions differ, and is it possible to quantify this difference?
---
Reply to Comment 1.1.1:
Comment: We promise to include the additional details/results and would like to emphasize our commitment to doing it.
*Regarding Q6*, this is perhaps easier to explain with an example:
https://github.com/erwin-transformer/erwin/blob/main/misc/ball_tree_with_rotations.png
Here, you can see that the partitions are different in the sense that the **set difference** between two partitions that cover the same original point is high. Take a look, for example, at the yellow partitions. They cover different sets of points in the original and rotated configurations, yet there is still a substantial intersection. This allows points that were originally in the upper-right corner to interact with points that were in the upper-left corner (what we call information leakage).
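A small sketch of how the overlap between the two partitions could be quantified (illustrative only; `partition_overlap` is not code from the paper):

```python
def partition_overlap(balls_a, balls_b):
    """For each ball of partition A, the largest fraction of its points
    contained in any single ball of partition B. A value of 1.0 means
    the partitions coincide on that ball; lower values mean more
    cross-ball mixing (information leakage between the two trees)."""
    sets_b = [set(b) for b in balls_b]
    return [max(len(set(a) & sb) / len(a) for sb in sets_b) for a in balls_a]

print(partition_overlap([[0, 1], [2, 3]], [[1, 2], [3, 0]]))  # -> [0.5, 0.5]
```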
Coming back to your question, we believe that we can robustly quantify the difference by measuring the set difference at the second-highest level of the tree (when there are only two partitions).

---
Review 4:
Summary: The paper introduces a transformer-based approach for processing point cloud data. It utilizes ball-tree-based structures, enabling the attention-based framework to operate in linear time rather than quadratic time. This also allows the model to capture interactions at both fine and coarse scales.
The proposed method is evaluated on three problems: capturing long-range interactions (cosmology); computational efficiency (molecular dynamics); model expressivity on large-scale multi-scale phenomena (turbulent fluid dynamics), demonstrating strong performance in both computational efficiency and prediction accuracy.
**Update after rebuttal**
I thank the authors for their responses. I have decided to keep my original score.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense.
Theoretical Claims: n/a
Experimental Designs Or Analyses: The experimental designs make sense.
Supplementary Material: The appendix.
Relation To Broader Scientific Literature: The paper discussed the existing works of sub-quadratic attention. Compared to those works, the paper uses ball-tree structures to capture the hierarchical structures of point cloud data.
Essential References Not Discussed: no.
Other Strengths And Weaknesses: The proposed ball-tree-based attention seems novel to me.
Besides the algorithm, the paper also discusses the implementation details of the ball-tree attention, such as, the representation of the tree which makes the tensor rearranging on the nodes more efficient.
Other Comments Or Suggestions: n/a
Questions For Authors: - The paper mentions "The code is available at anonymized link". But there seems to be no link attached at the "anonymized link".
- During the simulation process, the positions of particles will change. Does the proposed method re-build the tree after each step?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their encouraging feedback. We address their questions below.
## Questions For Authors
> The paper mentions "The code is available at anonymized link". But there seems to be no link attached at the "anonymized link".
We apologize for the inconvenience. An anonymized version of the repo can be found here: https://github.com/erwin-transformer/erwin.
> During the simulation process, the positions of particles will change. Does the proposed method re-build the tree after each step?
Rebuilding the tree depends entirely on the properties of the modelled system, particularly on how quickly the arrangement of particles changes:
- The mesh is *static*: the tree only needs to be computed once during preprocessing.
- The mesh changes *slowly*: the tree should be rebuilt every N steps.
- The mesh is dynamic and changes *quickly*: ideally, the tree should be rebuilt at every step of the simulation.
When implementing the code, we assumed that we would need to build the tree as fast as possible. Hence, we implemented ball tree construction in C++ & OpenMP; according to our benchmarks, construction consistently takes less than 10% of the forward-pass time. For details, see https://github.com/erwin-transformer/erwin?tab=readme-ov-file#benchmark.
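The three rebuild regimes above can be sketched as a simple policy (toy Python; `build_ball_tree` here is a hypothetical stand-in for the actual C++/OpenMP constructor, not the real Erwin API):

```python
def build_ball_tree(positions):
    # Stand-in for the real C++/OpenMP construction: here we only record
    # which snapshot of positions the tree was built from.
    return {"snapshot": list(positions)}

def maybe_rebuild_tree(tree, positions, step, rebuild_every):
    """Apply the rebuild policy from the three regimes above.

    rebuild_every=None -> static mesh: keep the tree from preprocessing.
    rebuild_every=N    -> rebuild the tree every N simulation steps.
    """
    if rebuild_every is not None and step % rebuild_every == 0:
        return build_ball_tree(positions)
    return tree

# Static mesh: the tree built during preprocessing is reused at every step.
tree = build_ball_tree([0.0, 1.0])
for step in range(1, 5):
    tree = maybe_rebuild_tree(tree, [0.0, 1.0 + step], step, rebuild_every=None)
print(tree["snapshot"])  # → [0.0, 1.0]
```

Because construction takes under 10% of a forward pass, even `rebuild_every=1` (fully dynamic meshes) adds only modest overhead.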
## Additional experiments
We also conducted additional experiments to further highlight the strength of our method.
### OctFormer on the molecular dynamics task
We will include OctFormer results for the molecular dynamics task in the final manuscript. We have evaluated OctFormer while maintaining similar parameter count, depth, and partitioning size (128) as Erwin for each model size.
| Model     | small (4M) | medium (19M) | large (43M) |
|-----------|------------|--------------|-------------|
| OctFormer | 0.725      | 0.715        | 0.712       |
| Erwin     | **0.712**  | **0.693**    | **0.691**   |
### Ablation study on ShapeNet-Car dataset
We conducted an ablation study to gather additional information concerning the role of the MPNN embedding and Ball Attention in Erwin's performance. Here, Erwin has a small size (1.5M parameters) and a fixed width (we do not use any upsampling/downsampling) to achieve the best performance.
| Ablation          | Test MSE  |
|-------------------|-----------|
| w/o (base)        | 30.39     |
| + MPNN            | 30.49     |
| + RPE             | 30.02     |
| + Rotating trees  | **16.58** |
| Transolver (3.9M) | 19.88     |
Notice that in this experiment, including the MPNN slightly degrades performance, while rotating trees allow Erwin to achieve state-of-the-art performance. The reason is that ball attention is able to process the large-scale data (3.6K points) at full fidelity, and rotating trees allow information to propagate from one ball to another, effectively making the receptive field encompass the full car.
## Conclusion
We are grateful for the reviewer's feedback. Should you have any questions or concerns, we are always ready to discuss them further. | null | null | null | null | null | null |
Overestimation in LLM Evaluation: A Controlled Large-Scale Study on Data Contamination’s Impact on Machine Translation | Accept (poster) | Summary: This paper presents a controlled study on the impact of data contamination on machine translation. They decontaminate their train-test splits and train two decoder-only models of different sizes (with 1 and 8 billion parameters). Then, they add test data into the pretraining data and train a contaminated model branching out from the baseline checkpoint. Finally, they compare the relative performance of the contaminated and the baseline model on contaminated and non-contaminated data. Their main findings are as follows:
(i) contaminating with both source and target sides leads to substantial performance inflation on those test sets;
(ii) partial contamination (source-only or target-only) leads to smaller/less consistent inflation;
(iii) the temporal persistence of contamination matters;
(iv) contamination has more impact on larger models;
(v) contamination requires sufficient language representation to have a measurable effect.
Claims And Evidence: Their main claims/findings are summarized above (i-v); they are all well-supported through experiments. (iv) seems to hold in practice but this is only tested for two model sizes (this is acknowledged in the paper).
Methods And Evaluation Criteria: The method seems appropriate. Using branching to save computation sounds reasonable. It would be nice to explore other types of contamination (e.g., paraphrases), but this is not really needed. However, I can’t understand the reason for drawing conclusions based on BLEU scores; its issues are well known, see for instance [1, 2]. The authors include MetricX results in the Appendix, so my question is:
(1) Is there a reason to report BLEU in the main paper and MetricX results only in the Appendix?
[1] Results of WMT22 Metrics Shared Task: Stop Using BLEU – Neural Metrics Are Better and More Robust (Freitag et al., WMT 2022)
[2] Results of WMT23 Metrics Shared Task: Metrics Might Be Guilty but References Are Not Innocent (Freitag et al., WMT 2023)
Theoretical Claims: No proofs are presented, but I think none are needed (the paper focuses on empirical findings).
Experimental Designs Or Analyses: The experimental design is sound. In particular the authors searched and found that ~10% of test set examples were already contaminated (Section 3.1). This shows the importance of their decontamination step (the first step in Fig. 1), which is missing in previous work according to Table 1.
Supplementary Material: I skimmed through the Appendix but did not carefully check all the information (e.g., individual plots for each language pair).
Relation To Broader Scientific Literature: Table 1 provides a good summary, but I think this is the first work doing this analysis for machine translation.
Essential References Not Discussed: NA.
Other Strengths And Weaknesses: In addition to what I pointed out above, the paper is exceptionally well-written and is easy to follow; the analysis setup is sound, and most of their choices are well-justified. The main weakness is the reliance on BLEU as the main evaluation metric to draw conclusions. I’d like to see some discussion justifying this choice.
Other Comments Or Suggestions: Minor comments:
- Typo in L208 (missing white space after “language pairs:”)
- It would be valuable to expand the discussion on the broader implications of these findings a bit, e.g., how could existing evaluation protocols/benchmarks be adapted to mitigate this issue?
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive review. We will extend the discussion section to include a discussion on the broader implications of our findings.
On the choice of BLEU as a metric: We acknowledge that a string-based metric (like BLEU) has limitations. For this reason, we already accompany all of our evaluations with an additional learned metric (i.e., MetricX) which has been widely adopted by the field. All drawn conclusions are consistent across both reported metrics. Our choice to surface BLEU in the main paper is based on its wider adoption within the ML community. | Summary: This paper investigates the impact of data contamination on machine translation. In particular, the paper tests factors including source contamination, target contamination and temporal distribution.
Claims And Evidence: The claims are supported by abundant experiments from multiple data source.
Methods And Evaluation Criteria: No significant flaws in method and evaluation.
Theoretical Claims: Not applicable for this paper.
Experimental Designs Or Analyses: No significant problems in experiments or analyses.
Supplementary Material: No significant problems in supplementary material.
Relation To Broader Scientific Literature: This paper extends the study of data contamination to the field of machine translation.
Essential References Not Discussed: No missing reference found.
Other Strengths And Weaknesses: Strength: Abundant experiments to support the claims.
Weakness: Although it conducts a thorough investigation of the impact of data contamination in machine translation, there is no suggestion on how to resolve this issue. Significance could be further improved by, e.g., methods to detect data contamination in the pretraining data, or methods to detect contaminated pretrained models.
Other Comments Or Suggestions: See the previous comment.
Questions For Authors: 1. Is there an efficient way to detect data contamination in the pretrained dataset, i.e., overlap between pretrained and test data?
2. Is it possible to determine whether there is data contamination, if you only have access to the pretrained model and test dataset, but no access to the pretrained data? In other words, is there a clear separation between the performance of a model pretrained with non-contaminated data, and that of a model pretrained with contaminated data?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks a lot for your review of our paper.
First, we respectfully disagree that the contribution of our paper is insignificant. The key assumption of data contamination is motivated by the hypothesis that consuming test data leads to an overestimation of a model’s performance. While intuitive, this hypothesis has not been rigorously tested at this scale, and our work aims to fill that gap in the literature. Further, we suspect many researchers care about testing this hypothesis but are unable to run the experiments that we did due to the computational cost. Therefore, we feel our contributions are significant and our findings are very relevant to the research community.
1. There are methods for detecting contamination by looking at the overlap between the training and test sets, including https://arxiv.org/abs/2411.03923. Our train-test decontamination stage is inspired by the findings of this paper. Our implementation is very efficient because it is highly parallelizable and uses a Bloom filter combined with an exact search to match more than 100k 8-grams from the test data against trillions of training tokens in less than a day.
2. There are papers focused on finding contamination with access only to model activations. However, this task has been shown to be harder than initially expected (https://arxiv.org/abs/2402.07841). Our experiments and findings regarding contamination in Section 5.4 can partially highlight why this task can be harder than initially hypothesized. We show that models don’t naively memorize text in the pretraining data, suggesting that model behaviour might not be as easily separable between what is in the pre-training data and what is not. Recent work also shows that models can verbatim generate text that is explicitly not in their pre-training data (https://arxiv.org/abs/2503.17514). This suggests that models learn more general representations of language, and that understanding contamination and its impact on evals requires understanding knowledge formation and access in large language models.
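The n-gram overlap detection described in point 1 can be sketched as follows (a toy illustration: a plain Python set stands in for the Bloom filter, whereas the real pipeline uses a Bloom filter as a fast prefilter followed by an exact search over trillions of tokens):

```python
def ngrams(tokens, n=8):
    """All n-grams of a token sequence, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def find_contaminated(train_tokens, test_examples, n=8):
    """Flag test examples sharing any n-gram with the training data.

    The set stands in for the Bloom filter used in the actual pipeline.
    """
    train_grams = ngrams(train_tokens, n)
    return [i for i, ex in enumerate(test_examples)
            if ngrams(ex, n) & train_grams]

train = "the quick brown fox jumps over the lazy dog at dawn".split()
tests = ["quick brown fox jumps over the lazy dog again today".split(),
         "completely unrelated sentence with no shared long spans here".split()]
print(find_contaminated(train, tests))  # → [0]
```

Because set (or Bloom filter) membership checks are O(1), the cost scales linearly with the number of training tokens and parallelizes trivially over shards.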
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I agree that the experimental results you have should contribute to research in machine translation. I've raised the score to 3.
The key findings of the paper show that the full contamination mode (where both the source and target text of the test set are present) can dramatically inflate performance metrics (up to 30 BLEU points for 8B models). Corroborating previous studies, the contamination effect is more evident when contamination is introduced later during training, but the paper also found that uniform contamination leads to the most significant effect. Last but not least, contamination requires sufficient language representation to have a measurable effect. These findings highlight the critical importance of properly decontaminating evaluation benchmarks to avoid substantially overestimating model capabilities.
Claims And Evidence: Following the breakdown from the introduction of the paper:
- Contaminating source-target MT pairs inflates performance on those test sets. -> adequately supported
- The temporal distribution of contamination matters. -> adequately supported
- The impact of contamination increases with model scale. -> supported with limitation (only data points for 1B & 8B is supplied), but acknowledged in the paper
- Contamination requires sufficient language representation to have a measurable effect -> adequately supported
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: I checked everything that's presented in Sections 3 and 4 carefully. While I don't spot issues with what's presented, I find the level of detail given for checkpoint branching a little lacking. See more details in "Other Strengths And Weaknesses".
Supplementary Material: I checked Appendix G carefully since it looks like an interesting piece of information that wasn't presented at other places in the paper. I only glanced through the other parts because they are mostly extra results with different metrics or per-language break-down.
Relation To Broader Scientific Literature: There are existing work that looks at membership detection of certain data point into a models pre-training data [1][2][3][4], each of which has established some sort of detection algorithm: min k% prob [1], membership inference attacks [2], and bloom filters [3], infini-gram [4]. There are also papers that examine the impact of data contamination on fair LLM evaluation [5][6]. However, to the best of my knowledge, none of them has studied systematically how introducing data contamination in the pre-training data in different ways can introduce different effects in the resulting model, so I think the novelty of the findings in this paper is significant.
- [1] https://arxiv.org/pdf/2310.16789
- [2] https://aclanthology.org/2020.tacl-1.4.pdf
- [3] https://arxiv.org/pdf/2303.03919
- [4] https://arxiv.org/pdf/2401.17377
- [5] https://aclanthology.org/2023.findings-emnlp.722.pdf
- [6] https://aclanthology.org/2024.findings-acl.716.pdf
Essential References Not Discussed: The paper has already cited [1]. I would recommend also discussing [2] because they are tackling a similar data contamination problem from a different (the system user's) angle, even though it predates the boom of LLMs. [3] and [4] should also be discussed in the context of alternative decontamination methods, which is something I would like to see added (more details in "Other Comments Or Suggestions").
Other Strengths And Weaknesses: This is a high-quality controlled study on the effect of data contamination on the inflation of machine translation evaluation result. The problem studied here has significant real-world implications and is of value to other tasks other than machine translation as well. The presentation quality of the paper is high, with the core methods and findings very clearly presented.
The main complaints I have about the paper is the lack of details in the checkpoint branching method (Section 3.4). This is the most significant experimental design choice of this paper, and yet is very briefly glossed over. This is not at all sufficient for subsequent studies to reproduce their results.
Other Comments Or Suggestions: I would like to see two main things added in the next draft.
### More details on checkpoint branching
I would like to see Section 3.4 significantly re-written to better present this method.
* Start with the motivation. Stress what you have given up compared to multiple full re-training runs (essentially, you have fixed initialization seed and data sampling order), and what you get out of this (re-use some compute from other runs).
* When you introduce a "branch", how do you implement it? Do you just add extra samples to the training data? What about learning rate scheduler?
* How do you introduce uniform contamination?
* In total, how many "branches" did you have to introduce for one set of experiment? (e.g. for 1B, single-copy, full contamination)
### More discussions on the choice of decontamination method
In its current shape, the paper only mentions their choice of decontamination method in Appendix G. I think this is an important experiment design that is worth talking in the main paper as well. Moreover, this should be treated as an intentional choice, since what this paper adopts is not the only contamination detection/decontamination method out there (see [1][2][3][4] above). The paper should acknowledge these other methods and discuss how the other methods relate to their method.
I also spot a few minor typos/style comments:
* L74 left col. -- "up to 60 BLEU points", not 30?
* L83 right col. -- "previous works" -> previous work
* L85-86 right col. -- use -> used
* L205-216 left col. -- for readers that's not familiar with how WMT datasets are set up, probably give some justification for why you take 2023 as contaminated dataset and 2024 as non-contaminated dataset? (But then you also report in Appendix G that there are also some contamination in 2024 dataset, which I'm not sure how it happened?)
* Figure 2 -- I'm confused about how the methods are ranked? Specifically, there seems to be some methods below "baseline" that's doing worse than baseline? (Same with Figure 3)
Questions For Authors: Please clarify the following:
* The questions I listed in "Other Comments Or Suggestions" about checkpoint branching
* The ranking of setups in Figure 2
* What does "threshold of 0.7" mean in Appendix G (L1343 and L1359) -- basically, here is the relevant quote I found from the section 8 of the PALM paper that was cited:
> "So, we were able to split each dataset into a “contaminated” and “clean” subset based on whether at least 70% of the 8-grams in question, prompt, or target were seen at least once our training data."
but I was confused about how exactly it is computed, and how it related to what's been discussed there.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed review and insightful comments. We will use the reviewer’s feedback to revise the writing of Section 3.4 and include missing citations.
1. Details on contamination score calculation: The contamination score is calculated as the number of tokens in the longest overlap divided by the number of tokens in the eval example. If more than 70% of the tokens are contained in the longest matching overlap, we label an example as contaminated and remove it from our test set.
2. Choice of WMT23/24 datasets: The WMT23/24 datasets used in our experiments are the standardized datasets established by the annual WMT (Conference on Machine Translation) conference.
3. Contamination from WMT’24: Source segments from WMT’24 are generally standard pieces of text that are frequently found in news articles or general webtext. Since WMT sources input segments from online sources, it is possible that frequently used sentences might be found in any pre-training data. For this reason, we make sure to run decontamination tests for all datasets we use.
4. On the ranking of Figure 2: The presented methods are sorted based on the average performance across language pairs. They are shown in descending order for both Figs 2 and 3. We will clarify this in the next version of the paper.
5. Checkpoint branching details: In our branching method, each branch is effectively a copy of the baseline model, with all the optimizer state and metadata inherited from the checkpoint. As training continues, the only difference is the pre-training mixture that the new model (continued pre-training) is exposed to. This mixture is a copy of the baseline one with random examples replaced by the contaminated instances. Our setup covers 42 contamination experiments, leading to 42 "branches". We will include those details in the next version.
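The contamination-score rule from point 1 above can be sketched as (a toy illustration; tokenization and the search for the longest train/eval overlap in the actual pipeline are more involved):

```python
def contamination_score(eval_tokens, longest_overlap_tokens):
    """Score = tokens in the longest train/eval overlap over tokens
    in the eval example."""
    return len(longest_overlap_tokens) / len(eval_tokens)

def is_contaminated(eval_tokens, longest_overlap_tokens, threshold=0.7):
    """Label an example contaminated if more than `threshold` of its
    tokens are covered by the longest matching overlap."""
    return contamination_score(eval_tokens, longest_overlap_tokens) > threshold

example = "the cat sat on the mat near the door".split()   # 9 tokens
overlap = "the cat sat on the mat near the".split()        # 8 tokens
print(is_contaminated(example, overlap))  # 8/9 ≈ 0.89 > 0.7 → True
```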
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. Minor clarification about my request:
> probably give some justification for why you take 2023 as contaminated dataset and 2024 as non-contaminated dataset?
What I'm trying to suggest is that you could point out WMT shared task organizers do collect data with recent time stamp to avoid the test set appearing in previous pre-trained data, which is why this is a good setup for data contamination study.
> it is possible that frequently used sentences might be found in any pre-training data.
Good point. Thanks for clarifying. | Summary: The paper analyzes the influence of data contamination on LLMs trained for machine translation. Their testing controls factors such as the modes of contamination, the temporal distribution of contaminated samples, and the frequency the contaminated samples are presented. From their experimentation they demonstrate that (1) parallel data contamination provides a much larger impact than monolingual data contamination, (2) uniformly distributed data contamination has the most persistent impact, (3) larger models are more influenced by data contamination, and (4) data contamination is most prevalent once the model already has a language understanding capability.
Claims And Evidence: Yes, claims are supported by evidence, but they would be more convincing if a wider range of models were tested. (Not a big deal, as I understand this would require an incredible amount of compute and would most likely lead to the same outcomes.)
Methods And Evaluation Criteria: Yes, the methods and evaluation make sense for the problem.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, experimental design and analysis is sound.
Supplementary Material: Yes, I viewed the MetricX results in the appendix.
Relation To Broader Scientific Literature: The paper contributes a thorough analysis of the influence of data contamination for machine translation LLMs. This is a prevalent issue that has not been explored to the same level in a prior publication.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1) Propose a checkpoint-branching scheme that reduces the variance and computational cost of their experimentation.
2) This is the first paper that provides experimentation with data control that explores the influence of data contamination for LLMs when they are applied to machine translation.
Weaknesses:
1) The experimentation required an extensive amount of computation and there is a lack of documentation outlining the amount of compute required in case future people want to replicate the methodology for alternative seq2seq tasks.
2) A lot of the conclusions drawn from experimentation were not incredibly insightful. For instance, (1) and (3) are already known by the community.
3) It does not seem that testing the influence of the number of copies of the test set in the training data is entirely necessary. It seems unlikely that the training data would contain 10-100 copies of the test set ever. Unless I am misinterpreting something these results appear as though they were included to exaggerate the influence of data contamination on machine translation. Is this something that would be seen in real world data?
4) The conclusions were made on a single architecture, and it is unclear if the conclusions would extend to alternative architectures.
5) The testing is only done for machine translation, and it does not explore if similar results would hold for alternative seq2seq problems.
Other Comments Or Suggestions: None.
Questions For Authors: 1) What is the overall amount of compute required for all experiments in terms of GPU hours? How many models were trained from your checkpoint branch?
2) Do you think similar trends would carry over to alternative architectures?
3) Are there any circumstances where there might be 10-100 copies of the test set in the training set?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments and feedback. We respond to the reviewer’s comments below.
1. Question 1 (training compute): All models are implemented as continued pre-training starting off baseline checkpoints. For instance, models with late contamination are only trained for 10% of the entire training run (given we kickstart the model from the baseline’s 90% mark). In practice, this means that our checkpoint branching approach reduces the compute requirements by about 60% (compared to the compute budget we would have spent if all models were trained from scratch).
The experiments being expensive is definitely a challenge; however, we have undertaken this cost for the community and openly shared findings that can answer critical questions in this area and inform future research. We think that our commitment to rigorously experimenting on the impact of contamination, despite the computational requirements, is an important contribution of this paper and not a weakness.
2. Question 2 (generalization to different architectures): We acknowledge that our findings are limited to decoder-only LLMs, which constitute the predominant LLM paradigm in recent years.
3. Question 3 (frequency of contamination in pre-training): While in principle, pre-training data deduplication is a common practice; in multilingual settings, upsampling under-represented languages is an equally common tactic. The latter is dictated by the lack of resources for many languages, and inevitably can lead to data being repeated more than once. As a result, it is reasonable to assume that once a test instance bleeds into pre-training data, it might end up seen by the model more than once.
4. On the novelty of our findings: The more intuitive result, i.e., that contamination leads to performance overestimation, has been hypothesized by many works; however, which data conditions contribute to this overestimation, and to what extent, has not been studied by prior work. Our study contributes empirical evidence that contamination can, but does not always, lead to performance overestimation, a behavior that depends on various factors ranging from the way contaminated test sets are presented in the pre-training data to how well the contaminated language is represented in the pre-training distribution (Sections 5 and 5.1).
5. On the choice of Machine Translation (MT) test sets: MT test sets naturally let us study how data contamination interacts with pre-training resource requirements (which are different across the many languages we studied with this work). As evidenced from our findings, whether contamination has a measurable impact on downstream performance does connect to the language’s representation during pre-training. | null | null | null | null | null | null |
Scaling Sparse Feature Circuits For Studying In-Context Learning | Accept (poster) | Summary: This paper aims to understand the mechanism of ICL by leveraging SAEs, along with other techniques such as ITO and SFC, to analyze the properties of ICL task vectors, and to show how these techniques can be combined and improved to enhance understanding of the underlying mechanisms.
Claims And Evidence: I have a few questions and concerns:
1. What is the main contribution of this paper?
The paper attempts to introduce a new TVC. My understanding is that this approach primarily involves continued training of a pretrained SAE specifically for ICL. If my understanding is correct, this adaptation may compromise the generalizability of SAE. If the goal is to extract features for specific tasks, why not use supervised methods instead? The authors might consider referring to benchmark techniques such as DiffMean and ReFT, as outlined in AxBench [1], which may offer more effective approaches for task feature extraction.
2. What is the motivation of using SAE to analyze task vector?
3. What empirical observations or justifications led to the decision to separate task detection and task execution vectors?
The distinction between these two components should be further elaborated, with supporting evidence demonstrating why such a separation is meaningful and necessary.
[1] AxBench: Steering LLMs? Even Simple Baselines Outperform Sparse Autoencoders.
Methods And Evaluation Criteria: The evaluation looks good to me, but results on more models and sparsity levels are needed. The current experiments are primarily conducted on Gemma-1, but recent models such as LLaMA and Gemma-2 also feature pretrained SAEs with varying sparsity levels. How would different levels of sparsity in pretrained SAEs impact the conclusions drawn in this study? Additional experiments on multiple models with different sparsity settings would strengthen the validity of the findings and offer a broader perspective on the generalizability of the proposed approach.
Theoretical Claims: N/A (This paper does not have much theoretical analysis.)
Experimental Designs Or Analyses: 1. Observations that motivate the separation of task-detection and task-execution vectors are needed.
2. Results on more models and sparsity levels are needed.
Supplementary Material: I roughly reviewed the code but did not run it.
Relation To Broader Scientific Literature: This paper investigates the task vectors of in-context learning (ICL) using sparse autoencoders (SAEs), and designs a new TVC algorithm to better assist the analysis, but the main contribution needs more elaboration.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: see summary
Other Comments Or Suggestions: see summary
Questions For Authors: see summary
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Hello Reviewer `1tPd`, thank you for your review. We appreciate your time and feedback, but it appears there may be some fundamental misunderstandings about our paper's focus and contributions that we'd like to clarify. **You stated: “1. What is the main contribution of this paper? The paper attempts to introduce a new TVC” but this is not the main contribution of our paper.** Instead, the main contribution of the paper is the use of SAEs to understand the circuitry behind in-context learning (ICL), as is clear from the title of our paper. This is important because our contributions are considerably more exciting for mechanistic interpretability researchers. TVC is a tool we developed to help identify which existing SAE features are most relevant to ICL tasks. Critically, TVC **does not** retrain or modify the pretrained SAE in any way - it only identifies which features in the already-trained SAE are causally relevant to ICL. Therefore, the comparison to AxBench (a paper that was released two days before the final submission deadline for ICML 2025 and therefore does not need to be addressed by us since it is contemporary work) is much less relevant to our work because our research is principally about circuits rather than steering.
**On the motivation for using SAE to analyze task vectors.** Our motivation is to understand the circuit mechanisms behind in-context learning. While previous works like Todd et al. (2024) and Hendel et al. (2023) discovered task vectors, they didn't explain how these vectors are mechanistically implemented in the model. SAEs help us decompose these task vectors into interpretable features that form part of the model's computation.
**On separating detection and execution features:** This separation was an empirical discovery, not an assumption, supported by multiple lines of evidence:
Different activation patterns: execution features activate on arrow tokens (89.8%), detection features on output tokens (96.76%) - Tables 1 and 2
Layer positioning: detection features work best in earlier layers (11), execution features in later layers (12)
Causal structure: ablating detection features reduces execution feature activation (Figure 11)
Task-specific steering effects: both feature types affect different tasks differently (Figures 6 and 10)
Our answer to reviewer `zx2e` also contains a more thorough definition of both feature types.
**On results with more models and sparsity levels:** Our Appendix includes experiments across multiple models (Gemma 1 2B, Gemma 2 2B, Gemma 2 9B, Phi-3 3B) and SAE configurations. Figures 16-19 show TVC performance across these settings, demonstrating that our approach generalizes well. These figures include different sparsities, widths and models, including Gemma Scope SAEs. | Summary: This paper uses sparse autoencoders (SAEs) to study in-context learning (ICL), specifically focusing on ICL tasks that can be abstracted into a task vector. The authors propose task vector cleaning (TVC), a methodology to identify task-execution features from the set of SAE features that implement the ICL tasks during generation. The authors adapt the sparse feature circuit (SFC) methodology starting from these features, demonstrating evidence of the faithfulness and task specificity of discovered circuits. This also reveals another set of features, termed as task-detection features, which are activated prior to the task-execution ones and have a downstream activation effect on them. Experiments are mainly launched on Gemma-1 2B.
Claims And Evidence: Generally yes, however the contributions and claims should be clarified, and the experimental results should be approached with caution, as follows.
- The authors claim in the first contribution that the studied **Gemma-1 2B** has “10 – 35x more parameters than prior models in comparable, circuits-style mechanistic interpretability research”. However, [1] has already performed circuit-style analysis on multiple-choice question-answering tasks using models up to **70B** parameters. Thus, the claim is invalid.
- The paper lacks a detailed, quantitative description of the experimental setup throughout the manuscript, and main results are constrained to Gemma-1 2B. This affects the scope of the paper on whether the claim holds true for more general ICL tasks and larger models.
[1] Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla
Methods And Evaluation Criteria: Yes. The task vector is a widely accepted methodology to understand and steer ICL behavior in LLMs, and the proposed decomposition of task vectors by SAE features and the subsequent study using SFC are well-inspired by current literature. However, there are concerns regarding writing and ablation studies, which affect the scope of evaluation. Please refer to **experimental designs or analyses.**
Theoretical Claims: There are no theoretical claims in this paper.
Experimental Designs Or Analyses: Yes. I have several concerns as follows.
- **The scope and applicability of the proposed method.** The experiments presented in the main text used Gemma-1 2B SAE trained by the authors. Given the current open-sourced SAEs on larger models (Gemma Scope, Llama Scope etc) and the claimed simplicity (“For Gemma 1, it stops at 100-200 iterations, which is close to 40 seconds at 5 iterations per second.”), I assume it is plausible for the authors to launch the same series of experiments on larger scale LLMs to demonstrate the scalability.
- **Lack of quantitative description of experiments.** For example, in Figure 2, the metric “Average relative loss change” is not described in the main text until Appendix D.1. This issue also applies to the same quantity in Figure 3, and the quantities displayed in heatmaps (Figure 6, 10, 11) among others. Numerical results should be described, or at least captioned, precisely to accurately reflect the objective of interest, facilitating the interpretation and understanding of the outcome. This statement also applies to SFC-related experiments. Could you explain the normalizing and clipping process when producing the heatmap in more detail? Further, when calculating the token type activation masses to produce Table 1, what dataset are you using, and how large is the batch?
- **Missing results though reported.** This connects to point 1. The authors mentioned “...we successfully applied … to Gemma 2 2B and 9B models… it was also successful with the Phi-3 3B model.” However, I can only find the Gemma 2 2B results in the appendix, and the results with all other models are **partial** (only the TVC comparison to inference-time optimization, ITO). How do the steering results appear for these models? Will the pattern shown in the heatmaps persist?
- **Mismatched results between heatmap and text description.** In Section 3.1, task-execution features are extracted via the proposed TVC to filter out the originally noisy features. However, the features displayed in heatmaps, such as in Figure 6, are **not** the task-execution set. The feature that has the most significant effect on en_fr and en_it is eliminated through the TVC process according to Figure 22. Thus, the description of the heatmap is improper - these non-task-execution features should be highlighted in the main text instead of in the appendix. Do you have an explanation for why certain important features are discarded by TVC? Furthermore, how does this impact the trustworthiness of TVC for task-execution feature extraction?
For other concerns, see Questions For Authors.
Supplementary Material: Yes, sections A, B, C, D, F and G.
Relation To Broader Scientific Literature: This paper studies ICL via SAE, proposes to decompose a task vector through SAE features to locate task-related crucial features that allow for steering.
Essential References Not Discussed: I have not found essential missing references, however I have one concern as in Claims and Evidence, point 1. This reference was cited but not discussed in the manner described.
Other Strengths And Weaknesses: The writing of the paper can be improved. In addition to the issue of imprecise descriptions in the evaluation, there are other points that can be addressed. For example, some notations (e.g., d_SAE in Section 3.1) and important reference methodologies (e.g., inference-time optimization, ITO, in Section 3.1) are used directly without any definition or introduction. Further, there is also mixed usage of \citet and \citep throughout the paper.
Other Comments Or Suggestions: - **Please use the correct template with line numbers clearly shown on the sidebar.**
- Section 2.1: "...with f denoting the pre-activation features..." (This should be post-activation since the activation \sigma is included.)
- Section 3.1: "Figure Figure 2 presents..." (Duplicate text)
Questions For Authors: My questions are as follows.
- In Section 3.1, you found that task vectors are out-of-distribution for the SAE. Since a task vector is effectively an average over in-distribution vectors, have you tried to establish a baseline by feeding these iid inputs to the SAE and checking whether there are commonly activated features? Can the task-execution and task-detection features be identified through this method? If not, at what rate are these two sets of features activated, respectively, when the original inputs are fed into the model?
- As described, when launching the proposed TVC, you stated it “begins with collecting residuals for task vectors using a batch of 16 and 16-shot prompts”, and then optimize the coefficients “on a training batch of 24 pairs, with evaluation conducted on an additional 24 pairs.” Have you tested how robust TVC is to these hyperparameters? Can TVC recover the task-execution features with fewer prompts when constructing the task vector? How does the quality of identified features vary when the size of the training batch changes?
- In Section 4.1.3, you mentioned, “we opt for zero ablation since it better aligns with the sparse nature of SAE features.” How do different ablation choices here affect your experimental results? Will it lead to a detrimental effect?
- The result in Section 4.1.3 appears contradictory to the claims made in the main text. For example, the circuit in Figure 12 only consists of **5 features**, while SFC requires **~500 features** to achieve a faithfulness level of 0.6. Do you have an explanation for this phenomenon? What is the functionality of the other features, given that the discovered task-related features only occupy a tiny fraction of this circuit?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Hello Reviewer `fzas`. Thank you for your impressively extensive and thoughtful feedback, and for your careful attention to details even in the Appendix.
**On our "10-35x more parameters" claim:** You note that "[1] has already performed circuit-style analysis [...] using models up to 70B parameters." We should have been more precise in our claim. By "comparable, circuits-style mechanistic interpretability research," we specifically meant work that provides end-to-end descriptions of task performance using SAEs. The Lieberum et al. paper focuses on attention heads' impact on final logits rather than providing a comprehensive explanation of MMLU processing. They state: "Our main focus in this work was on the final parts ... the rest of the circuit is still undiscovered." We will edit this into our manuscript.
Experimental design concerns:
**On "lack of quantitative description of experiments"**: We moved many implementation details to the Appendix due to space constraints. We'll improve figure captions in our revision to better explain used metrics like the Average relative loss change you mentioned. The normalization process for the heatmaps (Fig. 6, 10, 11) follows these steps (Appendix F):
1. Calculate raw metric: metric[task, feature] = steered_loss[task, feature] / loss[task]
2. Clip: metric[task, feature] = clip(metric[task, feature], 1)
3. Normalize per task: metric[task, feature] = (metric[task, feature] - min_f(metric[task, f])) / (max_f(metric[task, f]) - min_f(metric[task, f]))
4. Threshold: metric[task, feature] = 0 if metric[task, feature] < 0.2
**On token mass calculation:** We use the same parameters as for TVC: a batch of 32 prompts each with 20 randomly sampled ICL examples.
**On "missing results though reported":** You cite that we mentioned "...we successfully applied ... to Gemma 2 2B and 9B models... it was also successful with the Phi-3 3B model." This statement refers specifically to the TVC algorithm's ability to reduce feature count while preserving task vector effects, not to the full suite of steering experiments. Our Appendix contains TVC results for all mentioned models (Figures 17-18), confirming this capability. Full steering experiments require more computational and human resources than just TVC. We decided to limit this type of analysis to Gemma 2B models, but expect that it will extrapolate to other models, judging by some of the max activating examples we analyzed.
**On "mismatched results between heatmap and text description":** You noted that "the feature that has the most significant effect on en_fr and en_it is eliminated through the TVC process according to Figure 22."
Feature 26987 is a generic "English to foreign language" translation feature. Since Spanish tokens are prevalent in model training data, the model handles English-to-Spanish translation well, while English-to-French/Italian ICL performance is weaker. Task vectors without cleaning perform poorly for these language pairs.
Task vectors contain multiple mixed-language features (26987, 26594, 6594), with 26987 having the cleanest translation examples. Since en_fr and en_it directions aren't naturally strong in the model, TVC reconstructs them using a different mix of features.
In better-trained models and SAEs (Figure 25), these tasks have separate execution features. This shows that TVC finds a decent performing sparse combination of features in the absence of strong task-specific features.
Questions:
**1. On task vectors being ood:** Yes, we tried identifying common features across individual residual streams, but this highlighted many dense features alongside task-relevant ones. For Gemma-1 2B SAEs, individual residual streams activate ~40 features versus ~10 in averaged task vectors and 3-5 after cleaning.
**2. On robustness to sample sizes:** We briefly explored different batch sizes and shot counts. Larger samples didn't significantly improve the TVC results. Smaller shot counts risk missing task-specific features whose presence is weaker early in the prompt. Smaller batch sizes may lead to extracting batch-specific features, especially for tasks with weaker vectors. Strong task-executing features are extracted even from small batches.
**3. On ablation:** Our experiments showed no dramatic differences between zero and mean ablation. Zero ablation also doesn’t require us to store SAE activation statistics.
**4. On the apparent contradiction in Section 4.1.3:** Figure 12 shows only a small subset of the circuit focusing on core task-related features. The ~500 features needed for 0.6 faithfulness include generic pattern repetition features, task executing features split across multiple features and layers, and features encoding answers in later model layers.
Overall, we appreciate your feedback on the specificity of our claims, and will clearly state the takeaways from our research in our manuscript. We hope you will revise your score in light of this!
---
Rebuttal Comment 1.1:
Comment: Thanks for your response to my questions.
**Scope and applicability:** This part is not addressed. To me, TVC plots in Figures 17-18 are insufficient since these do not necessarily connect to the steering results. My concern is that providing results on a **single** model is insufficient to draw a conclusion on (vector-abstracted) general in-context learning.
**Mismatched results:** My concern here is two-fold. First, TVC discards this (multi-lingual) strongest translation feature candidate according to the heatmap. This raises questions about the **validity and effectiveness** of TVC: why and how could it ignore the strongest candidate? If the interpretation is that TVC tends to reconstruct the task vector by **ignoring strong, broader-usage features**, then this is a critical limitation that should be mentioned and emphasized. The second is that the current heatmap is **misleading**: the caption does not match the demonstrated result, in the sense that (1) several features with high impact are not included in the task-execution set, and (2) several features with low impact are included in the task-execution set. This phenomenon should be depicted in the plot and properly discussed.
**Task vector and SAE feature identification:** As in my raised concern, can you further provide the **rate** at which task-execution and task-detection features are activated when you feed your ICL inputs to the model? This is a baseline to check whether these two sets of features are indeed critical for the ICL task, such that they are consistently and frequently activated; if not, the validity of these sets should be questioned.
**Contradiction in Section 4.1.3:** I understand that in Figure 12 only a tiny fraction is illustrated. However, given this significant gap between the claimed number of effective task-related features and the number of total features, the ablation on whether there are features hidden in the circuit that can also steer model performance is necessary. If all other features are generic but not ICL-specific, it would strengthen the conclusion on TVC; otherwise, it would weaken the result.
---
Reply to Comment 1.1.1:
Comment: **Scope and applicability**
We agree that our TVC results on other models do not show that our findings necessarily generalize to larger models. However, we disagree that this is an issue we need to address in this paper: it is already extremely packed with details, and adding more work that would likely need to sit in an appendix would make the paper more confusing for other readers. While we understand this concern, we note that seminal works in circuits-style mechanistic interpretability, such as the IOI circuit paper [1], have likewise focused on a single model smaller than Gemma 2B and have made significant contributions to the field. We expect our work to inspire future studies of whether our findings generalize to larger models.
[1] Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 Small
**Mismatched results**
**Regarding translation tasks:** As you noted, this is a task-execution feature candidate for translation tasks. Since it has a strong effect across all translation tasks, it is by no means task-specific. Steering with just this feature produces a much lesser effect than the fully cleaned task vector of 3 features.
It's also worth noting that Spanish is much more broadly represented in the training data, and Gemma 1 2B demonstrates stronger capabilities for English-to-Spanish translation compared to other languages. Therefore, it's not surprising that a better English-to-French direction can be built without this feature.
We do not believe that this is a critical limitation of the method, since its main purpose is to discover task-specific features in the task vector, if there are any present. We would only expect it to discover features like 26987 in a generalized translation task vector.
**Regarding the heatmap caption:** It's important to note that directly comparing effect magnitudes across different tasks on the normalized heatmap is not straightforward. Different tasks have different starting losses, different task vector effectiveness, and different model capabilities. For example, a task that's well-represented in the model's knowledge will have lower initial loss and thus a smaller potential for improvement from any feature.
This is precisely why we normalize effects task-by-task. The normalization allows us to identify which features have the strongest relative effect for each specific task rather than making cross-task comparisons of absolute effect sizes.
Our captions state that "most tasks have a single feature with a high effect on them, and this feature generally does not significantly affect unrelated tasks" and "Most features boost exactly one task, with a few exceptions for similar tasks like translating." Although most tasks have one feature with a noticeably stronger effect than the others, they often have weaker but still task-specific ones too. We will change the caption to "most features that have a strong effect are highly task-specific".
**Task vector and SAE feature identification**
We ran activation fraction experiments (batch 32, 24 n-shots) for detectors and executors. For each task, we measured the percentage of ICL pairs where the top features from heatmaps were active (on corresponding tokens), split into "head" (first 4 examples) and "tail" (remaining 20) activations. We measured in-task and cross-task percentages.
Results for executor features:
- In-task activation: 50% (head), 79% (tail)
- Cross-task activation: 14% (head), 17% (tail)
Results for detector features:
- In-task activation: 42% (head), 40% (tail)
- Cross-task activation: 3% (head), 3% (tail)
These results strongly validate our claims: both feature types show significant task-specificity (much higher in-task than cross-task activation). The increasing activation rate for executors from head to tail demonstrates they become more active as the model observes more examples of the task. Lower percentages for detector features could be explained by them being more prompt-specific than executors on average.
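The head/tail measurement described above can be sketched as follows, assuming per-example feature activations have already been collected (function, argument, and array names are hypothetical):

```python
import numpy as np

def activation_fractions(acts, head_n=4, threshold=0.0):
    """Fraction of ICL examples on which a feature fires, split into the
    first `head_n` examples (head) and the remainder (tail).

    acts: (n_prompts, n_examples) activations of one feature on the
          relevant token (arrow or output) of each in-context example.
    """
    fired = acts > threshold
    head = float(fired[:, :head_n].mean())
    tail = float(fired[:, head_n:].mean())
    return head, tail
```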
**Contradiction in Section 4.1.3**
When restricting IE-based node ablation to just layers 11-12 (where we study executors and detectors), we need far fewer features—approximately 10 on average to achieve 0.6 faithfulness. Interestingly, in these cases, faithfulness often exceeds 1 because removing certain negative-IE features actually improves model performance beyond the baseline, so we had to clip faithfulness to 1. If we take the best metric after ablation as the non-ablated metric (to make faithfulness <= 1), we need 40-60 nodes on average.
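For reference, the faithfulness metric being clipped here follows the Sparse Feature Circuits convention of measuring how much of the full model's performance the circuit recovers relative to ablating everything; a minimal sketch (argument names are ours, not the paper's):

```python
def faithfulness(m_circuit, m_empty, m_full, clip_at_one=True):
    """Fraction of full-model performance recovered by the circuit,
    relative to ablating all nodes. Optionally clipped at 1, since
    removing negative-IE features can push the ablated model above
    the unablated baseline."""
    f = (m_circuit - m_empty) / (m_full - m_empty)
    return min(f, 1.0) if clip_at_one else f
```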
Brief manual examination revealed that among the most important features are attention output and residual stream features like those shown in Figure 12, as well as several transcoder features with activation patterns similar to executor features. A strong “->” token feature was also among them.
We hope these clarifications address your concerns and demonstrate the validity of our approach and findings. | Summary: The paper applies sparse autoencoders (SAE) to better understand in-context learning (ICL).
The paper first learns SAE representations of task vectors. Task vectors are first constructed in a heuristic manner (averaging the residual streams at the arrow tokens between inputs and outputs). The paper proposes "task vector cleaning," in which the SAE decomposition is fine-tuned on the ICL task with a sparsity constraint. This improves zero-shot task performance as well as sparsity. Here, "task execution features" are discovered, which activate directly before an output. Steering experiments are conducted, which establish the causal effect of task latents on the task. Similar tasks have similar strong steering vectors.
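The cleaning step summarized here can be sketched as sparse re-weighting of fixed SAE decoder directions. The sketch below is a hypothetical stand-in, not the authors' implementation: it substitutes a simple reconstruction objective for the task loss and uses proximal gradient descent (ISTA) for the sparsity constraint; all names are illustrative:

```python
import numpy as np

def clean_task_vector(init_codes, decoder, target, l1=0.05, lr=0.1, steps=500):
    """Illustrative sparse re-weighting of fixed SAE features via ISTA.

    init_codes: (n_features,) initial SAE coefficients of the task vector.
    decoder:    (n_features, d_model) fixed SAE decoder matrix (never updated).
    target:     (d_model,) direction to preserve; a reconstruction target
                stands in here for the task loss used in the paper.
    """
    codes = init_codes.astype(float).copy()
    for _ in range(steps):
        recon = codes @ decoder                    # current reconstruction
        grad = 2.0 * (recon - target) @ decoder.T  # gradient of squared error
        codes = codes - lr * grad                  # gradient step
        codes = np.maximum(codes - lr * l1, 0.0)   # soft-threshold: sparse, non-negative
    return codes
```

The key property, matching the summary, is that the pretrained SAE dictionary is untouched; only the per-feature coefficients are optimized, and the L1 shrinkage drives most of them to exactly zero.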
Then, a circuit learning approach is applied. The primary finding in this section is the identification of "task detection features," which activate on output tokens in the training data. It is shown that detection features causally activate executor features.
Claims And Evidence: The primary claim is that sparse autoencoders can help understand in-context learning circuits. This is supported by the identification of single SAE features that causally effect performance in the in-context learning task (Figure 6). The claim that this helps with understanding is backed up by the identification of "detection" and "execution" features. The evidence here is somewhat less convincing, primarily because the definitions of these features are not clearly stated. For example, while Table 1 shows that executor features activate mostly on arrow tokens, it is unclear if this is remarkable given that executor features are "characterized by" the property that "Their activation peaks on the token immediately preceding task completion." A more comprehensive analysis of what these features are for each task would be beneficial.
Methods And Evaluation Criteria: The methods seem solid. The task vector cleaning method in particular is a simple approach that appears to work well to get strong SAE decompositions.
Theoretical Claims: NA
Experimental Designs Or Analyses: Experiments generally seemed valid. However, as mentioned above, definitions of detection and execution features were underspecified.
Supplementary Material: I browsed several of the example top activating texts of features. These were interesting, and I think some more comprehensive analysis of these would offer more convincing evidence of the utility/interpretability of learned ICL SAE features.
Relation To Broader Scientific Literature: The paper contributes to a growing literature in mechanistic interpretability, in particular in using sparse autoencoders. While sparse autoencoders have been shown to identify interpretable and causal features in LLMs, their use to study more specific behaviors and capabilities is very nascent. This paper considers in-context learning, which is a topic of significant interest. The existence of detection and execution features is an interesting finding.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Overall, the paper pursues a promising direction in applying sparse autoencoders to better understand in-context learning. The development of task vector cleaning appears to be a clean way to learn task vectors that are cleanly expressed by SAE latents.
The primary weakness of the paper is a lack of specificity when describing and analyzing the proposed detection and execution features. Without a more precise understanding of how these features are characterized, it is difficult to assess the validity of this abstraction. Here, the most convincing evidence of this abstraction would be a demonstration of (1) the ability to construct task vectors on new tasks based on learned features, similar to how it has been demonstrated that LLM behavior can be manipulated through SAE latents, and (2) a more precise connection between detection and execution features and behavior that distinguishes their specific roles.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Hello reviewer `zx2e` – thank you for your thoughtful review. We appreciate your recognition that our methods are "solid" and that task vector cleaning is "a simple approach that appears to work well to get strong SAE decompositions."
**Regarding feature definitions**: we consider *task-specific causal influence to be the defining characteristic of both detector and executor features*, and apologize for not making this clearer in the main text. We will update the manuscript based on this feedback. To expand specifically on what we mean by task-detection and task-execution features:
**Task-detection** features are defined as *intermediate circuit components between the raw prompt inputs and the executor features*. They function as a specialized probing mechanism that identifies and encodes which specific task is being performed. These features primarily activate on output tokens (96.76% as shown in Table 2), consistent with their role in monitoring completed examples to determine the pattern being demonstrated. Importantly, our steering experiments in Section 4.2 demonstrate that these detector features also have direct causal effects on task performance when steered on blank output tokens.
**Task-execution** features are defined as features that *directly impact task completion*. They affect whether the specific task happens without requiring additional intermediate processing. Their strong activation on arrow tokens (89.8% as shown in Table 1) demonstrates their positioning at precisely the point where the model must apply the identified transformation.
For greater specificity, we classify these features using clear criteria:
- **Detection features**:
(1) consistent activation on output tokens in examples, (2) minimal activation during generation, (3) causal influence on executor features (quantified in Figure 11), and (4) direct causal effect on task performance when steered on output tokens
- **Execution features**: (1) peak activation on arrow tokens immediately preceding task completion, (2) strong causal effect on task performance when steered with (Figure 6), and (3) task-specific activation patterns on raw data.
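Steering of the kind referenced in these criteria is commonly implemented by adding a scaled SAE decoder direction to the residual stream at selected token positions; a minimal sketch under that assumption (names and the default scale are illustrative, not taken from the paper):

```python
import numpy as np

def steer(residual, decoder_dir, positions, scale=4.0):
    """Add a scaled unit SAE decoder direction to the residual stream at
    the given token positions (e.g. arrow tokens for executor features,
    output tokens for detector features). Returns a steered copy."""
    out = residual.copy()
    unit = decoder_dir / np.linalg.norm(decoder_dir)
    out[positions] += scale * unit
    return out
```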
**Regarding your suggestion about "constructing task vectors on new tasks"** - if you're referring to zero-shot generalization, our steering experiments in Section 3.2 and Figure 6 demonstrate the causal effect of our identified features on model behavior for the tasks studied. While running additional experiments on entirely new tasks is beyond our current rebuttal time frame, we believe the consistent patterns across our current task set suggest potential for generalization.
The circuit analysis in Section 4.2 and Figure 11 provides empirical validation of our proposed abstraction by showing that: (1) ablating detection features reduces execution feature activation, (2) this relationship is consistent across tasks. These quantitative measures establish that detection and execution features form distinct functional components in the ICL circuit.
We appreciate your feedback and will incorporate these clarifications to strengthen the paper. | Summary: This paper explores how sparse autoencoders (SAEs) can enhance our understanding of in-context learning (ICL) mechanisms in LLMs. The paper's main contributions include:
- Identifying two core components of ICL circuits: task-detection features that identify required tasks from the prompt, and task-execution features that implement those tasks during generation.
- Developing a Task Vector Cleaning (TVC) algorithm that decomposes task vectors into their most relevant sparse features, enabling more precise analysis of ICL mechanisms.
- Adapting the Sparse Feature Circuits (SFC) methodology to work with the much larger Gemma-1 2B model (30x larger than models in previous circuit analysis studies) and applying it to the complex task of ICL.
- Uncovering the interaction between these components: attention heads and MLPs process information from task-detection features to activate appropriate task-execution features.
- The authors provide evidence that these task vectors can be represented as sparse sums of SAE latents, and they demonstrate causal relationships between detection and execution features through circuit analysis.
Claims And Evidence: The claim that task vectors can be decomposed into sparse, interpretable features is supported by the TVC algorithm results, showing the algorithm can reduce the number of active SAE features by 70% while maintaining or improving effect on loss.
The identification of task-detection and task-execution features is supported through thorough analysis of activation patterns, steering experiments, and ablation studies that demonstrate their causal roles.
Methods And Evaluation Criteria: The methods and evaluation criteria used in this paper are appropriate for the research questions:
- The Task Vector Cleaning algorithm effectively isolates task-relevant features
- The steering experiments provide measure of causal influence by testing how individual features affect performance on specific tasks.
Theoretical Claims: There are no proofs or theoretical claims in this paper.
Experimental Designs Or Analyses: The experimental designs are sound:
- The token position categorization approach for handling ICL prompts is well-motivated and effective for isolating different functional roles.
- The steering experiments with both positive and negative steering provide complementary evidence for feature functions.
- The ablation studies establish causal relationships between detected circuit components.
One minor issue is that the authors restricted their circuit search to intermediate layers 10-17 of the 18 total layers, justifying this based on IE approximation quality in earlier layers. While reasonable, this limitation means some potential early-layer mechanisms might have been missed.
Supplementary Material: I reviewed section A to D
Relation To Broader Scientific Literature: The paper effectively situates its contributions within SAEs and ICL literature:
- It builds upon prior work on task vectors (Todd et al., 2024; Hendel et al., 2023) by providing a mechanistic explanation of how these vectors function through sparse features.
- It extends the SFC methodology from Marks et al. (2024) to handle more complex tasks and larger models.
- It connects to work on induction heads (Olsson et al., 2022) while demonstrating that ICL requires more complex mechanisms beyond just induction.
Essential References Not Discussed: The paper could have benefited from more discussion of how their findings relate to theoretical models of ICL.
Other Strengths And Weaknesses: Strengths:
- The paper successfully scales mechanistic interpretability techniques for ICL.
- The identification of two distinct mechanisms (detection and execution) provides a clear conceptual framework for understanding ICL.
- The paper demonstrates that SAEs can be used not just for interpreting individual features but also for understanding complex model behaviors.
- The TVC algorithm is a valuable contribution that could be applied to other research on task vectors.
Weaknesses:
- The analysis focuses on relatively simple ICL tasks (e.g., antonyms, translation). It's unclear how well the approach would extend to more complex reasoning tasks.
- The authors note that there are often multiple competing execution and detection features, suggesting redundancy in the model's representations that could complicate interpretation.
- The paper doesn't explore how these mechanisms might differ across models.
Other Comments Or Suggestions: It would be interesting to see how the detected ICL circuits relate to other mechanisms that have been studied in language models, such as factual recall or logical reasoning.
Questions For Authors: You mentioned person_profession and present_simple_gerund showed unusually weak detection-execution connections. Do you have hypotheses about why these specific tasks might be processed differently?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Hello reviewer K3Cy,
Thank you for your thoughtful and supportive review of our paper. We appreciate your detailed feedback and are grateful for your recommendation to accept the paper.
Regarding your specific question about the weak detection-execution connections for the `person_profession` and `present_simple_gerund` tasks:
For `person_profession`, our analysis revealed that the model's overall performance on this task is relatively poor. As seen in Figure 27f, the strongest detector feature is a single-token feature focused on journalism, indicating that the SAE features did not pick up strong task-related directions, consistent with the model's poor accuracy. Both factors contribute to the weak connection between executing and detecting features.
For `present_simple_gerund`, the situation is more nuanced. One possible explanation is feature splitting. Since gerund forms are common in the training data, they have several corresponding detector and executor features. At the same time, executor and detector features with the strongest steering effect may correspond to different cases of gerund occurrences, and thus be weakly connected.
Thank you again for your positive feedback on our work! | null | null | null | null | null | null |
Efficiently Vectorized MCMC on Modern Accelerators | Accept (spotlight poster) | Summary: With the advancement of AI infrastructures, it is increasingly interesting to scale algorithms up with parallelism. Nevertheless, it is not efficient to run parallel MCMC with naive automatic vectorization (e.g., vmap), due to a varying execution time at each sampling step across different chains. The current approach will wait for the slowest chain at each step, while this paper proposes to remove this synchronization problem (usually caused by while-loops) by rewriting an MCMC algorithm as transitions in a finite state machine (FSM). With this modification, we can run parallel MCMC as parallel transition functions in the FSM, where different chains could be in different states, and synchronization will only happen once in the end. On the other hand, the proposed FSM based parallel MCMC also introduces additional control flow cost because codes for all states have to be executed every step no matter what state a chain is in.
Based on the idea, this paper develops a theoretical framework for analysing such parallel MCMC algorithms. Namely, the expected running time of FSM v.s. standard parallel MCMCs will depend on the number of chains, the number of samples per chain, the convergence of the execution time per step, the "organization" of the FSM, etc. The takeaway is that the running time of standard parallel MCMC will converge to the slowest chain while the running time of FSM parallel MCMC will converge to a single chain scaled by the additional cost of control flow.
To reduce the cost of control flow, the paper proposes two techniques, called step bundling and cost amortization. Step bundling allows transitions to be combined in some cases. Cost amortization makes sure that an expensive function (usually the log density function) will only be executed once per FSM step.
The experiments test different aspects of the FSM parallel MCMC. FSM is the most helpful when the distribution of one-step sampling time is skewed. When implemented, FSM easily outperforms standard parallel MCMC with broad implementations of Delayed Rejection MH, elliptical slice sampling, HMC-NUTS and transport elliptical slice sampling, when the # of chains is sufficient. In addition, the control flow optimization is helpful on top of FSM and may introduce substantial speedups.
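The synchronization argument in the summary can be made concrete with a toy count of loop iterations (illustrative numbers, not from the paper): the standard design pays the per-sample maximum across chains, while the FSM design pays only the busiest chain's total.

```python
# Iteration counts N[i][j] for 3 samples x 3 chains; the slow chain rotates.
N = [[5, 1, 1],
     [1, 5, 1],
     [1, 1, 5]]

# Standard vmap design: every sample waits for its slowest chain.
per_sample_sync = sum(max(row) for row in N)                # 15
# FSM design: a single synchronization at the end.
end_sync = max(sum(row[j] for row in N) for j in range(3))  # 7
```

Since the sum of per-sample maxima always dominates the maximum of per-chain sums, the FSM's end-only synchronization can never do worse on iteration counts alone; the control-flow overhead discussed above is what it pays in exchange.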
Claims And Evidence: The empirical evidences are strong but the theoretical evidences have several presentation issues. However, I do believe all claims are correct. See below for details.
Methods And Evaluation Criteria: The proposed method solves a key problem in efficient MCMC sampling. The problem of synchronization is part of why MCMC were mainly run with few chains on small machines. The solution of this work is neat and makes sense from the perspective of parallel computing systems. I would love to see the solution to be implemented in every modern MCMC libraries.
There are two axes of the benchmark: the MCMC algorithms and the models. The work has a broad selection of benchmark MCMC algorithms (delayed rejection MH, elliptical slice sampling, HMC-NUTS and transport elliptical slice sampling), which are all impactful in practice. The benchmark models include both simulated and practical models, covering difficult cases such as high correlations and funnels. I especially like the arguments of acceptance rate of at least one chain in parallel MCMC (line 436), which highlights the necessity of FSM based approaches.
Theoretical Claims: My major issue with this paper is about the theoretical framework. I believe they are correct, but there are also some obstacles that keep it from being checked thoroughly.
- Equations (3) and (4) work with expectations. I do not see how (4) is true given the information up until this part. More specifically, my expression for (4) is $\mathbb{E}[\max_{j\le m}\sum_{i=1}^nN_{\infty,j}^{(i)}]$. The gap might be justified with Theorem 4.1, but some connections need to be made.
- The work refers to Algorithm 1 as "standard design" and Algorithm 3 as "FSM design". Both algorithms are analyzed based on the executing time of each state function in the FSM design. There is a disconnection between the state functions and Algorithm 1 as there are no states in Algorithm 1. To make sense, I am expecting an explanation of how much a step in Algorithm 1 costs as a function of $c_1, c_2,..., c_K$.
- Following the above point, I find the notations around line 220 confusing. $c_{\neg k}$ is not defined, and even if it is, I am not expecting $A_0$ to depend on a specific $k$. The presentations of $A_0, B_0, A_F, B_F$ are all not justified.
Experimental Designs Or Analyses: I checked the settings of all experiments and they all look reasonable to me.
- Among all the MCMC algorithms, I am using NUTS the most. The NUTS experiment in 7.3 assumes identity mass matrix which looks different from standard approaches. But it is chosen to demonstrate how FSM helps the synchronization problem. However, I believe there should be more experiments on NUTS for a paper like this. It would be much more interesting to try NUTS on the models of Table 1.
Supplementary Material: No, I did not review the supplementary material.
Relation To Broader Scientific Literature: As the paper also points out, the synchronization problem is well documented in the literature (BlackJax,2019;Sountsov et al.,2024;Radul et al.,2020). The paper borrows classic ideas of finite state machines from computer systems (Hopcroft et al.,2001), and shapes it with problems of MCMC sampling.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: I believe there should be another change of mind about parallel MCMC following this work. The synchronization problem caused some chains to run "in vain" waiting for the slowest chain. With the FSM design, however, the waiting by the faster chain IS useful for sampling. For example, if we set $n=1,000$, the slowest chain will generate $1,000$ samples while the fastest chain might have generated $1,100$ samples. They shall all be mixed together for estimation. In practical settings, we can set a limit of # of transition cycles instead of # of samples to make more sense.
Other Comments Or Suggestions: A list of typos I can find:
- Line 121 right: $k\in\\{0,1\\}$ -> $k\in\\{1,2\\}$
- Line 159 right: $k$ starts from 1 so probably $(1,z_1)$
- Line 212 right: $(n,m)$ and $(m,n)$ are used inconsistently
- Line 8 Algorithm 5: should $g\\_flag$ be returned?
Questions For Authors: My points can be condensed into the questions:
- Is there a more intuitive justification of (4), as well as (6) and (7)?
- How good is the performance of NUTS-FSM compared to NUTS on the models in Table 1?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your diligent review - we are glad you believe our method is a worthwhile addition to modern MCMC libraries. Below we address your questions.
**Theoretical framework:** Eq(3) and eq(4) hold without taking expectations under appropriate theoretical conditions, but we appreciate the symbol is doing a lot of heavy lifting in our effort to convey the basic ideas. With more standard probability theoretic notation, the exact form of eq(3) is $$\sum_{i=1}^n\max_{j \leq m}N_{i,j}=n\mathbb E\max_{j \leq m}N_{\infty,j} + \mathcal O_p(\sqrt{n})$$ which holds under an appropriate Central Limit Theorem or concentration inequality on the sequence $(\max_{j \leq m}N_{i,j} : i \geq 1)$. Since $0 < n \mathbb E\max_{j \leq m}N_{\infty ,j} = \mathcal O(n)$ in our setting, as $n$ grows the stochastic error term becomes negligible.
For eq(4), the exact version under equivalent conditions is
$$\max_{j \leq m}\sum_{i=1}^nN_{i,j}= \max_{j \leq m} n \mathbb EN_{\infty,j} + \mathcal O_p(\sqrt{n})$$
which holds using the same arguments as eq(3) for the sum inside the max above, and using the fact that the max over finitely many $\mathcal O_p(\sqrt{n})$ variables is $\mathcal O_p(\sqrt{n})$.
This is all made rigorous in Theorem 4.1, where we use an approach based on Hoeffding bounds for Markov chains.
We appreciate the reviewer bringing this to our attention. We are happy to replace eq(3) and eq(4) by their reformulations above and modify the text accordingly.
**Section 4 explanation:** We agree that more details on how the quantities in Section 4 hold would improve clarity.
The states $S_1,...,S_K$ exist in Algorithm 1 inside `sample` (recall that our FSM decomposes `sample` into contiguous states/blocks $S_1,...,S_K$ and defines `step` as a single call of one of these states). A single call of `vmap(sample)` for $m$ chains executes all $S_l$ (for $l \neq k$) once, and $S_k$ (the iterative state) $\max_{j \leq m}N_{i,j}$ times, where $N_{i,j} =$ \# while loop iterations to get sample $i$ for chain $j$ and the max reflects that the while loop executes until it terminates for all chains.
Assume the cost of state $S_j$, when executed for a batch of $m$ chains using `vmap`, is $c_j(m)$. In this case, the cost of calling `vmap(sample)` to get sample $i$ for all chains is $$\sum_{l \neq k}c_l(m) + c_k(m)\max_{j \leq m}N_{i,j}$$ Averaging this cost over $n$ samples gives eq(6) with $A_0(m) = c_{\neg k}(m)=\sum_{l \neq k}c_l(m)$ (we appreciate $c_{\neg k}$ should have been made clear) and $B_0(m) = c_k(m)$.
For eq(7), we follow similar steps, with the main difference that now `vmap(step)` executes every block for all chains. This means the cost of a single FSM step is $\sum_{l=1}^K c_l(m)$. The overall cost of sample $i$ for chain $j$ is therefore $A_F(m) + B_F(m) N_{i,j}$, where $A_F(m) = (K-1)\sum_{l=1}^K c_l(m)$ and $B_F(m) = \sum_{l=1}^K c_l(m)$. In the paper we multiply by $\alpha$ to later motivate amortization. Averaging over $n$ samples gives eq(7).
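The cost accounting above can be checked numerically with a toy sketch (illustrative block costs and iteration counts, not the paper's code; `alpha` plays the role of the control-flow overhead factor):

```python
# c[l]: batched cost of block S_l for m chains; block k is the while-loop
# body; N[i][j]: loop iterations needed for sample i on chain j.
c = [0.1, 1.0, 0.1]
k = 1
N = [[10, 1, 1, 1],
     [1, 10, 1, 1],
     [1, 1, 10, 1],
     [1, 1, 1, 10]]

def naive_cost(c, k, N):
    # vmap(sample): per sample, all non-iterative blocks run once and the
    # loop body runs max_j N[i][j] times (all chains wait for the slowest).
    fixed = sum(cl for l, cl in enumerate(c) if l != k)
    return sum(fixed + c[k] * max(row) for row in N)

def fsm_cost(c, k, N, alpha=1.0):
    # vmap(step): every FSM step executes all blocks (cost alpha * sum(c));
    # chain j needs (K - 1) + N[i][j] steps per sample, and wall-clock time
    # is set by the chain needing the most steps in total.
    step_cost = alpha * sum(c)
    steps = [sum(len(c) - 1 + row[j] for row in N) for j in range(len(N[0]))]
    return step_cost * max(steps)
```

With these skewed iteration counts and a cheap non-iterative overhead, the FSM cost (about 25.2) comes out below the naive cost (about 40.8), matching the regime where the paper reports speed-ups; making the loop body cheap relative to the other blocks reverses the comparison, illustrating the bias discussed for the basic FSM.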
**Additional NUTS Experiments** Since another reviewer asked us to implement on datasets in the `posteriordb` database, in the limited time we chose to implement NUTS on (i) the Real-Estate dataset (Experiment 7.2), (ii) two datasets from Experiment 7.4, and (iii) two known challenging `posteriordb` datasets (`pilots` and `soil`). We used 128 chains + 1000 samples per chain + 400 steps pre-tuning for step size + mass matrix via the window adaptation algorithm. To save space, below we report the improvement in ESS/Sec using the FSM:
- Real Estate: 3.14x
- Google Stock: 3.36x
- Predator Prey: 1.55x
- Soil: 2.42x
- Pilots: 0.91x
We find substantial speed ups in ⅘ cases, and larger speed-ups than the FSM variant of Ellip-slice/TESS across all datasets we implemented for both methods. We note that because these datasets have highly variable local geometries (except real-estate GPR), computational resources did not average out across chains after 1000 samples. We expect that when sampling even longer chains, these speed-ups would be even larger. The Pilots dataset has a relatively cheap log-likelihood and less variation in the number of integration steps used, and so the gains in avoiding synchronization costs are offset by the increased cost per block of the FSM.
**Using extra samples**: We also had the idea to let chains that finish early collect more samples while waiting for other chains. The problem is that this biases the samples, as chains initialized in regions that are easier to sample from are over-represented. Importance weighting could resolve this for estimating expectations, but we leave this to future work. Enforcing n samples per chain still allows us to avoid the inefficiency from chains waiting at every iteration with `vmap`.
**Typos**: Thank you for pointing those out, we will amend them in revisions. In line 8 of Alg 5 `g_flag` is not meant to be returned - the flag is an output of the state functions, and is only used in `amortized_step`.
---
Rebuttal Comment 1.1:
Comment: I appreciate the additional clarifications (in retrospect I missed the definition of $k$ during reviewing) and NUTS experiments (the inclusion of adaptation schemes is really helpful). I raise my score to 4. | Summary: This work focuses on minimizing synchronization steps when vectorizing MCMC algorithms. Modern MCMC methods, such as NUTS, involve stochastic number of operations per Markovian transition, depending on the initial sample and random seed. This introduces unavoidable overhead when running multiple chains in a vectorized manner, as each transition step requires synchronization, forcing all parallel chains to wait for the slowest one to complete, leading to wasted computational resources.
Addressing this long-standing challenge is crucial for enabling MCMC algorithms to fully leverage modern hardware, such as GPUs. Most existing approaches tackle this problem by designing new MCMC algorithms that minimize random execution steps (e.g., CHEES). In contrast, this work provides a systematic implementation strategy for existing algorithms—such as NUTS and the slice sampler—using finite state machines on a single chain, ensuring that synchronization occurs only at the final transition step during vectorization.
The authors present theoretical results quantifying the expected performance gains compared to naïve vectorization and empirically demonstrate the effectiveness of their approach across multiple examples.
Claims And Evidence: This work makes careful and well-supported claims regarding the expected improvements of the proposed implementation strategy. The authors substantiate their arguments with rigorous yet concise theoretical analysis and thorough empirical validation.
I find their reasoning compelling and fully agree with their conclusions.
Methods And Evaluation Criteria: yes, they make sense.
I personally think a better metric to compare the efficiency gain would be the number of synchronization steps per second or per 100 samples, though this might not be straightforward to log during the run.
Also, I'm a bit surprised that the FSM's relative efficiency gain is only 3/4 times faster than the naive vectorization (as shown in Fig 6); I was expecting a much more significant difference. See the question section for details.
Theoretical Claims: I checked the correctness of the proof. The theoretical claims operate under very reasonable assumptions, and the proof is sort of alluded to in the main text.
Experimental Designs Or Analyses: The selection of metrics, choice of competitors, and Bayesian examples all seem well thought out and appropriate.
However, for a fundamental idea like this (which I hold in very high regard), I believe a more comprehensive empirical evaluation would further strengthen the work. I recommend conducting additional evaluations on a broader set of models from PosteriorDB or TensorFlow Probability’s Inference Gym to provide more extensive validation.
Supplementary Material: Yes, I read the proofs and additional expt results.
Relation To Broader Scientific Literature: This work provides a thorough literature review of existing efforts to enable MCMC on modern hardware. To my knowledge, it takes a completely different approach—one that should absolutely be incorporated into modern probabilistic programming languages (PPLs). Instead of modifying the algorithm itself, this work focuses on optimizing its implementation, making it the first to do so in this manner.
Essential References Not Discussed: Not to my awareness.
Other Strengths And Weaknesses: n/a
Other Comments Or Suggestions: - This could be due to my personal taste or lack of expertise in this field. However, I would appreciate a pedagogical example (e.g., the elliptical slice sampler with two chains) showcasing, step by step, the difference in synchronization between the naive vectorization and the FSM variant. To buy more space, I personally think it's okay to move section 5 to the appendix.
- Readers with less familiarity with JAX could benefit from a proper definition of the `switch` function in Algorithm 2.
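For reference, `jax.lax.switch(index, branches, *operands)` selects among a list of branch functions by a traced integer index; under `vmap`, all branches are executed for the batch and the results are selected per element. A minimal illustration (the toy branch functions standing in for FSM state functions are ours):

```python
import jax
import jax.numpy as jnp

def s1(x): return x + 1.0  # toy stand-ins for FSM state functions
def s2(x): return x * 2.0
def s3(x): return x - 3.0

def step(state_idx, x):
    # Dispatch each chain to its current state function.
    return jax.lax.switch(state_idx, [s1, s2, s3], x)

out = jax.vmap(step)(jnp.array([0, 1, 2]), jnp.ones(3))
# out == [2.0, 2.0, -2.0]: each chain runs its own state in one lockstep call
```

This is the mechanism that lets chains occupy different FSM states within a single vectorized `step` call.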
Questions For Authors: - I'm a bit surprised that the FSM's relative efficiency gain is only 3/4 times faster than the naive vectorization (as shown in Fig 6), given the drastic difference in iterations per sample. Why don't walltime and ESS/sec scale at the same rate? (My guess is that walltime is dictated by the slowest chain, while iterations per sample are controlled by the average speed across the chains.)
- At the beginning of Section 4, the authors mention the limitation of FSM vectorization that all branches have to be evaluated. If my understanding is correct, this would be the same for the naive vectorization as well? As long as one calls `vmap` or `@jit`, all branches of control flow will be evaluated? So from my perspective, regardless of the number of steps needed to obtain n samples, the performance of the FSM vectorization will be lower bounded by the naive vectorization?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your detailed and engaging review, we are excited that you see the value in our contribution. Below we address your comments and questions.
**FSM Efficiency gain:** The main reasons that the improvement in walltimes don’t match iters/sample in Fig 6 are (i) as you guessed, the slowest chain is still substantially slower than the mean after n=1000 samples, and (ii) the log-likelihood is very cheap to compute in this experiment, so control-flow costs (which are large in NUTS) dominate. Since each FSM `step` executes all states (see clarification below on branch evaluation cost vs. naive vectorization), this makes the trajectory integration more expensive per step, even with step bundling. It should also be noted that $R(m)$ is around $20$ for $m=100$ chains, and we obtain a speed-up of around 10x, so the gap is actually not that large. For $m=500$ chains it is larger, but we suspect this is because of limited GPU capacity.
**FSM evaluating all branches:** All branches of control-flows are indeed evaluated when vectorizing both the FSM and the naive implementation with vmap. However, this has different effects in each case due to the differing control-flow used “between” the blocks/states. The FSM `step` uses a switch over all code blocks $S_1,…,S_K$ that comprise `sample`. This means that a single call of `vmap(step)` has to execute all blocks for all chains, whichever block the chains are currently on. By contrast, in `sample`, the control-flow “between blocks” takes the form of (possibly multiple) while loops. In the case of a single while loop with blocks $S_1,S_2,S_3$, this results in the loop body ($S_2$) being evaluated for all chains until they have finished iterating. However when calling $S_1$ and $S_3$ with sample, only those blocks are executed.
Essentially, we exchange the variance in execution time in `sample` (across chains), for a bias. As a result, when there is little/no variance in execution time (e.g. the while loop always terminates after 1 iteration), the basic FSM (without our optimizations) can perform worse than naive vectorization. The step-bundling and amortization procedures outlined in Section 5 are specifically designed to minimize this bias. For example, by using `bundled_step` (with blocks called in chronological order) the FSM would run in the same time as the naive vectorization if there is no waiting and the loop terminates in one iteration, because a single step now progresses all chains through $S_1 \to S_3$. Amortization enables functions which are executed in multiple blocks/states to only be called once per FSM step, therefore reducing the additional overhead. For the elliptical slice sampler, doing this for the logpdf can reduce the cost of executing all blocks almost back to the cost of a single block, whenever the logpdf is expensive enough to dominate computational cost.
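The contrast described above can be seen directly in JAX: `vmap` of `jax.lax.while_loop` produces a single batched loop whose condition is the OR over all elements, so fast elements keep iterating (masked) until the slowest one finishes. A minimal sketch:

```python
import jax
import jax.numpy as jnp

def count_to(n):
    # while i < n: i += 1  -- a data-dependent iteration count
    return jax.lax.while_loop(lambda i: i < n, lambda i: i + 1, 0)

# The per-element results are correct, but the batched loop executes
# 100 iterations for all three elements -- the fast ones just wait.
out = jax.vmap(count_to)(jnp.array([1, 5, 100]))
# out == [1, 5, 100]
```

This waiting inside the batched while loop is exactly the synchronization cost that the FSM's switch-based `step` avoids.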
**posteriordb/InferenceGym:** This is a great suggestion. Another reviewer also asked for more experiments with NUTS. Given the limited time, we have now implemented NUTS on several other datasets from our paper, and two datasets from posteriordb (Pilots and Soil-Incubation), which were in the list of challenging problems for NUTS in Table 1 of [1]. For the posteriordb problems we re-defined the logpdfs in JAX, and used the data from the JSON files at the posteriordb repository. Below we present the improvements in ESS/sec when implementing NUTS with our FSM (vs. BlackJax NUTS) on these datasets:
- (Experiment 7.2) Real Estate: 3.14x
- (Experiment 7.4) Google Stock: 3.36x
- (Experiment 7.4) Predator Prey: 1.55x
- (posteriordb) Soil: 2.42x
- (posteriordb) Pilots: 0.91x
We find substantial speed ups in ⅘ cases, and larger speed-ups than the FSM variant of EllipSlice/TESS across all datasets we implemented for both methods. We note that because most of these datasets have highly variable local geometries, computational resources did not average out much across chains after 1000 samples. We expect that when sampling even longer chains, these speed-ups would be even larger. The Pilots dataset has a relatively cheap log-likelihood and less variation in the number of integration steps used, and so the gains in avoiding synchronization costs are offset by the increased cost per block of the FSM.
**Pedagogical Example:** We agree that walking through the effect of synchronization and using our FSM on a simple example would add further clarity to the paper. We had wanted to do this in the submitted version but could not due to space constraints. We propose to use part of the additional page available to include this in Section 2 for the Elliptical Slice Sampler, and return to it again in Section 3 after presenting the FSM.
**Switch Function Definition:** Thank you for the suggestion, we will include an explanation in the final version.
[1] Magnusson et al. (2024). “posteriordb: Testing, Benchmarking and Developing Bayesian Inference Algorithms”, arXiv preprint arXiv:2407.04967.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. After reading the rebuttal and the comments from the other reviewers, I have decided to maintain my positive opinion of this work.
Claims And Evidence: The claims in the paper are supported by experimental evaluations.
Methods And Evaluation Criteria: The proposed methods make sense for the targeted problem.
Theoretical Claims: No theoretical claims in the paper.
Experimental Designs Or Analyses: The experimental analysis makes sense but it is not possible to validate the results.
Supplementary Material: N/A
Relation To Broader Scientific Literature: -
Essential References Not Discussed: -
Other Strengths And Weaknesses: Strengths:
- The sample-to-FSM approach generalizes to various algorithms.
- The time complexity of the FSM and its component functions is given, with comprehensive and detailed mathematical derivations.
- The approach is tested on a variety of MCMC algorithms, including Delayed-Rejection Metropolis-Hastings, elliptical slice sampling, and HMC-NUTS.
Weaknesses and Suggestions:
- The paper shows the FSM of an MCMC algorithm with a single while loop, two sequential while loops, and two nested while loops. It does not mention other configurations, or how the FSM should be obtained if the while loops occur in different arrangements or interact with each other.
- A step function is defined that performs a single transition along an edge of the FSM diagram and is called iteratively over the blocks of `sample`. This process is described only briefly: it is not mentioned how to deal with errors, or where to resume if a long sequence of blocks of `sample` is preempted for some reason.
- The paper shows how to make the FSM design more optimal by reducing the parameters α and K, and claims the step-bundling method increases efficiency, but no detailed example is given.
- Another optimization is proposed, returning an additional g_flag variable from the step function; however, its effect on the cost function or time complexity is not discussed.
Minor:
The experimental results are obtained with JAX. Although JAX is popular, PyTorch or TensorFlow could be used to extend the approach to other frameworks and libraries.
Other Comments Or Suggestions: none
Questions For Authors: see the points listed under "weaknesses" above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our work and for your recommendations to improve the paper. Below we address your comments and questions.
**More general while loop structures:** The conversion of algorithms into FSMs by splitting up while loops is a generally applicable recipe, but we agree that our presentation in Sec 3 sweeps this under the rug by immediately launching into specific algorithms. We have added a paragraph at the outset that explains how to obtain FSMs for any finite number of while loops. This can in fact be done automatically: We have a parser that reads the MCMC algorithm in a symbolic language and outputs the FSM (see below for details). This is already implemented and tested. We had not included it to save space and because all MCMC algorithms with while loops we are aware off fall into the categories considered in our paper. However, since the automatic procedure exactly addresses your point, we will add it to the appendix if the reviewers have no objections.
*General FSM construction procedure:* We have derived an algorithm able to convert arbitrary programs (written in a symbolic programming language) into their respective FSM graph. The algorithm (1) calls a parser to obtain the syntax tree of its input program, (2) creates a coarsened version of that tree where each leaf corresponds to a code block (and each non-terminal node corresponds to a while loop condition), and (3) recursively transforms that tree into an FSM graph. Each node and edge of the final FSM graph can be mapped back to a code block or a while loop condition of the original program.
**Step function errors:** Our method transforms an existing MCMC program into an equivalent sampler using the same lines of code within blocks. As such, any error handling in the original program is retained by the FSM. To handle preemption, one can straightforwardly checkpoint the state of the FSM and all flags at a desired frequency of `step` updates.
**Bundling method:** In Section 5.1, we provide an illustrative example of `bundled_step` (see Algorithm 4) and discuss how it improves performance over `step`. We also tested the performance of `bundled_step` in the Delayed-Rejection experiment 7.1 (see the discussion under Experimental Setup, Results, and in the caption of Fig 6). However, we agree that more explanation of the effect of step-bundling would improve the paper. To that end, we propose to add text similar to the following in Section 5.1:
"As long as the switch in `step` executes each state/block sequentially when using `vmap` (and our testing finds that it does$^*$), then, both `vmap(step)` and `vmap(bundled_step)` have the same execution cost. This is because `vmap(bundled_step)` always executes all blocks/states sequentially, and both `vmap(bundled_step)` and `vmap(step)` execute all blocks. As a result, if each chain makes on average M transitions when calling `vmap(bundled_step)`, the speed-up over using `vmap(step)` is M-fold. Since $M \geq 1$ (i.e. `bundled_step` must make at least one transition each time it is called), this means that `bundled_step` must always weakly improve the performance of `step`, with `vmap`".
$^*$We have tested the JAX's `switch` and found that it behaves consistently with sequential block execution. We can add this to the appendix, if desired.
**Amortization method:** We explain in Section 5.2 that amortization ensures any function $g$ that is called in multiple states/blocks, will only be executed once per FSM step. This reduces $\alpha$. In Section 7.2 we also test the effect of amortization on the performance of the elliptical slice sampler (see line 376 onwards). However, we appreciate that more detail on the effect of amortization would improve the clarity of our paper. We therefore propose to add text similar to the following in Section 5.1:
"If a function $g$ is called in M different states/blocks and its executions account (in total) for a $\beta \in [0,1]$ fraction of the cost of all blocks/states $S_1,...,S_K$, then amortization will reduce the cost of calling `vmap(step)` to a $\beta/M + (1-\beta)$ fraction of the original cost, since $g$ is now executed only once, instead of $M$ times. We indeed observe this for the elliptical slice sampler in Section 7.2 (see Fig 5 - here $M=2$ and $\beta \approx 1$ as the log-likelihood is very expensive, resulting in a 2x speed-up over no amortization)".
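For concreteness, the proposed cost model can be checked with a few lines of arithmetic (our own illustration; the function name is ours, not from the paper):

```python
def amortized_cost_fraction(M, beta):
    """Fraction of the original per-step cost remaining after amortization:
    g (a beta fraction of total cost, called in M blocks) now runs once."""
    return beta / M + (1.0 - beta)

# Elliptical slice sampler setting: M = 2, beta ~ 1 (the log-likelihood dominates)
frac = amortized_cost_fraction(2, 1.0)
print(frac, 1.0 / frac)  # remaining cost fraction 0.5, i.e. a 2x speed-up
```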
**JAX vs. Torch/TensorFlow:** We agree that testing other frameworks is of interest. We have not yet done so because (1) PyTorch is not applicable, since its version of `vmap` does not yet handle data-dependent control flow, and (2) TensorFlow’s `vectorized_map` is so similar to JAX’s `vmap` (it is a tensor-level vectorization tool that constructs a single batched while loop for all chains) that we expect it to behave very similarly. We do plan to test this properly in the future, but with limited space, we feel that a proper evaluation for one framework is more helpful.
The authors show that a naive parallelisation of such chains (i.e. by passing functions that execute an entire MCMC iteration to vmap) is wasteful because it leaves most of the processors idle while waiting for the slowest iteration to complete. Instead, they propose to break MCMC iterations down into smaller steps (formalised via finite state machines). This allows automatic vectorisation to occur on the level of these smaller steps rather than on the level of entire MCMC iterations.
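The intuition can be made concrete with a toy cost model (my own sketch, not code from the paper, ignoring per-call overheads): lockstep whole-iteration vectorisation pays the slowest chain's cost on every iteration, whereas block-level (FSM-style) execution only pays for the busiest chain's total work.

```python
import random
random.seed(0)

# Toy cost model: chain c's iteration t needs cost[c][t] work units,
# e.g. the number of rejection-loop passes in that iteration.
n_chains, n_iters = 64, 50
cost = [[1 + random.randrange(8) for _ in range(n_iters)] for _ in range(n_chains)]

# Naive vmap over whole iterations: each batched iteration waits for the slowest chain.
naive_work = sum(max(cost[c][t] for c in range(n_chains)) for t in range(n_iters))

# Idealised block-level execution: chains advance independently one small step per
# call, so total batched calls track the busiest chain's total work instead.
fsm_work = max(sum(cost[c]) for c in range(n_chains))

print(naive_work, fsm_work)  # fsm_work <= naive_work (max of sums <= sum of maxima)
```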
Claims And Evidence: The claims are supported by clear and convincing evidence. The computational cost improvement over "naive" vectorisation (on the MCMC iteration level) is established theoretically. Run-time improvements are demonstrated empirically on a number of benchmark examples.
Methods And Evaluation Criteria: The proposed methods make sense.
Theoretical Claims: I checked the main manuscript but not the additional derivations in the appendix. I did not find any issues.
Experimental Designs Or Analyses: I only checked the experiments insofar as checking their description in the manuscript. I did not find any issues.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: Effectively using parallel architectures to speed up MCMC implementations is an important research area. This work contributes to it by providing a fairly simple solution to exploit existing general-purpose automatic vectorisation routines.
Essential References Not Discussed: None come to mind.
Other Strengths And Weaknesses: I think this work is overall well written and well structured. As far as I know, the contribution is original. In my opinion, the contribution is also significant enough for publication in ICML because it provides a fairly simple way of speeding up multiple-chain MCMC sampling (for certain MCMC kernels) using existing general-purpose automatic vectorisation functionality.
Other Comments Or Suggestions: - Figures should not appear in the text on pages before they have been referenced (see, e.g., Fig. 1 which is only referenced on Page 3; or Fig. 6)
- Line 204, RHS: the double use of the index $k$ in the lower bound for $\alpha$ is confusing
- Section 4, 2nd paragraph. Maybe denote the index of the block containing the loop by $k^*$ rather than $k$ to avoid confusing this "special" value of $k$ and $k$ as a generic index below
- Line 2020: define what $c$ with "negated $k$" in the subscript means
- Eq. 10: Is this really equal for finite $n$ (not just approximately equal)?
- Line 284: use used
- Fig. 7: "and or than half" in the caption
- Table 1: use natbib's citet rather than citep for the reference in the caption
- Bibliography: fix missing capital letters in names/proper nouns in article titles, and in journal names; use of journal name abbreviations is also not consistent.
Questions For Authors: Can you give some more guidance for how to optimise the "bundling" for the "bundled_step" routine? Presumably, one has to be careful here because its performance can end up being worse than "step" (i.e. without any bundling).
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your detailed review and recommendations. We are glad that you see the value of our contribution and your feedback has been helpful in strengthening the clarity of the paper. Below we address your main questions/comments:
**On step bundling**: In short, performance should always be weakly improved (i.e. not get worse), since at worst `bundled_step` takes a single step during each call.
*In more detail*:
- As long as the switch in `vmap(step)` executes the blocks/states sequentially (and our testing indicates that it does), the step-bundling routine should always weakly outperform the basic `step` function. Like `vmap(step)`, `vmap(bundled_step)` will execute all blocks/states (sequentially) when called. However, `bundled_step` allows any chains that transition from $S_{k} \to S_{k+1}$, to (for example) immediately transition from $S_{k+1} \to S_{k+2}$ at no extra cost (assuming $S_{k+1}$ is called after $S_k$ in the step bundling order). As such, in the worst case each chain only progresses through exactly one transition (which mirrors the behaviour of `step`).
- Whilst one can optimise the ordering of `bundled_step` to improve performance, a surprisingly effective heuristic is just to use the chronological order of $S_1,...,S_K$, since (for all algorithms we considered in this work) $S_{k+1}$ is constructed as a block that is callable after $S_k$ in the original `sample` function. This ordering is what we use in the paper. In our experiments, we have not found a significant gain to manual tuning of the block ordering over chronological ordering.
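To illustrate the chronological-ordering argument, a small counting sketch (our own toy model; `bundled_calls` and the trajectory encoding are ours): each chain's iteration is a sequence of visited block indices, `step` consumes one entry per call, while `bundled_step` consumes a strictly increasing run of entries per call.

```python
def step_calls(traj):
    # vmap(step): exactly one transition per call
    return len(traj)

def bundled_calls(traj):
    # vmap(bundled_step) with blocks swept in chronological order: within one
    # call a chain keeps advancing while its next block index increases; a
    # repeat/backward jump (e.g. a rejection loop) waits for the next call.
    calls, p = 0, 0
    while p < len(traj):
        calls += 1
        prev = -1
        while p < len(traj) and traj[p] > prev:
            prev = traj[p]
            p += 1
    return calls

traj = [0, 1, 1, 1, 2, 3]  # block 1 revisited twice (two rejections)
print(step_calls(traj), bundled_calls(traj))  # 6 vs 3: a 2x saving here
```

In the worst case (a chain that loops in one block forever) every bundled call still makes exactly one transition, matching `step` — the weak-improvement guarantee above.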
**Definition of $c_{\neg k}$**: Thank you for pointing out this omission, we meant to define $c_{\neg k}(m) = \sum_{j \neq k} c_j(m)$.
**Eq(10)**: Equation 10 holds asymptotically as $n \to \infty$, as it is derived by plugging in the limits of $C_0(m,n)$ and $C_F(m,n)$, which are given by Theorem 4.1. Since the approximation error for each of these terms is (with a high probability) $\mathcal O(1/\sqrt{n})$, after some basic manipulations (and assuming $C_F(m,n) > 0$) this results in the approximation error $C_0(m,n)/C_F(m,n) - E(m) = \mathcal O(1/\sqrt{n})$ (again, with some fixed high probability $1-\delta$). If space permits and there are no objections, we are happy to add this analysis in the main text.
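If space permits, the manipulation could be summarised along the following lines (our sketch of the standard ratio argument, with $A,B$ denoting the respective limits from Theorem 4.1):

```latex
% With probability at least 1-\delta, suppose
%   C_0(m,n) = A + O(1/\sqrt{n}), \quad C_F(m,n) = B + O(1/\sqrt{n}), \quad B > 0,
% where E(m) = A/B. Then
\[
\frac{C_0(m,n)}{C_F(m,n)}
  = \frac{A + O(1/\sqrt{n})}{B + O(1/\sqrt{n})}
  = \frac{A}{B} + O\!\left(\frac{1}{\sqrt{n}}\right)
  = E(m) + O\!\left(\frac{1}{\sqrt{n}}\right).
\]
```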
**Other typos+ formatting suggestions**: Thank you for pointing out the typos, formatting and improvements, we will amend those accordingly in the final version. | null | null | null | null | null | null |
Towards Better-than-2 Approximation for Constrained Correlation Clustering | Accept (spotlight poster) | Summary: In this paper, the authors study Constrained Correlation Clustering, introduced by van Zuylen and Williamson (2009). The problem is a generalization of the well-known Correlation Clustering, where we are additionally given the set of friendly pairs and the set of hostile pairs and the goal is to find an optimal clustering (in terms of Correlation Clustering) satisfying the additional constraint that every friendly pair is placed within a cluster and every hostile pair is placed across clusters. In the context of machine learning, this problem can be seen as a semi-supervised variant of Correlation Clustering. For the problem, the state-of-the-art polynomial-time approximation ratio is 3, which was also given by van Zuylen and Williamson (2009). In the present paper, the authors propose a polynomial-time better-than-2 (< 1.94) approximation algorithm. To this end, the authors combine two techniques that have recently been developed to achieve better-than-2 approximations for Correlation Clustering: one is the cluster LP (Cao et al., 2024) and the other is a local search algorithm based on an optimal clustering (Cohen-Addad et al., 2024).
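For concreteness, the objective and the hard-constraint feasibility condition can be written down in a few lines (my own illustrative sketch, not code from the paper; all names are mine):

```python
from itertools import combinations

def cc_cost(n, plus_edges, labels):
    """Correlation clustering disagreements on the complete signed graph K_n:
    a + edge cut between clusters, or a - edge kept inside a cluster."""
    plus = {frozenset(e) for e in plus_edges}
    return sum(
        (frozenset((u, v)) in plus) != (labels[u] == labels[v])
        for u, v in combinations(range(n), 2)
    )

def feasible(friendly, hostile, labels):
    """Constrained variant: friendly pairs co-clustered, hostile pairs separated."""
    return (all(labels[u] == labels[v] for u, v in friendly)
            and all(labels[u] != labels[v] for u, v in hostile))

# 4 nodes, + edges {0,1} and {2,3}; the clustering [0,0,1,1] has no disagreements
# and satisfies the example friendly pair (0,1) and hostile pair (1,2).
print(cc_cost(4, [(0, 1), (2, 3)], [0, 0, 1, 1]))  # 0
print(feasible([(0, 1)], [(1, 2)], [0, 0, 1, 1]))  # True
```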
## update after rebuttal
My initial recommendation was conditional due to an issue in the proof. The authors have now rectified this in their rebuttal, and I maintain my original recommendation.
Claims And Evidence: Generally speaking, the claims are supported by clear and convincing evidence. In particular, the intuitive explanations provided throughout the paper make it easy to follow.
Methods And Evaluation Criteria: The proposed algorithm makes sense.
Theoretical Claims: I have checked the correctness of all contents in the main body. My recommendation, Weak Accept, is conditional, based on the following concern about the correctness. Let's look at the last paragraph of the proof of Lemma 4.4 (starting with "We similarly transform the right-hand side..."). The authors state that a plus edge in $\mathcal{E}^+(\mathcal{L})$ contributes an amount of $z_C$ to the sum if either $u$ or $v$ is contained in $C$. However, in my understanding, it is wrong. Indeed, a plus edge in $\mathcal{E}^+(\mathcal{L})$ contributes an amount of $z_C$ to the sum if both $u,v$ are contained in $C$. Note that the right-hand side computes the cost decremented by $C$ rather than that incremented by $C$. This leads to the contribution $1-x_{uv}$ rather than $1+x_{uv}$. Similarly, for a minus edge, it seems we have the contribution $1-x_{uv}$ rather than $1+x_{uv}$. Based on the above, it is not clear if we can still have the simplification stated in Lemma 4.4.
Experimental Designs Or Analyses: No experiments performed.
Supplementary Material: I did not check the supplementary material.
Relation To Broader Scientific Literature: Correlation Clustering is one of the most actively-studied clustering problems in machine learning.
Essential References Not Discussed: No essential references missing.
Other Strengths And Weaknesses: - I enjoyed reading the paper and was a bit surprised that a better-than-2 approximation for (Constrained) Correlation Clustering can be achieved through such a simple and reader-friendly analysis.
- Although it is acknowledged that the paper improves the state-of-the-art result of the SODA 2009 paper, the contribution might not be especially significant to the machine learning community. Indeed, the proposed algorithm is, unfortunately, not practical due to solving the variant of the cluster LP, and the authors do not test the algorithm empirically.
Other Comments Or Suggestions: Comments:
- Regarding Constrained Correlation Clustering, unfamiliar readers could think that handling the set of friendly pairs is trivial (as they could think that the pairs can be simply aggregated into super elements). It would be better to point out that the aggregation leads to a weighted instance outside the problem.
- Considering that Constrained Correlation Clustering is a generalization of Correlation Clustering, we can see that the present paper also simplifies the analysis of some better-than-2 approximations for Correlation Clustering. This point could be better highlighted.
- Section 4 could be organized better using subsections etc.
- What the "fractional clustering" refers to should be explained.
- Lemmas 4.2 and 4.13 are trivial and do not need the proofs. Alternatively, Corollary 4.14 is not so straightforward and the proof would be helpful.
- On Page 5, we do not need to mention the range of the cost to see the polynomial time complexity of Algorithm 1.
- The reason why we are required to use the fractional clustering, as stated on Page 6, is not quite clear. Is it to simplify the analysis?
- The first sentence of the last paragraph of the proof of Lemma 4.16 is not clear.
- It might be unusual not to have a concluding section.
Typos:
- Page 2: Wirt -> Wirth
- Page 2: an $(1+\epsilon)$ -> a $(1+\epsilon)$
- Page 3: unweighted, undirected -> unweighted, undirected, complete
- Page 5: Some $E^+$ and $E^-$ should be $\mathcal{E}^+$ and $\mathcal{E}^-$, respectively.
- Lemma 4.10 is not exactly the same as Lemma 6 of Cao et al. (2024) due to the generalization.
Questions For Authors: Please see the above comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review and comments.
First we reply to their two main concerns, that is, the practicality of our algorithm and Lemma 4.4.
---
Regarding the practicality of our algorithm, as the reviewer noted, the bottleneck is in solving the Constrained Cluster LP. We are happy to share that after our submission, a paper titled “Solving the Correlation Cluster LP in Nearly Linear Time” was published at STOC 2025. This makes us optimistic that our overall algorithm can be practical.
It is worth noting that to feasibly use this particular approach, one should compromise for worse theoretical guarantees. Furthermore, the paper is 57 pages long, so it would require a whole new work to study how to adapt it to the constrained setting.
We view our algorithm as a flexible framework to which one can incorporate any solution to the Constrained Cluster LP (whether it is a variation of the STOC paper or some totally new algorithm), and get a solution for Constrained Correlation Clustering. Under this light, future work can focus on implementing practical solvers for the Constrained Cluster LP (even with weaker guarantees, as long as they are efficient) and get solutions for Constrained Correlation Clustering.
---
Regarding Lemma 4.4, thank you very much for spotting the issue. There is indeed a mistake in the proof as written in the submission. Specifically, we should have used a stronger inequality resulting from the local optimality of the clustering L. However, the following is a simple fix.
We say that an edge is crucial if it has at least one endpoint in C. Since the cost of L is at most the cost of $L_C$, it must be the case that the number of crucial edges unsatisfied by $L_C$ is more than the number of crucial edges unsatisfied by L. We redefine $E_C^+(L)$ to be the set of crucial edges in $E^+(L)$, and $E_C^-(L)$ to be the set of crucial edges in $E^-(L)$.
The proof, as was written in the submission, holds once the aforementioned changes are made.
---
We also briefly reply to some of the other comments of the reviewer. For the ones we do not give a reply, we agree with the reviewer and will incorporate the suggestions on the paper.
* "The reason why we are required to use the fractional clustering, as stated on Page 6, is not quite clear. Is it to simplify the analysis?":
Indeed, we will make this more clear. This is not just to simplify the analysis. It is because the guarantees for our two clusterings are with respect to the fractional optimal (instead of the actual optimal, as was done in the paper that introduced the local search approach for correlation clustering [Combinatorial Correlation Clustering; Cohen-Addad et al.; STOC 2024]).
* "On Page 5, we do not need to mention the range of the cost to see the polynomial time complexity of Algorithm 1.":
Our understanding is that it takes polynomial time to improve the cost by at least 1, and as the maximum cost is polynomial, this improvement cannot happen many times. However this would not be the case if the maximum cost is exponential and every time we only improve by 1.
* "What the "fractional clustering" refers to should be explained.":
We will make this more clear; with "fractional clustering" we refer to a solution of the Constrained Cluster LP.
* "The first sentence of the last paragraph of the proof of Lemma 4.16 is not clear."
Indeed, we will rephrase that to "By maximality of S we have $Q_{XY'Z} \le S$, and therefore we can find an injective function $f: Q_{XY'Z} \rightarrow S$. If $\{u,f(u)\}$ is a plus edge [...]" so that we are more precise.
* Conclusion section: Thanks for the suggestion. We will add a concluding section summarizing our contributions and discussing the research directions left open by our work.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my comments.
I'm not entirely convinced that the algorithm becomes practical with the STOC'25 result, because: (i) While the cluster LP can theoretically be solved in linear (or even sublinear) time, this does not necessarily translate to strong practical performance. (ii) As the authors noted, it is non-trivial whether the fast algorithm applies to the constrained cluster LP. That said, I agree that referencing the STOC'25 paper helps highlight the potential practicality of the algorithm.
I also believe the presentation of the results could be improved to better emphasize their significance to the ML community. One way to do this is by more strongly motivating Constrained Correlation Clustering in an ML context. Currently, it appears mainly as a semi-supervised variant of Correlation Clustering, but providing concrete application scenarios could strengthen the impact of the paper.
Regarding the proof of Lemma 4.4, I'm glad to hear that the authors have corrected it.
---
Reply to Comment 1.1.1:
Comment: We agree with the reviewer and shall definitely refer to the STOC’25 paper in order to highlight the potential practicality aspect. As suggested by the reviewer, we shall also address the challenges associated with it.
We shall also follow the suggestion to include a discussion on motivating the problem from the ML-perspective in the final version of the paper.
In particular, there are specific use-cases in the ML literature that have used constrained correlation clustering in the way that we define it. As an example, constrained correlation clustering with must-link and cannot-link constraints has been used in clustering news articles about the same event across different languages [IJCAI 2007; Correlation Clustering for Crosslingual Link Detection; Van Gael, Zhu]. The hard constraints here were introduced in order to ensure that news articles about different events from the same language do not end up being in the same cluster.
Finally, we would like to point out the broader applicability of must-link and cannot-link constraints, as evidenced by the works from the ML community incorporating these constraints, in general, for other clustering objectives.
We once again thank the reviewer for their thorough comments. | Summary: This paper proves a better-than-2 approximation for constrained correlation clustering (correlation clustering where certain "friendly" pairs are required to be in the same cluster and other "hostile" pairs are required to be separated). The approach combines two recent techniques for standard correlation clustering that led to better-than-2 approximations for the unconstrained case: rounding the cluster LP relaxation and the local search method. In more detail, this paper approximately solves the cluster LP relaxation (with new constraints for friendly and hostile pairs), runs local search, then runs another local search with a new type of penalty to get a qualitatively different clustering, and then rounds the LP relaxation to get another clustering. If the first two clusterings are no better than 2 approximations, a new procedure is used to "mix" them with the rounded LP solution to get another clustering that is a better-than-2 approx.
## Update post rebuttal
Thanks for the responses. I continue to have a very high view of the paper.
Claims And Evidence: None of the claims are problematic, the paper provides proofs for all aspects of the paper.
Methods And Evaluation Criteria: The methodology is sound and is based on a combination of existing techniques for getting better than two approximations for standard correlation clustering.
Theoretical Claims: I checked the first couple proofs in more detail (finding no problems), and skimmed more quickly through the rest of the proofs. I did not encounter any issues. The logical layout of the Lemmas and corollaries is very clear and does a good job guiding the reader.
Experimental Designs Or Analyses: There are no experimental results.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: There has been a flurry of recent work on better-than-2 approximations for standard correlation clustering (which the paper has reviewed in detail), but very little for constrained correlation clustering. This paper shows how these recent methodologies (with some extra work) can be made to work for constrained correlation clustering.
Essential References Not Discussed: None
Other Strengths And Weaknesses: This paper is overall very strong and fills a large gap in the literature. There has been a lot of work on unconstrained correlation clustering recently, with very little work focused on understanding how these recent advances can apply to other variants of correlation clustering. The paper outlines prior work and provides important background very clearly, and then provides a detailed proof for better-than-2 approximations for constrained correlation clustering. This builds on prior techniques for unconstrained correlation clustering, but the theoretical contribution is still very non-trivial. It needs to combine two previous techniques from the literature, and includes several other new tricks and results in order to make everything work for the constrained case. I found the manuscript very easy to read and the proof (while detailed and non trivial) is laid out very carefully and logically for the reader.
Other Comments Or Suggestions: On the right column at the top of page 5, I think you are missing a "z_C" in one of your summations; you have \sum_{c : v \in C} = 1 instead of \sum_{c: v \in C} z_C = 1.
In the statement and proof for Corollary 4.5, you used E several places instead of \mathcal{E}
Questions For Authors: On page 7 you mention that creating a new clustering that is $30\delta$-competitive is a "contradiction" for small enough $\delta$. However, when I read this I had not forgotten that you previously set $\delta$ to be $2/31$, meaning that you get a slightly better than 2-competitive result, and unless I'm missing something there's no contradiction here. My understanding is that the idea is just that if your first 2 clusterings are not $2-\delta$ competitive, you can then construct a 3rd clustering via "mixing" that is a better-than-2 approx. My question is: am I missing something, or is this really not a proof by contradiction but rather just a proof that at least one of three clusterings is guaranteed to get you the desired result? I wonder if it'd be cleaner and more direct to not try to argue there is some sort of contradiction here. This is of course very minor.
This entire approach also applies when $F = H = \emptyset$, right? So can we not also view this as a new approach for getting a better-than-2 approximation for unconstrained correlation clustering that mixes a couple prior approaches? This would not be groundbreaking given the other existing better approximation factors, but could there be a benefit of your approach even just for unconstrained correlation clustering (e.g., in terms of simplicity of arguments)?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank the reviewer for their suggestions, we will apply them all in the final version.
Regarding the proof-by-contradiction: Indeed, this claim of ours is inaccurate; this is how it was treated in the original local search paper [Combinatorial Correlation Clustering; Cohen-Addad et al.; STOC 2024], because the clustering obtained from "mixing" was only part of the analysis (as it requires knowledge of the optimal). As the reviewer correctly understood, in our case we have access to a (fractional) optimal, and therefore also have access to the clustering obtained by "mixing". Therefore, there is no need for proof-by-contradiction. We shall rectify that in the final version of the paper.
We also thank the reviewer for their suggestion to highlight that our work conceptually simplifies better-than-2 approximations of (unconstrained) Correlation Clustering.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. | Summary: The main contribution of this paper is the development of a 1.94-approximation algorithm for the constrained correlation clustering problem. In this context, the input is a graph consisting of edges labeled as {+1, -1}. The objective of correlation clustering is to find a partition (clustering) of the nodes that minimizes the total number of negative edges inside clusters plus positive edges between clusters. A natural extension of this problem is imposing constraint that dictates whether certain selected edges must belong to the same cluster or to different clusters.
For vanilla correlation clustering, there are two primary techniques LP and LS that achieve good approximation. Both struggle to be directed applied to the constrained version. However, the authors point out that the fractional solution is sufficient to help find a satisfactory integer solution. Roughly speaking, they initially form a collection of clusters by selecting all subsets with a positive indicator from the LP solution, and then compute a clustering L by running local search. Next, they compute L’ by running local search with penalty based on L. Finally, they implement the pivot algorithm, which “mixes” L and L’ and outputs a clustering P (with high prob). At least one solution out of {L, L’, P} achieves a 2-2/31 approximation. The authors demonstrate this by assuming that both L and L’ are inadequate, leading to the conclusion that P would be a sufficiently good solution.
Claims And Evidence: All claims are supported by proofs.
Methods And Evaluation Criteria: This is a totally theoretical paper without experiments.
Theoretical Claims: Yes. I checked all proofs except those in the supplement.
Experimental Designs Or Analyses: There are no experiments in this paper.
Supplementary Material: I reviewed the content but did not check the correctness of the proofs
Relation To Broader Scientific Literature: the method the author proposed has the potential to handle more constrained problems.
Essential References Not Discussed: I have not found any so far.
Other Strengths And Weaknesses: The contribution of this paper is straightforward. The study provides an innovative approximation ratio for the constrained correlation clustering problem. Moreover, the overall idea of the proof is concise and easy to follow. The algorithm is very simple to implement.
Other Comments Or Suggestions: typos
- Line 214, a feasible clustering $C$ should be $\mathcal{C}$
- In corollary 4.5, some $\mathcal{E}$ are misspelled as $E$
Questions For Authors: More discussions on the hardness are needed.
1. As the constrained correlation clustering is APX-Hard, are there any studies or results that discuss the lower bound of its approximation ratio?
2. Correlation clustering without constraints is APX-Hard. Are there more nuanced differences in computational complexity between the two? Maybe constrained correlation clustering is much harder.
Although responses to these questions will not affect my score, I believe it is beneficial to include this content in the paper for a more comprehensive discussion.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their suggestions, we will apply them all in the final version.
In particular, regarding APX-Hardness of Correlation Clustering:
- It was shown that Correlation Clustering is APX-Hard, but without an explicit constant, in [Clustering with qualitative information; Charikar, Guruswami, Wirth; FOCS 2003].
- An explicit constant 24/23 > 1.043 was given in [Understanding the Cluster LP for Correlation Clustering; Cao et al.; STOC 2024], under the assumption that $P \ne BPP$. In the same paper they proved a 4/3 integrality gap for the cluster LP.
We are not aware of any studies or results that discuss lower bounds on the approximation ratio of Constrained Correlation Clustering. In fact, when we first started this work, we were considering whether 2 is an actual lower bound, but we did not have any insight on how to leverage the hard constraints in order to improve the lower bound.
It is worth mentioning here that Constrained Correlation Clustering is not the only problem generalizing Correlation Clustering for which the only known lower bound is the APX Hardness of (unconstrained) Correlation Clustering. Ultrametric Violation Distance and $L_1$ Best-Fit Ultrametrics (from the tree-reconstruction world) also generalize Correlation Clustering, but their hierarchical nature has not been successfully leveraged to provide any better lower bound.
Once again, we thank the reviewer and will incorporate the above discussion in the final version of our paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for your clarification. | Summary: The authors consider the classic Correlation Clustering problem which, given a complete graph with edges labeled either + or -, the goal is to find a partition of the vertices so as to minimize the number of + edges across parts plus the number of - edges within parts. The has received a lot of attention since its introduction in the early 2000s.
The authors study the hard-constrained version of Correlation Clustering which goes as follows. In addition to the graph, some pairs of vertices may be labeled as hard positive or hard negative. The solution then must not separate any hard positive pair and not join any hard negative pair.
The problem is harder than the standard Correlation Clustering for which a 1.43 approximation is known and for which a 2.5-approximation has been known for 20 years. Only a 3-approximation was known for the hard-constrained version of the problem.
The authors combine two state-of-the-art methods to obtain this result: the Cluster Linear Program (LP) introduced at FOCS'23 and further analyzed at STOC'24, and the local search techniques presented at STOC'24. It is remarkable that these two techniques could be (1) extended to handle the hard-constrained version of the problem (albeit at the expense of worse approximation) and (2) combined to obtain better approximation bounds. The idea of using the clusters output by the Cluster LP as input clusters for local search is neat and a clever way to handle the hard constraints.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes, I went over the proofs. I haven't checked the proofs in the appendix in details though.
Experimental Designs Or Analyses: No experiments.
Supplementary Material: Yes, I went over it and checked some parts.
Relation To Broader Scientific Literature: Nothing missing.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: I like the paper. It cleverly combines state-of-the-art techniques to solve a more general problem.
Other Comments Or Suggestions: I would suggest to remove "competitive" and use "approximation" in, e.g., 2-competitive. Usually competitive refers to the online algorithm setting, and it is a bit confusing to have both competitive and approximation at the same time in the paper.
Questions For Authors: No specific question at this stage.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank the reviewer for the suggestion. We inherited the "competitive" terminology from the local search paper [Combinatorial Correlation Clustering; Cohen-Addad et al.; STOC 2024], but we understand now that mixing the two can be confusing. We will stick to "approximate". | null | null | null | null | null | null |
Stochastic Encodings for Active Feature Acquisition | Accept (poster) | Summary: Considering the training challenges of reinforcement-learning approaches and the myopic shortcomings of conditional mutual information (CMI) strategies, this paper proposes a method called Stochastic Encodings for Active Feature Acquisition (SEFA). By encoding features into a regularized, stochastic latent space, it can better account for possible future observations; it then scores each unobserved feature using a gradient-based measure of how it would disambiguate among likely classes. The authors test the method on standard tabular and image data and on cancer gene-expression classification.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The benchmark datasets and evaluation setup range from synthetic datasets designed to showcase the pitfalls of myopic conditional mutual information to tabular, image, and real-world medical tasks, demonstrating practical applicability.
Theoretical Claims: The formal statements (Propositions 4.1 and 4.2) are demonstrated only on the indicator toy example. The authors use it as a focused scenario to show where purely myopic strategies fall short and how considering possible unobserved values can address that limitation. They do not provide general proofs.
Experimental Designs Or Analyses: The experimental settings, including datasets, baselines and evaluation metrics are standard.
Supplementary Material: I reviewed Appendix C-H.
Relation To Broader Scientific Literature: SEFA draws inspiration from several established concepts and methods, like RL, CMI. SEFA goes beyond these existing methods by incorporating a stochastic, latent-space perspective that is regularized via an information bottleneck. It introduces a gradient-based sensitivity strategy in which each feature owns a block of the latent representation.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths: The method has been tested across synthetic data, real tabular, image, and cancer genomics datasets.
Weaknesses: The method's factorized latent encoder implicitly assumes feature independence in the unobserved blocks; in settings with strong feature interdependencies, the method might therefore not work well.
Other Comments Or Suggestions: N/A
Questions For Authors: Figures like Figure 4 are really hard to read.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the positive review, we are grateful for the feedback, we answer your questions below. All changes below will be added.
# Theory
We agree that the theory provided is specific to the indicator and does not provide general statements about bounds on SEFA's performance. The purpose was not to provide fully general proofs, but insight into why CMI maximization can fail (it does not take into account possible future feature observations).
**Purpose of Proposition 4.2**: Proposition 4.1 gives a necessary condition for optimality; Proposition 4.2 shows that it can be sufficient. We include 4.2 because necessity alone, without sufficiency, is a weaker result. The connection to SEFA is to use the **idea** of considering possible future observations in the model design (the expectation in the latent space), since this meets a necessary condition for optimality. We verify this empirically in our ablations (Table 1 and Table 3, Deterministic Encoder). We do not use the objective from 4.2 because it is intractable and does not fit into our latent set-up.
**Generalizing Beyond the Indicator**: Proposition 4.1 and 4.2 can be generalized beyond the indicator: CMI will be suboptimal where there are features that are jointly informative but individually non-informative. By definition, the CMI of the jointly informative features is zero, so any other features that are individually informative (even if they are very noisy) will be selected first by CMI. In contrast, the objective in 4.2 can capture the effect of jointly informative features, since it considers the possible values of the unobserved features.
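A minimal numerical illustration of this point (our own toy XOR construction, not taken from the paper): two fair coin flips that jointly determine the label each carry zero individual mutual information with it, so any individually informative feature, however noisy, would be preferred by greedy CMI, while the pair carries one full bit.

```python
import itertools
import math

# Toy check: y = x1 XOR x2 with x1, x2 fair coin flips. Each feature
# alone has zero mutual information with y, yet jointly they determine y.

def mutual_info(joint):
    """I(X; Y) in bits from a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Exact joint distribution of (x1, y).
joint_x1_y = {}
for x1, x2 in itertools.product([0, 1], repeat=2):
    key = (x1, x1 ^ x2)
    joint_x1_y[key] = joint_x1_y.get(key, 0.0) + 0.25

# Exact joint distribution of ((x1, x2), y).
joint_pair_y = {((x1, x2), x1 ^ x2): 0.25
                for x1, x2 in itertools.product([0, 1], repeat=2)}

print(mutual_info(joint_x1_y))    # 0.0 — x1 alone tells us nothing
print(mutual_info(joint_pair_y))  # 1.0 — together they determine y
```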
# Feature Independence
There are two reasons why we believe our conclusion overstates this as a limitation:
1. Whilst the encoders do not model the feature interdependence, this can be accounted for by the predictor network $p\_{\phi}(y|\mathbf{z})$. The complex interdependencies are captured by the layers after the latent space. Existing work uses this principle, for example, $\beta$-VAE [Higgins et al. 2017] has disentangled latent spaces where the complex relationships are handled by the decoder. The key point is that we are not modelling distributions of the features themselves, but of their representation - their effects on the predictor network. The predictor can learn to act differently based on other latent values, for example, using samples of feature 2's latent components differently depending on if latent component 1 is positive or not. Whilst the distribution of feature 2's latent components cannot be affected by observing feature 1, how the latent samples are used by the predictor can be affected. As a result the gradients are affected, and therefore the acquisition scores. This is another reason to run the acquisition in the latent space. The predictor network can learn the complex relationships (provided it trains on enough latent samples), allowing the encoder to have a structure that does not learn the complex feature distributions. Whereas if we ran the acquisition in feature space, the predictor would only be trained on real feature values, and so the generative model would need to be able to learn the complex multivariate feature distributions.
2. Empirically we see this. The Feature Space Ablation (Table 1 and Table 3), uses a generative model for $p(\mathbf{x}\_{U}|\mathbf{x}\_{O})$, to run the acquisition in feature space and does not perform as well as SEFA that runs in the latent space, despite being able to model the dependencies. Additionally, our datasets have high interdependencies (image and genomics datasets), and SEFA still performs well on these.
We appreciate this is still a modelling restriction, however, we believe that due to the predictor being able to model the complex interdependencies and the ablation results, we have overstated this as a limitation. We will adjust our conclusion to accurately reflect this.
# Figure 4
When we were making Figure 4, we initially had just the feature numbers on the y-axis. However, the trade-off we made was to use the feature names, providing more immediate information but making the text smaller. The figure is a vector image, so it can be zoomed in on. We shall note in the caption that the figure is best viewed zoomed in.
As suggested by Reviewer V2wP, without the supporting text it is not immediately clear which features are relevant for certain tumor locations. Therefore, we will edit Figure 4, such that class-wise selections with high frequencies (darker rectangles on the heat map) are highlighted using green bounding boxes, as was done with Syn 1-3. And notable low frequency selections, (where the selection frequency is low, but high for other classes) will be highlighted with black bounding boxes. This way the instance-wise differences are more noticeable, the selections are easy to identify when reading about them in the text, and the figure should be clearer by guiding attention to relevant parts of the figure. | Summary: This paper addresses Active Feature Acquisition (AFA), the task of sequentially selecting which features to measure for a specific test instance to improve prediction accuracy while minimizing the number of features acquired. The authors identify limitations in existing approaches: Reinforcement Learning (RL) methods face training difficulties, while Conditional Mutual Information (CMI) maximization makes myopic decisions that don't consider future acquisitions.
The method was evaluated on synthetic datasets with known optimal feature orderings and on real-world datasets including image classification and cancer classification tasks. The authors demonstrate that SEFA consistently outperforms baseline methods, including both RL-based and CMI-based approaches. For the cancer classification task, they also validate that the features selected by SEFA align with biomarkers identified in scientific literature.
Claims And Evidence: Overall, the paper's claims are generally well-supported by evidence, though with some limitations:
## Well-supported claims:
1. **Theoretical limitations of CMI maximization**: The authors provide both theoretical arguments and concrete examples showing why greedy CMI maximization can be sub-optimal for AFA (Section 4).
2. **Performance of SEFA vs. baselines**: The extensive empirical evaluation across multiple datasets consistently shows that SEFA outperforms baseline methods. The authors report means with standard errors across 5 runs, providing statistical confidence in their results.
3. **Ablation studies**: The impact of each SEFA component is demonstrated through comprehensive ablations on synthetic and real datasets, showing that each design element contributes to performance.
4. **Feature relevance on medical datasets**: For TCGA data, the authors cite scientific literature supporting that their model selects biologically relevant features for different cancer types.
## Claims with more limited evidence:
1. **Overcoming RL difficulties**: While the authors cite RL challenges as motivation for SEFA, they don't directly demonstrate how these specific challenges affect performance on their tasks. SEFA does outperform the RL baseline, but the exact connection to the cited RL difficulties isn't experimentally verified.
2. **Non-greedy acquisitions**: The synthetic experiments show SEFA makes better decisions than greedy methods, but it's not entirely clear if this is due to the specific "non-greedy" design or other aspects of the method. The indicator example helps, but more direct experimental validation of this specific claim would strengthen it.
3. **Scalability claims**: While the paper discusses computational complexity in Table 4, there's no empirical evaluation of actual runtime comparisons between methods, which would help verify the practical implications of the theoretical complexities.
4. **Generalization beyond classification**: The method is currently limited to classification tasks, as noted in the limitations section. Claims about the general superiority of the approach should be considered in this context.
The paper's claims are largely substantiated by the evidence presented, with the experimental methodology being thorough and the conclusions reasonably drawn from the results.
Methods And Evaluation Criteria: ## Proposed Methods
The proposed SEFA method makes sense for the Active Feature Acquisition (AFA) problem and addresses known limitations of existing approaches:
1. **Latent space reasoning**: Using a regularized latent space to simplify decision-making is appropriate for complex feature relationships, particularly since the Information Bottleneck approach helps focus on label-relevant information.
2. **Stochastic encoding**: The approach of using multiple latent samples to consider diverse feature realizations addresses the key issue of myopic decision-making in CMI-based methods.
3. **Probability weighting**: Weighting by predicted class probabilities is a sensible way to focus on distinguishing between likely classes rather than ruling out unlikely ones.
4. **Supervised training**: Avoiding reinforcement learning complexities by using supervised training with a predictive loss is a reasonable design choice given the known difficulties with RL.
The factorization of the latent distribution (each feature responsible for specific latent components) is a pragmatic simplification, though as the authors acknowledge, it limits modeling conditional dependencies between features.
## Evaluation Criteria
The evaluation approach is comprehensive and appropriate:
1. **Dataset selection**: The mix of synthetic datasets (with known optimal strategies), image datasets, tabular datasets, and medical datasets provides good coverage of different application scenarios.
2. **Synthetic datasets**: Using synthetic data with known optimal feature ordering provides clear validation of the method's ability to learn non-myopic acquisition strategies.
3. **Metrics**: Using AUROC for binary classification and accuracy for multi-class classification are standard and appropriate choices. The "average evaluation metric during acquisition" appropriately captures performance across the entire acquisition process.
4. **Baselines**: The comparison against diverse baselines (RL-based, CMI-based, and fixed ordering) provides a thorough evaluation landscape.
5. **Ablation studies**: The detailed ablation experiments effectively isolate the contribution of each component of the method.
6. **Qualitative analysis**: For the TCGA dataset, connecting selected features to literature-supported biomarkers adds biological plausibility to the results.
One potential limitation is the lack of evaluation under different acquisition budget constraints, which would be relevant for resource-constrained applications. Also, while the authors pre-select features for high-dimensional datasets (MNIST, TCGA, etc.) using STG for computational efficiency, this pre-selection might impact the true feature acquisition performance in those domains.
Overall, the methods and evaluation criteria are well-aligned with the AFA problem and provide convincing evidence for the effectiveness of SEFA.
Theoretical Claims: 1. The proof of Proposition 4.2 doesn't fully explain why considering possible unobserved feature values in the acquisition objective enables optimal feature acquisition more generally beyond the indicator example.
2. The paper doesn't provide formal guarantees about SEFA's optimality, only demonstrating that considering unobserved feature values is necessary for optimality in the indicator example.
3. The connection between the theoretical insights from Propositions 4.1 and 4.2 and the design of SEFA isn't made fully explicit—particularly how the stochastic latent space approach relates to the integral in Proposition 4.2.
Experimental Designs Or Analyses: The synthetic dataset experiments (Section 6.1) are well-designed to test whether models can learn optimal feature orderings. The use of known optimal paths, multiple runs with error reporting, and clear visualizations provides strong validation of acquisition strategies. This controlled setting effectively demonstrates SEFA's ability to prioritize features optimally.
The real dataset experiments (Section 6.2) appropriately evaluate performance across varied acquisition steps. The diverse dataset selection, consistent protocols, and multiple runs strengthen validity. One limitation is the pre-selection of features for high-dimensional datasets using STG, which might introduce bias if different acquisition methods would naturally prefer different feature subsets.
The cancer classification experiments (Section 6.3) effectively combine quantitative performance with qualitative analysis. The supporting literature citations for selected features and analysis across cancer types validate that the model makes biologically plausible decisions. While not exhaustive, this literature validation adds credibility to the feature selections.
The ablation studies are comprehensive, systematically removing each proposed component while maintaining consistent evaluation protocols across synthetic and real datasets. The accompanying visualizations of acquisition behavior clearly demonstrate each component's impact, strongly supporting the claim that all components are necessary for optimal performance.
The sensitivity analyses and hyperparameter selection procedures follow sound practices, with appropriate exploration of parameter ranges and validation metrics. The reporting of final configurations enhances reproducibility, though more explicit justification for initial hyperparameter ranges would strengthen methodological transparency.
Overall, the experimental designs employ appropriate controls, metrics, and validation procedures. The analyses thoroughly support the paper's claims about SEFA's effectiveness for active feature acquisition.
Supplementary Material: briefly looked at the additional results
Relation To Broader Scientific Literature: The paper's critique of CMI-based acquisition relates to broader discussions about greedy vs. non-greedy information acquisition. Chen et al. (2015a) established theoretical bounds for when greedy information maximization is near-optimal, but SEFA demonstrates scenarios where this breaks down. SEFA's factorized latent space approach conceptually relates to Ma et al.'s EDDI (2019), but fundamentally differs by focusing on gradients in latent space rather than estimating CMI.
SEFA addresses the exploration-exploitation tradeoff in active learning without using reinforcement learning. This connects to broader challenges in RL, offering an alternative supervised approach.
SEFA's probability weighting mechanism addresses a known issue in information-theoretic approaches where minimizing entropy can focus on low-probability events. This relates to research on decision-theoretic active learning and optimal experimental design, where the goal is to directly minimize expected error rather than maximize information gain.
SEFA's approach of encoding features individually before integration relates to research on partial variable-wise encoders in missing data contexts. While prior works like ACFlow (Li et al., 2020) used normalizing flows for this purpose, SEFA provides a simpler yet effective alternative specifically designed for the acquisition task.
Essential References Not Discussed: While the paper provides a thorough literature review, there are a few notable omissions that would help contextualize their contributions:
- Monte Carlo Tree Search (MCTS): The non-greedy acquisition problem shares similarities with planning problems where MCTS has been successful (Lim et al., 2012). MCTS explicitly balances immediate rewards with long-term payoffs through rollouts, conceptually similar to how SEFA samples multiple latent realizations to evaluate acquisition decisions.
- Feature attribution methods: SEFA's gradient-based approach for calculating feature importance shares similarities with feature attribution methods like Integrated Gradients (Sundararajan et al., 2017) and LIME (Ribeiro et al., 2016). These connections could strengthen the theoretical foundation of the gradient-based scoring function.
- Partial observability in sequential decision making: The paper lacks references to Partially Observable Markov Decision Processes (POMDPs) literature
Other Strengths And Weaknesses: ## Strengths
**Novel Integration of Techniques**: The paper creatively combines stochastic encoders, latent space regularization, and gradient-based feature scoring into a coherent framework. While each individual component builds on existing ideas, their integration into a unified acquisition approach is original and effective.
**Principled Design Choices**: Each aspect of SEFA addresses specific limitations of prior methods. The stochastic latent space addresses myopic decision-making, probability weighting focuses on distinguishing between likely classes, and the factorized architecture enables tractable feature scoring. These design choices are well-motivated by theoretical insights.
**Strong Empirical Results**: The consistent outperformance across diverse datasets is impressive. Particularly noteworthy is the performance on medical datasets where domain knowledge validates that selected features align with biological mechanisms.
**Pragmatic Balance**: The method strikes a good balance between theoretical principles and practical implementation. By using supervised training instead of reinforcement learning, the authors provide a more accessible approach to AFA that performs better and is easier to implement.
**Clear Writing and Visualizations**: The paper is generally well-written with excellent visualizations. The heat maps showing acquisition trajectories (Figures 2, 4) are particularly effective at conveying complex patterns in feature selection across different scenarios.
## Weaknesses
**Limited Theoretical Guarantees**: While the paper provides examples where SEFA works well, it lacks formal guarantees about its optimality or convergence properties. The theoretical foundation primarily motivates the approach rather than providing rigorous performance bounds.
**Parameter Sensitivity**: The method introduces several hyperparameters (β, number of latent components per feature, number of samples). While ablations show their importance, the paper doesn't provide strong guidance on how to select these parameters for new datasets beyond empirical tuning.
**Computational Overhead**: At inference time, SEFA requires sampling from latent distributions and computing gradients, which may be computationally demanding compared to simpler methods. The paper acknowledges this limitation but doesn't provide empirical runtime comparisons.
**Restricted to Classification**: As noted in the limitations, the method is currently designed for classification tasks. This restricts its applicability to regression problems, which are common in many domains where feature acquisition is relevant (e.g., predictive maintenance, climate modeling).
**Feature Independence Assumption**: The factorized latent space, while computationally convenient, assumes independent encoding of features. This may limit the model's ability to capture complex interactions between features, particularly in datasets with strong feature correlations.
**Limited Discussion of Scalability**: While the paper demonstrates effectiveness on datasets with moderate dimensionality, it's unclear how the approach would scale to problems with thousands or millions of features, which appear in genomics and other high-dimensional domains.
Overall, the paper presents a significant contribution to the AFA literature with strong empirical validation. Its originality lies more in the creative combination of techniques and their application to an important problem than in fundamental theoretical breakthroughs.
Other Comments Or Suggestions: 1. When describing the latent space regularization (Section 5.3), the paper refers to using a variational upper bound but doesn't explicitly state the form of this bound. For clarity, it would be helpful to provide the specific form.
Questions For Authors: 1. **Computational Overhead Analysis**: How does the inference time of SEFA compare to the baseline methods across different datasets? Since SEFA requires multiple latent samples and gradient calculations during acquisition, it would be helpful to understand the practical trade-off between performance gains and computational costs. A response showing reasonable computational requirements would strengthen the practical applicability of the method.
2. **Adaptation to Varying Acquisition Costs**: Many real-world scenarios have different costs associated with acquiring different features (e.g., some medical tests are more expensive or invasive than others). How would SEFA be adapted to account for variable acquisition costs? A clear adaptation strategy would significantly enhance the method's practical utility in cost-sensitive domains.
3. **Extension to Regression Tasks**: Given that the current probability weighting mechanism is specific to classification, what modifications would be required to extend SEFA to regression tasks? Understanding this extension would clarify the method's generalizability beyond classification problems, which is currently listed as a limitation.
4. **Robustness to Noisy Features**: How does SEFA perform when features contain noise or measurement errors? Since the method relies on gradient information to score features, it's important to understand its sensitivity to noise. Evidence of robustness would strengthen confidence in SEFA's real-world applicability.
5. **Selection of Latent Dimensionality**: How should practitioners determine the number of latent components per feature for new datasets? This hyperparameter seems crucial for balancing model capacity with regularization, and guidance beyond empirical tuning would make the method more accessible to new users.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the positive review, we are grateful for the feedback, we answer your questions below. All changes below will be added.
# Scalability
We have addressed this shared point in our response to Reviewer ABGZ.
# Generalization to Regression
SEFA can be modified for regression so that the predictor network has two heads, one that predicts the label directly and one that predicts the label as a classification problem (after it has been discretized into equally sized bins). We can use the regression head to predict the label, and the classification head to carry out the feature acquisitions.
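A minimal sketch of the discretization step this adaptation assumes (the binning scheme and function name are illustrative, not from the paper): continuous labels are mapped to quantile bins of roughly equal size, providing targets for the classification head.

```python
import numpy as np

# Map continuous labels to bin indices with ~equal counts per bin,
# so a classification head can drive the acquisition.
def quantile_bins(y, n_bins):
    # Interior quantile edges, e.g. the 25th/50th/75th percentiles for 4 bins.
    edges = np.quantile(y, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(y, edges)

y = np.array([0.1, 0.4, 0.2, 0.9, 0.7, 0.3, 0.8, 0.5])
labels = quantile_bins(y, n_bins=4)
print(labels)  # each of the 4 bins receives 2 of the 8 labels
```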
# Feature Costs and Budget
**Feature Costs**: There are two common ways to include cost: (1) additively, e.g. subtracting the cost from the reward in RL [Li and Oliva 2021], and (2) dividing by the cost either in the reward [Kachuee et al. 2019] or in the CMI estimate [Gadgil et al. 2024]. We believe dividing by the cost is better. Intuitively, the adjusted scores represent the advantage of acquiring a feature per unit cost, and this enforces the desired property that features with zero cost are always acquired. Additionally, like CMI, the scores and training of SEFA are independent of cost: scores are adjusted based on cost **after** calculation. So if feature costs vary over time, SEFA would not need to be retrained, unlike RL methods that learn a policy.
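As an illustrative sketch of the divide-by-cost option (the function and variable names are our own, not the paper's API), cost-independent acquisition scores are divided by per-feature costs before selecting the next feature:

```python
import numpy as np

# Select the next feature by advantage per unit cost; already-observed
# features are masked out so they are never re-acquired.
def cost_adjusted_choice(scores, costs, observed):
    scores = np.asarray(scores, dtype=float)
    costs = np.asarray(costs, dtype=float)
    adjusted = scores / costs
    adjusted[list(observed)] = -np.inf
    return int(np.argmax(adjusted))

# Feature 2 has the highest raw score, but feature 0 is far cheaper.
scores = [0.30, 0.10, 0.50]
costs = [0.50, 1.00, 2.00]
print(cost_adjusted_choice(scores, costs, observed=set()))  # 0
```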
**Acquisition Budget**: Figure 3 shows the evaluation metrics throughout the acquisition - how the models would perform if the acquisition was ended after $n$ features. SEFA consistently performs best throughout the acquisition.
# Theoretical Claims
We have addressed this shared point in our response to Reviewer N1oA.
# Hyperparameter Ranges
**$\beta$**: The value of $\beta$ typically needs to be small, similar to a weight decay parameter. This is because it limits how much information from the features can be found in the latent space, papers such as Alemi et al. 2017 typically use values smaller than 0.01. In the sensitivity analysis in Figure 10 we see that if $\beta$ is too large, SEFA fails because it cannot predict effectively, and if $\beta$ is too low, then the latent space is not regularized enough. Based on this, we recommend 0.0001 - 0.01 as an effective range.
**Number of Samples**: SEFA improves with the number of acquisition and training samples, shown by the sensitivity analyses in Figures 11 and 12. This is because during training it encourages a diverse latent space, and during acquisition the diversity can be fully sampled. We recommend 50-200 as a good tradeoff between acquisition performance and computation.
**Number of Latent Components**: We use more than one latent component per feature so that there is a large capacity in the latent space for a rich representation of each feature. However, using too many increases the possibility of overfitting. Since each feature is only one value we recommend a value between 5-10. We tested 4, 6 and 8 as possible values and these were effective.
**Other Hyperparameters**: The initial ranges were determined by manually finding good values on the synthetic datasets and using ranges around those. We started with broadly accepted values, e.g. 0.001 as the default PyTorch Adam learning rate.
# References
We agree with all suggestions and will add them. We will also cite Baehrens et al. 2009 and Simonyan et al. 2014 as gradient based feature attribution methods.
# Feature Independence
We have addressed this shared point in our response to Reviewer N1oA.
# Variational Upper Bound
The regularization term to minimize is $I(Z; X)$. This requires access to the marginal $p(z)=\int p(z|x)p(x)dx$, which is intractable. Instead we use the standard normal as a variational approximation since it produces an analytical upper bound. The derivation can be found in equations 13 and 14 of Alemi et al. 2017.
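For concreteness, a small sketch of the resulting analytical term (assuming a diagonal Gaussian encoder, as in Alemi et al. 2017): the per-component KL divergence to the standard normal has a closed form, KL(N(mu, sigma^2) || N(0, 1)) = 0.5 * (mu^2 + sigma^2 - log(sigma^2) - 1).

```python
import math

# Closed-form KL of a univariate Gaussian to the standard normal;
# summing this over latent components gives the upper-bound regularizer.
def kl_to_standard_normal(mu, sigma):
    return 0.5 * (mu**2 + sigma**2 - math.log(sigma**2) - 1.0)

print(kl_to_standard_normal(0.0, 1.0))  # 0.0 — already the standard normal
print(round(kl_to_standard_normal(1.0, 0.5), 4))
```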
# Noisy Features
We have run Syn 1 (with two additional baselines) with added Gaussian noise at three standard deviations: 0.1, 0.2 and 0.4. The number of acquisitions needed to acquire the correct features is:
|Model|No Noise|0.1|0.2|0.4|
|---|---|---|---|---|
|DIME|$4.079\pm0.057$|$4.688\pm0.211$|$4.703\pm0.276$|$5.368\pm0.267$|
|EDDI|$9.183\pm0.187$|$8.988\pm0.285$|$9.216\pm0.136$|$9.382\pm0.245$|
|Fixed MLP|$6.009\pm0.000$|$6.312\pm0.202$|$7.321\pm0.378$|$6.009\pm0.000$|
|GDFS|$4.568\pm0.195$|$4.566\pm0.187$|$4.543\pm0.164$|$5.583\pm0.283$|
|Opportunistic RL|$4.203\pm0.034$|$4.347\pm0.091$|$4.720\pm0.142$|$5.488\pm0.084$|
|VAE|$6.593\pm0.085$|$6.866\pm0.041$|$7.005\pm0.086$|$7.095\pm0.078$|
|GSMRL|$5.570\pm0.127$|$5.495\pm0.126$|$5.778\pm0.120$|$6.858\pm0.269$|
|Random|$9.484\pm0.006$|$9.495\pm0.010$|$9.495\pm0.010$|$9.495\pm0.010$|
|SEFA (ours)|$4.017\pm0.003$|$4.100\pm0.004$|$4.207\pm0.008$|$4.406\pm0.004$|
The models get worse with more noise. SEFA remains the best model at all levels of noise. This is likely due to the latent space regularization removing noise between the feature space and latent space. | Summary: This paper considers active feature acquisition, which is a dynamic test-time selection (usually sequential) of features to make predictions on each test instances. The selected features are chosen independently for each considered test instance. The authors argue (with some theory) how prior methods based on conditional mutual information can fail in pathological cases, and how they might fail more generally. Instead the authors produce an information bottleneck inspired approach that utilizes a latent space to encode surrogate behavior for how unobserved variables might couple with selected features, in predicting the label. This relies on a decoupling of latent variables, one grouping for each candidate, along with a gradient-based acquisition function. There are extensive synthetic and real-world experiments to support the use of this method in practice.
Claims And Evidence: The empirical results are strong, with sufficient comparisons against baselines and ablations, and results that seem significant over these baselines. I also really liked the explanation of why conditional mutual information might fail in proposition 4.1 and the surrounding discussion. This was well done.
Methods And Evaluation Criteria: Yes
Theoretical Claims: I did not look carefully at the proofs
Experimental Designs Or Analyses: Yes. There is a nice range of analyses, including both the number of features needed to reach the optimal set (synthetic data) and accuracy vs. number-of-acquired-features curves, as well as qualitative visualizations showing the "trajectories" of selected features.
Supplementary Material: Briefly examined appendix
Relation To Broader Scientific Literature: While I appreciate the discussion around Shannon entropy not necessarily encouraging a maximal probability most likely class, I think more discussion is needed here around alternate metrics. Sometimes Renyi entropies are used instead of Shannon entropy to encourage different properties. For instance the min-entropy exactly encourages maximum probability on a single class. Extrinsic Jensen-Shannon divergence is another example, as an alternative to mutual information that better distinguishes between likely hypotheses (see e.g., Naghshvar & Javidi 2012).
Naghshvar, M., & Javidi, T. (2012, July). Extrinsic Jensen-Shannon divergence with application in active hypothesis testing. In 2012 IEEE International Symposium on Information Theory Proceedings (pp. 2191-2195). IEEE.
Essential References Not Discussed: See above. These alternate metrics instead of Shannon entropy are not in the critical path of the paper, but would strengthen the discussion
Other Strengths And Weaknesses: - One additional baseline I am wondering about: why didn't you compare against selecting a *random* order of features for each dataset?
I think Section 5 needs more exposition. It just gets right into the details of the design choices made, but this paper would be strengthened with additional discussion around each of these choices. In particular:
- What is the core intuition for z? Does it essentially serve as a surrogate for a marginalization over p(xU | xO), which would otherwise be too computationally expensive to consider?
- As discussed in the limitations, the factorization of the latent encoding is a huge assumption. This entirely decouples any relationship between the selected features. It is not clear to me at all why this is a good idea, intuitively. In fact, I'm a bit surprised that the empirical results are as good as they are, since the features do not seem to be coupled in any way. Am I missing something?
- The discussion around Shannon entropy set things up really nicely to argue how a different scoring metric is needed to better encourage maximal class probabilities on a single class. Yet, to me the choice of the gradient-based r(c,z,i) is still very much a heuristic. I think more discussion is needed around this scoring function. Why is this a good idea? Sensitivity of p(Y|z) is listed as the reason, but how does this in any way fulfill the property that Shannon entropy is lacking?
Other Comments Or Suggestions: I like the visualizations showing the "trajectories" of the selected features. However, without the context of the scientific literature discussion in Section 6.3, I don't know how much Figure 4 is really adding. For someone without a medical background, the trajectories look random. This figure could be strengthened by annotating which features are actually scientifically important for each tumor location, so we can see that it is actually grabbing the important features, early on.
Questions For Authors: Please see the comments above
## update after rebuttal
Thank you for your response. I maintain my score at accept
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the positive review, we are grateful for the feedback, we answer your questions below. All changes below will be added.
# Renyi Entropy
Thank you for bringing this to our attention; we agree that Renyi entropies are relevant, in particular how the min-entropy relates to Shannon entropy's focus on making low likelihoods lower (Section 4). We shall add these citations.
That being said, Renyi entropies would still suffer from making myopic decisions. They would still fail on the Indicator example for the same reasons as Shannon entropy, due to Proposition 4.1. Additionally, it is highly non-trivial to incorporate these metrics into the SEFA architecture.
# Random Selections
We originally tested random orderings to determine whether the datasets would benefit from a non-random ordering. Random orderings performed poorly, so we did not present the results to save space. We have provided the original results in our response to Reviewer ABGZ in the Additional Baselines section. The random ordering always performs worse than the Fixed Global MLP Ordering, so the datasets do benefit from non-random orders.
# Using the Latent Space
Marginalization over $p(\mathbf{x}\_{U} | \mathbf{x}\_{O})$ is possible, we do it in our Feature Space ablations (Table 1 and Table 3). Instead the choice to use the latent space was made to improve performance, due to the following reasons:
- Acquisitions are not made using gradients contaminated by feature-level noise. Latent space regularization lets us make acquisitions with a representation of the features that has the noise removed and contains only label-relevant information.
- We avoid the need to train a generative model, and thereby avoid modeling complex feature distributions (e.g. multi-modal ones) with complex interdependencies. In latent space we can model the distribution as independent Gaussians; the predictor learns to predict from these and handles the interdependencies. See the Feature Independence response for a further explanation.
- Tabular data may contain categorical features (for which gradients are not meaningful) or features at different scales (for which the gradients are at different scales), so gradients with respect to the raw features are not a reliable way to score them. We train the encoders so that the latent components are real-valued and all approximately the same scale (by regularizing the latent space), so that the gradients give reliable feature scores.
In summary, by calculating scores using **representations** of the features, we avoid both the need to train a generative model, and the associated pitfalls (feature level noise, complex distributions, complex interdependencies, less meaningful gradients).
# Feature Independence
As suggested by the ICML Program Chairs we have addressed this shared point in our response to Reviewer N1oA. Briefly, the complex interdependencies are able to be accounted for by the predictor network. The encoders do not need to learn the couplings between the features, because the predictor is able to treat latent components **differently** based on observed features, and therefore the gradients change based on this.
# Scoring Metric
**Overcoming Shannon Entropy Drawbacks**: The two main properties that Shannon entropy lacks are (1) considering unobserved values in the current decision and (2) focusing on identifying the most likely class. SEFA addresses the first property by taking an expectation of $r(c, \mathbf{z}, i)$ over $\mathbf{z}$, considering the possible latent realizations of the unobserved features. SEFA addresses the second property by using probability weighting (the outer sum over $p\_{\theta, \phi}(y| \mathbf{x}\_{O})$ in Equation 1).
**Intuition behind $r(c, \mathbf{z}, i)$**: As pointed out by Reviewer Pdip, a good perspective on $r(c, \mathbf{z}, i)$ is via feature attribution. Gradients are a common technique for explaining which features are most important for predicting a particular instance [Baehrens et al. 2009, Simonyan et al. 2014], and LIME [Ribeiro et al. 2016] suggests attributions using locally linear approximations of the predicted probability for each class separately. So we take many samples from the latent space, representing possible future latent realizations. For each of these, $r(c, \mathbf{z}, i)$ uses gradients as local linear approximations to calculate feature attributions via the latent components, based on how important they are for predicting class $c$. After taking the mean over the latent samples, we have feature attributions for each class; we then sum over classes, weighted by the current predicted probability $p\_{\theta, \phi}(y| \mathbf{x}\_{O})$, to give more focus to features that are relevant to the more likely classes.
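To make the attribution mechanism concrete, here is a minimal, hedged sketch in Python. It assumes a toy *linear* predictor $p(y\mid \mathbf{z}) = \mathrm{softmax}(W\mathbf{z})$ so that the gradients are analytic; the function names, the absolute-value of the gradient, and the averaging scheme are illustrative, not SEFA's actual implementation.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def feature_scores(W, z_samples, p_current):
    """Toy SEFA-style score: average |d p_c / d z_i| over latent
    samples, then weight each class c by the current predicted
    probability p_current[c].

    For p(y|z) = softmax(W z), the analytic gradient is
        d p_c / d z_i = p_c * (W[c][i] - sum_k p_k * W[k][i]).
    """
    n_classes, n_feat = len(W), len(W[0])
    scores = [0.0] * n_feat
    for z in z_samples:
        logits = [sum(W[c][i] * z[i] for i in range(n_feat))
                  for c in range(n_classes)]
        p = softmax(logits)
        for i in range(n_feat):
            avg = sum(p[k] * W[k][i] for k in range(n_classes))
            for c in range(n_classes):
                grad = p[c] * (W[c][i] - avg)
                scores[i] += p_current[c] * abs(grad)
    return [s / len(z_samples) for s in scores]
```

With a weight matrix in which only the first latent component is discriminative, the first component receives a strictly positive score while the irrelevant component scores zero, matching the intuition that the score tracks sensitivity of $p(Y\mid\mathbf{z})$.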
# Figure 4
Thank you for the suggestion, we agree that the figure can be improved to highlight the class relevant features. As suggested by the ICML Program Chairs, we have addressed this shared point in more detail in our response to Reviewer N1oA. | Summary: The paper proposes a Stochastic Encodings for Feature Acquisition (SEFA) framework addressing the limitations of CMI and RL methods. SEFA uses an encoder-predictor architecture with intermediate stochastic latent variables. The architecture makes predictions and calculates the acquisition objective.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: The contributions are novel addressing the limitations of CMI and RL methods.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths -
The theoretical proofs to establish the limitations of CMI and RL and the proposed SEFA framework, look solid. The performance look pretty good.
Weaknesses -
The proposed method is likely to encounter scalability issues, and it may prove difficult to extend to various other applications.
Some recent RL methods could be included as baselines.
Other Comments Or Suggestions: NA
Questions For Authors: Please check the weaknesses section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the positive review, we are grateful for the feedback, we answer your questions below. All changes below will be added.
# Scalability
We provide the computational complexities for a single training step and a single acquisition step for each model in Table 4 (Appendix H.3). The main takeaway is that SEFA scales better than half the methods at training time and better than the other half during acquisition, and is never the worst.
As suggested by Reviewer Pdip, to add to this, we have timed each model to determine the wall-clock times for inference and training. The average times in seconds for a **single** acquisition are given below:
|Model|Syn 1|Syn 2|Syn 3|
|---|---|---|---|
|DIME|$0.017\pm0.000$|$0.018\pm0.000$|$0.024\pm0.000$|
|EDDI|$2.871\pm0.059$|$2.613\pm0.002$|$2.854\pm0.056$|
|Fixed MLP|$0.014\pm0.000$|$0.011\pm0.000$|$0.017\pm0.000$|
|GDFS|$0.018\pm0.000$|$0.015\pm0.001$|$0.022\pm0.000$|
|Opportunistic RL|$0.029\pm0.001$|$0.021\pm0.001$|$0.032\pm0.000$|
|VAE|$0.153\pm0.000$|$0.154\pm0.000$|$0.267\pm0.005$|
|GSMRL|$0.047\pm0.000$|$0.040\pm0.000$|$0.055\pm0.000$|
|Random|$0.014\pm0.000$|$0.011\pm0.000$|$0.018\pm0.001$|
|SEFA (ours)|$0.246\pm0.001$|$0.165\pm0.001$|$0.240\pm0.002$|
|Model|Cube|Bank Marketing|California Housing|MiniBooNE|
|---|---|---|---|---|
|DIME|$0.021\pm0.000$|$0.016\pm0.000$|$0.010\pm0.001$|$0.021\pm0.000$|
|EDDI|$8.872\pm0.213$|$1.456\pm0.002$|$0.583\pm0.006$|$9.039\pm0.245$|
|Fixed MLP|$0.020\pm0.000$|$0.012\pm0.000$|$0.006\pm0.000$|$0.019\pm0.000$|
|GDFS|$0.021\pm0.000$|$0.017\pm0.002$|$0.008\pm0.000$|$0.020\pm0.000$|
|Opportunistic RL|$0.031\pm0.000$|$0.022\pm0.001$|$0.010\pm0.000$|$0.034\pm0.001$|
|VAE|$0.278\pm0.002$|$0.136\pm0.000$|$0.048\pm0.001$|$0.291\pm0.001$|
|GSMRL|$0.061\pm0.000$|$0.035\pm0.001$|$0.021\pm0.000$|$0.045\pm0.000$|
|Random|$0.019\pm0.000$|$0.012\pm0.000$|$0.006\pm0.000$|$0.017\pm0.000$|
|SEFA (ours)|$0.861\pm0.004$|$0.130\pm0.001$|$0.077\pm0.000$|$0.314\pm0.002$|
|Model|MNIST|Fashion MNIST|METABRIC|TCGA|
|---|---|---|---|---|
|DIME|$0.024\pm0.000$|$0.023\pm0.000$|$0.003\pm0.000$|$0.010\pm0.000$|
|EDDI|$12.360\pm0.029$|$8.543\pm0.200$|$0.108\pm0.001$|$0.765\pm0.011$|
|Fixed MLP|$0.020\pm0.000$|$0.020\pm0.000$|$0.002\pm0.000$|$0.008\pm0.000$|
|GDFS|$0.024\pm0.000$|$0.025\pm0.000$|$0.003\pm0.000$|$0.009\pm0.000$|
|Opportunistic RL|$0.030\pm0.000$|$0.031\pm0.000$|$0.003\pm0.000$|$0.012\pm0.000$|
|VAE|$0.452\pm0.007$|$0.452\pm0.007$|$0.009\pm0.000$|$0.031\pm0.001$|
|GSMRL|$0.097\pm0.001$|$0.069\pm0.001$|$0.016\pm0.000$|$0.021\pm0.000$|
|Random|$0.019\pm0.000$|$0.019\pm0.000$|$0.002\pm0.000$|$0.008\pm0.000$|
|SEFA (ours)|$1.498\pm0.002$|$1.497\pm0.003$|$0.019\pm0.000$|$0.098\pm0.000$|
As the number of classes increases (MiniBooNE to Cube, and Cube to MNIST/Fashion MNIST), SEFA's acquisition time increases despite the number of features staying the same; this matches Table 4. The time for generative models increases significantly with the number of features, in line with Table 4. As expected, models with policy networks are fastest and scale well with the number of features, but SEFA is never the slowest (it is faster than EDDI, often by a significant amount).
The average time in milliseconds for a **single** training step (one gradient descent step) on Syn 1 is given below:
|Model|Iteration Time (ms)|
|---|---|
|DIME|$35.156\pm0.081$|
|EDDI|$7.290\pm0.028$|
|Fixed MLP|$5.152\pm0.027$|
|GDFS|$21.661\pm0.056$|
|Opportunistic RL|$89.565\pm0.370$|
|VAE|$6.761\pm0.075$|
|GSMRL|$90.383\pm0.285$|
|SEFA (ours)|$15.843\pm0.091$|
Models that train by simulating acquisition are slowest to train despite being faster at inference time. Supervised models, including SEFA, are faster because they do not need to generate rollouts.
# Additional Baselines
Thank you for the suggestion, we have run the experiments on the following new baselines:
- GSMRL [Li and Oliva 2021], a more recent and popular RL baseline for AFA.
- A random ordering with an MLP predictor as requested by Reviewer V2wP (to show feature ordering is necessary).
The results are given below:
|Model|Synthetic 1|Synthetic 2|Synthetic 3|
|---|---|---|---|
|GSMRL|$5.570\pm0.127$|$6.227\pm0.185$|$8.199\pm0.067$|
|Random|$9.484\pm0.006$|$9.499\pm0.005$|$9.987\pm0.008$|
|SEFA (ours)|$4.017\pm0.003$|$4.098\pm0.007$|$5.081\pm0.021$|
|Model|Cube|Bank Marketing|California Housing|MiniBooNE|
|---|---|---|---|---|
|GSMRL|$0.891\pm0.001$|$0.879\pm0.006$|$0.638\pm0.003$|$0.946\pm0.001$|
|Random|$0.699\pm0.001$|$0.816\pm0.003$|$0.569\pm0.003$|$0.912\pm0.001$|
|SEFA (ours)|$0.904\pm0.000$|$0.919\pm0.001$|$0.676\pm0.005$|$0.957\pm0.000$|
|Model|MNIST|Fashion MNIST|METABRIC|TCGA|
|---|---|---|---|---|
|GSMRL|$0.701\pm0.002$|$0.683\pm0.001$|$0.665\pm0.002$|$0.781\pm0.003$|
|Random|$0.661\pm0.001$|$0.648\pm0.001$|$0.647\pm0.005$|$0.753\pm0.003$|
|SEFA (ours)|$0.761\pm0.001$|$0.721\pm0.000$|$0.708\pm0.002$|$0.845\pm0.002$|
SEFA is better than both new baselines. Random performs poorly in all cases. | null | null | null | null | null | null |
Concurrent Reinforcement Learning with Aggregated States via Randomized Least Squares Value Iteration | Accept (poster) | Summary: The authors extend the classic Least Squares Value Iteration (LSVI) method to the setting of multi-agent systems, and show guarantees for a randomized variant.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes, to some extent.
Experimental Designs Or Analyses: Yes, to some extent.
Supplementary Material: No.
Relation To Broader Scientific Literature: The theoretical guarantees cover an important gap in the literature.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: None.
The writing is clear!
Other Comments Or Suggestions: None.
Questions For Authors: Please put an Impact Statement :)
Page 2:
Line 94:
Can you please elaborate on the difference of the techniques used?
Page 3:
Why is $V_h^\pi(s)$ defined in this way?
Page 4:
Can you please further explain $\hat R_{h, s, a}^{k, \texttt{full}}$ and $\hat P_{h, s, a}^{k, \texttt{full}}(s')$?
Page 4:
Can you please elaborate on the concept of aggregated-state representations?
Page 5:
I do not understand the perturbation that is captured by Equation (4).
Page 6:
Can you please clarify the choice of Equation (7)?
Page 8:
More discussion on future work is required! :)
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for your positive feedback!
1. We will add an impact statement later :)
2. [2] improves the regret bound of [1] by using clipping in their algorithm to avoid unreasonable estimates of the value functions. The rest of their algorithm closely follows [1], using a similar proof approach: constructing a confidence set for the empirical MDP to bound its deviation from the true MDP. They decompose regret similarly to [1] and get a tighter upper bound for the approximation error of the $Q$-value function due to their clipping technique.
Our paper uses a different algorithm and proof strategy from [2] to improve the regret bound of [1]. Firstly, we aggregate information at the aggregated-state level, unlike the algorithms in [1,2] that record information for every pair $(s,a)$. Secondly, we use a different loss function and penalty term for updating the estimates of the $Q$-value functions. By the first-order condition, the estimated $Q$-value function is updated according to the derivation between line 1004 and line 1028; $\alpha$ and $\xi$ are carefully chosen based on concentration Lemma 8, so that (i) Lemma 9 holds, which is a key step in the proof of Lemma 10 using an iteration trick, and (ii) the term in (37) between line 1375 and line 1389 can be controlled by a tight upper bound using the Cauchy-Schwarz inequality.
3. $V_h^{\pi}(s)$ is the value function of adopting policy $\pi$ at state s during period h, which is a standard definition (see e.g. [3]). We include $h$ in the subscript because we consider inhomogeneous transition probabilities and reward functions (i.e. the transition probabilities and reward functions are allowed to be different at each period $h$ in our model).
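For concreteness, the standard inhomogeneous finite-horizon definition referred to here can be written as follows (a textbook form, e.g. following [3]; the paper's exact notation may differ slightly):

```latex
V_h^{\pi}(s) \;=\; \mathbb{E}_{\pi}\!\left[\,\sum_{h'=h}^{H} R_{h',\,s_{h'},\,a_{h'}} \;\middle|\; s_h = s\right],
\qquad
V_h^{\pi}(s) \;=\; R_{h,s,\pi_h(s)} \;+\; \sum_{s'} P_{h,s,\pi_h(s)}(s')\, V_{h+1}^{\pi}(s'),
\qquad
V_{H+1}^{\pi} \equiv 0,
```

where the explicit dependence of $R$ and $P$ on $h$ captures the inhomogeneity mentioned above.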
4. On the one hand, $\hat R_{h,s,a}^{k,\textrm{full}}$ and $\hat P_{h,s,a}^{k,\textrm{full}}(s')$ are estimates of the reward $R_{h,s,a}$ and transition probability $P_{h,s,a}(s')$ used in episode $k$, based upon all the historical data across all agents from episode $0$ to $k-1$; here $R_{h,s,a}$ is the reward received at state-action pair $(s,a)$ at time step $h$, and $P_{h,s,a}(s')$ is the probability of transitioning to state $s'$ after taking action $a$ at state $s$ during time step $h$. On the other hand, $\hat P_{h,s,a}^k$ and $\hat R_{h,s,a}^{k}$ use only data from episode $k-1$ (much less data than the previous two estimates). The algorithm utilizing $\hat R_{h,s,a}^{k,\textrm{full}}$ and $\hat{P}_{h,s,a}^{k,\textrm{full}}(s')$ has a lower regret bound because it uses more data from the buffer (information set), but it can be computationally infeasible, especially as the number of agents gets large, as we explained in the paper.
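A natural reading of these full-history estimators is the standard count-based form below. This is a hedged sketch: the actual algorithm may add regularization or pseudo-counts, so treat it as illustrative.

```latex
n^{k}_{h,s,a} \;=\; \#\bigl\{\text{visits to } (s,a) \text{ at step } h \text{ by any agent in episodes } 0,\dots,k-1\bigr\},
\qquad
\hat{R}^{\,k,\textrm{full}}_{h,s,a} \;=\; \frac{1}{n^{k}_{h,s,a}} \sum_{\text{such visits}} r,
\qquad
\hat{P}^{\,k,\textrm{full}}_{h,s,a}(s') \;=\; \frac{n^{k}_{h,s,a,s'}}{n^{k}_{h,s,a}},
```

where $n^{k}_{h,s,a,s'}$ counts the visits among those that transitioned to $s'$.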
5. As we pointed out at the beginning of subsection 2.1, when $SA$ is too large, the algorithm can be computationally infeasible. So we aggregate the state-action pairs that have close $Q$-values under optimal policy. By $\epsilon$-error aggregated state-representation, we aggregate all those $(s,a)$ in the same block if the differences of their $Q_h^*$ values are less than $\epsilon$ according to definition 1.
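A minimal sketch of one way to build such a representation, assuming the optimal $Q$-values are available: bucket state-action pairs into intervals of width $\epsilon$, so pairs within a block have $Q^*$ differences below $\epsilon$. This is illustrative only; Definition 1 in the paper may construct the blocks differently.

```python
def aggregate_by_q(q_star, eps):
    """Group state-action pairs whose optimal Q-values fall in the same
    width-eps bin. Within a bin, |Q*(s,a) - Q*(s',a')| < eps, so this is
    a sufficient (not unique) eps-error aggregation.

    q_star: dict mapping (s, a) -> Q*_h(s, a) for a fixed period h.
    Returns a dict mapping (s, a) -> block index.
    """
    blocks = {}      # bin key -> block index
    assignment = {}  # (s, a) -> block index
    for sa, q in q_star.items():
        key = int(q // eps)  # pairs in the same width-eps bin share a block
        if key not in blocks:
            blocks[key] = len(blocks)
        assignment[sa] = blocks[key]
    return assignment
```

Note that fixed-width binning can separate two pairs whose $Q^*$ values differ by less than $\epsilon$ across a bin boundary; it guarantees the $\epsilon$-error property within each block, which is what the definition requires.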
6. Here in (4) randomization is applied to the value function estimates by perturbing observed rewards to drive exploration, which is essential to randomized LSVI algorithm (see [4] for more details).
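A hedged sketch of the reward-perturbation idea: before each least-squares value fit, every observed reward is jittered with independent Gaussian noise, so repeated fits produce randomized value-function estimates that drive exploration. The data layout and function name below are illustrative, not the paper's exact Equation (4).

```python
import random

def perturbed_rewards(dataset, sigma):
    """Return a copy of the transition dataset with each observed reward
    jittered by independent N(0, sigma^2) noise; the randomized LSVI
    update is then fit to the perturbed data.

    dataset: list of (h, s, a, r, s_next) tuples.
    """
    return [(h, s, a, r + random.gauss(0.0, sigma), s_next)
            for (h, s, a, r, s_next) in dataset]
```

Setting `sigma = 0` recovers the unperturbed data, while a positive `sigma` yields a different value-function estimate on each refit, which is what induces deep exploration in randomized LSVI [4].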
7. As we responded in the 2nd point, $\xi_n$ is a parameter carefully derived based upon the concentration bound in Lemma 8 in Appendix B, to make Lemma 9 hold, and to control the terms by a tight upper bound in the proof. This is a technical term and is important to make the algorithm and the proof work.
8. We thank the reviewer for the suggestion about more future work. We will extend this in a revised version later. For example, we plan to implement our algorithm on real-world data, and derive concurrent RL results on function approximation beyond the current tabular setup, and compare our results with the existing concurrent functional approximation literature (e.g. [5,6]).
[1] Russo, Daniel. "Worst-case regret bounds for exploration via randomized value functions." Advances in Neural Information Processing Systems 32 (2019).
[2] Agrawal, Priyank, Jinglin Chen, and Nan Jiang. ``Improved worst-case regret bounds for randomized least-squares value iteration." Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 8, 2021.
[3] Bertsekas, Dimitri, and John N. Tsitsiklis. Neuro-dynamic programming. Athena Scientific, 1996.
[4] Osband, Ian, et al. "Deep exploration via randomized value functions." Journal of Machine Learning Research 20.124 (2019): 1-62.
[5] Desai, Nishant, Andrew Critch, and Stuart J. Russell. "Negotiable reinforcement learning for pareto optimal sequential decision-making." Advances in Neural Information Processing Systems 31 (2018).
[6] Min, Yifei, et al. "Cooperative multi-agent reinforcement learning: asynchronous communication and linear function approximation." International Conference on Machine Learning. PMLR, 2023. | Summary: The paper presents a novel approach to concurrent reinforcement learning (RL) using Randomized Least Squares Value Iteration (RLSVI) with aggregated states. The authors propose a framework where multiple agents interact with a common environment, sharing their experiences to improve decision-making collectively. The paper also discusses the role of the discount factor in RL, arguing that it serves primarily as a tuning parameter to stabilize value updates rather than being an intrinsic part of the environment.
Claims And Evidence: See the comments below
Methods And Evaluation Criteria: See the comments below
Theoretical Claims: See the comments below
Experimental Designs Or Analyses: See the comments below
Supplementary Material: See the comments below
Relation To Broader Scientific Literature: See the comments below
Essential References Not Discussed: See the comments below
Other Strengths And Weaknesses: Strengths:
- The paper addresses an important problem in RL, particularly in the context of multi-agent systems.
- The theoretical results are novel and provide improvements over existing bounds.
- The empirical results are consistent with the theoretical predictions, providing strong evidence for the efficacy of the proposed algorithms.
Weaknesses:
- The empirical validation is limited to synthetic environments. It would be beneficial to see how the algorithm performs in more complex, real-world scenarios.
- The paper could benefit from a more detailed discussion of the practical implications of the theoretical results, particularly in terms of scalability and computational efficiency.
Other Comments Or Suggestions: I suggest adding a systematic comparison table to explicitly contrast the proposed approach with existing concurrent RL methods, highlighting theoretical guarantees, empirical performance, and practical considerations to emphasize its novelty and superiority.
Questions For Authors: - How does the proposed algorithm compare to other concurrent RL algorithms in terms of both theoretical guarantees and empirical performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and constructive comments.
1. We would like to emphasize two points regarding the role of experiments in this work. First, the primary contribution of this paper is theoretical analysis of concurrent model-free RLSVI algorithms. Notably, neither of the two most closely related works [1,2], which focus on single-agent RLSVI, includes numerical results. Second, our experiments using synthetic data are explicitly designed to validate theoretical findings in a controlled environment, a common practice in theoretical literature. Importantly, unlike [1,2], our theoretical results are derived from practical, storage-efficient algorithms, and the corresponding numerical validations provide valuable insights for practitioners. Nevertheless, we agree with the reviewer that extending experiments to real-world datasets would be an interesting direction for future exploration, which we plan to pursue.
2. For scalability, as detailed in lines 248–258 (paragraph under equation (6)), prior approaches [1,2] require storing all historical data, making them computationally impractical at scale, while ours is storage-efficient. Following the reviewer's suggestion, we provide a comparison table below to explicitly contrast our approach with existing related work, including the two most closely related work [1,2] on single-agent RLSVI:
| Agent | Setup | Algorithm | Regret Bound | Regret-Type | Data Stored | Numerical |
|-------|--------------|-----------------------------|--------------------------------------|-------------|--------------|-----------|
| Single| Tabular | RLSVI [1] | $\tilde{O}(H^3S^{3/2}\sqrt{AK})$ | Worst-case | All-history | N/A |
| Single| Tabular | RLSVI [2] | $\tilde{O}(H^{5/2}S\sqrt{AK})$ | Worst-case | All-history | N/A |
| Multi | Tabular | Concurrent RLSVI [3] | N/A | Bayes | All-history | Synthetic |
| Multi | Linear Functional Approximation | Concurrent LSVI [4] | $\tilde{O}(H^2\sqrt{d^3KN})$ | Worst-case | All-history | N/A |
| Multi | Linear Functional Approximation | Concurrent LSVI [5] | $\tilde{O}(H\sqrt{dKN})$ | Worst-case | All-history | N/A |
| Multi | Tabular | Concurrent RLSVI (**ours-1**)| $\tilde{O}(H^{5/2}\Gamma\sqrt{KN})$ | Worst-case | All-history | N/A |
| Multi | Tabular | Concurrent RLSVI (**ours-2**)| $\tilde{O}(H^{5/2}\Gamma K\sqrt{N})$ | Worst-case | One episode | Synthetic |
The following discussions will be added to paper:
- $\Gamma$ denotes the size of the aggregated-state space. Aggregation reduces complexity and accelerates learning by focusing on grouped state-action pairs.
- To the best of our knowledge, all existing concurrent RLSVI work focuses on finite-horizon case, while we also cover the infinite-horizon case. The regret bound for the infinite-horizon case is similar to that of the finite-horizon case, by replacing $KH$ with total time steps $T$.
- The methods marked as "N/A" in the "numerical" column store all agents' trajectories at every step, making them computationally infeasible as $N$ grows large. These approaches (including our first finite-horizon algorithm, second-to-last row in the table) provide only theoretical analyses without numerical results.
- Although [3] stores all data empirically, their RLSVI algorithm assumes a known parametric feature matrix, making it simpler to implement than ours. Also they evaluate only Bayes regret—less stringent than our worst-case regret—and provide empirical results exclusively on synthetic data without theoretical guarantees.
- The last row corresponds to storing only the latest episode data, increasing the regret bound by a factor of $\sqrt{K}$ but reduces space complexity by a factor of $K$, making it computationally feasible.
[1] Russo, Daniel. "Worst-case regret bounds for exploration via randomized value functions." Advances in Neural Information Processing Systems 32 (2019).
[2] Agrawal, Priyank, Jinglin Chen, and Nan Jiang. ``Improved worst-case regret bounds for randomized least-squares value iteration." Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 8, 2021.
[3] Taiga, Adrien Ali, Aaron Courville, and Marc G. Bellemare. "Introducing Coordination in Concurrent Reinforcement Learning." ICLR 2022 Workshop on Gamification and Multiagent Solutions.
[4] Desai, Nishant, Andrew Critch, and Stuart J. Russell. "Negotiable reinforcement learning for pareto optimal sequential decision-making." Advances in Neural Information Processing Systems 31 (2018).
[5] Min, Yifei, et al. "Cooperative multi-agent reinforcement learning: asynchronous communication and linear function approximation." International Conference on Machine Learning. PMLR, 2023. | Summary: The paper adapts the concurrent learning framework via randomized least squares value iteration with an aggregated state representation, to improve exploration efficiency and the worst-case regret bound. Extensive experiments are conducted to support the theoretical findings.
Claims And Evidence: The claims are well supported by evidence.
Methods And Evaluation Criteria: The methods and evaluation make sense to me.
Theoretical Claims: Yes. The proofs are correct.
Experimental Designs Or Analyses: Yes, the experimental designs are valid.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper is related to the concurrent randomized least-squares value iteration literature.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strength
- The paper focuses on an important problem in RL, with sound proof of the worst-case regret bound over the existing ones.
Weakness
- The experiments are only on the synthetic datasets. With experiments on the real-world datasets, it could demonstrate the improvements better.
Other Comments Or Suggestions: None.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and constructive comments. We are aware that our experiments are only on synthetic data. We would like to emphasize two points regarding the role of experiments in this work.
Firstly, the main focus and contribution of our work is the first theoretical analysis of concurrent model-free RLSVI algorithms. It is worth noting that neither of the two most closely related papers [1,2] (single-agent RLSVI) includes any numerical results.
Secondly, we conduct experiments on synthetic data specifically to validate the theoretical findings in a controlled manner. This aligns with common practice in the theoretical literature. It is also worth noting that, unlike [1,2], our theoretical results are based on practical, storage-efficient algorithms. With carefully designed numerical validations, they provide valuable insights for practitioners.
We will add a table comparing with existing theoretical results to discuss the practical implications in terms of scalability and computational efficiency.
That said, we agree with the reviewer that experiments on real-world data would be an interesting direction to extend the current work, and we plan to explore it in the future.
[1] Russo, Daniel.``Worst-case regret bounds for exploration via randomized value functions." Advances in Neural Information Processing Systems 32 (2019).
[2] Agrawal, Priyank, Jinglin Chen, and Nan Jiang.``Improved worst-case regret bounds for randomized least-squares value iteration." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 8. 2021. | null | null | null | null | null | null | null | null |
SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator | Accept (poster) | Summary: This paper proposes SepLLM, an efficient Transformer-based architecture designed to accelerate inference in large language models (LLMs). The key insight is that separator tokens (such as punctuation and line breaks) disproportionately carry segment-level information, enabling the compression of other tokens without significant performance loss. The authors implement a sparse attention mechanism that retains only initial, separator, and neighboring tokens, substantially reducing the memory usage of the KV cache. Experimental results demonstrate that SepLLM achieves more than 50% reduction in KV cache size on GSM8K-CoT while maintaining competitive accuracy, and significantly improves inference speed and memory efficiency in both static and streaming tasks.
Claims And Evidence: The paper identifies separator tokens ([".", ",", "?", "!", ";", ":", " ", "\t", "\n"]) as critical for compressing segment-level information.
However, the authors do not clearly explain or experimentally justify how these separator tokens were chosen.
Methods And Evaluation Criteria: The proposed methods, particularly the selective retention of initial, separator, and neighboring tokens, are well-aligned with the goal of reducing computational complexity in LLMs.
Additionally, employing well-established benchmarks such as GSM8K-CoT, MMLU, and PG19 provides a clear and appropriate evaluation framework for the model's performance.
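To illustrate the token-retention policy described above, here is a minimal, hedged sketch of which KV-cache entries a SepLLM-style sparse attention would keep at a decoding step: the initial tokens, every separator token, and the most recent neighboring tokens. The function name and parameters are illustrative, not the released implementation.

```python
def sepllm_keep_mask(tokens, separators, n_init, n_neighbor):
    """Return a boolean mask over the token sequence marking which
    KV-cache entries are retained: the first n_init tokens, every
    separator token, and the n_neighbor most recent tokens.
    """
    T = len(tokens)
    keep = [False] * T
    for i in range(min(n_init, T)):      # initial tokens
        keep[i] = True
    for i, tok in enumerate(tokens):     # separator tokens
        if tok in separators:
            keep[i] = True
    for i in range(max(0, T - n_neighbor), T):  # neighboring window
        keep[i] = True
    return keep
```

With one initial token and a two-token neighbor window, only the first token, the separators, and the last two tokens survive; everything between separators is compressed away, which is the source of the reported KV-cache savings.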
Theoretical Claims: I reviewed the theoretical proofs provided, particularly the universal approximation analysis in Appendix G.
Experimental Designs Or Analyses: The experimental design, particularly using GSM8K-CoT and MMLU, is well-structured to demonstrate efficiency and reasoning improvements. However, it remains unclear whether the approach can maintain performance on more challenging tasks requiring extended reasoning (e.g., Math 500), and further experiments in this area would be beneficial.
Supplementary Material: I reviewed all supplementary materials.
Relation To Broader Scientific Literature: The paper builds on prior research in KV cache compression and sparse attention—such as FastGen, SnapKV, and StreamingLLM—by introducing a novel approach that leverages separator tokens to compress segment-level information. This contribution extends methods like BigBird and other sparse attention mechanisms by showing that dynamic, data-dependent token selection can significantly reduce computational overhead while maintaining performance.
Essential References Not Discussed: To the best of my knowledge, the paper has cited all essential references necessary to understand the context and key contributions of the work.
Other Strengths And Weaknesses: Strengths:
1. The paper introduces an innovative approach by leveraging separator tokens to compress segment-level information, which is a novel idea in efficient LLM design.
2. The method is thoroughly evaluated across various settings (training-free, training-from-scratch, post-training, and streaming), demonstrating its practical effectiveness.
3. The approach achieves substantial computational savings—significantly reducing KV cache usage—while maintaining competitive performance.
Weaknesses:
1. The size of the neighboring tokens must be manually selected, requiring a trade-off between efficiency and performance.
2. The justification for the chosen set of separator tokens is not fully detailed, raising questions about the robustness of this selection across different tasks or languages.
3. The experiments are largely limited to benchmarks like GSM8K-CoT and MMLU, leaving uncertainty about the method’s effectiveness on more complex, reasoning-intensive tasks.
Other Comments Or Suggestions: In the Related Work, after discussing the shortcomings of existing KV cache compression methods, the paper does not explicitly highlight how its approach overcomes these limitations.
Questions For Authors: 1.How did you select the fixed set of separator tokens, and have you evaluated alternative sets?
2.Can you provide insights into how the size of neighboring tokens (n) might be determined adaptively?
3.How does SepLLM perform on more challenging, reasoning-intensive tasks beyond GSM8K-CoT and MMLU?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to sincerely thank the reviewer for the valuable comments on our work. We take every comment seriously and hope our response can address the reviewer’s concerns. If there are any remaining questions, we are more than happy to address them.
> Q1. The authors do not clearly explain or experimentally justify how these separator tokens were chosen.
**A1**.
We take the tokens in the 300B natural language dataset that exceed a certain word frequency threshold as separators, which are [“.”, “,”, “?”, “!”, “;”, “:”, “ ”, “\t”, “\n”].
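A minimal sketch of this frequency-threshold selection; the character stream, candidate set, and threshold below are toy stand-ins for the 300B-token corpus and its actual threshold:

```python
# Toy illustration: keep candidate separator tokens whose corpus frequency
# exceeds a threshold.
from collections import Counter

def select_separators(token_stream, candidates, min_freq):
    counts = Counter(t for t in token_stream if t in candidates)
    return sorted(t for t, c in counts.items() if c >= min_freq)

stream = list("a, b. c, d! e, f. g")  # character-level toy "corpus"
seps = select_separators(stream,
                         candidates={".", ",", "?", "!", ";", ":", " ", "\t", "\n"},
                         min_freq=2)
# seps == [" ", ",", "."]  ("!" occurs only once and is dropped)
```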
> Q2. The size of the neighboring tokens must be manually selected, requiring a trade-off between efficiency and performance.
**A2**.
In our experiments, the size of neighboring tokens is set to $m/8$ ~ $m/4$, where $m$ denotes the input sequence length. We find that performance remains close to that of full attention in these settings, whereas StreamingLLM requires a neighboring-token size of $m/2$ to maintain similar performance.
> Q3. The justification for the chosen set of separator tokens is not fully detailed, raising questions about the robustness of this selection across different tasks or languages.
**A3**.
Indeed, we use the same set of separators in all the experiments, including those with the Llama-3-8B, Pythia-6.9B, and Falcon-40B models, conducted on the GSM8K, ARC, and MMLU datasets.
> Q4. The experiments are largely limited to benchmarks like GSM8K-CoT and MMLU, leaving uncertainty about the method’s effectiveness on more complex, reasoning-intensive tasks.
**A4**.
As suggested, we evaluate our SepLLM on MATH, which is a more complex benchmark, and the results are as follows.
It justifies the superiority of SepLLM in complex tasks.
| | MATH | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|
| |**overall** | **r.KV(%)** |algebra | counting_and_prob | geometry |intermediate_algebra | num_theory | prealgebra | precalc |
| Vanilla |27.32| 100.00|37.83| 24.26|18.58|12.62|21.11| 48.34|11.72|
| SepLLM | 26.74 |32.4 | 37.99 | 24.05 | 19.83 |10.52|18.7|48.56|10.62 |
| H$_2$O | 23.72 | 33.1| 34.71 | 17.72 | 15.03 | 11.41 | 19.63 | 41.79 | 8.24|
| SnapKV | 25.68 | 33.1 | 37.49 | 21.94 | 16.91 | 10.41 | 20.37 | 45.58 |9.71|
| PyramidKV | 25.1 | 33.1| 37.49|20.46| 16.49 | 9.97 |19.44 |44.55 |9.34|
> Q5. In the Related Work, after discussing the shortcomings of existing KV cache compression methods, the paper does not explicitly highlight how its approach overcomes these limitations.
**A5**.
As discussed in Section 1 of our submitted paper, the existing KV cache compression methods are training-free approaches, which are inconsistent with the training phase and thus introduce approximation error.
> Q6. How did you select the fixed set of separator tokens, and have you evaluated alternative sets?
**A6**.
For the selection confusion, please refer to Q1.
As suggested, we evaluate our SepLLM with subsets of our previous setting on GSM8K-CoT, which are ["," "." "?" ";"] and ["." "?"]. The experimental results are as follows. They show that performance drops by less than 0.5 points when half of the separators are removed, so SepLLM is robust to the choice of separators.
| | SepLLM (n=256) | SepLLM (n=256) | SepLLM (n=256) | StrmLLM (n=256) | StrmLLM (n=380) |
|------------------|-------------------------------|----------------|-----------------|-----------------|-----------------|
| flexible-extract | 77.18 | 76.68 | 70.66 | 68.84 | 71.42 |
| strict-match | 77.18 | 76.85 | 70.36 | 67.63 | 70.89 |
| r.KV (%) | 47.36 | 37.92 | 36.44 | 31.92 | 47.54 |
| separators | "." "," "?" "!" ";" ":" " " "\t" "\n" | "," "." "?" ";" | "." "?" | - | - |
> Q7. Can you provide insights into how the size of neighboring tokens (n) might be determined adaptively?
**A7**.
As discussed in A2, we recommend setting the size of neighboring tokens to $m/8$ ~ $m/4$, where $m$ denotes the input sequence length.
> Q8. How does SepLLM perform on more challenging, reasoning-intensive tasks beyond GSM8K-CoT and MMLU?
**A8**.
Please refer to Q4. | Summary: ## update after rebuttal
The author's rebuttal addressed many of my concerns; however, I am still a little hesitant about the generalisability. I will change my score from 2 to 3.
* The paper identifies a key pattern: certain seemingly meaningless special tokens (i.e., separators) contribute disproportionately to attention scores compared to semantically meaningful tokens.
* Plug-and-play method for compressing such special tokens and removing redundancy.
* Also implement efficient kernels for training acceleration.
Claims And Evidence: * "From Figure 2, it is seen that for a given input context, seemingly “meaningless” separator tokens (such as commas, periods, exclamation marks, semicolons, etc.) that segment sequences receive higher attention scores compared to semantically meaningful tokens (such as nouns or verbs)." This is a very interesting observation and is well used for the design.
* "Neighboring tokens usually help form locally smooth and coherent contexts, allowing the model to generate sentences that are reasonable within the immediate context." I like how the authors well defined all the different kinds of tokens.
* Targeted masking for pre-training so that different kinds of tokens have different importance score follows well from the initial insights.
Methods And Evaluation Criteria: * Splitting the KV Cache into different blocks based on the kind of tokens seems like a promising idea and is well explained.
* Evaluation is carried out on standard benchmark datasets and show very promising results
* "Our positional encoding strategy for streaming settings is the same as the state-of-the-art StreamingLLM" more information on the exact technique would have been helpful, as the correct positional encoding seems to inspire the design a lot. Is the technique used the same as RoPE? Better?
* Ablation studies show the importance of additional tokens, but it seems a better idea would have been to show the effect of separator tokens as that is the main claim for improvement.
* How does SepLLM compare against well-known KV cache compression techniques such as H2O and SnapKV?
Theoretical Claims: Minor theoretical results are presented and the claims seem correct.
Experimental Designs Or Analyses: * The experimental setup was well explained and evaluation on many parameters (quality, flops (compute) and size ) was carried out.
* It was useful to see both the pre-training and post-training performance
* One issue in the ablation studies, the claim of better retrieval was introduced but no result was shown in the main body of the paper.
Supplementary Material: Yes, skimmed through the appendices.
Relation To Broader Scientific Literature: * The space of KV Cache compression is extremely interesting and has very promising implications on faster inference for LLMs. The scope and the methods in the paper are timely and well presented.
Essential References Not Discussed: * Comparison to popular KV cache compression methods such as H2O (NeurIPS '23) and SnapKV (arXiv '24) has not been considered. Would love to see insights on that too.
Other Strengths And Weaknesses: * Interesting approach to see a KVCache compression technique which considers the KVCache being split into multiple sub KVCaches based on the properties of the tokens.
Other Comments Or Suggestions: N/A
Questions For Authors: I'd encourage the authors to compare with well-known KV cache compression techniques such as H2O (NeurIPS '23) and SnapKV (arXiv '24) to better position the gains and have further clarity on how much the token-based KV cache partitioning helps. I saw a claim in the introduction that training-free methods are suboptimal but no evidence has been presented in the paper.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to sincerely thank the reviewer for the valuable comments on our work. We take every comment seriously and hope our response can address the reviewer’s concerns. If there are any remaining questions, we are more than happy to address them.
**Should there be any need for further clarification to assist in advancing our scores, please do not hesitate to inform us. Thank you very much.**
> Q1. "Our positional encoding strategy for streaming settings is the same as the state-of-the-art StreamingLLM" more information on the exact technique would have been helpful, as the correct positional encoding seems to inspire the design a lot. Is the technique used the same as ROPE? Better?
**A1**.
We just adopt RoPE. However, in scenarios involving infinitely long streaming input, we follow StreamingLLM[1] and apply PE's shifting; i.e., we focus on positions within the cache instead of those in the original text. For instance, if the current cache contains tokens [0, 1, 2, 3, 6, 7, 8] and is in the process of decoding the 9th token, the positions assigned are [0, 1, 2, 3, 4, 5, 6, 7], rather than the positions in the original text, which would be [0, 1, 2, 3, 6, 7, 8, 9]. However, for training and downstream tasks of general length (<4K; Sec 3.1), we simply use standard RoPE without PE's shifting. See **Sec. 3.3;4.6 and Tab. 8** for details.
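The shifting described above could be sketched as follows; this is an illustrative toy mirroring the example positions in this answer, not any actual implementation:

```python
# Toy sketch of cache-relative (shifted) position assignment for streaming:
# RoPE positions are assigned by slot in the KV cache, not by original text
# position; the token being decoded gets the next slot.
def shifted_positions(cache_positions, decoding_next=True):
    return list(range(len(cache_positions) + (1 if decoding_next else 0)))

cache = [0, 1, 2, 3, 6, 7, 8]   # original text positions retained in the cache
pos = shifted_positions(cache)  # while decoding the token at text position 9
# pos == [0, 1, 2, 3, 4, 5, 6, 7]
```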
> Q2. Ablation studies show the importance of additional tokens, but it seems a better idea would have been to show the effect of separator tokens as that is the main claim for improvement.
**A2**.
Indeed, the ablation studies you propose are illustrated in **Tab.1-8 and Fig. 5** (for both training & training-free). Without separators, SepLLM degrades to StreamingLLM. As demonstrated by the experimental results across so many settings and tasks, the separators indeed provide significant benefits to the architecture design. See more studies on separators in **A6 to Reviewer kB2E**.
> Q3. How does SepLLM compare against well known KVCache compression techniques such as H2O and SnapKV?
**A3**.
As suggested, we add the H2O, SnapKV and PyramidKV [2] as baselines and the results on GSM8K_CoT and the challenging MATH are as follows (All based on Llama3-8B-Inst backbones).
| | GSM8K-CoT | | |
|---|---|---|---|
| | flexible | strict | r.KV(%) |
| Vanilla |77.79| 77.26| 100.00|
| SepLLM | 77.18 |77.18| 47.36|
| H2O | 76.27 |75.06| 47.54|
| SnapKV | 76.5 |73.62| 47.54|
| PyramidKV | 75.82 |72.02| 47.54|
| | MATH | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|
| |**overall** | **r.KV(%)** |algebra | counting_and_prob | geometry |intermediate_algebra | num_theory | prealgebra | precalc |
| Vanilla |27.32| 100.00|37.83| 24.26|18.58|12.62|21.11| 48.34|11.72|
| SepLLM | 26.74 |32.4 | 37.99 | 24.05 | 19.83 |10.52|18.7|48.56|10.62 |
| H$_2$O | 23.72 | 33.1| 34.71 | 17.72 | 15.03 | 11.41 | 19.63 | 41.79 | 8.24|
| SnapKV | 25.68 | 33.1 | 37.49 | 21.94 | 16.91 | 10.41 | 20.37 | 45.58 |9.71|
| PyramidKV | 25.1 | 33.1| 37.49|20.46| 16.49 | 9.97 |19.44 |44.55 |9.34|
With similar amount of KV cache, SepLLM can achieve better performance.
H2O and SnapKV are training-free methods that rely on full attention for pretraining and prefilled KV, but their inference-stage KV selection causes training-inference inconsistency and query dependency. SepLLM addresses these issues by introducing **a novel language modeling paradigm** that uses separators for segment information summarization, **optimizing from pretraining through inference**. Additionally, SepLLM naturally aligns with the hierarchical semantic structure of language (e.g., "," for segments, "." for sentences, "\n" for paragraphs) and reduces KV usage during inference.
> Q4. One issue in the ablation studies, the claim of better retrieval was introduced but no result was shown in the main body of the paper.
**A4**.
Indeed, we have already conducted **needle in a haystack** experiments and the results are shown in **Fig. 8-11** & App. E, which demonstrate the retrieval ability of SepLLM.
> Q5. Comparison to popular KVCache compression methods such as H20 (Neurips '23) and SnapKV (Arxiv'24)
**A5**.
Please refer to Q3.
> Q6. I saw a claim in the introduction that training-free methods are suboptimal but no evidence has been presented in the paper.
**A6**.
As discussed in **Q3**, training-free methods are based on KV selection, an approximation to full attention, leading to inconsistency between the training and inference phases.
**Tab.13** can support our claim. Based on the well-trained Llama3-8B-Inst model, we fine-tuned for only 200 steps on the LongAlpaca dataset, and the results outperformed the original training-free SepLLM and even surpassed Vanilla.
[1] Xiao, Guangxuan, et al. "Efficient streaming language models with attention sinks." ICLR 2024.
[2] Cai, Zefan, et al. "Pyramidkv: Dynamic kv cache compression based on pyramidal information funneling." preprint arXiv:2406.02069 (2024). | Summary: The paper introduces SepLLM, a novel framework aimed at improving the efficiency of large language models (LLMs) by leveraging the observation that separator tokens (e.g., commas, periods) disproportionately contribute to attention scores. The authors propose compressing segment information into these separator tokens, reducing computational costs and memory usage while maintaining model performance.
## update after rebuttal
The author's rebuttal addressed most of my concerns; I will keep my score.
Claims And Evidence: Yes, the authors propose a sparse attention mechanism based on separator tokens and validate its effectiveness through experiments.
Methods And Evaluation Criteria: Yes, the authors first observe the attention matrix and find that separator tokens have stronger attention weights, which leads to the proposal of the SepLLM method. And the evaluation on long-context ppl is the standard setting in sparse attention.
Theoretical Claims: N/A
Experimental Designs Or Analyses: - Although there are relatively sufficient experiments on long texts, some important evaluations are still missing, such as long ppl [1], because traditional ppl evaluations may not accurately reflect long-text capabilities on truly extended texts. Additionally, testing tasks like "needle in a haystack" would help assess long-text capabilities.
- Furthermore, some simple baselines were not considered. For instance, while this method retains separator tokens, what would happen if we retained one token for every fixed number of tokens? Especially after some training, fixed sparse strategies might also prove to be quite efficient.
[1] What is Wrong with Perplexity for Long-context Language Modeling?
Supplementary Material: No.
Relation To Broader Scientific Literature: The key contributions of the paper are closely related to prior research on sparse attention mechanisms and efficient memory management in large language models (LLMs). The observation that separator tokens receive disproportionately high attention aligns with earlier findings on the importance of specific tokens (e.g., attention sinks) in long-context modeling, as highlighted in works like StreamingLLM and SnapKV. Additionally, the proposed SepLLM framework builds on ideas from sparse attention methods, such as BigBird and Longformer, but differentiates itself by focusing on data-dependent token selection (separator tokens) rather than fixed patterns.
Essential References Not Discussed: None
Other Strengths And Weaknesses: The reliance on separator-based design might affect generalization, as separators may vary across different languages or datasets. A more general consideration could involve first performing adaptive chunking on the data, ensuring low entropy within chunks and high entropy between chunks, followed by compressing each chunk.
Other Comments Or Suggestions: None
Questions For Authors: - What would happen if sparse attention is not used, and instead some heads are converted to sliding window attention while allocating the remaining computation to a full window attention? This would represent a standard hybrid baseline.
- Are there any methods to further improve the KV compression rate of SepLLM?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to sincerely thank the reviewer for the valuable comments on our work. We take every comment seriously and hope our response can address the reviewer’s concerns. If there are any remaining questions, we are more than happy to address them.
**Should there be any need for further clarification to assist in advancing our scores, please do not hesitate to inform us. Thank you very much.**
> Q1. important evaluations such as long ppl and "needle in a haystack"
**A1**.
We have provided numerous results on perplexity (ppl) for extremely long test datasets. For instance, based on the Llama3-8B model and the PG19 test set, we conducted inference on extremely long sequences of up to 4 million tokens. The results are presented in Table 4 of our paper, which we have copied here for your reference.
| Input Length |3M |3.5M | 4M |
|----|----|----|----|
|StreamingLLM| 36.4|35.8|36.1|
| SepLLM(s=32)|34.9|34.2|34.5|
|SepLLM(s=64)|34.3|33.7|33.9|
Additionally, we have also conducted experiments on **Needle In A Haystack**, with results and detailed discussions presented in Figures 8-11 of our submitted paper. The experimental results demonstrate the long-context retrieval capability of SepLLM.
> Q2. retained one token for every fixed number of tokens
**A2**.
As suggested, we conducted such experiments and the results are shown in the table below.
| | GSM8K-CoT | | | MMLU | | | | | |
|---|---|---|---|---|---|---|---|---|---|
| | flexible | strict | r.KV(%) | humanities | stem | social | other | Overall | r.KV(%) |
|Vanilla |77.79|77.26|100.00|60.49|56.61|76.50|72.19|65.72|100.00|
| FixLLM($\Delta$=5,$n$=256)|70.43|70.43|45.64|55.52|54.80|72.99|69.75|62.33|50.20|
| FixLLM($\Delta$=4,$n$=256)|72.71|72.33|49.08|55.92|54.39|74.36|70.81|62.91|53.32|
|SepLLM($n$=256)| 77.18| 77.18| 47.36| 57.66| 56.49| 76.21| 72.19| 64.68| 44.61|
Here, FixLLM represents the variant you mentioned, where $n$ still denotes the number of neighboring tokens and every $\Delta$-th token outside the neighboring window is also retained. Experimental results indicate that FixLLM is still inferior to SepLLM.
> Q3. generalization issue as separators may vary across different languages or datasets.
**A3**.
In our view, separators are general across different languages and datasets. In modern commonly used languages, separators are widely present (including but not limited to English, German, French, Italian, and Chinese, covering over 7 billion people). While most languages use separators similar to those of modern English, a few languages still retain their traditional separators. For example, in Bengali, the period (".") is commonly used as a sentence-ending marker, but the traditional marker "।" (vertical line) is still in use. During training and inference, we only need to specify the separators we want to use, and most languages have around 10 commonly used separators.
Empirically, we have verified the robustness of our SepLLM across various tasks and datasets, including GSM8K, MMLU, ARC, and PIQA. All empirical results demonstrate the generalization of SepLLM as well.
> Q4. low entropy within chunks and high entropy between chunks
**A4**.
We agree that entropy-based grouping is a feasible approach. However, it is not training/inference efficient, since the attention mask must be computed by some algorithm. In comparison, SepLLM follows the natural structure of language and splits segments by separators, which is more efficient and more robust.
> Q5. some heads are converted to sliding window attention while allocating the remaining computation to a full window attention? This would represent a standard hybrid baseline.
**A5**.
As suggested, we added the following experiments.
| | MMLU | | | | |r.KV(%) | n|
|---|---|---|---|---|---|---|---|
| #Sparse_Heads/#Total_Heads | humanities | social sciences | stem | other |overall | | |
|20/32 | 23.93 | 23.07 | 24.42 | 23.53 | 23.76|44.74|80|
|24/32 | 24.23 | 23.3 | 26.36 | 23.37 | 24.31|47.71|208|
|28/32 | 25.66 | 27.29 | 26.31 | 23.69 | 25.81|45.13|256|
|30/32 | 27.29 | 25.09 | 27.78 | 38.11 | 29.31|45.42|288|
|SepLLM |57.66|76.21| 56.49| 72.19|64.68|44.61|256|
|Vanilla | 60.49| 76.50 | 56.61 | 72.19 | 65.72|100||
For example, "20/32" means there are 20 sliding-window heads out of a total of 32 heads (the remaining heads using full attention).
As can be seen, with similar amount of KV cache, SepLLM is much better than these hybrid baselines.
> Q6. any methods to further improve the KV compression rate of SepLLM
**A6**.
One possible method may be a multi-level version of SepLLM. In the current version, we compress "segment-level" information into general separators. In the multi-level version, we may define "sentence-level" or "paragraph-level" as a higher-level separation and compress a set of previous separators into one more condensed KV cache. Then, the KV compression rate will be improved.
Projection Pursuit Density Ratio Estimation | Accept (poster) | Summary: This paper considers the problem of density ratio estimation (DRE), where parametric methods may lead to bias, while non-parametric methods struggle with the curse of dimensionality in high-dimensional settings. The authors present an approach that leverages projection pursuit approximation to estimate the density ratio. Theoretical analysis establishes the estimator’s consistency and convergence rate. Additionally, experimental results demonstrate the effectiveness of the proposed method.
Claims And Evidence: The consistency and the convergence rate of the proposed algorithms is established in Theorem 3.2.
The effectiveness of the algorithm is presented in Tables 1, 2, and 3, where the proposed density ratio estimation method is applied in several different cases and achieves good performance.
Methods And Evaluation Criteria: The proposed method makes sense, since it has a clear motivation and mathematical derivation, and also has experimental support. For benchmark datasets, the authors use the dataset that has been adopted by previous works in the corresponding application, which is also reasonable.
Theoretical Claims: I did not check every proof in detail.
Experimental Designs Or Analyses: The experimental designs are in general sound. Only one question: how do you choose the hyperparameters of other algorithms? How can readers be convinced that the superior performance of ppDRE is not an artifact of hyperparameter selection?
Supplementary Material: No supplementary material.
Relation To Broader Scientific Literature: The ppDRE method, as already shown in the paper, can be applied to dose response function estimation and other fields of science.
Essential References Not Discussed: Not finding any missing essential related works.
Other Strengths And Weaknesses: Strength: novel idea of leveraging projection pursuit in density ratio estimation. Clear motivation and explanation of the algorithm. Theoretical results are provided, justifying the consistency of the estimator. Experimental results are provided, showing the effectiveness of this algorithm, compared with prior works.
Other Comments Or Suggestions: No further comments.
Questions For Authors: I am wondering if you can see from your theory (Thm 3.2) of why your method achieves superior performance than other methods. Is your theory only about convergence rate and consistency, not any non-asymptotic result that can show your superiority?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the positive comments.
*Q1: Theoretical support for the superiority of the proposed method*
We develop asymptotic convergence rates (Theorem 3.2) for our proposed projection pursuit density ratio estimator:
$$ \sup_{x\in\mathcal{X}}| \hat{r}\_{K}(x) - r_{K}(x)| =O_{p}(\sum_{\ell=1}^K[\\{J_{\ell}^{-(s-1)}+ \sqrt{\frac{J_{\ell}}{n_q \wedge n_p}}\\}\cdot \Pi_{i=\ell}^K\\{\sqrt{\tilde{\zeta}\_1(J_i)}\vee \zeta^2_0(J_i)\\}])$$
Please kindly refer to the details in our response to Q1 of Reviewer mSdz. This rate does not depend on the dimension of the data, which highlights the strengths of projection pursuit modeling in mitigating the curse of dimensionality and may explain the superior performance compared to other methods based on direct nonparametric approximation. We agree with you that non-asymptotic rates are more appropriate evidence for evaluating the performance of the estimator, but their proof presents significant challenges and is beyond the scope of this paper. We will pursue it in future work.
*Q2: Hyperparameter selection process*
To ensure rigorous and fair comparisons, we implemented the following protocol.
- For our proposed ppDRE method, as mentioned in Remark 3.3, the optimal hyperparameters are identified by the minimal validation loss in CV. For clarity, we detail our approach as follows: across all experiments, we utilized 5-fold CV with random sampling. A grid search was conducted over a predefined set of parameter ranges, which are outlined in the table below:
Table 1. Hyperparameters for the ppDRE method
|parameter|description|search space|
|:-:|:-:|:-:|
|$K$|Number of PP iterations|\{5,10,15\}|
|$J_k$|Number of basis functions|\{20,50,70,100,150\}|
|$\lambda$|L2-penalty strength|\{0.5,1,5,10\}|
|$\delta$|Gradient descent learning rate|\{0.001,0.01,0.1\}|
- For open-source baseline methods (e.g., KLIEP, uLSIF, $\text{D}^3$-LHSS), we utilized their official implementations (e.g., Python densratio package for uLSIF) and default hyperparameter search grids as recommended in their original papers or standard toolkits. For instance, uLSIF's hyperparameters ($\sigma$, $\lambda$) were selected from the grid 1e-3:10:1e9, consistent with its standard implementation. These methods inherently incorporate CV-based hyperparameter selection in their standard workflows. We preserved these built-in CV mechanisms without modification.
- For Classification and nnDRE, we implemented 5-fold CV (aligned with our ppDRE method) to identify the optimal hyperparameters within the search spaces outlined in the table below:
Table 2. Hyperparameters for the Classification and nnDRE methods
| method | parameter | search space |
| :-: | :-: | :-: |
| Classification | n_estimators | \{100, 300\} |
| | learning_rate | \{0.0001, 0.001, 0.01\} |
| | num_leaves | \{20, 30\} |
| nnDRE | depth | \{2, 3\} |
| | width | \{8, 32, 64\} |
| | learning_rate | \{0.0001, 0.001, 0.01\} |
- For fDRE, we have implemented a version that employs KLIEP as the second-stage DRE method. Adhering to the guidelines provided by the official open-source code and the original research paper [1], we trained the masked autoregressive flow (MAF) models with the configurations presented in the table below. For the KLIEP method in the second stage, we have retained the original hyperparameter settings as per the open-source implementation.
Table 3. Hyperparameters for the flow model in the fDRE method
|Dataset|n_blocks|n_hidden|hidden_size|n_epochs|
|:-:|:-:|:-:|:-:|:-:|
|IHDP-Continuous|5|1|100|100|
|Regression Benchmarks|5|1|100|100|
|MI Gaussians|5|1|100|200|
- For RRND, in the absence of open-source code, we implemented the algorithm by adhering to the implementation details outlined in the Numerical Illustrations section of [2]. The kernel function is assigned as $K(x,x^\prime)=1+\exp\\{-(x-x^\prime)^2/2\\}$. Utilizing a 5-fold CV, the hyperparameter $\lambda$ is chosen based on the quasi-optimality criterion, and the optimal iteration step $k$ is determined by minimizing the squared-distance loss function on the validation set. Following the configurations in [2], the search grids are $k\in\\{1,2,\ldots,10\\}$ and $\lambda\in\\{\lambda_\ell=\lambda_0\rho^\ell,\ell=1,\ldots,w\\}$ with $\lambda_0=0.9$ and $w=9$. The decay factor $\rho=(\lambda_w/\lambda_0)^{1/w}$ is derived from the lower bound $\lambda_w=(n^{-1/2}+m^{-1/2})$, where $n,m$ are the sample sizes of the numerator and denominator distributions respectively.
We will incorporate these implementation details into the Appendix of the revised manuscript to improve reproducibility.
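The 5-fold CV grid-search protocol above can be sketched generically as follows; the `fit`/`score` callables, toy data, and grid values are placeholders for illustration, not the actual models or search spaces from the tables:

```python
# Generic sketch of hyperparameter selection by 5-fold cross-validation with
# random sampling into folds and a grid search over parameter ranges.
import itertools
import random

def five_fold_cv(data, grid, fit, score, k=5, seed=0):
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)       # random sampling into folds
    folds = [idx[i::k] for i in range(k)]
    best_params, best_loss = None, float("inf")
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid, values))
        losses = []
        for f in range(k):
            held = set(folds[f])
            train = [data[i] for i in idx if i not in held]
            val = [data[i] for i in folds[f]]
            model = fit(train, **params)
            losses.append(score(model, val))
        loss = sum(losses) / k             # mean validation loss
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params

# Toy usage: fit a (shrunken) training mean, score by validation squared error.
data = [float(x) for x in range(1, 11)]
fit = lambda train, shrink: shrink * sum(train) / len(train)
score = lambda m, val: sum((v - m) ** 2 for v in val) / len(val)
best = five_fold_cv(data, {"shrink": [0.5, 1.0, 2.0]}, fit, score)
```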
[1] Choi et al. Featurized Density Ratio Estimation, 2021
[2] Duc Hoan Nguyen, Werner Zellinger, and Sergei Pereverzyev. On regularized Radon-Nikodym differentiation. Journal of Machine Learning Research, 25(266):1–24, 2024. | Summary: This paper proposes a non-parametric method for the density ratio estimation (DRE) task using a projection pursuit (PP) approximation. The method is computationally convenient and also helps alleviate the curse of dimensionality. The authors conduct extensive experiments demonstrating that the proposed method consistently outperforms other approaches.
Claims And Evidence: Yes, the claims are well supported by strong theoretical proofs and a large number of experiments.
Methods And Evaluation Criteria: Most of the evaluation metrics make sense, such as MSE or similar measures. However, there is no metric provided to demonstrate the computational cost of this method.
Theoretical Claims: Yes. I checked the proofs for Proposition3.1 and AppendixC (L2-distance Minimization); they all appear to be correct.
Experimental Designs Or Analyses: The datasets used in this paper are appropriate. The authors employ simulated data for a 2-D DRE experiment on stabilized weights estimation. They also use the real IHDP dataset for dose-response function estimation. In the section on covariate shift adaptation, they use 5 different benchmark datasets.
Supplementary Material: NO
Relation To Broader Scientific Literature: This paper is a good application of the projection pursuit (PP) idea to the density ratio estimation problem. As stated in the paper, the approach is similar to nnDRE, except that the feedforward neural network (NN) is replaced by projection pursuit (PP).
Essential References Not Discussed: No, I believe all relevant references are adequately cited in this work.
Other Strengths And Weaknesses: Strengths: The paper is well written and presents the ideas clearly, with solid proofs and thorough experimental validation.
Weaknesses: The main idea is not entirely new, but it is well integrated into this particular problem. There is no demonstration of computational cost in high-dimensional cases within the experiments.
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your suggestions.
*Q1: Computational cost evaluation in high-dimensional cases*
Compared to the conventional projection pursuit method (kindly refer to our response to Q1 of Reviewer fbc2) that requires additional Monte Carlo sampling to estimate the pursuit function, our estimators of $\\{f_{k}(·)\\}_{k=1}^K$ in equation (5) take closed-form expressions, enhancing computational efficiency. Moreover, we use Cholesky decomposition in computing the closed-form solution, improving numerical stability and reducing overall computation time.
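The closed-form solve via Cholesky decomposition described above can be sketched generically. The basis matrix, working response, and regularization strength below are made-up stand-ins for illustration, not the paper's actual equation (5):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n, J = 200, 10
Phi = rng.normal(size=(n, J))   # hypothetical basis matrix at projected samples
y = rng.normal(size=n)          # hypothetical working response
lam = 1e-3                      # ridge-type regularization strength

# Normal equations (Phi^T Phi + lam*I) coef = Phi^T y. The left-hand matrix is
# symmetric positive definite, so a Cholesky factorization is faster and more
# numerically stable than a general LU-based solve.
A = Phi.T @ Phi + lam * np.eye(J)
c, low = cho_factor(A)
coef = cho_solve((c, low), Phi.T @ y)

# Sanity check: same solution as a generic dense solve, up to rounding.
assert np.allclose(coef, np.linalg.solve(A, Phi.T @ y))
```

The same factorization can be reused across multiple right-hand sides, which is where the runtime savings over repeated generic solves come from.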
We evaluated computational costs via numerical experiments, which will be included in Appendix. Specifically, we will report the runtime of ppDRE method and all baseline methods (including newly added baselines, fDRE and RRND, suggested by Reviewers fbc2 and mSdz), in the same setup as Figure 3 in Section 4.2.1 with dimensions $d\in\\{2,10,30,50,100\\}$. The experiments were conducted on a node with dual AMD EPYC 7713 CPUs, totaling 128 cores. The computation time (minutes) averaged over 3 runs is reported below:
Table4. Computation Time Comparison
|Method|$d=2$|$d=10$|$d=30$|$d=50$|$d=100$|
|:-:|:-:|:-:|:-:|:-:|:-:|
|uLSIF|0.38|0.35|0.38|0.43|0.46|
|KLIEP|0.36|0.39|0.49|0.57|0.72|
|Class|0.21|0.16|0.19|0.23|0.40|
|fDRE|3.78|3.88|3.92|4.28|7.25|
|RRND|3.78|3.87|3.77|3.82|3.81|
|nnDRE|5.91|3.84|8.76|6.02|8.63|
|$\text{D}^3$-LHSS|5.95|11.25|10.60|10.79|10.97|
|ppDRE|0.32|0.91|5.64|10.78|15.22|
We observe the following results: (i) when the dimension is low or moderately large ($d=2,10$), the computation time of our ppDRE method is comparable to that of the uLSIF, KLIEP, and classification methods, which are suitable for low-dimensional data. Our computation time is much less than that of the fDRE, RRND, nnDRE and $\text{D}^3$-LHSS methods, some of which are also designed for high-dimensional data. (ii) When the dimension is large ($d=30,50,100$), the methods for low-dimensional data fail due to the curse of dimensionality. Our ppDRE method consistently outperforms all baseline methods in estimation accuracy, and the computation time is comparable to that of existing methods for high-dimensional data. The computation time increases significantly, which can be expected as we need more iterations to achieve convergence.
*Q2: Adding fDRE and RRND as baselines*
We will make the following improvements to our experimental analysis in Section 4 of the paper.
- We will incorporate the two methods as baselines.
1. We will add a discussion on the featurized DRE method (fDRE) in [1] (kindly refer to our response to Q3 of Reviewer fbc2). The fDRE method maps the data to a shared feature space via a flow model to mitigate inaccuracies caused by distribution discrepancy.
2. We will add a discussion on the regularized Radon-Nikodym differentiation estimation method (RRND) in [2] (kindly refer to our response to Q5 of Reviewer mSdz). The RRND method employs a regularized kernel method in RKHS to estimate the density ratio.
- We will add the following experimental results of both baselines across key applications to Tables 1, 2 and 3 in our paper. Implementation details are given in our response to Q2 of Reviewer WQAb.
Table1. ASE in dose response function estimation
|Method|ARDF|QRDF(0.1)|QRDF(0.25)|QRDF(0.5)|QRDF(0.75)|QRDF(0.9)|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|fDRE|0.106(0.026)|0.331(0.056)|0.127(0.029)|0.045(0.016)|0.088(0.021)|0.267(0.061)|
|RRND|0.117(0.022)|**0.307**(0.053)|0.125(0.026)|0.043(0.015)|0.090(0.021)|0.408(0.082)|
Table2. MAE in mutual information estimation for varying dimension $d$ and correlation coefficient $ρ$
|Method|$d=2,ρ=0.2$|$d=2,ρ=0.8$|$d=10,ρ=0.2$|$d=10,ρ=0.8$|$d=20,ρ=0.2$|$d=20,ρ=0.8$|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|fDRE|0.043(0.000)|0.907(0.048)|0.199(0.007)|4.283(0.014)|0.487(0.036)|8.710(0.059)|
|RRND|0.061(0.001)|0.926(0.006)|0.200(0.008)|5.092(0.031)|0.414(0.022)|10.241(0.089)|
Table3. NMSE under covariate shift adaptation
|Method|Abalone|Billboard Spotify|Cancer Mortality|Computer Activity|Diamond Prices|
|:-:|:-:|:-:|:-:|:-:|:-:|
|fDRE|0.586(0.136)|0.615(0.084)|0.664(0.064)|0.294(0.053)|0.233(0.078)|
|RRND|0.574(0.144)|0.616(0.087)|**0.655**(0.069)|0.279(0.051)|0.232(0.082)|
The results indicate that our ppDRE maintains superior performance in most cases. The fDRE method (mapping data with a flow model prior to KLIEP) shows its effectiveness compared to the naive KLIEP method with enhanced performance in most cases, but remains inferior to ppDRE. The RRND method proves its merit by outperforming the non-regularized kernel methods (uLSIF, KLIEP) in several instances and even achieves top performance in some cases (bolded); however, its overall performance is less stable, ultimately underperforms ppDRE.
[1] Choi et al. Featurized Density Ratio Estimation, 2021
[2] Duc Hoan Nguyen, Werner Zellinger, and Sergei Pereverzyev. On regularized Radon- Nikodym differentiation. Journal of Machine Learning Research, 25(266):1–24, 2024. | Summary: This paper introduces a novel projection pursuit (PP)-based method for density ratio estimation (DRE), a critical task in machine learning with applications in areas like causal inference and covariate shift adaptation. Addressing the limitations of parametric methods (potential bias) and non-parametric methods (curse of dimensionality), the proposed approach approximates the density ratio function as a product of functions, each representing a projection onto a low-dimensional space. These projections are iteratively estimated using semi-parametric single-index functions and a regularized empirical loss, leading to a consistent estimator with a demonstrable convergence rate. Experimental results showcase the method's superior performance compared to existing DRE techniques in various applications.
Claims And Evidence: Yes, the claims are made and supported by clear and convincing evidence
Methods And Evaluation Criteria: Yes, the proposed methods make sense for the problem.
Theoretical Claims: Yes, I have checked the proof of Proposition 3.1.
Experimental Designs Or Analyses: Yes, I have checked. All experiments are designed to support the theory. However, it would be better if the up-to-date methods are added, such as in a recent paper [1].
[1] Duc Hoan Nguyen, Werner Zellinger, and Sergei Pereverzyev. On regularized Radon-Nikodym differentiation. Journal of Machine Learning Research, 25(266):1–24, 2024.
Supplementary Material: Yes, I have reviewed Part A, Part B, and Part C.
Relation To Broader Scientific Literature: The proposed approach approximates the density ratio function as a product of functions, each representing a projection onto a low-dimensional space, to address the limitations of parametric methods (potential bias) and non-parametric methods (curse of dimensionality).
Essential References Not Discussed: To put the paper in the proper place, it would be reasonable to include in the discussion the recent publications such as
[1] Duc Hoan Nguyen, Werner Zellinger, and Sergei Pereverzyev. On regularized Radon-Nikodym differentiation. Journal of Machine Learning Research, 25(266):1–24, 2024.
[2] Elke R. Gizewski, Lukas Mayer, Bernhard A. Moser, Duc Hoan Nguyen, Sergiy Pereverzyev, Sergei V. Pereverzyev, Natalia Shepeleva, and Werner Zellinger. On a regularization of unsupervised domain adaptation in RKHS. Applied and Computational Harmonic Analysis, 57:201–227, 2022.
[3] Qichao Que, Mikhail Belkin. Inverse Density as an Inverse Problem: The Fredholm Equation Approach. Advances in Neural Information Processing System, 26 (2013).
Other Strengths And Weaknesses: Strengths: The authors provide the analysis of algorithms.
Other Comments Or Suggestions: 1. Line 163 in the right column of page 3: $k$ in the formula $\hat{f}_k (\hat{a}_k^Tx)$ should be $m$.
2. In proposition 3.1, $I_{J_k}$ is not defined.
Questions For Authors: 1. The main results are two estimations for $\hat{f}_{a,k}$ and $\hat{a}_k$, but there is no error bound for density ratio $\hat{r}$. Can the authors provide a bound for $\hat{r}$?
2. Which norm do the authors measure the difference between $\hat{a}_k - a_k$?
3. Can the authors indicate the norms in which space? For example, in line 128 in the right column on page 3, the authors use “$||a||_2$”, but $||.||_2$ is not defined. In the proof of Theorem 3.2 in Appendix D, the authors use $||.||$, the readers cannot know what space the norm is considered.
4. It will be beneficial if the authors add to the baseline in experiments the up-to-date method such as in [1]
[1] Duc Hoan Nguyen, Werner Zellinger, and Sergei Pereverzyev. On regularized Radon-Nikodym differentiation. Journal of Machine Learning Research, 25(266):1–24, 2024.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the constructive comments.
*Q1: Error bound for density ratio $\hat{r}$*
The main result in the original manuscript gives the convergence rates of $\hat{f}\_{a,k}$ and $a_l$ given the estimate $\hat{r}\_{k-1}$. Since our estimator $\hat{r}\_k$ is computed iteratively, and $\hat{r}\_0=r_0=1$, we can establish the rate of $\hat{r}\_k$ in an inductive way. We first suppose for $k=1,\ldots, K$, $n_p^{-1}\sum^{n_p}\_{i=1}\\{\hat{r}\_{k-1}(x_i^p)-r_{k-1}(x_i^p)\\}^2=O_p(\xi_{n,k-1})$ for some $\xi_{n,k-1}>0$. Using the arguments for establishing Theorem 3.2 in the original manuscript, we can establish the convergence rates:$$\sup_{a\in\mathcal{A}}\sup_{z\in\mathcal{Z}}|\hat{f}\_{a,k}(z)-f_{a,k}(z)|=O_p(J_k^{-s}\zeta_0(J_k)+\sqrt{\xi_{n,k-1}}\zeta_0(J_k)^2+\frac{\sqrt{J_k}\zeta_0(J_k)^2}{\sqrt{n_q\wedge n_p}})\tag{1}$$and $\\|\hat{a}\_k-a_k\\|=O_p(\\{J_k^{-(s-1)}+\sqrt{\xi_{n,k-1}}+\frac{\sqrt{J_k}}{\sqrt{n_q\wedge n_p}}\\}·\sqrt{\tilde{\zeta}\_1(J_k)})$
where $\tilde{\zeta}_1(J)$ is a sequence of constants such that the maximum eigenvalue of $\mathbb{E}[\Phi_k^{(1)}(a^\top x)\Phi_k^{(1)}(a^\top x)^\top]$ is bounded by $\tilde{\zeta}_1(J)$ uniformly in $a\in\mathcal{A}$.
We can then decompose$$\hat{f}\_{\hat{a}\_k,k}(\hat{a}\_k^\top x)-f_{a_k,k}(a_k^\top x)=\hat{f}\_{\hat{a}\_k,k}(\hat{a}\_k^\top x)-f_{\hat{a}\_k,k}(\hat{a}\_k^\top x)\tag{2}$$
$$\qquad+f_{\hat{a}\_k,k}(a_k^\top x)-f_{a_k,k}(a_k^\top x)\tag{3}$$
$$\qquad+f_{\hat{a}\_k,k}(\hat{a}\_k^\top x)-f_{\hat{a}\_k,k}(a_k^\top x)\tag{4}$$
For (2), we have $\sup_{x\in\mathcal{X}}|\hat{f}\_{\hat{a}\_k,k}(\hat{a}\_k^\top x)-f_{\hat{a}\_k,k}(\hat{a}\_k^\top x)|\leq\sup_{a\in\mathcal{A}}\sup_{x\in\mathcal{X}}|\hat{f}\_{a,k}(a^\top x)-f_{a,k}(a^\top x)|$.
For (3), under Assumption D.2. (b), we can use Taylor's expansion to expand $f_{\hat{a}\_k,k}$ around $f_{a_k,k}$ and obtain $\sup_{x\in\mathcal{X}}|f_{\hat{a}\_k,k}(a_k^\top x)-f_{a_k,k}(a_k^\top x)|=O_p(\\|\hat{a}\_k - a_k\\|)$.
For (4), we can first decompose it as$$(4) = [f_{a_k,k}(\hat{a}\_k^\top x) - f_{a_k,k}(a_k^\top x)] +[f_{\hat{a}\_k,k}(\hat{a}\_k^\top x) - f_{a_k,k}(\hat{a}\_k^\top x)] + [f_{a_k,k}(a_k^\top x)-f_{\hat{a}\_k,k}(a_k^\top x)].$$
The supremum norm of the first term can be bounded by $O_p(\\|\hat{a}\_k-a_k\\|)$, using Taylor's expansion expanding $f_{a_k,k}(\hat{a}\_k^\top x)$ around $f_{a_k,k}({a}\_k^\top x)$. The second and the third term can also be bounded by $O_p(\\|\hat{a}\_k-a_k\\|)$ using the same argument for (3). Thus, we have$$\sup_{x\in\mathcal{X}}|\hat{f}\_{\hat{a}\_k,k}(\hat{a}\_k^\top x)-f_{a_k,k}(a_k^\top x)|=O_p(\sup_{a\in\mathcal{A}}\sup_{x\in\mathcal{X}}|\hat{f}\_{a,k}(a^\top x)-f_{a,k}(a^\top x)|+\\|\hat{a}\_k-a_k\\|).$$
Noting that $\zeta_0(J_k)\leq J_k$, we then have$$\sup_{x\in\mathcal{X}}|\hat{f}\_{\hat{a}\_k,k}(\hat{a}\_k^\top x)-f_{a_k,k}(a_k^\top x)|=O_p([J_k^{-(s-1)}+\sqrt{\xi_{n,k-1}}+\sqrt{\frac{J_k}{n_q\wedge n_p}}]·[\sqrt{\tilde{\zeta}\_1(J_k)}\vee\zeta^2_0(J_k)]),$$
where $\sqrt{\tilde{\zeta}_1(J_k)}\vee \zeta^2_0(J_k)=\max[\sqrt{\tilde{\zeta}_1(J_k)},\zeta^2_0(J_k)]$.
Finally, using the fact that $\xi_{n,0}=0$, we can inductively derive that$$\sup_{x\in\mathcal{X}}|\hat{r}\_{K}(x)-r_{K}(x)|=O_{p}(\sum_{\ell=1}^K[\\{J_{\ell}^{-(s-1)}+\sqrt{\frac{J_{\ell}}{n_q\wedge n_p}}\\}·\Pi_{i=\ell}^K\\{\sqrt{\tilde{\zeta}\_1(J_i)}\vee\zeta^2_0(J_i)\\}]).$$
We will add this result to the paper.
*Q2\&3: Clarification of norms*
Thank you for your careful comments. We use the Euclidean norm to measure the distance between $\hat{a}\_k$ and $a_k$. To make the notation consistent, we will clearly define the Euclidean norm $\\| v\\|:=\sqrt{v^{\top}v}$ (without the subscript "2" shown in $\\|·\\|_2$ in the previous manuscript) for a general vector $v$, and we will apply it to both $\hat{a}\_k-a_k$ and $a$, i.e. $\\|\hat{a}\_k-a_k\\|$ and $\\| a\\|$.
*Q4: Adding Baseline from [1]*
We will incorporate the regularized kernel method proposed in [1] into our experimental comparisons. Please kindly refer to our response to Reviewer qyuE. Due to space limitation, we addressed this point there.
*Q5: Discussion of related work*
Thank you for your references. We will add the following discussion to the introduction:
> **Regularized Kernel Learning Methods**. Recently, regularization schemes within reproducing kernel Hilbert spaces (RKHS) have been developed for the DRE problem. [3] reformulated the DRE problem as an inverse problem in terms of an integral operator corresponding to a kernel, and then proposed a regularized estimation method with an RKHS-norm penalty. [1] established the pointwise convergence rate of the regularized estimator, taking into account both the smoothness of the density ratio and the capacity of the space in which it is estimated. [2] applied regularized kernel methods in the context of unsupervised domain adaptation under covariate shift and developed the corresponding convergence rates.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the rebuttal. The authors cleared up all my concerns. I am satisfied with the author's rebuttal and have no further questions.
---
Reply to Comment 1.1.1:
Comment: Thank you for your time and constructive feedback on our work. Your insights are greatly appreciated and have significantly contributed to enhancing the rigor and clarity of our manuscript. | Summary: Parametric methods for DRE are susceptible to bias when model assumptions are misspecified, whereas traditional non-parametric approaches often struggle with the curse of dimensionality in high-dimensional settings. To overcome these limitations, the authors suggest using a projection pursuit (PP) based approach to estimate density ratios. They prove consistency and convergence rates for their suggested estimator. Their method is experimentally evaluated on diverse tasks and compared to several baselines.
Claims And Evidence: Overall, the claims made in the paper are supported by theoretical and experimental evidence. Theorem 3.2 presents the theoretical findings, while Section 4 provides a detailed account of the experiments conducted on various datasets.
Methods And Evaluation Criteria: The proposed PP method is well-motivated for the DRE setting, with a clear intuitive rationale for its potential to enhance performance. The selection of benchmark datasets and evaluation metrics is appropriate and aligns with standard practices in assessing DRE methodologies.
Theoretical Claims: The results in Proposition 3.1 seem to be valid and skimming over the proof of Theorem 3.2 I did not find any inconsistencies.
Experimental Designs Or Analyses: The experimental setup appears to be well-structured for evaluating the stated claims. I could not find a detailed description of the hyperparameter selection process, including dataset splits, search methodology, and search grid specification, however.
Supplementary Material: I reviewed the theory section at a high level to understand the proofs. Additionally, the description of the datasets and baseline methods appears to be comprehensive.
Relation To Broader Scientific Literature: The primary contributions of this paper pertain to Projection Pursuit Density Estimation [1,2] and established DRE methods, which are utilized as baselines for comparison.
[1] Friedman et al. A projection pursuit density estimation, 1984
[2] Welling et al. Efficient Parametric Projection Pursuit Density Estimation, 2012
Essential References Not Discussed: To enhance the comprehensiveness of the study, it would be beneficial to discuss related approaches or incorporate them as baseline methods for comparison. For instance, the method proposed in [3] first maps the data to a low-dimensional latent space before performing DRE to mitigate the curse of dimensionality.
Additionally, it would be valuable to explore how the theoretical results presented in this manuscript compare to the convergence rates established in [4].
[3] Choi et al. Featurized Density Ratio Estimation, 2021
[4] Gruber et al. Overcoming Saturation in Density Ratio Estimation by Iterated Regularization, 2024
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: At first glance, PP for DRE seems to be an application of the PP algorithm conceptually very similar to how it is used in [1,2]. Could the authors emphasize the major differences/novelties compared to these approaches?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the insightful comments. Below, we address them point by point.
*Q1: Comparison with projection pursuit density estimation*
[1] and [2] focused on estimating a pdf $p(x)$. In contrast, we aim at estimating the ratio $r^*(x)=p(x)/q(x)$ between two probabilities densities, which is significantly different from [1] and [2] in problem formulation, estimation methods, and theoretical results. We will include these discussions in the revised manuscript.
- Comparison with [1]:
| | $\qquad\qquad\qquad\qquad$ [1] | $\qquad\qquad\qquad\qquad$ Ours |
| :-------: | :----------------------: |:----------------------: |
| Model | $p(x)\approx p_K(x)=f_0(x)\prod_{k=1}^Kf_k(\theta_k^{\top}x)$ | $r^*(x)\approx r_K(x)=\prod_{k=1}^Kf_k(a_k^{\top}x)$ |
| Identification | They iteratively identify $f_k(\theta_k^{\top}x)$ by minimizing the KL distance between $p_K$ and $p$ | We iteratively identify $f_k(a_k^{\top}x)$ by minimizing the $L^2$ distance between $r_K(x)$ and $r^*(x)$ |
| Estimation | Monte Carlo Sampling | For a fixed $a_k$, we derived a closed-form estimator of the pursuit function $f_k(·)$, which provides convenient computation of the estimator. |
| Theory | No theoretical justification | We developed convergence results for our proposed method |
- Comparison with [2]:
[2] proposed a parametric projected probabilistic product model:
$$p(x)=p(y,z)=\prod_{i=1}^{d-J}\mathcal{N}(y_i|0,1)\prod_{j=1}^J\mathcal{T}(z_j|\alpha_j),$$
where $y_i=v_i^{\top}x$ and $z_j=w_j^{\top}x$ are projections of $x$ onto $v_i$ and $w_j$ respectively, $\mathcal{N}(y_i|0,1)$ is the standard Gaussian distribution, and $\mathcal{T}(·|\alpha_j)$ is a univariate parametric distribution indexed by $\alpha_j$ such as the Student-t distribution. They further developed a sequential learning algorithm for this model. Note that, [2] employs a fully parametric approach, distinct from both [1] and our work which require estimation of unknown pursuit functions. Again, no theoretical results are provided in [2].
*Q2: Hyperparameter selection process*
We will add a detailed description in the paper. Please kindly refer to our response to Q2 of reviewer WQAb. Due to space limitation, we addressed this point there.
*Q3: Discussion of featurized DRE*
We will incorporate the fDRE method as a baseline (kindly refer to our response to reviewer qyuE for experimental results) and discuss it in the paper. We would like to clarify that [3] neither maps the data to a low-dimensional latent space nor addresses the curse of dimensionality that our paper focuses on. In fact, it tackles a different DRE challenge posed by significant distributional divergence between $p(x)$ and $q(x)$. To address this issue, the authors proposed an invertible parametric transform $f_{\theta}:\mathbf{R}^{d}\to\mathbf{R}^{d}$ such that the transformed densities become closer. Then $r^*(x)$ is estimated based on:
$$r^*(x)= \frac{p(x)}{q(x)}=\frac{p'(f_{\theta}(x))}{q'(f_{\theta}(x))},$$ where $p'$ and $q'$ are densities of transformed data $f_{\theta}(x_p)$ and $f_{\theta}(x_q)$ respectively. Note that, the invertible transformation preserves the original data's dimensionality.
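The identity above holds because the Jacobian factors introduced by the invertible transform cancel in the ratio. A quick numerical check with 1-D Gaussians and an assumed affine map (chosen only for illustration) makes this visible:

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical invertible map f(x) = 2x + 1 with constant Jacobian |f'(x)| = 2.
f = lambda x: 2.0 * x + 1.0
f_inv = lambda y: (y - 1.0) / 2.0

def pushforward_pdf(y, mu, sigma):
    # Change of variables: p'(y) = p(f^{-1}(y)) / |f'(f^{-1}(y))|
    return gauss_pdf(f_inv(y), mu, sigma) / 2.0

# The density ratio is invariant: p'(f(x)) / q'(f(x)) == p(x) / q(x).
for x in [-1.0, 0.0, 0.5, 2.0]:
    ratio_original = gauss_pdf(x, 0.0, 1.0) / gauss_pdf(x, 1.0, 1.5)
    ratio_transformed = pushforward_pdf(f(x), 0.0, 1.0) / pushforward_pdf(f(x), 1.0, 1.5)
    assert math.isclose(ratio_original, ratio_transformed, rel_tol=1e-12)
```

The same cancellation holds for any smooth invertible $f_{\theta}$, which is exactly why fDRE can reparametrize the problem without changing the target ratio, and also why it preserves the original data's dimensionality.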
*Q4: Theoretical results comparison with DRE with iterated regularization*
We will add a discussion on the theoretical results established in [4]. [4] proposed an estimation method for the density ratio $r^*(x)$ by minimizing the Bregman distance based on a RKHS $\mathcal{H}$ with iterated regularization:
$$r^{\lambda,t+1}:=\arg\min_{r\in\mathcal{H}} B(r^*(x),r)+\frac{\lambda}{2}\|r-r^{\lambda,t}\|^2.$$
with $r^{\lambda,0}=0$. Their key theoretical result is an improved error bounds faster than the non-iterated error bound $(n_p+n_q)^{-1/3}$, under the Bregman distance and certain regular conditions (e.g. source condition and capacity condition). In contrast, we establish the convergence rates (after $k$th iteration) for our proposed estimator under sup-norm:
$$\sup_{x\in\mathcal{X}}|\hat{r}\_{k}(x)-r_{k}(x)|=O_{p}(\sum_{\ell=1}^k[\\{J_{\ell}^{-(s-1)}+\sqrt{\frac{J_{\ell}}{n_q\wedge n_p}}\\}·\Pi_{i=\ell}^k\\{\sqrt{\tilde{\zeta}\_1(J_i)}\vee\zeta^2_0(J_i)\\}]),$$where $s$ is the smoothness of the pursuit functions $\\{f_j\\}\_{j=1}^k$. Under sufficient conditions, our rate can also achieve $o_P(n_p+n_q)^{-1/3}$. However, as the function spaces, approximation space, and distance metrics considered in [4] and our paper are completely different, it is difficult/meaningless to explicitly compare the two theoretical results based on the same standard. | null | null | null | null | null | null |
Improved Theoretically-Grounded Evolutionary Algorithms for Subset Selection with a Linear Cost Constraint | Accept (poster) | Summary: The manuscript presents an advanced study on subset selection problems, which are prevalent in various fields such as machine learning, operations research, and economics. The authors focus on subset selection under a linear cost constraint, a problem characterized by its NP-hardness and practical importance. The paper introduces an improved theoretically-grounded evolutionary algorithm (EA), enhancing the performance of the Pareto Optimized Multi-Criteria (POMC) algorithm and proposing a novel multi-objective EA, named EPOL.
Claims And Evidence: Yes, this paper provides complete proofs for all lemmas.
Methods And Evaluation Criteria: Yes, this work deepens the theoretical understanding of EAs.
Theoretical Claims: Yes, I have checked the proofs of Lemmas 3.2 and 3.3.
Experimental Designs Or Analyses: Yes, the experimental designs and analyses are abundant.
Supplementary Material: Yes, I have read the supplementary material with more experiments.
Relation To Broader Scientific Literature: Yes; there remains a gap in the approximation bounds of EAs compared to greedy algorithms, and their full theoretical potential is yet to be realized.
Essential References Not Discussed: This paper provides a theoretical understanding of EAs that has not been discussed in the previous literature.
Other Strengths And Weaknesses: Strengths:
The paper makes a significant theoretical contribution by improving the approximation guarantee of the POMC algorithm to 1/2, which is a substantial advancement in the field.
The proposed EPOL algorithm is innovative and achieves the best-known practical approximation guarantee of 0.6174, which is a notable empirical performance.
This work not only advances the subset selection problem but also deepens the theoretical understanding of EAs.
The writing is clear and the mathematical formulation is precise, making the paper accessible to readers familiar with evolutionary algorithms and optimization.
Weaknesses:
The manuscript would benefit from a more detailed comparison with other state-of-the-art algorithms in the literature, especially in terms of computational complexity and scalability.
The discussion on the limitations of the proposed methods is somewhat brief. A more in-depth analysis could provide insights into potential improvements or alternative applications.
I understand that this work is mainly a theoretical contribution, but if we can compare more SOTA algorithms, we can further demonstrate the superiority of the algorithms.
Other Comments Or Suggestions: None
Questions For Authors: How does the proposed EPOL algorithm scale with the size of the problem (e.g., larger ground sets or higher-dimensional feature spaces)? Could the authors provide some insights or preliminary results on scalability?
Are there any parameters in the EPOL algorithm that require fine-tuning? If so, how sensitive are the results to these parameters?
The empirical study focuses on maximum coverage and influence maximization. Would the authors consider extending the study to other relevant problems such as budgeted submodular optimization?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your positive feedback and for recognizing the strengths of our work. We appreciate your kind words regarding our contributions and writing. Please find our detailed responses below.
---
> I understand that this work is mainly a theoretical contribution, but if we can compare more SOTA algorithms, we can further demonstrate the superiority of the algorithms.
Thank you for your valuable suggestion. To the best of our knowledge, we have already included all state-of-the-art algorithms relevant to our problem setting in our comparisons. Nonetheless, we appreciate your perspective and will explore further comparisons in future work as more methods become available. Thank you.
---
> How does the proposed EPOL algorithm scale with the size of the problem (e.g., larger ground sets or higher-dimensional feature spaces)? Could the authors provide some insights or preliminary results on scalability?
Thank you for your insightful comment. The proposed algorithm is designed to be highly parallelizable, which makes it well-suited for larger-scale problems. Thanks to your suggestion, we compare EPOL against three comparison algorithms—namely, the baseline GGA, the high-performance greedy algorithm 1-guess-Greedy$^+$, and POMC—using a larger network dataset email-Eu-core (1005 vertices, 25,571 edges) on the maximum coverage problem with $q = 5$. For each budget $B$, we calculated the objective values (average $\pm$ std) and highlighted the best performance in bold. Additionally, a ‘$\bullet$’ indicates that EPOL significantly outperforms the corresponding algorithm, as confirmed by a Wilcoxon signed-rank test (confidence level 0.05). Consistent with the results reported in the paper, EPOL significantly outperforms other algorithms. Thank you again for your valuable feedback.
| Budget $B$ | $300$ | $350$ |
| :----------------: | :-----------------------: | :-----------------------: |
| GGA | 525 $\bullet$ | 575 $\bullet$ |
| 1-guess-Greedy$^+$ | 528 $\bullet$ | 578 $\bullet$ |
| POMC | 531.3 $\pm$ 1.3 $\bullet$ | 581.3 $\pm$ 0.9 $\bullet$ |
| EPOL | **532.8 $\pm$ 0.4** | **582.5 $\pm$ 0.5** |
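The significance marks in the table come from a paired Wilcoxon signed-rank test at the 0.05 level. A minimal sketch of such a test on hypothetical per-run objective values (the numbers below are illustrative, not the paper's raw runs):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired per-run objective values for two stochastic algorithms
# on the same instances; larger is better for maximum coverage.
epol = np.array([532.8, 533.1, 532.4, 533.0, 532.6, 532.9, 532.7, 533.2, 532.5, 532.8])
pomc = np.array([531.3, 531.9, 530.8, 531.5, 531.2, 531.7, 531.0, 531.8, 531.4, 531.6])

stat, p_value = wilcoxon(epol, pomc)  # two-sided test on paired differences
significant = p_value < 0.05          # reject H0 of symmetric-about-zero differences
```

Deterministic baselines such as GGA produce a single value per instance, so the pairing is against that constant value across the stochastic algorithm's runs.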
---
> Are there any parameters in the EPOL algorithm that require fine-tuning? If so, how sensitive are the results to these parameters?
We would like to clarify that the design of the EPOL algorithm inherently avoids the need for fine-tuning of parameters. This feature contributes to its robust and consistent performance across various settings. Thank you.
---
> The empirical study focuses on maximum coverage and influence maximization. Would the authors consider extending the study to other relevant problems such as budgeted submodular optimization?
Thank you for your insightful comment. While our empirical study has primarily focused on maximum coverage and influence maximization, our methodology is inherently adaptable to budgeted submodular optimization problems in scenarios where cost-aware resource allocation and the diminishing returns property of objective functions are central concerns. For instance: sensor placement, which balances information gain with installation costs (Krause et al., 2006); recommendation systems, which promote products within advertising budgets and respect user preferences (Ashkan et al., 2015); data summarization, which maximizes information retention under computational resource constraints (Lin & Bilmes, 2011); active learning, which selects maximally informative data samples under limited annotation budgets (Golovin & Krause, 2011); and human-assisted learning, which optimizes machine learning models with limited expert resources (De et al., 2020, 2021). As suggested by Reviewer vZX8, we will revise the paper to include more discussions on the applications of the studied problem, thereby improving its visibility. Thank you very much.
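As a point of reference for these budgeted applications, the classic cost-benefit greedy heuristic (the idea behind the GGA baseline, shown here in a simplified form that omits GGA's extra comparison with the best single affordable element) can be sketched as:

```python
def budgeted_greedy_coverage(sets, costs, budget):
    """Cost-benefit greedy for budgeted maximum coverage: repeatedly add the
    set with the best marginal-coverage-to-cost ratio that still fits."""
    covered, chosen, spent = set(), [], 0.0
    while True:
        best, best_ratio = None, 0.0
        for i, s in enumerate(sets):
            if i in chosen or spent + costs[i] > budget:
                continue
            gain = len(s - covered)  # marginal coverage gain of adding set i
            if gain > 0 and gain / costs[i] > best_ratio:
                best, best_ratio = i, gain / costs[i]
        if best is None:
            break
        chosen.append(best)
        covered |= sets[best]
        spent += costs[best]
    return chosen, covered

# Toy instance with made-up sets, costs, and budget.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
costs = [2.0, 1.0, 3.0, 1.0]
chosen, covered = budgeted_greedy_coverage(sets, costs, budget=4.0)
```

The evolutionary methods discussed in the paper (POMC, EPOL) improve on this kind of greedy baseline by maintaining a population of cost/value trade-offs rather than committing to a single greedy trajectory.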
Reference:
Krause, A., et al. "Near-optimal sensor placements: Maximizing information while minimizing communication cost." Proc. IPSN, 2006.
Ashkan, A., et al. "Optimal Greedy Diversity for Recommendation." Proc. IJCAI, 2015.
Lin, H., and Bilmes, J. "A class of submodular functions for document summarization." Proc. ACL, 2011.
Golovin, D., and Krause, A. "Adaptive submodularity: Theory and applications in active learning and stochastic optimization." J. AI Research, 2011.
De, A., et al. "Regression under human assistance." Proc. AAAI, 2020.
De, A., et al. "Classification under human assistance." Proc. AAAI, 2021. | Summary: This paper contributed to 1) analyzing the existing approach for submodular optimization with a linear inequality constraint, and 2) proposing a novel approach with a better approximation guarantee and better practical performance. This paper first analyzed the existing approach, POMC, an evolutionary algorithm that maintains a population of non-dominated solutions where the constraint is treated as the second objective. The analysis of this paper improves the existing theoretical guarantee by a constant factor (from 0.5(1 - 1/e) to 0.5). The rigorous proof is provided in the main text, followed by a description of the difference in the proof from the proof of the existing result. Then, the authors propose a novel approach, EPOL, which simply repeats running POMC for subproblems that are constructed by removing each element. The theoretical analysis revealed the improved theoretical guarantee (from 0.5 to 0.61...). The empirical study shows that the proposed approach is not only theoretically improved, but also performs better than existing approaches over different test problems.
## update after rebuttal
All my concerns have been addressed during the rebuttal phase. I keep my initial score.
Claims And Evidence: Their claims, improving theoretical guarantee of EAs for subset selection problem with a monotone and submodular objective function under a linear cost constraint as well as improving their practical performance, are supported by rigorous theoretical results and thorough experimental results.
Methods And Evaluation Criteria: Methodologies and Experimentation looks valid. However, honestly, as I am not familiar with this specific problem class, I am not fully sure about my assessment.
Theoretical Claims: I did check the proof and didn’t find any flaw.
Experimental Designs Or Analyses: Empirical results only show the final performance. It seems that different algorithms spent different iterations / function calls / time. Ideally, to compare the performance, the budget (iterations/f-calls/time, etc.) should be the same for all algorithms. Low performance algorithms with fast convergence may be trivially improved by incorporating a restart mechanism.
As I am not familiar with this topic itself, I am not sure whether the experiments done in this paper meet the standard of the community.
Supplementary Material: I checked the proofs.
Relation To Broader Scientific Literature: The discussion about the potential real-world applications would improve the visibility of this paper.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
- Improved theoretical results with rigorous and non-trivial proofs
- Strong empirical results showing superior performance of the proposed approach over different baselines on different problems
Weaknesses:
- The theoretical guarantee doesn't seem to match the practical performance; hence, an improvement in the theoretical guarantee does not necessarily imply an improvement in practice, which makes its value a bit questionable.
- Empirical results only show the final performance. It seems that different algorithms spent different iterations / function calls / time. Ideally, to compare the performance, the budget (iterations/f-calls/time, etc.) should be the same for all algorithms. Low performance algorithms with fast convergence may be trivially improved by incorporating a restart mechanism.
Other Comments Or Suggestions: The sizes of parentheses could be reconsidered for better presentation.
Questions For Authors: Please read the “Other Strengths And Weaknesses” part.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your positive feedback and encouraging validation of our work's strengths. Please find our detailed responses below.
---
> The discussion about the potential real-world applications would improve the visibility of this paper.
We agree that a discussion of potential real-world applications would enhance the impact and relevance of this paper. The problem studied in this paper is highly relevant to domains where cost-aware resource allocation and the diminishing-returns property of objective functions play a key role. It has applications in diverse areas, e.g., combinatorial optimization, computer networks, data mining, and machine learning. Beyond the maximum coverage and influence maximization examined in this paper, potential applications include sensor placement, balancing information gain with installation costs (Krause et al., 2006); recommendation systems, promoting products within advertising budgets while respecting user preferences (Ashkan et al., 2015); data summarization, maximizing information retention under computational resource constraints (Lin & Bilmes, 2011); active learning, selecting maximally informative data samples under limited annotation budgets (Golovin & Krause, 2011); and human-assisted learning, optimizing machine learning models with limited expert resources (De et al., 2020, 2021). We appreciate your suggestion and will incorporate these discussions into the paper.
Reference:
Krause, A., et al. "Near-optimal sensor placements: Maximizing information while minimizing communication cost." Proc. IPSN, 2006.
Ashkan, A., et al. "Optimal Greedy Diversity for Recommendation." Proc. IJCAI, 2015.
Lin, H., and Bilmes, J. "A class of submodular functions for document summarization." Proc. ACL, 2011.
Golovin, D., and Krause, A. "Adaptive submodularity: Theory and applications in active learning and stochastic optimization." J. AI Research, 2011.
De, A., et al. "Regression under human assistance." Proc. AAAI, 2020.
De, A., et al. "Classification under human assistance." Proc. AAAI, 2021.
---
> Empirical results only show the final performance. It seems that different algorithms spent different iterations /function calls / time. Ideally, to compare the performance, the budget (iterations/f-calls/time, etc.) should be the same for all algorithms. Low performance algorithms with fast convergence may be trivially improved by incorporating a restart mechanism.
Thank you for your comment. We fully agree, and have in fact tried to make the comparison fair. The algorithms in our study fall into two categories: fixed-time algorithms such as GGA, Greedy$^+$, and 1-guess Greedy$^+$, with runtime complexities of $O(nK_B)$, $O(nK_B)$, and $O(n^2K_B)$, respectively, and anytime algorithms such as POMC, EAMC, FPOMC, and EVO-SMC, whose performance improves with increased runtime. To ensure a fair comparison, we allocated the same number of objective function evaluations ($20nK_B$) to all anytime algorithms. EPOL decomposes the original problem into $K_B$ subproblems and solves them in parallel using POMC, allocating a budget of $20nK_B$ evaluations to each subproblem. Furthermore, we also compared EPOL with a variant of POMC, called P-POMC, as described in Appendix B.4. P-POMC directly runs the original problem on $K_B$ parallel processors, where each processor is also allocated a budget of $20nK_B$ evaluations. Our experimental settings align with prior work in the literature [Bian et al., 2020; Roostapour et al., 2022; Zhu et al., 2024]. It is worth noting that fixed-time algorithms (GGA, Greedy$^+$, and 1-guess Greedy$^+$) cannot benefit from restart mechanisms, as they always return the same solution for a given input. We will revise the paper to make this clear. Thank you very much.
---
> The theoretical guarantee doesn't seem to match the practical performance, hence the improvement in theoretical guarantee does not necessarily imply the improvement in practice, hence its value is a bit questionable.
Good point! Yes, the theoretical guarantee of an algorithm may not match its practical performance. This is because the theoretical guarantee bounds the approximation performance in the worst case, which may differ from the tested real-world cases. Thus, it is possible that two algorithms with the same theoretical guarantee demonstrate different performance in practice. However, a good theoretical guarantee is still important: it guarantees the worst-case performance of an algorithm and characterizes its robustness. Thus, a good algorithm should have both a good theoretical guarantee and good empirical performance. Our proposed algorithm EPOL has this property: it not only achieves the best-known practical theoretical guarantee, but also demonstrates empirical advantages across different settings. We will revise the paper to add more explanation and make this clear. Thank you very much for your suggestion.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. The improved approximation guarantee of the existing algorithm POMC and the approximation guarantee of the newly proposed algorithm EPOL are proved mathematically and correctly. The best empirical performance of EPOL has been clearly shown by extensive experiments. The authors compared EPOL with a large number of previous methods on the studied subset selection problem.
Methods And Evaluation Criteria: The studied problem of subset selection with a linear cost constraint, where the objective function is monotone submodular, has wide applications and has been studied extensively. This work improves the approximation guarantee of a previous algorithm (with the best observed performance in experiments) and proposes a new better algorithm both theoretically and empirically. The theoretical analysis is rigorous, and the experiments use common benchmark data sets and compare all state-of-the-art works. I think the proposed methods and/or evaluation criteria make sense.
Theoretical Claims: I have checked all the proofs (including the proof of a lemma in the appendix) carefully. The improved approximation guarantee of POMC mainly lies in a tighter analysis of the objective improvement by adding a single item. The proof is mainly accomplished by iteratively adding proper single items. The approximation bound of EPOL is proved by considering the residual problem corresponding to the item having the largest objective value in the optimal solution, which must be generated by enumerating all residual problems.
I almost did not find any mistakes. The proofs are presented clearly. I have only some minor questions or suggestions.
- page 5, first column, line 248, I suggest using $J \geq i+c(v^*)$, to be consistent with the presentation in cases (2) and (3).
- page 5, second column, “This implies that $J_{\max}$ has already exceeded $i$” I suggest adding more explanations. Why the formulas in cases (1)-(3) satisfy Equation (6) with $i+c(v^*)$?
- page 5, second column, “After including … $J_{\max} \geq i+c(v^*)$” Did you consider $i+c(v^*) > B-c(o_c)$, which will contradict with $ J_{\max} \leq B-c(o_c)$?
Experimental Designs Or Analyses: Yes, I checked the experimental parts. The proposed algorithm is compared with a series of existing algorithms for the studied subset selection problem, including greedy and evolutionary algorithms. Two applications of subset selection with various parameter settings are tested. The authors performed a significance test, showing that the proposed EPOL algorithm is significantly better in most cases, and never significantly worse. They also conducted some ablation studies, e.g., comparing with multiple runs of the POMC algorithm on the original problem.
Supplementary Material: Yes, but I just have a glance at the code provided in the supplementary material.
Relation To Broader Scientific Literature: The studied problem, subset selection, has wide applications across various areas such as machine learning, data mining, and computer networks. This paper proposes a new evolutionary algorithm with better theoretical guarantees and empirical performance. Another contribution of this work is improving the approximation bound of the existing evolutionary algorithm (which performed the best empirically for the subset selection problem). As far as I can tell, this work will bring some influence to the topic of both subset selection and evolutionary computation.
Essential References Not Discussed: The related works are properly cited.
Other Strengths And Weaknesses: Overall, this is a solid work with both theoretical analysis and empirical evaluation. It makes a good contribution to solving the significant problem of subset selection.
Other Comments Or Suggestions: The paper is well written. Some minor issues:
- page 1, second column, line 38-40, “further optimizes this approach” -> “further improves this approach”
- page 2, first column, line 107, “objective” -> “objective evaluation”
- page 4, second column, line 183, “Lemma 3.2” -> “The proof of Lemma 3.2”
- page 4, the function $z$ in Theorem 3.1 is defined in Lemma 3.4 after its first appearance. I suggest giving the definition first.
- page 7, first column, $/N\delta$ -> $/(N\delta)$
- page 7, first column, line 374, budget $B$ -> the budget $B$
- page 8, first column, line 417-419, “search” -> “searches” “maintain” -> “maintains”
- page 8, second column, line 400, “, additional” -> “. Additional”
- page 8, second column, line 407, “performs best” -> “performs the best” Please check throughout the paper, e.g., the same issue in page 12.
- page 8, second column, line 413, “, shows” -> “, which shows”
- page 9, first column, line 449, “EA POMC” -> “EA, POMC”
- page 11, line 573, “Combing” -> “Combining”
- page 11, line 587, “,and” -> “, and”
- page 12, “conclude those” ?
- caption of Table 10, “the probability of each 0.05” ?
- page 16, line 831, “the optimal objective value” -> “the best objective value”
Questions For Authors: 1. In the introduction part, you mentioned “While there are greedy algorithms that obtain the theoretical optimal approximation of 1 − 1/e, they are generally impractical due to high computational costs” I’d like to see more explanations. Why are these algorithms impractical?
2. The proposed EPOL algorithm enumerates all the $n$ residual problems by excluding each single item. But in the experiments, only a subset of residual problems corresponding to the items with large $f$ values are enumerated. The authors did not give any explanation. I can understand this setting is sufficient to show the superiority of the proposed EPOL as the implemented version is weaker. But I still would like to see the performance of the full version of EPOL. Can it bring further improvement?
3. Why is the objective evaluation noisy for the application of influence maximization?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thorough review and valuable suggestions. We sincerely appreciate your feedback, which will help us improve our manuscript. Below are our detailed responses to your queries.
---
> ... more explanations. Why the formulas in cases (1)-(3) satisfy Equation (6) with $i +c(v^*)$?
Thank you for your suggestion. We will include a more detailed explanation after analyzing cases (1)–(3) as follows:
By combining the inequalities derived from cases (1)–(3), we can conclude that the new solution $X' = X \cup v^*$ satisfies:
$f(X') \geq \left(1 - e^{-\frac{\min\\{i+c(v^*), J\\}}{B-c(o_c)}}\right) \cdot \left(f(X^*) - f(o_c)\right) + \frac{\max\\{i+c(v^*)-J, 0\\}}{B-c(o_c)} \cdot z(r) \cdot f(X^*)$.
Additionally, the cost of solution $X'$ satisfies $c(X') = c(X) + c(v^*) \leq i + c(v^*)$. Consequently, the solution $X'$ must be included in $P$; otherwise, $X'$ must be dominated by one solution in $P$ (line 6 of Algorithm 1). This implies that $J_{max}$ has already exceeded $i$, contradicting the assumption that $J_{max} = i$. Hence, after including $X'$, we have $J_{max} \geq i + c(v^*)$.
---
> ... Did you consider $i+c(v^*)>B-c(o_c)$, which will contradict with $J_{max}\leq B-c(o_c)$?
Yes, we did consider this case. On page 5, second column, starting from line 251, we addressed the situation where the inclusion of the new solution $X'$ causes $J_{max} + c(v^*)$ to exceed $B - c(o_c)$, i.e., $J_{max} \leq B - c(o_c) < J_{max} + \delta \leq J_{max} + c(v^*)$. In this situation, we can deduce that $X'$ is a $(1 - z(r))$-approximation solution. We will further clarify this point in the revised manuscript. Thank you.
---
> “... greedy algorithms that obtain the theoretical optimal approximation of 1 − 1/e” ... Why are these algorithms impractical?
[Sviridenko, 2004] developed a $(1-1/e)$-approximation algorithm by selecting three optimal elements and using a greedy approach, but it has a high time complexity of $O(n^5)$. Later, [Badanidiyuru & Vondrák, 2014] and [Ene & Nguyen, 2019] proposed greedy-based algorithms that achieve a $(1-1/e-\epsilon)$-approximation. However, their computational costs remain prohibitively high, with time complexities of $O(n^2(\frac{\log n}{\epsilon})^{O(1/\epsilon^8)})$ and $(1/\epsilon)^{O(1/\epsilon^4)}n\log^2n$, respectively. These complexities render the algorithms impractical for real-world applications. These details will be clarified in the introduction. Thank you.
---
> ...only a subset of residual problems corresponding to the items with large values are enumerated. ... But I still would like to see the performance of the full version of EPOL. Can it bring further improvement?
In our experiments, we enumerated a subset of residual problems to balance computational efficiency and performance. We agree that evaluating the full version of EPOL is both interesting and necessary. Thanks to your suggestion, we conducted additional experiments comparing the original EPOL (as in the manuscript) and the full version (EPOL-full), which enumerates all residual problems. For $q = 5$, the objective values (avg $\pm$ std) on maximum coverage for three datasets are summarized in the tables. For each budget $B$, the larger value is bolded, and ‘$\bullet$’ indicates that EPOL-full significantly outperforms EPOL (Wilcoxon signed-rank test, confidence level 0.05). The results highlight EPOL-full’s potential to improve performance by addressing all residual problems, consistently outperforming EPOL with significant advantages in several cases.
We will revise the manuscript to include these results and discussions. Thank you for your valuable feedback.
| Budget $B$| $300$| $350$ | $400$ | $450$|$500$ |
| - | :--: | :--: | :--: | :--: | :--: |
| **frb-30-15-1:** |
|EPOL|301.1 $\pm$ 1.0 $\bullet$| 329.7 $\pm$ 0.5 $\bullet$ |354.4 $\pm$ 0.9|375.2 $\pm$ 1.5 $\bullet$|394.1 $\pm$ 1.8|
|EPOL-full|**302.1 $\pm$ 0.8**|**330.9 $\pm$ 0.3**|**354.8 $\pm$ 0.4**|**377.3 $\pm$ 1.3**|**395.2 $\pm$ 1.1**|
| **frb-35-17-1:** |
|EPOL|319.1 $\pm$ 0.8|356.6 $\pm$ 0.9 $\bullet$|389.8 $\pm$ 0.6 $\bullet$|419.0 $\pm$ 0.6| 445.7 $\pm$ 0.6 $\bullet$|
|EPOL-full|**319.8 $\pm$ 0.4**|**357.9 $\pm$ 0.3**|**390.6 $\pm$ 0.5**|**419.4 $\pm$ 0.3**|**446.5 $\pm$ 0.5**|
| **congress:** |
|EPOL|332.8 $\pm$ 0.4|358.0 $\pm$ 0.6 $\bullet$|381.2 $\pm$ 0.6|399.9 $\pm$ 0.7|415.5 $\pm$ 0.5 $\bullet$|
|EPOL-full|**333.4 $\pm$ 0.5**|**359.0 $\pm$ 0.6**|**381.4 $\pm$ 0.7**|**400.2 $\pm$ 1.0**|**416.1 $\pm$ 0.5**|
---
> Why is the objective evaluation noisy for the application of influence maximization?
The objective evaluation for influence maximization, i.e., $E[|IC(X)|]$, is noisy because the propagation process is randomized, and we use the average of multiple Monte Carlo simulations to estimate the expectation. Specifically, starting from a solution $X$, we simulate the propagation 500 times and use the average as the estimated objective value. We will revise to make it clear. Thank you.
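The noisy-evaluation procedure described here (averaging repeated simulations of the random propagation) can be sketched as a Monte Carlo estimator. The independent cascade model and all names below are illustrative assumptions, not the paper's exact code:

```python
import random

def estimate_influence(seeds, graph, p=0.1, runs=500, seed=0):
    """Monte Carlo estimate of E[|IC(X)|]: each newly activated node tries
    once to activate each inactive out-neighbour with probability p; the
    spread is averaged over `runs` independent simulations."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        active = set(seeds)                    # nodes activated so far
        frontier = list(seeds)
        while frontier:                        # propagate one step at a time
            new = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        new.append(v)
            frontier = new
        total += len(active)
    return total / runs
```

With 500 runs, as in the rebuttal, the average is a reasonably stable estimate of the expected spread, though each evaluation remains noisy.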
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response!
My concerns have been addressed well. I will keep my score, and recommend accepting this paper. | Summary: This paper addresses monotone submodular maximization under a linear cost constraint using evolutionary algorithms. It reanalyzes the Pareto Optimization Algorithm for Monotone Cost functions (POMC), providing an improved approximation ratio of $1/2$. Additionally, the authors propose a novel multi-objective evolutionary algorithm, EPOL, which achieves the best-known practical approximation ratio of $0.6174$. Empirical evaluations on maximum coverage and influence maximization tasks demonstrate the superior performance of EPOL compared to existing methods.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence
Methods And Evaluation Criteria: Proposed methods and evaluation criteria make sense for the problem or application at hand.
Theoretical Claims: No apparent issue found.
Experimental Designs Or Analyses: Empirical evaluation is conducted on maximum coverage and influence maximization with six datasets. The results are sound.
Supplementary Material: I have reviewed the supplementary material.
Relation To Broader Scientific Literature: The paper studies a fundamental combinatorial optimization problem: monotone submodular maximization under knapsack constraints. It improves the state-of-the-art approximation ratio achieved by evolutionary algorithms (EAs) from $(1-1/e)/2$ to $0.6174$, marking a significant advancement in the field.
Essential References Not Discussed: No
Other Strengths And Weaknesses: In this work, the authors re-analyze POMC and demonstrate that it achieves a $1/2$ approximation ratio. By repeatedly running POMC with a 1-guess technique, this approximation ratio is further improved to $0.6174$. However, the technical contribution of the paper appears to be somewhat limited, as the primary advancements rely on modifications and repeated applications of existing methods rather than introducing fundamentally new algorithmic ideas.
Moreover, the algorithm sections consist primarily of lengthy proofs without sufficient discussion of the high-level ideas. This makes it challenging for readers to grasp the intuition behind the proposed methods. The presentation of the paper could be significantly improved by including more intuitive explanations and balancing technical details with conceptual insights.
Other Comments Or Suggestions: - A linear cost constraint is typically referred to as a knapsack constraint in the literature.
- The paper does not provide a clear definition of what a multi-objective EA is in the introduction or preliminary.
- The core idea behind improving the approximation ratio of POMC should be introduced at a high level at the beginning of Section 3 to provide readers with a clear understanding of the approach before delving into technical details.
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. We greatly appreciate your time and thoughtful feedback. Below are our detailed responses.
---
> The core idea behind improving the approximation ratio of POMC should be introduced at a high level at the beginning of Section 3 ...
> Moreover, the algorithm sections consist primarily of lengthy proofs without sufficient discussion of the high-level ideas...
Thank you for your suggestion. In the original paper, we included a high-level explanation of the core idea behind improving POMC's approximation ratio following the proof of Theorem 3.1 (Page 5, left column, lines 307–316). Previous analysis of POMC used a coarse-grained manner to evaluate the lower bound for improving $f(X)$. This led to POMC being able to derive a solution that satisfies $f(X_1) \geq (1 - 1/e)\cdot f(X^*)$, which is, however, infeasible, i.e., $c(X_1) > B$. A connection was then established between $X_1$ and two feasible solutions $X$ and $Y$, demonstrating that $\max\\{f(X), f(Y)\\} \geq f(X_1)/2 \geq (1/2)(1 - 1/e)\cdot f(X^*)$. This weakens the tightness of the bound. In contrast, our analysis adopts a fine-grained approach to evaluate the lower bound for improving $f(X)$ (the analysis in Lemma 3.3 and the careful design of $J_{max}$ in Lemma 3.2), showing that POMC can find a feasible solution $X_2$ with $f(X_2) \geq (1/2) \cdot f(X^*)$ and $c(X_2) \in (B - c(o_c), B]$. Thanks to your suggestion, we will move this discussion to the beginning of Section 3, which will provide readers with a clearer conceptual overview before delving into technical details. Thank you very much.
To ensure clarity and readability, we have included brief explanations of each theorem and lemma, along with proof sketches:
- Page 4, left column, lines 198–203: Connection between Theorem 3.1 and Lemma 3.2; Lemma 3.2's role in proving Theorem 4.1.
- Page 4, left column, lines 211–219: Relationships between Lemma 3.2 and Lemmas 3.3–3.4; their importance in proofs.
- Page 4, right column, lines 182–189: Core proof idea of Lemma 3.2: Relies on Lemma 3.3 to analyze $J_{max}$ growth and expected iterations.
- Page 5, left column, lines 279–283: Proof sketch for Theorem 3.1.
- Page 5, left column, lines 307–316: Main idea for improving POMC’s approximation ratio.
- Page 5, right column, lines 293–299: Proof sketch for Theorem 4.1.
We believe these explanations provide readers with the necessary intuition to better understand the technical details. Thanks to your suggestion, we will revise to add more intuitive explanations to improve the readability of our work.
---
> However, the technical contribution of the paper appears to be somewhat limited, as the primary advancements rely on modifications and repeated applications of existing methods rather than introducing fundamentally new algorithmic ideas.
Thank you for your feedback. We would like to clarify that the proposed EPOL algorithm is not a simple repeated application of existing methods. EPOL decomposes the original problem into multiple residual problems. For each $v\in V$, it creates a residual problem $(V \setminus v, f(\cdot \mid v), c, B - c(v))$, which is then solved in parallel using POMC. We also compared EPOL with P-POMC (i.e., a simple repetition of POMC, as detailed in Appendix B.4), where POMC directly executes the original problem in parallel, and the best solution is selected as the final output. The experimental results show that EPOL outperforms P-POMC, thereby highlighting the importance of our design over simple repetition.
In fact, EPOL is carefully designed to analyze the residual problem anchored at the key element $o_f$, defined as $(V \setminus o_f, f(\cdot \mid o_f), c, B - c(o_f))$. It leverages POMC's 1/2-approximation guarantee on the residual problem and connects this guarantee to the original problem, thereby further improving the theoretical bound. Consequently, EPOL not only advances theoretical insights but also attains a SOTA practical performance guarantee.
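The decomposition described above can be sketched as follows. The inner `greedy` routine is only a stand-in for POMC on each residual problem, and all names and signatures are illustrative assumptions:

```python
def greedy(f, cost, items, budget):
    """Simple density-greedy stand-in for the inner solver (POMC in EPOL)."""
    S, rem = set(), set(items)
    while True:
        cands = [v for v in rem
                 if cost(S | {v}) <= budget and f(S | {v}) > f(S)]
        if not cands:
            return S
        # pick the best marginal-gain-per-cost candidate
        v = max(cands, key=lambda v: (f(S | {v}) - f(S)) / max(cost({v}), 1e-12))
        S.add(v)
        rem.discard(v)

def epol(f, cost, items, budget, solve=greedy):
    """EPOL-style decomposition sketch: for each anchor element v, solve the
    residual problem on the remaining items with objective f(S | v)
    (equivalently maximizing f(S U {v})) and reduced budget B - c(v);
    return the best combined solution."""
    best = set()
    for v in items:
        cv = cost({v})
        if cv > budget:
            continue
        rest = [u for u in items if u != v]
        g = lambda S, v=v: f(set(S) | {v})     # residual objective
        sub = solve(g, lambda S: cost(set(S)), rest, budget - cv)
        cand = set(sub) | {v}
        if f(cand) > f(best):
            best = cand
    return best
```

This mirrors the contrast with P-POMC: rather than re-running the same original problem in parallel, each run is anchored at a different element with a correspondingly reduced budget.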
From a practical problem-solving perspective, the primary goal is to design an algorithm that brings measurable performance improvements. We understand and respect the high expectations for fundamentally new algorithmic ideas. While such breakthroughs are exciting, most research focuses on advancing existing methods to achieve measurable improvements. To our understanding, exploring the potential of existing algorithms and further developing them into approaches that yield improvements in both theoretical analysis and empirical performance is, in itself, a significant contribution.
We hope these clarifications address your concerns. Additionally, we will revise the paper to improve presentation, including defining multi-objective EAs in the introduction, adding intuitive proof explanations, and clarifying that a linear cost constraint refers to a knapsack constraint.
Please do not hesitate to reach out if you have further questions or suggestions. Thank you very much. | null | null | null | null | null | null |
Wide & Deep Learning for Node Classification | Reject | Summary: The paper identifies the _over-generalization_ problem in the GCNII model and addresses it by proposing GCNIII, which leverages a Wide & Deep architecture. The effectiveness of this approach is validated through experiments.
Claims And Evidence: Claims are not clear, see **Other Strengths and Weaknesses**.
Methods And Evaluation Criteria: See **Other Strengths and Weaknesses**.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: - The implementation of baselines requires further verification. In lines 364–367, you state, _"Therefore, we re-conduct the experiments in accordance with the optimal model hyperparameters reported in Luo et al. (2024b) under our experimental framework."_ However, the optimal hyperparameters in Luo et al. were tuned within their own framework and may not transfer well to a different setting. Conducting a separate hyperparameter search under your method's framework would be a fairer comparison. The same issue applies to lines 381–383: _"We reproduce the results of APPNP (Gasteiger et al., 2019) and GCNII (Chen et al., 2020) using the hyperparameters in Chen et al. (2020)."_
- The ablation study lacks experiments on LLMs.
- If your method incorporates LLMs as auxiliary tools, it would be beneficial to include similar methods in the baselines or enhance existing GNNs with LLMs.
- I notice that all baselines are from papers published between 2017 and 2020. Could you compare your method against more recent baselines?
- In the discussion of over-generalization, all experiments are conducted on the Cora dataset. Have you observed the same phenomenon on other datasets?
- In Appendix C, I recommend increasing the training ratio in the data split rather than relying solely on training accuracy, as evaluating only training accuracy is not practical in real-world scenarios.
- In Figure 6, there seems to be no significant improvement in the training error of GCNIII over GCNII.
Supplementary Material: Yes. The code part.
Relation To Broader Scientific Literature: The paper re-examines the previous work GCNII and introduces the Wide & Deep approach into GNN design.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strengths**
- The paper provides a detailed re-examination of GCNII and identifies the _over-generalization_ problem. It explores a potential solution by incorporating the Wide & Deep architecture into GNN design.
**Weaknesses**
- The core intuition is unclear, particularly due to the lack of a concrete definition of _over-generalization_. Key questions include:
1. What exactly is over-generalization?
2. How does it impact performance? Specifically, does it degrade test accuracy?
3. Why does over-generalization occur in GCNII?
- The proposed method, GCNIII, requires better explanation. Important clarifications include:
1. What is the final architecture of GCNIII? While Equation 11 describes the Deep & Wide structure and Equation 12 presents the intersect memory technique, there is no clear formulation for the complete model.
2. What are the key differences between GCNII and GCNIII?
3. How do these differences address the over-generalization problem?
- The writing lacks coherence. The paper introduces multiple motivations, including over-generalization, a shift in focus from structure to node features, and the inability of GCNII to incorporate a linear model (line 247). However, these points appear disconnected, without a clear overarching theme or logical progression.
Other Comments Or Suggestions: I recommend revising the line colors of Figure 5. It is difficult to map lines to legends.
Questions For Authors: 1. In the section on intersect memory, an adjacency matrix is applied before the feature transformation (Equation 12). Why not directly use a GCN layer for this purpose?
2. In lines 200–202, you state, _"The improved model is still a linear model, but whether to use this technique depends on the dataset."_ Does this mean the model architecture is dataset-dependent? Additionally, I did not find any experiments that explore this claim.
3. In lines 265–266, you mention, _"Equation (8) is not commonly seen in the design of GCNs, but it is indeed one of the key aspects of the GCNII model."_ However, dropout is widely used in GNNs—even the early GCN paper incorporates dropout.
4. In Equation 2, the cross-product transformations $\phi(x)$ are included in the wide component. However, in GCNIII, this part is removed in Equation 7. What is the reasoning behind this change? If it is removed, why is this component still considered _wide_?
5. Some descriptions of attention are unclear. For instance, in line 267, what does _"both"_ refer to? Additionally, what is the '_attention_' of GCNII?
6. **Typos**: In line 259, you state, _"Graph Convolution G~\tilde{G} corresponds to the attention matrix on the left-hand side of Equation (13)."_ However, Equation 13 does not have a left-hand side.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thank you very much for your review, suggestions, and questions. We have carefully addressed your concerns below.
Q1: Conducting a separate hyperparameter search under your method's framework would be a fairer comparison.
A1: Thanks for your suggestion. We re-optimized all hyperparameters, and the experimental results can be found in https://anonymous.4open.science/r/GCNIII_supplement.
Q2: The ablation study lacks experiments on LLMs. If your method incorporates LLMs as auxiliary tools, it would be beneficial to include similar methods in the baselines or enhance existing GNNs with LLMs.
A2: Your suggestion is very reasonable. Since the focus of this paper is not LLMs, we will remove the LLM-related content.
Q3: Could you compare your method against more recent baselines?
A3: Thanks for your suggestion. We conducted comparative experiments between GCNIII and recent baseline methods, with results available at https://anonymous.4open.science/r/GCNIII_supplement.
Q4: Have you observed the same phenomenon on other datasets?
A4: The same phenomenon is more likely to occur on homophilic datasets.
Q5: In Appendix C, I recommend increasing the training ratio in the data split rather than relying solely on training accuracy, as evaluating only training accuracy is not practical in real-world scenarios.
A5: We consider this limit case in order to verify the node classification ability of the linear models rather than the generalization ability.
Q6: In Figure 6, there seems to be no significant improvement in the training error of GCNIII over GCNII.
A6: We suspect your question stems from Figure 5, which illustrates a clear trend: as γ increases, the training error decreases significantly. Our goal is to strike a balance, as we also aim to prevent the model from overfitting.
Q7: What exactly is over-generalization? How does it impact performance? Specifically, does it degrade test accuracy? Why does over-generalization occur in GCNII?
A7: The over-generalization phenomenon shown in Figure 1 refers to the situation where training error remains significantly higher than validation error during model training. This phenomenon has currently only been observed when training deep GCNs. It does not lead to test accuracy degradation, but it means that the utilization efficiency of features in the model training process will be reduced, which is why linear models are introduced. We provide a detailed analysis of over-generalization phenomenon in GCNII in Section 4.
Q8: What is the final architecture of GCNIII? What are the key differences between GCNII and GCNIII? How do these differences address the over-generalization problem?
A8: The structure of GCNIII is shown in Figure 2. Compared with GCNII, GCNIII introduces a linear model and uses Intersect memory, Initial residual and Identity mapping as optional modules. GCNIII is a more flexible framework. We believe that the improved training error curve of GCNIII indicates a better balance between the model’s fitting ability and generalization, which successfully demonstrates that GCNIII can more effectively balance the trade-off between the over-fitting of GCN and the over-generalization of GCNII.
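The wide & deep combination described in A8 can be sketched in a few lines. This is a toy illustration only: `gcniii_forward`, the two-hop propagation standing in for the deep GCNII branch, and the mixing weight `gamma` are our own illustrative names and simplifications, not the exact architecture of Figure 2:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_adj(A):
    # Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gcniii_forward(X, A, W_deep, W_wide, gamma):
    # Toy "wide & deep" mix: a graph-convolution branch plus a linear
    # (wide) branch on the raw node features, weighted by gamma.
    G = normalized_adj(A)
    deep = G @ G @ X @ W_deep   # stand-in for the deep GCNII branch
    wide = X @ W_wide           # linear model memorizing node features
    return gamma * wide + (1.0 - gamma) * deep

# 4 nodes on a path graph, 3 input features, 2 classes
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
logits = gcniii_forward(X, A, rng.normal(size=(3, 2)), rng.normal(size=(3, 2)), gamma=0.3)
print(logits.shape)  # (4, 2)
```

With `gamma = 1` the model degenerates to the pure linear branch and with `gamma = 0` to the pure graph branch, which is the balance the rebuttal describes.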
Q9: In the section on intersect memory, an adjacency matrix is applied before the feature transformation (Equation 12). Why not directly use a GCN layer for this purpose?
A9: Our proposed technique applies attention mechanisms to the output of a linear model, where graph convolution is not the only possible choice.
Q10: Does this mean the model architecture is dataset-dependent?
A10: Yes, details are in Appendix B.
Q11: you mention, "Equation (8) is not commonly seen in the design of GCNs, but it is indeed one of the key aspects of the GCNII model." However, dropout is widely used in GNNs—even the early GCN paper incorporates dropout.
A11: We mean that it is not common to use dropout in Feature Embedding prior to the GCN layers.
Q12: In Equation 2, the cross-product transformations ϕ(x) are included in the wide component. However, in GCNIII, this part is removed in Equation 7. What is the reasoning behind this change? If it is removed, why is this component still considered wide?
A12: Intersect memory creates a new 'wide' effect, where attention enables each node's output representation to incorporate information from more nodes.
Q13: Some descriptions of attention are unclear. For instance, in line 267, what does "both" refer to? Additionally, what is the 'attention' of GCNII?
A13: "Both" refers to $\hat{G}$ and $\alpha(I_{n}-(1-\alpha)\hat{G})^{-1}$. Initial residual causes the "attention" of GCNII to asymptotically approach $\alpha(I_{n}-(1-\alpha)\hat{G})^{-1}$ as the number of layers increases indefinitely.
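This limit can be checked numerically: iterating the initial-residual propagation $P \leftarrow (1-\alpha)\hat{G}P + \alpha I_{n}$ drives the effective propagation matrix to the closed form $\alpha(I_{n}-(1-\alpha)\hat{G})^{-1}$. A minimal sketch on a toy 3-node graph (our own example, not from the paper):

```python
import numpy as np

# Toy graph and its normalized adjacency \hat{G}
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
A_hat = A + np.eye(3)
d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
G = d_inv_sqrt @ A_hat @ d_inv_sqrt

alpha = 0.1
P = np.eye(3)  # effective propagation ("attention") matrix
for _ in range(500):
    # one layer of propagation with initial residual
    P = (1 - alpha) * G @ P + alpha * np.eye(3)

closed_form = alpha * np.linalg.inv(np.eye(3) - (1 - alpha) * G)
print(np.allclose(P, closed_form, atol=1e-8))  # True
```

Convergence holds because the spectral radius of $(1-\alpha)\hat{G}$ is at most $1-\alpha < 1$.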
Q14: In line 259, you state, "Graph Convolution $\tilde{G}$ corresponds to the attention matrix on the left-hand side of Equation (13)." However, Equation 13 does not have a left-hand side.
A14: Sorry for the confusion. We meant the left side of the multiplication sign.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for your response. For me there are still key questions unsolved:
1. The over-generalization issue is only demonstrated on one dataset *(i.e.*, Cora) with one deep model (*i.e*., GCNII). **It is hard to tell it is a general phenomenon for deep models.** Additionally, if it is true that “*The same phenomenon is more likely to occur on homophilic datasets.*”, Table 4 shows contrary results, where GCNII $\approx$ GCNIII on homophily datasets while GCNII $>$ GCNIII on heterophily graphs.
2. **The underlying mechanism of over-generalization is still unclear.** I have carefully read Section 4. It seems that the (only) possible reason is the attention. However, this part is too rough without comprehensive experimental verifications or theoretical supports.
3. **The significance / negative effect of over-generalization is questionable.** There is a contradiction point, over-generalization “*reduces the utilization efficiency of features in the model training process*”, however, “*It does not lead to test accuracy degradation”.* My point is, if over-generalization ***only*** affects the utilization of training data (assuming it’s true) but does not degrade the performance, what is the motivation for solving it?
4. I really recommend showing a specific formulation of GCNIII, as in the case of Equation 6 for GCNII. It's beneficial for further analysis and comparison. Additionally, the practical applicability of GCNIII is limited when the model architecture is actually dataset-dependent.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your new reviews and suggestions. Our responses to your points are as follows.
Q1: The over-generalization issue is only demonstrated on one dataset (i.e., Cora) with one deep model (i.e., GCNII). It is hard to tell it is a general phenomenon for deep models.
A1: You are right that this is not a general phenomenon, which is why researchers have not noticed it. The development of deep GCNs has been hindered in recent years, and we hope that the discussion of this phenomenon will provide some inspiration for further research.
Q2: Additionally, if it is true that “The same phenomenon is more likely to occur on homophilic datasets.”, Table 4 shows contrary results, where GCNII ≈ GCNIII on homophily datasets while GCNII < GCNIII on heterophily graphs.
A2: This is not a contradiction. Heterophily graphs do not provide a good enough graph structure as a priori attention to support the over-generalization of GCNII, that is, if it is not good on the training set, it is not good on the validation/test set. GCNIII performs better than GCNII because it balances the over-generalization phenomenon and makes better use of the linear model's memory for features during training.
Q3: The underlying mechanism of over-generalization is still unclear. I have carefully read Section 4. It seems that the (only) possible reason is the attention. However, this part is too rough without comprehensive experimental verifications or theoretical supports.
A3: Attention is not the only explanation and reason. In the original we wrote, “Dropout is the key. In fact, this can be easily inferred intuitively from Figure 1, because dropout is the only component in the entire end-to-end GCNII model that has a different structure during training and inference”(lines 248–252). We also have experimental evidence, “Taking the Cora dataset as an example, we find through experimental studies that removing all dropout from GCNII results in a drop in accuracy from over 85% to 82%”(lines 254–258). Dropout's position in the overall framework also plays a key role, “We argue that the dropout in Equation (8) is more akin to a robust feature selection process, where a subset of features is randomly selected for feature embedding at each epoch. This process enhances the model’s ability to efficiently leverage node feature information, thereby improving its generalization performance”(lines 268–274).
Moreover, the analysis of GCNII's mechanism in the subsection "Ultra-deep is not necessary" also accounts for over-generalization, and it is very rare for models to start working first from the part closer to the output. Common models such as GCN, CNN, and Transformer all give priority to the layers near the input, and ResNet is also designed based on this. The special structural mechanism of GCNII leads to special phenomena. Thank you for your advice. We will further refine this part in detail, including the theoretical analysis and experimental results.
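The "robust feature selection" reading of the dropout in Equation (8), quoted in A3 above, can be sketched as inverted dropout applied to the raw node features before embedding. The helper `input_feature_dropout` and its arguments are our own illustrative names, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def input_feature_dropout(X, p, training=True):
    # Randomly "select" a subset of raw features each epoch (training),
    # rescaled so the expected value is unchanged; inference keeps all.
    if not training:
        return X
    mask = rng.random(X.shape) >= p
    return X * mask / (1.0 - p)

X = np.ones((4, 10))
X_train = input_feature_dropout(X, p=0.5)           # entries become 0.0 or 2.0
X_eval = input_feature_dropout(X, p=0.5, training=False)
print(np.array_equal(X_eval, X))  # True
```

The different behavior at training versus inference time is exactly what makes dropout the one component of the end-to-end model whose structure differs between the two phases, as the rebuttal argues.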
Q4: The significance/negative effect of over-generalization is questionable. There is a contradiction point, over-generalization “reduces the utilization efficiency of features in the model training process”, however, “It does not lead to test accuracy degradation”. My point is, if over-generalization only affects the utilization of training data (assuming it’s true) but does not degrade the performance, what is the motivation for solving it?
A4: “There’s no such thing as a free lunch.” Over-generalization is a systematic phenomenon that is a successful aspect of a model in certain scenarios. When over-generalization occurs, the model can perform well on test data despite poor training performance. However, this phenomenon is fundamentally problematic as it reveals inherent design flaws - specifically, reduced feature utilization and excessive reliance on the prior graph structure, which limits the model’s performance ceiling. This is precisely why we propose simple yet effective improvements like GCNIII.
Q5: I really recommend showing a specific formulation of GCNIII, as the case of Equation 6 for GCNII. It’s beneficial for further analysis and comparison. Additionally, the practical applicability of GCNIII is limited when the the model architecture is actually dataset-dependent.
A5: Thank you very much for your advice. We will give the specific formulation of GCNIII in the paper. GCNIII is a framework rather than a fixed model, and we think it will be more usable. In fact, on different data scenarios, the initial residual technique corresponding to APPNP and the identity mapping technique introduced by GCNII may be redundant. Each layer of the identity mapping in GCNII will introduce parameters. If these parameters do not play a sufficient role, the training speed will be greatly improved after deletion. We hope to attract more researchers' attention through GCNIII, try to explore the emphasis on node features and introduce the use of linear models. | Summary: This paper improves the architecture of the existing model, GCNII, mainly based on the idea "wide & deep". Also, LLM is used to encode the node features.
Claims And Evidence: 1. The authors claimed on page 5 that "The former is 0.0018, while the latter is 0.8423, which demonstrates that GCNII’s “attention” captures more information, leading to stronger generalization." However, it is not clear what is the connection between the "attention weights", "information", and "generalization".
2. On page 4, lines 198, the paper mentioned "The attention matrix between the nodes is the adjacency matrix" which is confusing. The term "adjacency matrix" usually is only used to describe the network structure.
Methods And Evaluation Criteria: The datasets are a bit old, and more heterophilic graph datasets should be included. In addition, the major drawback is the baseline methods.
1. The baseline methods used in this paper are pretty old. If I did not miss anything, the latest baseline was published in 2020. There are so many latest node classification baselines, and I will not name them here.
2. The proposed method incorporates LLM into the framework. Hence, for a fair comparison, many LLM-incorporated node classification methods should be compared, such as [1-2].
[1] He, Xiaoxin, et al. "Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning." The Twelfth International Conference on Learning Representations.
[2] Chen, Runjin, et al. "LLaGA: Large Language and Graph Assistant." Forty-first International Conference on Machine Learning.
Theoretical Claims: I checked the statement of Theorem 4.1, which is intuitive and reasonable.
Experimental Designs Or Analyses: Mentioned in the section "Methods And Evaluation Criteria".
Supplementary Material: No supplementary material was provided. The appendix attached to the main content is reviewed.
Relation To Broader Scientific Literature: It is based on the previous model GCNII [1] and wide & deep [2].
[1] Chen, Ming, et al. "Simple and deep graph convolutional networks." International conference on machine learning. PMLR, 2020.
[2] Cheng, Heng-Tze, et al. "Wide & deep learning for recommender systems." Proceedings of the 1st workshop on deep learning for recommender systems. 2016.
Essential References Not Discussed: 1. The authors mentioned on page 5 that "We suggests that graph can be viewed as a form of static, discrete self-attention mechanism (Vaswani et al., 2017)." Actually, this is a well-known fact that "the attention matrix can be viewed as a fully-connected weighted graph", e.g., in papers [1-3]
[1] Chen, Dexiong, Leslie O’Bray, and Karsten Borgwardt. "Structure-aware transformer for graph representation learning." International conference on machine learning. PMLR, 2022.
[2] Zaheer, Manzil, et al. "Big bird: Transformers for longer sequences." Advances in neural information processing systems 33 (2020): 17283-17297.
[3] Wang, Sinong, et al. "Linformer: Self-attention with linear complexity." arXiv preprint arXiv:2006.04768 (2020).
Other Strengths And Weaknesses: 1. I think the so-called "over-generalization" is interesting, but it may only be due to the dropout, so the models in training and validation are different. Also, such a difference accumulates when the model is deeper, so the 64-layer GCNII's training error is larger than the validation error (Figure 1).
2. If I understand correctly, the difference between the proposed GCNIII and the existing model GCNII is that this paper has an extra module "Intersect memory", which is essentially an attention module (line 195). However, incorporating the self-attention layer with the graph convolution layer is not novel, as many graph transformers have similar architecture [1], named the "Parallel architecture".
[1] Min, Erxue, et al. "Transformer for graphs: An overview from architecture perspective." arXiv preprint arXiv:2202.08455 (2022).
Other Comments Or Suggestions: 1. Many notations are wrapped with {}. E.g., {$\psi$} in Eq. (7). It is unclear why they are presented like this.
2. I think the notation in Eq. (12) is problematic. The notation \tilde{G} is a matrix (line 110), but it is also used as a function in Eq. (12).
Questions For Authors: Please check the above weaknesses mentioned.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank you for your reviews and address your concerns as follows.
Q1: The authors claimed on page 5 that "The former is 0.0018, while the latter is 0.8423, which demonstrates that GCNII’s “attention” captures more information, leading to stronger generalization." However, it is not clear what is the connection between the "attention weights", "information", and "generalization".
A1: Thank you for your questions and suggestions, we will provide additional explanations. The attention matrix corresponds to $softmax(\frac{Q{K}^{T}}{\sqrt{{d}_{k}}})$. We formally define attention density as the proportion of non-zero elements in this matrix. Higher density indicates that a node's feature representation aggregates information from more other nodes, thereby capturing broader contextual information and enhancing generalization capacity (details in Appendix G).
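The density definition in A1 can be computed directly. In the sketch below (our own toy example, with the hypothetical helper `attention_density`), a sparse graph prior $\hat{G}$ has low density, while the GCNII-style matrix $\alpha(I_{n}-(1-\alpha)\hat{G})^{-1}$ is dense:

```python
import numpy as np

def attention_density(M, tol=1e-12):
    # Proportion of (numerically) non-zero entries in an attention matrix
    return float((np.abs(M) > tol).mean())

# Normalized adjacency of a 50-node ring graph with self-loops
n = 50
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
A_hat = A + np.eye(n)
d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
G = d_inv_sqrt @ A_hat @ d_inv_sqrt

alpha = 0.1
dense = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * G)  # GCNII-style "attention"

print(attention_density(G))      # 0.06 (3 non-zeros per row of 50)
print(attention_density(dense))  # 1.0, since the closed form is fully dense
```

This mirrors the contrast in the rebuttal: the sparse prior has a density near zero, while the accumulated propagation matrix aggregates information from essentially all nodes.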
Q2: On page 4, lines 198, the paper mentioned "The attention matrix between the nodes is the adjacency matrix" which is confusing.
A2: Sorry for the confusion caused by our inaccurate description. The expression should be changed to "The attention relationship between nodes can be obtained from the adjacency matrix". When the self-attention mechanism is applied to all nodes in a graph, the usual approach is to learn an attention matrix from node features; the graph itself, however, already encodes the relationships among nodes. A node's neighbors constitute prior attention: for a given node, the adjacency matrix tells us which other nodes it should attend to, so it can serve as an attention matrix.
Q3: The baseline methods used in this paper are pretty old.
A3: Thank you very much for your suggestion. We conducted comparative experiments between GCNIII and recent baseline methods, with results available at https://anonymous.4open.science/r/GCNIII_supplement.
Q4: The proposed method incorporates LLM into the framework. Hence, for a fair comparison, many LLM-incorporated node classification methods should be compared, such as [1-2].
A4: Thank you for highlighting these key references. We declare at the beginning of Chapter 3 that embedding large language models (LLMs) into the framework for upgrading is only a technical idea, not the focus and main contribution of this paper. We focus on the analysis and improvement of GCNII. If it has an impact on the whole paper, we will consider deleting the part related to LLM.
Q5: The authors mentioned on page 5 that "We suggests that graph can be viewed as a form of static, discrete self-attention mechanism (Vaswani et al., 2017)." Actually, this is a well-known fact that "the attention matrix can be viewed as a fully-connected weighted graph", e.g., in papers [1-3].
A5: Thank you for pointing out this important reference; we have cited this paper. We think that treating the attention mechanism as a fully connected graph is not the same insight as using the graph as a static attention mechanism. Prior attention that does not need to be learned is an important reason for the over-generalization phenomenon in deep GCNs: although the training stage has not yet ended, the accuracy on the validation set far exceeds the accuracy on the training set.
Q6: I think the so-called "over-generalization" is interesting, but it may only be due to the dropout, so the models in training and validation are different.
A6: Thanks for your recognition, we believe that the overgeneralization phenomenon has a great impact on understanding and exploring the direction of deep GCNs, and further exploration is needed to fully understand its causes and mechanisms.
Q7: the difference between the proposed GCNIII and the existing model GCNII is that this paper has an extra module "Intersect memory", which is essentially an attention module (line 195).
A7: The most important thing in GCNIII is to introduce the linear model and conduct joint training. Intersect memory, Initial residual and Identity mapping are three kinds of techniques as optional hyper-parameters, but not necessary. Compared to the fixed-structure GCNII, the GCNIII framework is more flexible and can better adapt to different datasets.
Q8: Many notations are wrapped with {}. E.g., {$\psi$} is Eq. (7). It is unclear why they are presented like this.
A8: We state in lines 117-119 that {} represents non-essential modules that need to be adjusted for different datasets and tasks. We found that these often-overlooked components can have a large impact on how models perform on different datasets, so the GCNIII framework was designed to be more flexible.
Q9: I think the notation in Eq. (12) is problematic. The notation $\tilde{G}$ is a matrix (line 110), but it is also used as a function in Eq. (12).
A9: The notation $\tilde{G}$ is still a matrix in Eq. (12) rather than a function. $A_{IM}$ in Eq. (12) also represents a generalized attention matrix rather than a function.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed response and apologize for my late reply. The response solves some of my concerns, and I will list the remaining ones with some suggestions.
For A3, more baselines are included, but they are on the Cora, Citeseer, and Pubmed only. You used many more datasets in your paper; some are heterophilic, and there are many new methods for that. You might need to include the most notable baselines from them.
For A4, yes, please pay special attention to that, as the current organization is very confusing.
For A6, as I mentioned in my original review, "over-generalization" sounds interesting but definitely needs a serious and rigorous explanation. The current explanation is not convincing, in my view.
I will keep my original evaluation, and I believe the current version is below the bar for acceptance. | Summary: This paper proposes a new model GCNIII which aims to get more effectively balance the trade-off between over-fitting and over-generalization. The framework incorporates three key techniques: intersect memory, initial residual and identity mapping. Experiments conducted on benchmark datasets demonstrate the effectiveness of the proposed framework.
Claims And Evidence: This work focuses on achieving a more effective balance in the trade-off between over-fitting and over-generalization. To validate the framework's performance, the authors conduct experiments on both heterophily and homophily datasets, demonstrating the framework's effectiveness across different graph characteristics.
Methods And Evaluation Criteria: The proposed techniques demonstrate limited innovation in their conceptual design. Specifically, the intersect memory mechanism appears overly simplistic in its implementation. Furthermore, both the initial residual and identity mapping components show substantial similarity to existing methodologies, raising concerns about their novelty and originality.
Theoretical Claims: Yes. The proof of Theorem 4.1.
Experimental Designs Or Analyses: While the experimental methodology is scientifically sound, the results reveal certain limitations. The semi-supervised node classification experiment demonstrates only marginal improvements over baseline methods. Furthermore, the ablation study yields mixed findings: while it confirms the necessity of the initial residual component, it paradoxically shows inconsistent performance when either the intersect memory or identity mapping components are removed, which contradicts the authors' original hypotheses.
Supplementary Material: All.
Relation To Broader Scientific Literature: This paper addresses a valuable and challenging research topic: achieving an effective balance between over-fitting and over-generalization. However, the proposed implementation demonstrates limited novelty, primarily building upon existing methodologies without substantial innovation. Furthermore, the experimental results show only marginal improvements, suggesting that the practical impact of the proposed approach may be limited.
Essential References Not Discussed: In my understanding, there are none.
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: None.
Questions For Authors: To achieve a more effective balance between over-fitting and over-generalization, have you conducted comparative experiments with state-of-the-art models specifically designed for heterogeneous datasets?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks a lot for your review, suggestions, and questions. We have carefully addressed your concerns shown below.
Q1: The proposed techniques demonstrate limited innovation in their conceptual design. Specifically, the intersect memory mechanism appears overly simplistic in its implementation. Furthermore, both the initial residual and identity mapping components show substantial similarity to existing methodologies, raising concerns about their novelty and originality.
A1: Thank you for your criticism and concern. Intersect memory is merely an optional technique (hyper-parameter) in our model, while our approach focuses on bringing linear models in and conducting joint training. Despite its apparent simplicity, this design proves remarkably effective in balancing the trade-off between over-fitting and over-generalization. The novelty of our work is primarily reflected in the following aspects: We revisit two highly influential works (with ~2000 citations) on deepening GCNs - APPNP and GCNII, where we are the first to identify the over-generalization phenomenon. We argue this phenomenon holds significant implications for understanding both deep GCNs and even the fundamental mechanism of graph convolution. The reason why GCNIII can alleviate the over-generalization phenomenon is the linear model's memory ability for node features during the training stage. Appendix E confirms that node characteristics are critical to the effectiveness of GCN in node classification tasks (often overlooked in previous studies), and Appendix C confirms that linear models also have strong node classification capabilities on these datasets. The analysis in Section 4 offers a novel interpretation of the deep GCNII model and provides insights that led to our proposal of GCNIII. The deep GCNII can be viewed as a chimera of multiple models, specifically 2-MLP, 1-GCN, 2-GCN, ..., k-GCN, where models closer to the output layer play a more significant role. Previous research on GNNs has never explored using the simplest linear model (2-MLP is considered in [1]). We argue that the most straightforward yet effective way to address GCNII's limitations is to jointly train a linear model with GCNII. This approach is empirically validated by our experimental results (Figure 5 and 'Over-Generalization of GCNIII' in Section 6.2).
Q2: The semi-supervised node classification experiment demonstrates only marginal improvements over baseline methods. Furthermore, the ablation study yields mixed findings: while it confirms the necessity of the initial residual component, it paradoxically shows inconsistent performance when either the intersect memory or identity mapping components are removed, which contradicts the authors' original hypotheses.
A2: Thanks for your criticism and questions. GCNIII demonstrates consistent improvements over the baseline, with detailed experimental results provided in Appendix F. The model achieves peak accuracy of [86.1, 74.0, 81.4] on these three datasets respectively. The results obtained in the ablation experiment are not contradictory. As detailed in Section 3.2, the three techniques - Intersect Memory, Initial Residual, and Identity Mapping - serve as optional hyper-parameters in GCNIII that do not universally improve performance. Our analysis reveals these techniques are dataset-dependent, demonstrating GCNIII's enhanced flexibility over GCNII. Details of the use of these three techniques in all experiments are documented in Appendix B.
Q3: To achieve a more effective balance between over-fitting and over-generalization, have you conducted comparative experiments with state-of-the-art models specifically designed for heterogeneous datasets?
A3: Thank you for your questions and suggestions. The over-generalization phenomenon we identified is specific to deep GCN models (such as the classic work APPNP, GCNII, etc.), which are primarily designed for homophilic and heterophilic graphs. The heterogeneous graphs you mentioned (containing different types of nodes and edges) are outside the scope of this study. Notably, neither the literature we cited nor recent works discussed in Section 5 ('Other Related Work') have performed comparative experiments on heterogeneous graph benchmarks. As far as I know, homogenous graphs and heterogeneous graphs are two different research fields. | Summary: This work proposes a new gnn architecture which uses the wide & deep neural network framework to address the problems of over-fitting and over-generalization occurring the current deep graph neural networks, which combines the linear model for initial node features and the deep graph convolution layers. In particular, the root of over-generalization in GCNII is investigated. Numerical experiments are conducted on semi-supervised and full-supervised node classification tasks to verify the effectiveness of the proposed model.
## update after rebuttal
The authors have addressed my concerns. So I have updated the score.
Claims And Evidence: In this work, the proposed GCNIII is designed to address over-fitting and over-generalization simultaneously. But there is lack of comparison between the training error and the validation error (like figure 1) for the proposed method, only training losses are reported in Figure 5 for three different models.
Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense.
Theoretical Claims: I have checked the correctness of the proof for the theorem 4.1.
Experimental Designs Or Analyses: Checked.
Supplementary Material: I have reviewed the proof of Theorem 4.1 and supplementary experiments on linear models, and OOD of GCNII.
Relation To Broader Scientific Literature: Long-range neighbor unreachability and over-smoothing of deep convolution make the choice of GNN architectures in a dilemma, which drives the emergence of deep GNNs represented by GCNII. While this paper uncovers the shortcoming of GCNII in generaliability, namely, the so-called over-generalization. To address this issue, wide component is introduced to optimize node representation learning.
Essential References Not Discussed: The related works are essential for understanding the contribution of this work.
Other Strengths And Weaknesses: Strengths:
1. The paper is well-written and easy to follow.
2. Wide & deep architecture is introduced to GNNs.
3. The experimental settings are clearly demonstrated.
Weaknesses:
1. Over-generalization is the focus of this work. How the proposed model resolves this problem needs to be fully justified with theoretical analysis/empirical results, besides the discussion in Section 4.
2. No results and explanation (if unavailable) about semi-supervised learning on heterophilic graph.
3. The limitations of the proposed method should be discussed.
Other Comments Or Suggestions: 1. It is claimed in the last lines of the left column, page 2, that wide & deep can more effectively balance the trade-off between over-fitting and over-generalization, which lacks justification.
2. Why does GCNIII achieve only marginal improvements over GCNII on homophilic graphs for both semi-supervised and full-supervised tasks? Is the superiority of GCNIII on heterophilic graphs attributed to the wide component, since this component is less relevant to the graph structure (which is semantically inconsistent with node attributes)? Does intersect memory participate in this scenario?
Questions For Authors: 1. In the 'attention is all you need' part, they calculate the attention density values for both on Cora with $\alpha$ set to 0.1? I guess something is missing here.
2. In Table 5, does the ablation study on the wide component use intersect memory?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank you for your reviews and address your concerns as follows.
Q1: There is lack of comparison between the training error and the validation error (like figure 1) for the proposed method, only training losses are reported in Figure 5 for three different models.
A1: Regarding Figure 5: To save space, we only plotted the training error comparison to demonstrate that our GCNIII model effectively mitigates over-generalization as the hyper-parameter λ increases, without suffering from over-fitting like shallow GCNs. The over-generalization phenomenon we found highlights a discrepancy between training and validation/test performance in deep GCNII, suggesting that the core issue lies in the training phase rather than the validation phase. We appreciate your suggestion and will include the complete comparison figures in the appendix.
Q2: How the proposed model resolves this problem needs to be fully justified with theoretical analysis/empirical results, besides the discussion in Sec. 4.
A2: The reason why GCNIII can alleviate the over-generalization phenomenon is due to the linear model's memory ability of node features during the training stage. Appendix E confirms that node characteristics are critical to the effectiveness of GCN in node classification tasks (often overlooked in previous studies), and Appendix C confirms that linear models also have strong node classification capabilities on these datasets. The analysis in sec.4 offers a novel interpretation of the deep GCNII model and provides insights that led to our proposal of GCNIII. The deep GCNII can be viewed as a chimera of multiple models, specifically 2-MLP, 1-GCN, 2-GCN, ..., k-GCN, where models closer to the output layer play a more significant role. Previous research on GNNs has never explored using the simplest linear model (2-MLP is considered in [1]). We argue that the most straightforward yet effective way to address GCNII’s limitations is to jointly train a linear model with GCNII. This approach is empirically validated by our experimental results (Figure 5 and 'Over-Generalization of GCNIII' in Section 6.2).
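To make the wide & deep combination concrete, here is a minimal numpy sketch of the idea (this is our schematic illustration only, not the actual GCNIII: nonlinearities, intersect memory, initial residual, and identity mapping are omitted, and all names are ours):

```python
import numpy as np

def wide_deep_forward(X, A_norm, W_deep, W_wide, lam=0.5, depth=4):
    """Combine a deep graph-propagation branch with a linear 'wide' branch.
    The wide branch sees only the raw node features, so it can memorize them
    during training regardless of the graph structure."""
    H = X
    for _ in range(depth):
        H = A_norm @ H          # repeated neighborhood smoothing (deep branch)
    return (1 - lam) * (H @ W_deep) + lam * (X @ W_wide)

rng = np.random.default_rng(0)
n, d, c = 5, 3, 2
X = rng.standard_normal((n, d))
A = np.eye(n)                   # toy normalized adjacency (self-loops only)
out = wide_deep_forward(X, A, rng.standard_normal((d, c)), rng.standard_normal((d, c)))
print(out.shape)  # (5, 2)
```

The hyper-parameter `lam` plays the role of the weight that trades off the two branches during joint training.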
Q3: No results and explanation (if unavailable) about semi-supervised learning on heterophilic graph.
A3: As an improvement to GCNII, we used the same datasets of semi-supervised node classification as in the GCNII paper[2] for comparative experiments. The papers we cited, including some recent related studies mentioned in sec.5, also did not carry out semi-supervised learning on heterophilic graphs.
Q4: The limitations of the proposed method should be discussed.
A4: In our proposed method, the three techniques of intersect memory, initial residual, and identity mapping are treated as hyper-parameters that need to be selected according to the dataset. In addition, the hyper-parameter $\gamma$ must be chosen carefully when balancing over-fitting and over-generalization.
Q5: It is claimed in the last lines of left column, page 2, that wide & deep can more effectively balance the trade-off between over-fitting and over-generalization, which lacks of justification.
A5: In our original paper, we said 'a Wide & Deep architecture model as shown in Figure 2', meaning GCNIII. Specific evidence for balanced trade-offs can be seen in 'Over-Generalization of GCNIII' in Section 6.2 and [Q2-A2].
Q6: Why does GCNIII achieve only marginal improvements compared to GCNII on homophilic graphs for both semi-supervised and full-supervised tasks? Is the superiority of GCNIII on heterophilic graphs attributed to the wide component, since this component is less relevant to the graph structure (which is semantically inconsistent with node attributes)? Does intersect memory participate in this scenario?
A6: GCNII remains a highly competitive model. While only marginal gains are observed on homophilic graphs, our method achieves significant improvements on heterophilic graphs. This stems from its more principled structural design, demonstrating GCNIII's superiority over GCNII in heterophily scenarios. Details about the use of intersect memory can be found in Appendix B.
Q7: In the 'attention is all your need' part, they calculate the attention density values for both on Cora with $\alpha$ be 0.1, ? I guess something missing here.
A7: Sorry, we don't fully understand your question. In our study, attention density value is defined as the proportion of non-zero elements in the attention matrix.
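To make this measure concrete, here is a minimal numpy sketch of the density computation (the function name and the tolerance argument are our own additions):

```python
import numpy as np

def attention_density(attn, tol=0.0):
    """Proportion of entries in the attention matrix with magnitude above tol."""
    attn = np.asarray(attn)
    return np.count_nonzero(np.abs(attn) > tol) / attn.size

# 4x4 attention matrix with 6 non-zero entries -> density 6/16 = 0.375
A = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.2, 0.0, 0.8, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
print(attention_density(A))  # 0.375
```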
Q8: In Table 5, does the ablation study on the wide component use intersect memory?
A8: Sorry for the confusion caused by our unclear expression. In the original paper, the term 'a basic linear classification model' refers to a linear model that does not include intersect memory.
[1]Yang, C., Wu, Q., Wang, J., and Yan, J. Graph neural networks are inherently good generalizers: Insights by bridging gnns and mlps. In ICLR, 2023.
[2]Chen, M., Wei, Z., Huang, Z., Ding, B., and Li, Y. Simple and deep graph convolutional networks. In ICML, 2020. | null | null | null | null | null | null |
Supercharging Graph Transformers with Advective Diffusion | Accept (poster) | Summary: The paper analyzes the generalization capabilities of a specific class of diffusion-based graph models, namely those following the advective diffusion transformer, under topology distribution shifts. Specifically, it highlights that while the generalization gap for local diffusion—relying solely on the adjacency matrix—grows exponentially with changes in adjacency, incorporating a non-local, transformer-style propagation mechanism alongside the adjacency matrix reduces this gap to a polynomial scale. These findings are validated through experiments on a synthetic benchmark as well as several real-world datasets.
Claims And Evidence: Most of the proofs presented in the paper are based on the result (3.5) of (Van Loan, 1977). That result requires the matrix to commute with its transpose. However, that is not necessarily the case for the matrix $I-C-\beta V$. In my opinion, additional assumptions need to be made regarding the original graph topology and the coupling matrix $C$ in order for the results to remain true. Since all the theoretical results presented in the paper (Theorem 3.2, Proposition 3.4, and everything derived from them) rely on this result, I would appreciate a clarification from the authors regarding the precise conditions under which their results hold. Specifically, could the authors clarify whether their framework implicitly assumes certain structural properties of $C$ and the underlying graph, or whether alternative justifications exist for applying Van Loan's result in this context?
Methods And Evaluation Criteria: The setup presented in the paper, with a test split consisting of graphs with shifted topological distribution represent an important and under-explored area in geometric deep learning. The experimental results are interesting and in line with the theoretical observations.
Theoretical Claims: Please see "Claims And Evidence" section.
Experimental Designs Or Analyses: In Table 1 the paper reports out-of-memory results for the GraphTrans architecture. However, the proposed model presented in Section 4 and Algorithm 2 also relies on a fully connected graph, with the same type of attention mechanism as in GraphTrans. I am wondering why the GraphTrans memory requirement is higher than that of the ADiT-Series model presented in this paper.
Supplementary Material: Yes, I read most of the details presented in Supplementary Material.
Relation To Broader Scientific Literature: The performance of graph neural networks under distribution shift remains an under-explored area within the graph community. In this regard, I believe the paper introduces theoretical novelty. As for the proposed model, its architecture can be seen as an interpolation between a local graph diffusion and a fully connected transformer, which somewhat diminishes its individual contribution. However, this generalization also offers advantages, as the theoretical results can be specialized to these two widely used standard architectures.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: As mentioned above, my main concern regards the fault in the proofs. In my opinion, additional assumptions are made without being stated which can reduce the contribution of the work. If there is a way of bypassing them I am happy to raise my score to acceptance.
Other Comments Or Suggestions: Section C in the appendix contains missing references (rows 797, 823, 808, 830)
Questions For Authors: Please see the boxes above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your time and thoughtful feedback.
**Q1: Concerns about theoretical assumptions and (3.5) of (Van Loan, 1977)**
We appreciate the opportunity to clarify this point. The reviewer is correct that the result (3.5) of (Van Loan, 1977) [1] requires the matrix to commute with its transpose. This condition **does hold for Prop 3.4**, as the normalized adjacency matrix $\tilde {\mathbf A} = \mathbf D^{-1/2}\mathbf A \mathbf D^{-1/2}$ is symmetric under the standard assumption that the observed graph is undirected—a common setting adopted by GNNs [e.g., 2–4] and justified via the graphon model described in Sec. 3.1. We’ll clarify this assumption in the revised paper.
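As a small numerical illustration of this point (a toy example of our own, not from the paper), the symmetrically normalized adjacency of an undirected graph is itself symmetric and therefore commutes with its transpose:

```python
import numpy as np

# Adjacency of a small undirected graph (symmetric by construction)
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))                 # diagonal of D^{-1/2}
A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]    # D^{-1/2} A D^{-1/2}

assert np.allclose(A_norm, A_norm.T)                      # symmetric
assert np.allclose(A_norm @ A_norm.T, A_norm.T @ A_norm)  # commutes with transpose
print("normalized adjacency is symmetric")
```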
For Thm 3.2, we agree that the symmetry of $\mathbf I - \mathbf C - \beta \mathbf V$ may not generally hold, especially since $\mathbf C$ (attention matrix) may be asymmetric. While enforcing symmetry on $\mathbf C$ would technically resolve this, it would overly restrict the model’s design space. Instead, we provide an alternative proof that **bypasses the need for (3.5) of (Van Loan, 1977) entirely**:
> **Lemma.** Let $X, E \in \mathbb{R}^{n \times n}$, and let $||\cdot||$ be a submultiplicative matrix norm. Suppose there exist constants $M \geq 1$, $\omega \geq 0$ such that for all $Y \in \mathbb{R}^{n \times n}$, $||e^Y|| \leq M e^{\omega ||Y||}$. Then the following perturbation bound holds:
$$
||e^{X+E} - e^X|| \leq ||E|| \cdot M^2 \cdot e^{\omega (||X|| + ||E||)}.
$$
**Proof:** Define path $X(s) := X + sE$ for $s \in [0, 1]$. Then:
$$
e^{X+E} - e^X = \int_0^1 \frac{d}{ds} e^{X(s)} \ ds.
$$
Using the integral form of the derivative of the matrix exponential [5], we have:
$$
\frac{d}{ds} e^{X(s)} = \int_0^1 e^{(1 - \theta)X(s)} E e^{\theta X(s)} \ d\theta.
$$
Therefore:
$$
e^{X+E} - e^X = \int_0^1 \left( \int_0^1 e^{(1 - \theta)X(s)} E e^{\theta X(s)} \ d\theta \right) ds.
$$
Taking norms and applying submultiplicativity:
$$
||e^{X+E} - e^X|| \leq \int_0^1 \int_0^1 ||e^{(1 - \theta)X(s)}|| \cdot ||E|| \cdot ||e^{\theta X(s)}|| \ d\theta ds.
$$
Using the growth bound assumption:
$$
||e^{(1 - \theta)X(s)}|| \leq M e^{\omega (1 - \theta)||X(s)||}, \quad
||e^{\theta X(s)}|| \leq M e^{\omega \theta ||X(s)||}.
$$
Multiplying these we have:
$$
||e^{(1 - \theta)X(s)}|| \cdot ||e^{\theta X(s)}|| \leq M^2 e^{\omega ||X(s)||}.
$$
Note that $||X(s) || = ||X + sE|| \leq ||X|| + ||E||$, so:
$$
||e^{X+E} - e^X|| \leq ||E|| \cdot M^2 \cdot \int_0^1 \int_0^1 e^{\omega ||X + sE||} d\theta ds \leq ||E|| \cdot M^2 \cdot e^{\omega(||X|| + ||E||)}.
$$
The existence of $M$ and $\omega$ is guaranteed for every consistent matrix norm [5] such as spectral norm $||\cdot||_2$ considered in our analysis.
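As a quick numerical sanity check (ours, not part of the formal argument), the resulting bound $||e^{X+E} - e^{X}||_2 \leq ||E||_2 \, e^{||X||_2 + ||E||_2}$ with $M=1$, $\omega=1$ can be verified on random matrices that generally do not commute with their transposes:

```python
import numpy as np
from scipy.linalg import expm

spec = lambda M: np.linalg.norm(M, 2)  # spectral norm

rng = np.random.default_rng(0)
for _ in range(100):
    X = rng.standard_normal((5, 5))          # generally X @ X.T != X.T @ X
    E = 0.1 * rng.standard_normal((5, 5))
    lhs = spec(expm(X + E) - expm(X))
    rhs = spec(E) * np.exp(spec(X) + spec(E))
    assert lhs <= rhs
print("perturbation bound holds on all random trials")
```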
Now consider the proof of our Thm 3.2. Let $\mathbf L = \mathbf I - \mathbf C - \beta \mathbf V$ and $\mathbf L' = \mathbf I - \mathbf C' - \beta \mathbf V'$. We can apply the above lemma even if $\mathbf L$ does not commute with $\mathbf L^\top$:
$$
||e^{-\mathbf L' T} - e^{-\mathbf LT}||_2 \leq M^2 T \cdot ||\mathbf L' - \mathbf L||_2 \cdot e^{\omega T ||\mathbf L||_2 } \cdot e^{\omega T ||\mathbf L' - \mathbf L||_2 }.
$$
For the spectral norm $||\cdot||_2$ considered in our analysis, the above result holds with $M=1$ and $\omega=1$. Also, the last term can be bounded using the same argument as in Line 727-747 of our proof, which leads to
$$
e^{||\mathbf L' - \mathbf L||_2} = e^{||(\mathbf C' + \beta \mathbf V') - (\mathbf C + \beta \mathbf V)||_2} \leq O(||\Delta \tilde{\mathbf A}||_2^m).
$$
Therefore, $||e^{-\mathbf L' T} - e^{-\mathbf LT}||_2$ is bounded with polynomial order w.r.t. $||\Delta \tilde{\mathbf A}||_2$ and Thm 3.2 can be concluded.
We will revise the paper to replace the use of (3.5) of (Van Loan, 1977) in Thm. 3.2 with this updated, assumption-free proof.
**Q2: GraphTrans vs. ADiT-Series memory cost**
GraphTrans uses the original global attention mechanism, resulting in quadratic time and memory complexity w.r.t. node number. This makes it infeasible for large graphs like Arxiv (0.2M nodes), leading to the out-of-memory issue in Table 1.
In contrast, ADiT-Series employs a scalable global attention that preserves all-pair interactions while reducing time and memory complexity to linearity w.r.t. node number. This enables our model to handle large graphs efficiently. Details of this acceleration are presented in Appendix D.1.2, and we’ll highlight this distinction more clearly in the revision.
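For intuition, one standard construction with this property is kernelized attention via a non-negative feature map, which computes $\phi(Q)(\phi(K)^\top V)$ instead of materializing the $n \times n$ attention matrix. The sketch below is our generic illustration, not necessarily the exact mechanism of Appendix D.1.2:

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """All-pair attention in O(n) time/memory: never forms the n x n matrix."""
    phi = lambda M: np.maximum(M, 0) + eps   # simple non-negative feature map
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                  # (d, d_v): one global aggregation over nodes
    Z = Qf @ Kf.sum(axis=0)        # (n,): per-query normalizer
    return (Qf @ KV) / Z[:, None]

rng = np.random.default_rng(1)
n, d = 1000, 16
out = linear_attention(rng.standard_normal((n, d)),
                       rng.standard_normal((n, d)),
                       rng.standard_normal((n, d)))
print(out.shape)  # (1000, 16)
```

Under the same feature map this coincides with dense row-normalized attention, while memory scales with $n \cdot d$ rather than $n^2$.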
**Q3: Missing references in Appendix**
Thank you for catching these. We will fix these typos in the revised paper.
[1] Van Loan, C. The sensitivity of the matrix exponential.
[2] GRAND: graph neural diffusion, ICML 2021
[3] Diffusion improves graph learning, NeurIPS 2019
[4] Semi-supervised classification with graph convolutional networks, ICLR 2017
[5] Higham et al. Functions of Matrices: Theory and Computation. SIAM, 2008.
Please let us know if further clarification is needed. Thank you again!
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed reply. I am happy with the revised proof.
---
Reply to Comment 1.1.1:
Comment: Thanks for your nice feedback and we are glad that our rebuttal addresses your concern. We'll modify the paper accordingly when we have the chance for revision.
It would be greatly appreciated if you could reconsider the rating of our work. Thanks again for your time and consideration.
Claims And Evidence: The authors claim that ADIT maintains stable performance under distribution shifts in graph structure and the paper presents bounds indicating that ADIT's embeddings respond in a polynomially bounded way to changes in adjacency. However, the authors primarily focus on carefully controlled synthetic scenarios and specific real-world splits. In practice, shifts may be more diverse (e.g., entirely new nodes, partial re-labeling, dynamic edges). It remains unclear whether the same polynomial-bounded embedding behavior would hold under more chaotic or evolving graph processes.
Although the theoretical discussion is convincing in isolated PDE contexts, real data can deviate from the carefully posited assumptions (e.g., injectivity of the latent generator). Thus, additional evidence or bounding under more relaxed conditions could strengthen the claim.
Methods And Evaluation Criteria: ADIT solves a PDE that unifies local and global propagation. Two variants approximate the continuous-time solution. However, there are some weaknesses:
First, the computational overhead of matrix-inversion-based or large K-step expansions could be significant on huge graphs. While the authors propose optimizations (e.g., multi-head parallelism, Chebyshev expansions), the paper does not thoroughly compare memory/time profiles across different scales or under real-time constraints.
Second, the weighting hyperparameter is introduced to balance advection vs. diffusion, but the method to select it is somewhat heuristic.
For the evaluation criteria, it remains uncertain how these chosen splits map onto the PDE-based assumptions.
Theoretical Claims: The authors present bounding arguments for out-of-distribution generalization error, contrasting polynomial vs. exponential sensitivity to adjacency changes. However, the authors assume “injectivity” in the latent generator g; real data might see degenerate or repeated node embeddings, which can weaken the claims. Moreover, the PDE analysis hinges on classical results that require smoothness assumptions and well-posedness of the underlying continuous system. Graph data can be highly irregular, with discrete or non-smooth connectivity. Some justification of how classical PDE frameworks carry over to more irregular or large-scale networks would be valuable.
Experimental Designs Or Analyses: The topological splits for real datasets (by year or region) do not fully guarantee that all topological changes (e.g., changes in the node sets, or partial re-labeling) are tested. Additional experiments varying the node set or combining multiple shift factors might reveal more nuanced results.
Supplementary Material: I have reviewed the supplementary material. The PDE background in the appendices can be challenging to digest for an audience less familiar with advanced matrix approximation methods. A more accessible or modular presentation might be beneficial for better adoptions.
Relation To Broader Scientific Literature: Recent research (e.g., robust GNNs under adversarial attacks or continual learning over evolving graphs) also addresses shifting topologies from different perspectives. A more explicit comparison or synergy with these areas (e.g., adversarially perturbed edges) would highlight whether ADIT can handle worst-case shifts or only mild, distributional ones.
The authors stress the PDE analogy but do not benchmark or contrast directly with non-PDE-based robust or domain-adversarial GNN approaches. Further references and direct comparisons might clarify whether PDE-based methods inherently outperform, or if domain-invariant approaches could match ADIT’s stability.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Other strengths:
ADIT offers a clear PDE-based argument for combining local and global message passing; this is theoretically principled and practically relevant.
Other weaknesses:
For extremely large graphs, it is unclear if the matrix exponential expansions remain viable or how approximations degrade performance.
The model introduces several PDE-inspired hyperparameters. A systematic procedure for selecting them under real constraints or partial label availability is not thoroughly explored.
Other Comments Or Suggestions: No other comments.
Questions For Authors: No other questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and constructive feedback.
**Q1: Theoretical assumptions and justification on more relaxed real-world conditions**
Our PDE model indeed serves as a simplified abstraction, but it does not introduce new assumptions beyond the standard setting of graph representation learning [1–4]. Despite the simplification, PDE-based analysis allows for understanding GNN behavior, even on irregular and large-scale graphs, as shown in prior work [1-4] and our own.
The injectivity assumption in Thm 3.2 is indeed nontrivial but reasonable: since $g$ maps from a low-dimensional latent space to a higher-dimensional observation space, collisions (i.e., degenerate mappings) are rare in practice. Removing this assumption would be interesting, but it introduces substantial technical challenges that we'd like to leave to future work.
Crucially, our model performs well on real datasets that do not satisfy the theoretical assumptions, confirming that the PDE framework yields effective architectures even under relaxed real-world conditions. While we agree that shifts in the wild can be more complex, our goal is to establish a principled framework and demonstrate its robustness to a wide range of topological shifts—including more diverse settings added below.
**Q2: Adding experiments with more complex distribution shifts (combination of multiple shift factors, new nodes/edges and partial label availability)**
Before presenting new results, we kindly note that Table 1 already included dynamic shifts where new nodes and edges appear over time. We adopt an inductive setting (see details in Appendix E.1.2) where test nodes and their edges are excluded during training. Apart from this, we add experiments on a new dataset OGBL-COLLAB that is also a dynamic graph where new nodes/edges are added to the test graph. See our rebuttal to **Q1** of Reviewer ZJpC for results and details.
To further strengthen evaluation, we add a more challenging variant of the Arxiv dataset combining multiple shift types (node/edge changes and partial label availability). On top of the data splits that already involve dynamic shifts, we mask 10 of 40 classes for training data, and evaluate on test data with full labels. As shown below, while all models degrade under this setting, ADiT-Series maintains clear superiority:
||GCN|GAT|GraphGPS|DIFFormer|ADiT-Series
|-|-|-|-|-|-|
|**Test Acc ($\uparrow$) with single shift**|46.46±0.85|46.50±0.21|46.46±0.95|44.30±2.02|49.64±0.54|
|**Test Acc ($\uparrow$) with multiple shifts**|39.29±0.91|38.82±0.52|39.46±1.33|37.30±1.14|44.64±0.82|
**Q3: Scalability to large graphs and comparison of computational costs**
ADiT-Series scales linearly with graph size, enabling it to scale up to large graphs. In practice, we use small values of
$K$ (e.g., 1–4), yielding strong performance and efficiency. We add comparison of computational costs on the Arxiv graph (0.2M nodes). ADiT-Series consumes moderate GPU memory and much fewer time costs than Transformer competitors (DIFFormer and GraphGPS):
||GCN|GAT|GraphGPS|DIFFormer|ADiT-Series
|-|-|-|-|-|-|
|**Train Time per Epoch (second)**|7.66|18.12|198.32|73.99|25.21|
|**Train GPU Memory (GB)**|2.4|10.8|13.7|5.3|4.2|
**Q4: Systematic procedure for hyperparameter selection**
We provided systematic studies for all hyper-parameters introduced by our model (such as $\beta$, $\theta$ and $K$) in Sec. 5.2 and Appendix F.2, with searching procedures and spaces detailed in Appendix E.3.
**Q5: Comparison with non-PDE-based robust or domain-adversarial GNNs**
We appreciate the suggestion that can increase our impact to broader area of robust graph learning. We add a comparison on the DPPIN edge regression task against SR-GNN [5] and EERM [6], two robust GNNs—EERM uses domain-adversarial training. While all models show similar average RMSE, our models substantially improve worst-case performance, highlighting better stability under distribution shifts:
||SR-GNN|EERM|ADiT-Inverse|ADiT-Series
|-|-|-|-|-|
|**Test Average RMSE ($\downarrow$)**|0.170±0.003|0.172±0.006|0.166±0.006|0.167±0.004|
|**Test Worst RMSE ($\downarrow$)**|0.201±0.013|0.207±0.018|0.184±0.011|0.188±0.010|
This suggests that PDE-inspired architectures can offer robustness comparable to domain-invariant GNNs, without requiring adversarial training schemes.
[1] GRAND: graph neural diffusion, ICML 2021
[2] GRAND++: graph neural diffusion with a source term, ICLR 2022
[3] Gradient gating for deep multi-rate learning on graphs, ICLR 2023
[4] Understanding convolution on graphs via energies, TMLR 2024
[5] Shift-robust gnns: Overcoming the limitations of localized graph training data, NeurIPS 2021
[6] Handling Distribution Shifts on Graphs: An Invariance Perspective, ICLR 2022
Please let us know if further clarification is needed. Thank you again!
---
Rebuttal Comment 1.1:
Comment: Thanks the authors for the efforts in providing the rebuttals. Most of my concerns are well targeted. I will raise my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your nice feedback. We are glad that our rebuttal addresses your concerns. We'll revise the paper accordingly and incorporate the new results when we have the chance for revision. Thank you again for your time and constructive comments. | Summary: This paper proposes advective diffusion transformer, a model that provably controls generalization error with topological shifts. The model has been evaluated on synthetic and several real-world datasets that verify its superiority compared with existing local and non-local graph neural networks/transformers.
Claims And Evidence: The claims are clear with convincing evidence.
Methods And Evaluation Criteria: The method and its evaluation is technically sound.
Theoretical Claims: The claims seem to be sound though I did not carefully check the proof.
Experimental Designs Or Analyses: The experimental designs are valid and rigor.
Supplementary Material: No.
Relation To Broader Scientific Literature: A general graph diffusion framework with competitive empirical performance, though not a fully novel approach since there are a series of existing works on graph neural diffusion.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Nice unified view of global and local message passing.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. How is the performance of the proposed approach on the OGB benchmarks and OGB-LSC PCQM4Mv2 dataset?
2. Are there any results on the benchmarks from [1]?
[1] Dwivedi et al. Benchmarking graph neural networks.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your time in reviewing our paper and for the constructive feedback.
**Q1: "How is the performance of the proposed approach on the OGB benchmarks?"**
We appreciate the suggestion to include additional datasets. We added results on OGBL-COLLAB, a link prediction task suited for evaluating models under topological shifts. We follow the temporal splits from [2], which naturally introduce distribution shifts. We found that our model (ADiT-Series) achieves substantial improvements over GNN/Transformer baselines:
| | GCN | GAT | GraphGPS | DIFFormer | ADiT-Series
| - | - | - | - | - | - |
| **Test Hit@50 ($\uparrow$)** | 50.42±1.13 | 51.50±0.96 | 53.98±0.98 | 53.24±0.42 | 56.24±0.75 |
**Q2: "Are there any results on the benchmarks from [1]?"**
We also added results on Wiki-CS from [1]. As the original split uses random sampling (i.e., no distribution shift), we introduce a harder setting: nodes are sorted by degree and split into 25%/25%/50% for train/val/test. This allows us to evaluate performance under topological shifts. ADiT-Series matches or slightly outperforms baselines under the original split, and shows notable gains in the harder setting with topological shifts:
| | GCN | GAT | GraphGPS | DIFFormer | ADiT-Series
| - | - | - | - | - | - |
| **Test Acc ($\uparrow$) with original split** | 77.46±0.85 | 76.90±0.82 | 78.34±0.88 | 79.39±0.62 | 79.53±0.42
| **Test Acc ($\uparrow$) with harder split** | 61.98±0.63 | 63.52±0.40 | 64.21±0.98 | 65.34±1.03 | 68.21±0.79
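The degree-based split described above can be sketched as follows (our illustration with a toy edge list; whether low- or high-degree nodes go to the training set is our assumption, here low-degree first):

```python
import numpy as np

def degree_split(edge_index, num_nodes, ratios=(0.25, 0.25)):
    """Sort nodes by degree and split into train/val/test (remainder is test),
    so that test nodes differ systematically in connectivity."""
    deg = np.zeros(num_nodes, dtype=int)
    np.add.at(deg, edge_index.ravel(), 1)       # count endpoint occurrences
    order = np.argsort(deg)                     # low-degree nodes first (assumed)
    n_tr = int(ratios[0] * num_nodes)
    n_va = int(ratios[1] * num_nodes)
    return order[:n_tr], order[n_tr:n_tr + n_va], order[n_tr + n_va:]

edges = np.array([[0, 1], [0, 2], [0, 3], [1, 2]])  # toy undirected edge list
tr, va, te = degree_split(edges, num_nodes=4)
print(len(tr), len(va), len(te))  # 1 1 2
```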
**Q3: "Not a fully novel approach since there are a series of existing works on graph neural diffusion"**
We agree that graph neural diffusion has been studied extensively. However, our work introduces novelty in two key aspects: (1) we address topological shifts, a challenging but underexplored setting where training and test graphs differ; and (2) our approach is derived from a continuous PDE formulation that unifies local and non-local propagation, in contrast to prior works that focus primarily on local diffusion and static structures.
[1] Dwivedi et al. Benchmarking graph neural networks.
[2] Hu et al. Open graph benchmark: Datasets for machine learning on graphs, NeurIPS 2020.
Please let us know if further clarification is needed. Thank you again! | null | null | null | null | null | null | null | null |
Spectral Informed Neural Networks | Reject | Summary: This paper proposes spectral-informed neural networks (SINNs), which solve PDEs using spectral information. SINNs uses less memory than PINNs, especially for problems with higher-order derivatives, and it also obtains better accuracy than PINNs across several experimental settings. The authors also provide a theoretical error convergence analysis to show why SINNs outperform PINNs.
Claims And Evidence: The authors claim that their SINN approach has a lower memory requirement than PINNs, and they clearly show this is true in the paper.
The authors show that SINNs outperform PINNs on several experimental settings, which supports the contributions that they state in the introduction. However I’m concerned that some of the experimental settings in the paper are biased towards the SINN method. For example, the initializations for the 2-D heat and NS equations (see Equations 33 and 37) induce decay in the higher frequency components of the initial condition. If there were no decay, should we still expect SINNs to outperform PINNs?
Moreover, the authors claim to provide an error convergence analysis to show that SINNs are more accurate than PINNs. However, I believe that this claim is not properly justified (see “Theoretical Claims”).
Methods And Evaluation Criteria: I believe that the proposed methods make sense for the problems being solved in the paper. I also appreciate that the authors thoroughly compare their method to other PINN-based approaches. However, I am not sure if this paper is suitable for the application-driven machine learning track. If I recall correctly, a submission to the application-driven ML track is supposed to address a real-world use case, but the experimental settings in the paper are synthetic.
Theoretical Claims: In section 3.3, what does it mean to “assume that the capability of MLP is powerful enough”? Is this statement referring to the universal approximation theorem? If so, this statement should be made rigorous.
The $O(N^{-s})$ result for spectral convergence implicitly relies on the fact that we can find the optimum $\tilde{\theta}_N^*$ during optimization. However, this is not a reasonable assumption, since the SINN training problem is non-convex. Perhaps an NTK-style argument could fix some of these issues?
Experimental Designs Or Analyses: The experimental design appears to be sound. The authors’ clearly describe the details for the experiments in Appendix D.
Supplementary Material: I skimmed the entirety of the supplementary material.
In Equation 26, how does the loss function help deal with the aliasing error?
Relation To Broader Scientific Literature: The SINN method proposed in the paper is an interesting approach for solving PDEs with periodic boundary conditions. To the best of my knowledge, there is no existing work that has tried out this exact approach for solving PDEs.
Essential References Not Discussed: The authors should talk about how their method relates to other methods in the literature that combine spectral methods with PINNs. How does their method fit into the prior work combining these two ideas? Here are some papers that have combined these ideas:
1. https://arxiv.org/abs/2202.02710.
2. https://openreview.net/pdf?id=218sl_mPChc.
Other Strengths And Weaknesses: A weakness of the work, which is also stated by the authors (to their credit), is that SINNs only apply to PDEs with periodic boundary conditions. This limits the utility of SINNs to a special class of PDEs.
Other Comments Or Suggestions: The font in several figures is too small to read. I would encourage the authors to increase the font size in these figures.
“scaler” should be “scalar” on page 5
“spacial” should be “spatial”
“burgers” should be “Burgers’”
Top of page 6: there appear to be typos in the expression for $\hat u$
“Navier-stokes” should be “Navier-Stokes” on page 6
The y-axes in Figures 4 and 5 should be labeled. Also, what are u and v in the x-axes in Figure 5?
Equation 23 in Appendix B: $x0$ should be $x_0$
Questions For Authors: 1. Page 4: What is $n_j$ in the definition of $\|k\|_{\mathrm{mix}}$?
2. Page 5: I appreciate the authors addressing why they only explore Fourier basis functions in the paper. However, the references that they cite in point 2 are over 10 years old. Is the approach of transforming Fourier basis functions to other basis functions still an active area of research? If not, this could undermine the authors’ justification for only exploring the Fourier basis.
3. Page 6: In Figure 6a, why does the relative error for SINN approach the relative error for PINN as N increases? This seems to contradict the caption for the figure, which says that “SINNs are robust even with complex spectral structures”, making me skeptical of some of the claims in the paper.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your extensive feedback. We understand that you have several questions about the paper, and we address each of your points in turn below.
# Questions 1 and Other Comments Or Suggestions
First and foremost, we apologize for the multiple typos and notational inconsistencies you encountered. We take full responsibility for these and will correct all of them in the final paper. In Figure 5, $u$ and $v$ represent the velocity components in $\boldsymbol{u}=(u,v)$, which is the solution of the Navier-Stokes equation. Also, since the notation $k = (n_1, n_2, \dots, n_N)$ was confusing, it is better to use $k= (k_1, k_2, \dots, k_N)$ and $\|\boldsymbol{k}\|_{\text{mix}}=\prod_{j=1}^d \max \lbrace 1,k_j\rbrace$.
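To make the hyperbolic-cross idea behind the mixed norm concrete, here is a minimal Python sketch (our own illustration, not code from the paper) that computes $\|k\|_{\text{mix}}$ and builds the corresponding index set:

```python
import itertools

def mixed_norm(k):
    """Mixed norm ||k||_mix = prod_j max(1, |k_j|) of a multi-index k."""
    p = 1
    for kj in k:
        p *= max(1, abs(kj))
    return p

# Hyperbolic-cross index set: keep only multi-indices whose mixed norm
# is at most K. Its size grows much more slowly with the dimension d
# than the full tensor-product grid of (k_max + 1)**d frequencies.
d, K, k_max = 3, 8, 8
cross = [k for k in itertools.product(range(k_max + 1), repeat=d)
         if mixed_norm(k) <= K]
print(len(cross), (k_max + 1) ** d)
```

This is why sampling frequencies on the hyperbolic cross, rather than on a full tensor grid, keeps the number of collocation frequencies manageable as $d$ grows.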
# Question 2
Transforming Fourier basis functions to other basis functions is a very mature and classical research field. However, exploring only the Fourier basis is not unreasonable, for two reasons: 1) it is not practical to explore all basis functions in one paper; 2) exploring one kind of basis is enough to demonstrate the approach. For example, the Fourier Neural Operator only explored the Fourier basis within Neural Operators, and after FNO, researchers proposed the Spectral Neural Operator, the Wavelet Neural Operator, and so on.
We understand your concern about other basis functions, so we conducted a simple example with the Chebyshev basis on a 1D heat equation:
$$
u_{xx}=\epsilon u_t, \quad (x,t)\in[-1,1]\times[0,T],
$$
$$
u(0,x)= \frac{e^{-\epsilon x ^ 2 / 12}} {\sqrt{3}},\quad x \in [-1,1]. \quad u(t,1)=u(t,-1)=\frac{e^{-\epsilon / 12}} {\sqrt{3}}.
$$
with the solution $u(x,t)=e^{-\frac{\epsilon x ^ 2 } {4 (t + 3)}} / \sqrt{t + 3}$. In our experiments, $T=1.0, \epsilon=10$.
The results are:
|PINN|SINN|
|----|----|
| $3.25\times 10^{-4}\pm1.74\times 10^{-5}$ | $1.17\times 10^{-4}\pm4.61\times 10^{-5}$ |
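For readers curious how such a Chebyshev variant avoids automatic differentiation, here is a minimal sketch (ours, using NumPy's Chebyshev utilities; the coefficients are illustrative) of differentiating a Chebyshev series by operating only on its coefficients:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Represent u(x) = sum_k c_k T_k(x) by its Chebyshev coefficients c.
# Differentiation is then a linear map on the coefficients (chebder),
# so computing u_xx needs no automatic differentiation of the network.
c = np.array([0.0, 1.0, 0.5, 0.25])   # illustrative coefficients
c_xx = C.chebder(c, m=2)              # coefficients of u''(x)

x = np.linspace(-1.0, 1.0, 5)
u_xx = C.chebval(x, c_xx)             # u'' at collocation points
```

Here $u = T_1 + 0.5\,T_2 + 0.25\,T_3 = x^3 + x^2 + 0.25x - 0.5$, so the computed second derivative should equal $6x + 2$ on $[-1,1]$.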
# Question 3
The $N$ in this experiment is the number of $\sin$ terms in the initial condition of Eq.20. To avoid confusion, it is better to use $N_k$ instead of $N$.
This experiment was conducted to verify the robustness of SINNs. As we stated in Section 4.3, 'For most problems, the structure of coefficients in the spectral domain is simpler than the structure of solutions in the physical domain', i.e., low frequencies can represent most cases in the real world. But what about specific cases in which the structure of the coefficients is also complex?
Furthermore, other reviewers asked for more experiments testing high-frequency terms, which can also address your concern about 'decay in the higher frequency components of the initial condition'. You can find these experiments in our responses to reviewers zt1d and 34of.
# Application-driven ML
When comparing algorithms, precise relative error measurement is essential. Real-world cases are noisy and full of unknowns, making it hard to isolate and assess algorithm performance. By using synthetic examples, we can better evaluate our algorithm's core capabilities, laying a solid groundwork for real-world applications. For example, because of its inherent mathematical complexity, the 2-D Taylor-Green vortex flow has been established as a representative benchmark in computational fluid dynamics for validating numerical methods, including
the finite element penalty–projection method (doi: 10.1016/j.jcp.2006.01.019), PINNs (doi: 10.1016/j.jcp.2021.110676), and so on.
Furthermore, the periodic boundary condition is not realistic, but it can be used to understand features of the real world, for example, coherent vortex evolution (doi: 10.1017/S0022112084001750), intrinsic physical dynamics (doi: 10.1017/S0022112095000012), and so on.
# Theoretical Claims
'Is this statement ..': Yes, the conclusion holds by the universal approximation theorem; thanks for the question.
'Perhaps an NTK-style ..': The error analysis is based on a very idealized assumption. NTK analysis is based on the loss function, which is constructed from the residual equation rather than the error equation in both PINNs and SINNs. To the best of our knowledge, there is no research revealing a direct relationship between the error and the residual. Furthermore, empirically speaking, a smaller loss function may not guarantee a smaller error (see Error Certification (arXiv:2305.10157)). However, Ref (doi:10.1016/j.jcp.2023.112527) proves that adding some terms to the error yields an upper bound that includes the residual. Thus, if we can find or construct a connection between the residual (or other loss functions, for example, Astral (arXiv:2406.02645)) and the error, perhaps we can use NTK analysis.
# Eq.26
Thanks for your concern about the aliasing error. Dealiasing comes down to how $\hat{N}$ is handled. In a straightforward way, one can apply the inverse FFT back to the physical domain and perform the multiplications there. Alternatively, one can use the 2/3 rule (doi: 10.1175/1520-0469(1971)028<1074:OTEOAI>2.0.CO;2).
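As a concrete illustration of the 2/3 rule (our own sketch, not from the manuscript): zero the top third of the modes before transforming to physical space, multiply there, and apply the same mask after transforming back:

```python
import numpy as np

def dealiased_product(u_hat, v_hat):
    """2/3-rule dealiased product of two fields given by FFT coefficients.

    Modes with |k| > N/3 are zeroed before the inverse transform, so the
    quadratic nonlinearity computed in physical space stays alias-free.
    """
    n = u_hat.size
    k = np.fft.fftfreq(n, d=1.0 / n)     # integer wavenumbers
    mask = np.abs(k) <= n // 3           # keep only the lowest 2/3 of modes
    u = np.fft.ifft(u_hat * mask)
    v = np.fft.ifft(v_hat * mask)
    return np.fft.fft(u * v) * mask
```

For a product of low-frequency fields (e.g. $\sin(x)\cdot\sin(x) = (1-\cos 2x)/2$), the retained modes match the exact transform of the product.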
# Essential References
Thanks, we will give a further discussion on the relevance of those papers.
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my concerns. I will raise my score accordingly. | Summary: The Manuscript describes how to use a spatial k-space description of the trial neural network, with promising results. The general form of the PDE is assumed to be first order in time with periodic boundary conditions in space. Numerous essential PDEs are considered, and it is shown that the naturally sparse k-space representations provide a significant advantage, especially for equations with high-order derivatives, and allow non-uniform sampling of the collocation points in k-space. The Authors describe a parametric probability distribution with a high sampling rate at lower frequencies, diminishing towards the higher frequencies.
Claims And Evidence: The claims correspond to the evidence; the restrictions of the method are also stated clearly, like the difficulty with realistic (i.e. complicated) domains important in industrial work.
Methods And Evaluation Criteria: The method works well for the cases to which it applies. The challenges are clearly stated by the Authors; for example, the experiments find that for first-order derivatives PINNs are more efficient.
Theoretical Claims: The theory is sound; in one place, when describing the continuity equation in k-space, the spatial derivative used is the gradient when it is supposed to be the divergence.
Experimental Designs Or Analyses: Experiments are well performed.
Supplementary Material: The code is given and related data was provided.
Relation To Broader Scientific Literature: The Manuscript has a clear introduction with proper references to the literature, positioning the work within the physics-informed machine learning scene.
Essential References Not Discussed: The relevant references are cited in the Manuscript.
Other Strengths And Weaknesses: The Manuscript is well written, and shows very promising results. The basic weakness is, as stated by the Authors, that as a general PDE solver it is restricted to simple periodic geometries. Hence, its impact is restricted although definitely interesting for the industrial community.
However, a discussion of, and experimentation with, defining initial and boundary conditions from experimental measurements could be considered. This would allow the estimation of, for example, the material parameters of a PDE. In this domain and use case, the experimental measurement can be done in specially designed geometries that fulfill the periodicity expectations of the SINNs, so this would not be such a sin anymore, and it would improve the value of the method in practical applications.
Other Comments Or Suggestions: I would have liked more discussion of the Burgers equation with low viscosity, where the high-frequency components become essential. How does this scale?
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your positive feedback and useful suggestions.
Firstly, we sincerely appreciate your catching the typo: $\nabla$ should be $\nabla \cdot$.
# Burgers’ equation with low viscosity
We already conducted an experiment on Burgers' equation with a fairly low viscosity ($\mu = \pi/150 \approx 0.0209$), which already contains high-frequency content. But we agree that exploring different viscosities is insightful.
Here we consider a more explicit Burgers' equation:
$$
u_t = \nu u_{xx} - u u_x, \quad x\in \left[0,2\pi\right], t \in \left[0,T\right]
$$
$$
u(0,x) =\sin (x)+\sin(5x)
$$
which contains a high frequency component and a low frequency component.
We set $\nu=\pi/r$. The results with different $r$ are as follows:
| $r$ | relative error |
|------|----------------|
| 20 | $5.23\times10^{-4} \pm 9.48\times10^{-5}$ |
| 40 | $1.59\times10^{-3} \pm 2.17\times10^{-4}$ |
| 80 | $9.21\times10^{-3} \pm 6.41\times10^{-3}$ |
| 160 | $9.23\times10^{-3} \pm 2.98\times10^{-3}$ |
| 320 | $2.19\times10^{-2} \pm 1.30\times10^{-2}$ |
# Helmholtz
Additionally, Reviewer zt1d suggests conducting experiments on Helmholtz equations which only contain a single frequency.
Here we consider a 1D Helmholtz equation:
$$
u_{xx} + \lambda^2u=0, x\in [0,2\pi],
$$
with the boundary condition
$$
u(0)=u(2\pi)=1, \quad u_x(0)=u_x(2\pi)=\lambda.
$$
The solution is $u=\cos(\lambda x)+\sin( \lambda x)$. The results are as follows:
| $\lambda$ | PINNs | SINNs |
| --- | --- | --- |
| 2 | $5.37\times10^{-4} \pm 5.30\times10^{-4}$ | $4.72\times10^{-6} \pm 3.39\times10^{-6}$ |
| 4 | $9.02\times10^{-4} \pm 4.54\times10^{-4}$ | $1.90\times10^{-5} \pm 3.71\times10^{-6}$ |
| 8 | $1.75\times10^{-3} \pm 6.41\times10^{-4}$ | $1.45\times10^{-4} \pm 1.87\times10^{-4}$ |
| 16 | $4.80\times10^{-1} \pm 3.38\times10^{-1}$ | $8.63\times10^{-6} \pm 6.24\times10^{-3}$ |
| 32 | $9.94\times10^{-1} \pm 7.57\times10^{-3}$ | $1.47\times10^{-3} \pm 1.46\times10^{-3}$ | | Summary: PINNs have arisen as an exciting and promising alternative to classical solution methods for solving partial differential equations.
However, PINNs are not without their challenges. One key issue is the cost of automatic differentiation for higher-order derivative PDEs. It is well known that the cost scales with the dimensionality of the spatial variables, which is problematic for higher-order PDEs in high dimensions.
To alleviate this issue, the paper proposes Spectral PINNs (SINNs), which work in spectral space and reduce applying the differential operator to multiplication, a highly efficient operation on GPUs. The authors provide theoretical support to validate their approach, along with experiments to verify its effectiveness in practice.
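The core trick can be illustrated in a few lines of NumPy (our sketch of the general spectral-differentiation idea, not the paper's code): in Fourier space, $\partial_x^m$ becomes multiplication by $(ik)^m$, so the cost is one elementwise product per term regardless of the derivative order $m$:

```python
import numpy as np

n = 64
x = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers

u = np.sin(3 * x)
u_hat = np.fft.fft(u)
# Second derivative via a single elementwise product: (ik)^2 * u_hat.
u_xx = np.fft.ifft((1j * k) ** 2 * u_hat).real   # equals -9*sin(3x)
```

For band-limited functions on a periodic domain this is exact, which is the source of the spectral accuracy claimed for SINNs.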
Claims And Evidence: The paper makes three central claims of contribution:
**Claims:**
1.) We propose a method that eliminates the spatial derivatives of the network to deal with the high GPU memory
consumption of PINNs.
2.) We propose a strategy to approximate the primary
features in the spectral domain by learning the low frequency preferentially to handle the difference between SINNs and PINNs.
3.) We provide an error convergence analysis to show that SINNs are more accurate than PINNs. Furthermore,
our experiments corroborate that the method can reduce the training time and improve the accuracy simultaneously.
**Overall:**
The only claim I really take issue with is item 3), in particular the first part on error analysis. The assumptions made to reach that conclusion are highly idealized, as I discuss in more detail below.
Given that the claims are mostly well supported and the methods does well in the experiments, I'm rating the submission as a weak accept at this point. My main reason for not rating the submission higher is that spectral methods are somewhat limited in the problem complexity they can handle. In particular, they do not handle complex domain geometries well.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria for the paper are appropriate. As the SINNs framework can be combined with most PINN frameworks/strategies, it makes the most sense to compare to the vanilla PINN and PINN frameworks for which SINNs does not apply.
This is exactly what the paper does.
Moreover, the paper uses standard benchmark PDEs for its evaluation.
Thus, overall I don't have any issues with the baselines for comparison or the experimental design. Though one thing that would have improved the paper is if it had compared with the recently proposed Stochastic Taylor Derivative Estimator (STDE) (Shi et al. 2024) which claims to greatly reduce the cost of differential operators for higher-order PDEs.
Theoretical Claims: This is my main issue with the paper. The claims of spectral convergence in section 3.3 is both too informal and oversimplified.
I will focus on the PINN case, but analogous statements hold for SINN.
For instance, in the case of a PINN, it decomposes the error into the sum of two terms via the triangle inequality.
1) Statistical Error: $||u^{\theta_N^{\star}}-u^{\theta^{\star}}||_{\Omega}$
2) Approximation Error: $||u^{\theta^{\star}} - u^{\star}||_{\Omega}$.
It then states that, assuming the neural network is powerful enough, the approximation error term vanishes, while the statistical term is $O(N^{-1/2})$, as the PINN objective is an empirical estimator.
This greatly oversimplifies things.
For one, the approximation error is likely small, but treating it as exactly zero is somewhat unreasonable. My bigger issue, though, is that the entire discussion neglects optimization.
The PINN objective is a challenging non-convex optimization problem to solve (and similarly for SINNs), so it is highly unlikely that the solution found via training corresponds to the globally optimal solution of the empirical risk $u^{\theta_N^{\star}}$.
Thus, in general we should expect the error achieved in practice to be worse than $O(N^{-1/2})$ or $O(N^{-s})$.
I'm sympathetic to the authors in that I understand analyzing the solution error of PINNs is highly non-trivial, with papers entirely focused on this issue. So, I would be satisfied if the authors were more explicit about their assumptions and stated that they are considering an ideal setting where optimization error is ignored / negligible.
Experimental Designs Or Analyses: Yes, I checked and this seems fine. As I said above, it would have been nice to see a comparison with STDE in Table 1 rather than just Taylor mode automatic differentiation. This is my main criticism.
Supplementary Material: I read the paper's appendix which contained further background on the problem classes, experimental setup, and hyperparameter values.
Relation To Broader Scientific Literature: The paper lies at the intersection of the literature on PINNs and spectral methods for solving PDEs.
Its main proposal is to enhance PINNs with techniques from spectral methods for PDEs to reduce the time required to compute derivatives.
Given the evaluation in the paper, I believe this approach will enhance the use of PINNs for any PDE problem that possesses structure for which it is appropriate to apply a spectral method.
Thus, in terms of the PINNs literature it falls in between papers that propose general strategies to enhance training vs. those who propose problem specific strategies for improving the performance of PINNs.
Essential References Not Discussed: I cannot think of any essential references the paper missed in its discussion, so the paper's coverage is sufficient.
Other Strengths And Weaknesses: **Strengths**
- The key ideas of the paper are expressed clearly and are easy to follow.
Other Comments Or Suggestions: I have no other comments or suggestions at this time.
Questions For Authors: Is there any particular reason why you did not compare with STDE?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your detailed review. We address each of your points below:
# Is there any particular reason why you did not compare with STDE?
STDE is indeed a recent work to speed up high-order derivatives in PINNs. However, STDE works by amortizing the cost of computing mixed partials in multivariate problems through random estimations. Thus, for univariate problems, its “stochastic” part isn’t needed and it essentially performs the same operations as Taylor mode. In our manuscript, to demonstrate the results intuitively and concisely, we chose a univariate function with only one spatial derivative term, for which Taylor mode performs the same as STDE. As for multivariate functions, because STDE is an estimator that still uses univariate Taylor mode, its cost still scales with the order of derivatives and the dimensionality, whereas the cost of SINNs remains constant in both.
To provide a more convincing conclusion, we conducted an efficiency test on $d$-variate problems. If we show that the cost of SINNs is constant in the dimensionality, then, combined with the result that it is constant in the order of derivatives, we can claim that SINNs are more efficient than STDE.
We consider a Poisson's equation:
$$
\Delta u =f, \boldsymbol{x} \in [0,2\pi]^d
$$
with $u=\sum_{i=1}^d \cos(x_i)$ and $f=-\sum_{i=1}^d \cos(x_i)$.
The results are as follows:
|$d$ |Memory (MB)| Iteration rate (ite/s)|
|--|------|-------|
|2|542|992|
|5|550|1000|
|10|550|1000|
|20|790|982|
|100|8736|995|
The memory increases because the input scales with $d$. But, because our operator is only multiplication, the iteration rate is almost constant as expected.
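This constant iteration rate is easy to see from the operator itself: in Fourier space the $d$-dimensional Laplacian multiplies each coefficient by $-\|k\|^2$, a single elementwise product whatever the dimension $d$ or the derivative order. A minimal sketch (ours, with illustrative shapes):

```python
import numpy as np

def laplacian_hat(u_hat_coeffs, ks):
    """Apply the Laplacian in spectral space.

    u_hat_coeffs: (M,) Fourier coefficients of u,
    ks: (M, d) integer wavenumber multi-indices.
    Each coefficient is simply scaled by -||k||^2, so the cost is one
    elementwise product independent of d and of the derivative order.
    """
    return -np.sum(ks ** 2, axis=1) * u_hat_coeffs
```

Compare this with nested automatic differentiation, whose cost grows with both $d$ and the derivative order.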
# Theoretical Claims
We agree that we are considering an ideal setting in which the network can express the truncated Fourier series accurately. And we admit that, in practice, the error we obtain is worse than the theoretical result. We will clarify in the paper that the theoretical exponential convergence rate is an asymptotic result assuming sufficient capacity.
Additionally, we do not treat the approximation error as exactly zero; $\| u^{\theta^*}-u^*\| \ll \|u^{\theta^*_N}- u^{\theta^*}\|$ is enough for the error analysis.
# Irregular geometries
Thanks for pointing out the limitation that our experiments were on regular domains. However, we argue that SINNs can be extended to irregular geometries with additional operations. There are several approaches to deal with this limitation, for example:
1) The moving mesh method can construct a coordinate transformation to map an irregular domain to a regular domain (or map subdomains separately, for example, doi: 10.1016/j.jcp.2020.109835); it has already been used in PINNs by PhyGeoNet (Gao et al., 2021; see the impressive results in their Figs. 5 and 7).
2) One can extend the irregular domain to a regular domain, for example, with the fictitious domain method (doi: 10.1137/20M1345153).
3) There is an even simpler and more straightforward approach if the irregular domain can be embedded in a regular one: add the irregular geometry as a constraint via the inverse Fourier transform. Mathematically speaking, for PINNs we minimize the boundary loss $\mathcal{L} [u_{NN}] (x_b)$, where $u_{NN}$ is the network output; for SINNs we minimize the boundary loss $\mathcal{L} [\mathcal{F}^{-1} [\hat{u} _{NN}]] (x_b)$, where $\hat{u} _{NN}$ is the network output.
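The third approach can be sketched in a few lines (our own illustration; `eval_series` and the variable names are ours): the network outputs the coefficients $\hat{u}_{NN}$, and the boundary loss is obtained by evaluating the truncated inverse Fourier series directly at the boundary points $x_b$:

```python
import numpy as np

def eval_series(coeffs, ks, xb):
    """Evaluate u(x) = sum_k c_k exp(i k . x) at arbitrary points.

    coeffs: (M,) complex Fourier coefficients (the network output),
    ks: (M, d) integer multi-indices, xb: (P, d) boundary points.
    """
    phase = np.exp(1j * (xb @ ks.T))    # (P, M) basis values
    return phase @ coeffs               # (P,) field values u(x_b)

def boundary_loss(coeffs, ks, xb, g):
    """Mean squared mismatch with the boundary data g(x_b)."""
    return np.mean(np.abs(eval_series(coeffs, ks, xb) - g) ** 2)
```

Since the boundary points need not lie on a regular grid, this handles an arbitrary boundary curve such as the circle in the plate-with-a-hole example.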
Here we consider a 2D Helmholtz equation on a plate with a hole using the third approach:
$$
\Delta u+ \lambda^2u=0, \boldsymbol{x}\in \Omega = [0,2\pi]^2\setminus B_\pi(\boldsymbol{x}),
$$
with Dirichlet boundary condition:
$$
u(\boldsymbol{x})=g(\boldsymbol{x}), \boldsymbol{x} \in \partial \Omega,
$$
where $B_\pi(\boldsymbol{x})$ is the ball centered at the origin $(0,0)$ with radius $\pi$, and $g$ is derived from the solution $u=(\cos(\lambda x)+\sin(\lambda x))(\cos(\lambda y)+\sin(\lambda y))$.
Here are the results:
|$\lambda$| PINN | SINN|
|---|----------------|----------------|
|1|$3.59\times10^{-4} \pm 3.11\times10^{-4}$ | $5.31\times10^{-7} \pm 2.11\times10^{-7}$ |
|2|$2.20\times10^{-2} \pm 6.09\times10^{-3}$ | $2.15\times10^{-5} \pm 2.87\times10^{-5}$ |
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my concerns, and so will raise my score to a 4. | Summary: This paper introduces Spectral-Informed Neural Networks (SINNs) as an alternative to standard Physics-Informed Neural Networks (PINNs) for solving PDEs. Instead of computing spatial derivatives through automatic differentiation, SINNs leverage spectral methods, replacing differentiation with simple multiplications in the frequency domain. The authors claim that this approach significantly reduces memory consumption and computational cost while improving accuracy due to spectral methods' exponential convergence properties. The paper presents experimental results demonstrating efficiency gains over PINNs, particularly for higher-order PDEs.
Claims And Evidence: While the proposed method presents an interesting alternative to standard PINNs, some of its key claims lack sufficient evidence:
- One of the well-known advantages of PINNs is their ability to handle complex geometries flexibly. However, the spectral method used in this paper is inherently tied to structured grids and periodic domains, making it unsuitable for irregular geometries. The paper does not provide any solution or discussion on overcoming this limitation.
- The spectral approach reduces memory consumption for differentiation but does not address the curse of dimensionality. For high-dimensional PDEs, the cost of maintaining and computing spectral coefficients could be prohibitive. The paper does not discuss how the method scales beyond low-dimensional problems.
Methods And Evaluation Criteria: The paper presents a reasonable experimental setup, but there are significant gaps in the evaluation:
- The paper evaluates common PDEs but does not test its robustness on problems known to be challenging for spectral methods, such as high-frequency Helmholtz equations or highly nonlinear PDEs. These cases would provide a better understanding of the method’s limitations.
- The paper does not clearly compute the costs associated with FFT, nor does it discuss how the complex number operations arising from first-order differentiation are handled efficiently.
Theoretical Claims: The convergence analysis does not take into account the impact of sampling errors, which could significantly affect the claimed spectral accuracy. In addition, the paper does not provide a theoretical discussion on the significant discrepancy between the experimental results and spectral convergence.
Experimental Designs Or Analyses: The method is evaluated on problems where spectral methods are expected to perform well. There are no experiments demonstrating performance on hyperbolic PDEs, non-smooth solutions, or high-dimensional problems where spectral methods typically struggle. Also, there is no discussion on how computational cost scales with problem dimensionality. Given that spectral methods can become expensive in high dimensions, a cost-benefit analysis comparing against other operator learning methods would be beneficial.
Supplementary Material: Yes, all of them.
Relation To Broader Scientific Literature: The methodology of solving PDEs using machine learning aligns with the broader field of Scientific Machine Learning (SciML), which has shown significant potential in advancing computational efficiency and flexibility across various applications.
Essential References Not Discussed: The paper does not cite relevant works on neural networks with spectral methods, including:
- Fanaskov et al., 2023 – Spectral Neural Operators (Doklady Mathematics, 2023)
- Choi et al., 2024 – Spectral Operator Learning for Parametric PDEs Without Data Reliance (CMAME, 2024)
Other Strengths And Weaknesses: **Strengths:**
- The idea of leveraging spectral methods in neural networks is well-motivated and could be useful for problems where spectral methods naturally excel.
- The proposed approach effectively reduces memory consumption for automatic differentiation, making it a viable alternative for solving PDEs with high-order derivatives.
**Weaknesses:**
- The method does not extend well to irregular geometries, significantly limiting its practical applicability.
- Scalability is unclear, especially for high-dimensional problems.
- The method is likely ineffective for hyperbolic PDEs or non-smooth solutions, but no experiments are provided to evaluate these cases.
- The theoretical claims lack rigor, particularly regarding spectral convergence and sampling error analysis.
Other Comments Or Suggestions: N/A
Questions For Authors: - The proposed method does not support irregular geometries, which is a major advantage of PINNs. How do you plan to extend it beyond structured domains?
- The computational cost for high-dimensional PDEs could be expensive. Have you tested how well SINNs scale beyond low-dimensional problems?
- Given the reliance on spectral methods, how does the approach handle hyperbolic equations and non-smooth solutions? Have you conducted any experiments to evaluate performance on such cases?
- The convergence analysis does not consider sampling error. How do you justify the claim of spectral convergence without accounting for this factor?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the positive assessment and thoughtful questions.
# Challenging Problems
Firstly, we consider a well-defined 1D Helmholtz equation as you suggested:
$$
u_{xx} + \lambda^2u=0, x\in [0,2\pi],
$$
with the boundary condition
$$
u(0)=u(2\pi)=1, \quad u_x(0)=u_x(2\pi)=\lambda.
$$
The solution is $u=\cos(\lambda x)+\sin( \lambda x)$. The results are as follows:
| $\lambda$ | PINNs | SINNs |
| --- | --- | --- |
| 2 | $5.37\times10^{-4} \pm 5.30\times10^{-4}$ | $4.72\times10^{-6} \pm 3.39\times10^{-6}$ |
| 4 | $9.02\times10^{-4} \pm 4.54\times10^{-4}$ | $1.90\times10^{-5} \pm 3.71\times10^{-6}$ |
| 8 | $1.75\times10^{-3} \pm 6.41\times10^{-4}$ | $1.45\times10^{-4} \pm 1.87\times10^{-4}$ |
| 16 | $4.80\times10^{-1} \pm 3.38\times10^{-1}$ | $8.63\times10^{-6} \pm 6.24\times10^{-3}$ |
| 32 | $9.94\times10^{-1} \pm 7.57\times10^{-3}$ | $1.47\times10^{-3} \pm 1.46\times10^{-3}$ |
For a 2D Helmholtz equation:
$$
\Delta u+ 8u=0, \boldsymbol{x}\in \Omega = [0,2\pi]^2,
$$
with the condition:
$$
u(\boldsymbol{x})=g(\boldsymbol{x}), \boldsymbol{x} \in \partial \Omega,
$$
$$
u_x(\boldsymbol{x})=p(\boldsymbol{x}), \boldsymbol{x} \in \lbrace 0 \rbrace \times[0,2\pi],
$$
$$
u_y(\boldsymbol{x})=q(\boldsymbol{x}), \boldsymbol{x} \in [0,2\pi] \times \lbrace 0 \rbrace.
$$
$g,p,q$ are derived from $u=(\cos(2x)+\sin(2x))(\cos(2y)+\sin(2y))$.
The results are:
| PINNs | SINNs |
| --- | --- |
| $3.11\times10^{-3} \pm 1.97\times10^{-3}$ | $2.33\times10^{-4} \pm 7.31\times10^{-5}$ |
Furthermore, we don't think this problem is a challenge for SINNs, because solving such boundary-value problems is like optimizing:
$$
\min_{x} \|Ax-b\|^2.
$$
Thus, changing $\lambda$ is merely changing the location of the non-zero element in $x$ in SINNs.
# Irregular geometries
Due to space limitations, please see our response to reviewer PNMU.
# High-dimensional problems
Thanks for pointing out this limitation. There is a difference between using SINNs to solve high-dimensional and low-dimensional PDEs: PINNs can handle high-dimensional problems because their error depends on the number of sampled points $N$ and is independent of the dimensionality $d$. As SINNs sample $N$ frequencies, if their error is also independent of the dimensionality $d$, SINNs can address the curse of dimensionality (CoD). Fortunately, by utilizing the optimized hyperbolic cross, Shen et al. (2011) proved that the error is indeed independent of the dimensionality $d$ (the key result is Corollary 8.3). Some examples can be found in Shen & Yu (2010). Herein, we will apply high-dimensional Fourier operators via a sparse matrix to avoid the CoD.
And we show the scaling by an ideal example:
$$
\Delta u =f, \boldsymbol{x} \in [0,2\pi]^d
$$
with $u=\sum_{i=1}^d \cos(x_i)$ and $f=-\sum_{i=1}^d \cos(x_i)$. Here are the results (memory and rate are for SINN):
|$d$|PINN|SINN|Memory (MB)| Iteration rate (ite/s)|
|--|------|-----|----|----|
|2|$7.72\times 10^{-4}$|$5.34\times 10^{-4}$|542|992|
|5|$5.09\times 10^{-3}$|$2.75\times 10^{-3}$|550|1000|
|10|$1.24\times 10^{-1}$|$1.18\times 10^{-1}$|550|1000|
|20|$7.25\times 10^{-1}$|$1.50\times 10^{-1}$|790|982|
|100|$1.04\times 10^{0}$|$1.27\times 10^{-1}$|8736|995|
# Theoretical Claims
The sampling error exists in PINNs because the Monte Carlo method is used to estimate the integration. For example, the residual term:
$$
\min_{\theta} \int_{\Omega}|\mathcal{L}_r(x;\theta)|^2 d x \approx \min _{\theta}\mathcal{L}= \min _{\theta} \sum _{i=0}^M \left|\mathcal{L}_r\left(x_i;\theta\right)\right|^2
$$
As the integration domain contains infinitely many points, using finitely many points to estimate it always incurs an error, i.e., the sampling error of Monte Carlo methods. However, SINNs don't need Monte Carlo methods because we don't have integrations and our input domain is finite: the frequencies. Herein, our loss function is directly:
$$
\min_{\theta} \tilde{\mathcal{L}}= \min _{\theta} \sum _{i=0}^M|\tilde{\mathcal{L}}_r(k_i;\theta)|^2
$$
Under the assumption that the network is powerful enough (i.e., regardless of the sampling method selected, the network eventually learns the whole dataset), $\tilde{\mathcal{L}}=0$ means the output of SINNs exactly equals the coefficients of the frequencies. But $\mathcal{L}=0$ only means the output of PINNs is accurate at the points $\lbrace x_i \rbrace_{i=1}^M$ and still carries the sampling error from Monte Carlo methods.
In conclusion, because we truncate the Fourier series to a finite number of terms, SINNs have the spectral error but do not have the sampling error.
# Essential References
Thanks for the suggested relevant papers. However, the frameworks of PINNs and Neural Operators are different: PINNs learn a map from a definition domain to a function space, while NOs learn a map between function spaces. Hence, we did not review Neural Operators in our manuscript. But, indeed, we should add a brief review of neural networks with spectral methods in other fields.
---
Rebuttal Comment 1.1:
Comment: The paper's experiments primarily focus on PDEs that are easily solvable using spectral methods and omit more challenging cases, such as hyperbolic PDEs. Furthermore, the theoretical claims provided address only the sampling error, whereas my original concern pertained specifically to spectral convergence. Additionally, the selected baseline—a plain PINN method from seven or eight years ago—is somewhat outdated, and the paper lacks comparisons with more recent methods specifically developed to address high-frequency or spectral bias issues (e.g., PINNs enhanced by positional encoding). Given these limitations, I will maintain my current score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your further discussion of our paper.
# Omit more challenging cases
We apologize for having wrongly thought that solving the Helmholtz equation that you mentioned could address your concern.
Here we choose the classical hyperbolic PDEs: wave equations. We consider the following wave equation from PINNacle (arXiv:2306.08827):
$$
u_{tt}-4u_{xx}=0, \quad x\in [0,2\pi], t \in [0,1]
$$
$$
u(x,0)=\sin(x)+ \sin(4 x) / 2, \quad x\in [0,2\pi].
$$
$$
u_t(x,0)=0, \quad x\in [0,2\pi].
$$
with the solution $u=\sin(x) \cos(2t) + \sin(4 x) \cos(8t) / 2$
The results are as follows:
|PINN|SINN|
|--|--|
|$1.49\times 10^{-1}$|$3.15\times 10^{-5}$|
You can find other methods in Table 3 (Wave1d-C) of PINNacle; the best one achieves $9.79\times 10^{-2}$ using NTK loss reweighting.
Hope the results can address your concern. If not, please give us a concrete equation.
# The convergence analysis does not take into account the impact of sampling errors, which could significantly affect the claimed spectral accuracy.
As SINNs don't have sampling errors, sampling will not influence the spectral accuracy theoretically. And we admit that it is hard to discuss the discrepancy between the experimental results and spectral convergence.
# More recent methods specifically developed to address high-frequency
Our baselines include not only vanilla PINNs but also VSPINN (Hoon Song et al., 2024), which was accepted at NeurIPS 2024.
As for positional encoding, we already report experiments with Fourier Embedding (FE), a representative such method for PINNs. As we stated, these methods are also valid for our SINNs, which is why we originally only ran experiments on SINN combined with FE. To address your concern, we additionally compared SINN+FE and PINN+FE with different numbers of embedding channels $N_{FE}$ ($N_{FE}=0$ means without FE) under the same setting as in Section 4.5:
|$N_{FE}$|0|2|4|8|
|--|--|--|--|--|
|PINN+FE|$3.73\times 10^{-3}$|$2.11\times 10^{-3}$|$1.79\times 10^{-3}$|$2.48\times 10^{-3}$|
|SINN+FE|$3.17\times 10^{-3}$|$3.31\times 10^{-4}$|$1.74\times 10^{-3}$|$3.34\times 10^{-3}$|
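For reference, a minimal sketch of a Fourier embedding of the kind referenced above, assuming the common construction with a $2^k$ frequency schedule (the exact embedding used in the paper, and how `n_freq` relates to $N_{FE}$, may differ):

```python
import math

def fourier_embed(x, n_freq):
    """Map a scalar input to [x, sin(2^k pi x), cos(2^k pi x)] for k = 0..n_freq-1.

    Hypothetical illustration of a Fourier/positional embedding.
    """
    feats = [x]
    for k in range(n_freq):
        w = (2 ** k) * math.pi
        feats.append(math.sin(w * x))
        feats.append(math.cos(w * x))
    return feats
```

The embedded features are then fed to the network in place of the raw coordinate, which is the standard way such embeddings mitigate spectral bias.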
We hope the above supplementary results can have a positive impact on your evaluation.
Do NOT Think That Much for 2+3=? On the Overthinking of Long Reasoning Models | Accept (poster) | Summary: The authors observe that long reasoning (o1-like) models often allocate excessive computational resources to simple problems, leading to inefficiencies. To address this, they introduce novel efficiency metrics from both outcome and process perspectives and propose self-training strategies to streamline reasoning without compromising accuracy. Experimental results demonstrate that their approach reduces computational overhead while preserving model performance across various test sets.
## update after rebuttal
I will keep my positive rating.
Claims And Evidence: The primary claim is that o1-like LLMs exhibit overthinking, especially on simple problems, resulting in unnecessary computational resource usage. The authors support this by analyzing model responses to simple arithmetic questions, where o1-like models consumed significantly more tokens than conventional models to reach the same answer. They also introduce efficiency metrics to quantify this overthinking and demonstrate that their self-training strategies can mitigate it without sacrificing accuracy.
Methods And Evaluation Criteria: The authors employ a self-training paradigm to reduce overthinking, streamlining reasoning processes without compromising accuracy. They evaluate their approach using novel efficiency metrics from both outcome and process perspectives, assessing the rational use of computational resources by o1-like models. Experimental results show that their approach successfully reduces computational overhead while preserving model performance across a range of test sets with varying difficulty levels.
Theoretical Claims: The paper does not focus on theoretical claims but rather on empirical observations and practical solutions to the overthinking problem in o1-like LLMs.
Experimental Designs Or Analyses: Yes. The experimental design involves evaluating o1-like models on various mathematical benchmarks to assess overthinking patterns. The authors introduce efficiency metrics to quantify overthinking and apply self-training strategies to mitigate it. They demonstrate that their approach reduces computational overhead while preserving model performance across different test sets. In general, the experiments are solid and convincing.
Supplementary Material: Yes. All.
Relation To Broader Scientific Literature: This work contributes to the broader scientific literature by addressing the inefficiencies in o1-like LLMs due to overthinking. By introducing efficiency metrics and self-training strategies, the authors provide insights into optimizing computational resource allocation during inference, aligning with ongoing research on improving the efficiency and effectiveness of LLMs.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: A potential weakness is the generalization of the proposed phenomenon and algorithm. More models and general evaluations can be helpful.
Other Comments Or Suggestions: NA
Questions For Authors: Do you have the results for the proposed efficiency enhancing methods with more models?
Have you checked the evaluation on other benchmarks beyond mathematical/scientific reasoning?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your questions! We will answer the questions mentioned in your review.
Q: Do you have the results for the proposed efficiency enhancing methods with more models?
A: Yes, we have conducted experiments on the R1-Distilled-Qwen-32B model, and our efficiency-enhancing methods remain effective.
| Model | Dataset | Acc | #Solutions | Length | Outcome Efficiency | Process Efficiency |
| --------------------- | ------- | ---- | ---------- | ------ | ------------------ | ------------------ |
| R1-Distilled-Qwen-32B | ASDIV | 96.2 | 1.6 | 308.6 | 84.4 | 95.0 |
| +SimPO+FCS+Reflection | ASDIV | 96.4 | 1.2 | 209.1 | 87.0 | 97.4 |
| | | | | | | |
| R1-Distilled-Qwen-32B | GSM8k | 94.1 | 1.3 | 328.3 | 80.9 | 96.4 |
| +SimPO+FCS+Reflection | GSM8k | 93.3 | 1.1 | 254.4 | 82.5 | 98.1 |
| | | | | | | |
| R1-Distilled-Qwen-32B | MATH500 | 90.2 | 4.7 | 3210.0 | 54.2 | 71.0 |
| +SimPO+FCS+Reflection | MATH500 | 91.4 | 3.4 | 2735.2 | 63.9 | 77.2 |
---
Q: Have you checked the evaluation on other benchmarks beyond mathematical/scientific reasoning?
A: We have not performed evaluations on benchmarks beyond mathematical and scientific reasoning. There are two primary reasons behind this choice: (1) Our paper specifically focuses on mathematical and scientific reasoning tasks, as they represent cutting-edge domains that strongly reflect aspects of human intelligence. (2) Benchmarks in general domains often lack clearly defined and verifiable answers, which makes objective evaluation challenging.
Investigating the overthinking phenomenon in more general tasks by leveraging an LLM-as-a-judge evaluation approach is an interesting direction that we intend to explore in future work. | Summary: This paper addresses the "overthinking" issue observed in o1-like reasoning models, which expend excessive computational resources during inference, especially for simple problems. The authors first analyze this phenomenon by prompting LLMs with trivial questions (e.g., "What is the answer of 2 plus 3?"), observing significant redundancy in generated solutions. They identify that models often produce numerous redundant or minimally diverse solutions, which do not substantially contribute to improving accuracy. To quantify this, the authors propose novel efficiency metrics from both outcome (accuracy contribution) and process (solution diversity) perspectives. Leveraging these metrics, the authors introduce a self-distillation approach that samples diverse solutions via temperature sampling and recombines selected efficient solutions to streamline inference. Experiments across benchmarks (e.g., GSM8K, MATH500, GPQA, AIME) demonstrate the proposed approach significantly reduces token usage while maintaining accuracy, highlighting its effectiveness in enhancing CoT efficiency.
Claims And Evidence: The claims presented in the paper are generally supported by clear and convincing evidence. Specifically, the paper makes three primary claims:
(1) o1-like large language models exhibit a significant "overthinking" issue, especially on simple problems.
(2) novel efficiency metrics (outcome and process metrics) effectively quantify this inefficiency.
(3) self-distillation strategies leveraging these metrics substantially mitigate overthinking without compromising performance.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria effectively address the stated problem. The authors introduce intuitive outcome and process efficiency metrics, clearly capturing the extent and nature of the "overthinking" phenomenon in o1-like models. Their chosen benchmark datasets (ASDIV, GSM8K, MATH500, GPQA, AIME) are appropriate and well-aligned with the goal of assessing reasoning efficiency across a range of difficulty levels. Additionally, the self-training approaches (SimPO combined with streamlined solutions like FCS+Reflection) logically target inefficiencies identified by the metrics, making them well-suited for the study's objectives. Overall, both the methods and evaluation criteria are well-motivated and sensible for the problem at hand.
Theoretical Claims: This is not a theory paper. It conducts emperical study.
Experimental Designs Or Analyses: The experimental designs and analyses presented in the paper are generally sound and valid. The authors provide thorough and rigorous experiments across multiple well-established benchmark datasets of varying difficulty (ASDIV, GSM8K, MATH500, GPQA, and AIME), clearly demonstrating the effectiveness of their proposed methods in mitigating overthinking.
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: The paper clearly situates its contributions within the broader literature. While prior research has extensively studied methods like self-consistency, Tree-of-Thought, and early-stopping to optimize test-time computation and improve model reasoning, this paper uniquely identifies the "overthinking" issue specific to o1-like LLMs. The authors bridge a relevant gap by proposing targeted efficiency metrics and self-training strategies (SimPO, FCS+Reflection) that explicitly address inefficiencies in extended chains-of-thought.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths:
1.The paper addresses a novel and practically significant problem—the "overthinking" inefficiency in o1-like LLMs—providing valuable insights into the intelligent allocation of inference-time resources.
2.The proposed metrics (outcome and process efficiency) are original, clearly defined, and intuitively appealing.
3.Experiments are comprehensive, covering multiple datasets and difficulty levels, enhancing confidence in the robustness of the findings.
Weaknesses:
1.While the metrics and simplification methods are effective, the generalizability to domains beyond mathematical reasoning tasks remains unclear.
Other Comments Or Suggestions: None.
Questions For Authors: The paper does not discuss the "self-reflection" or "aha moment" behavior recently appeared in "Guo, Daya, et al. "Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning." (2025).", which describes LLMs revising earlier reasoning after reflecting or backtracking.
Question 1:
Could the authors clarify how they categorize this type of reflective or "aha moment" behavior in their framework?
Question2:
Should a solution that emerges from a reflective or backtracking process be retained during the self-distillation process, or would it be considered redundant and thus removed?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful questions. We address each question raised in your review individually below.
Q (Weakness): While the metrics and simplification methods are effective, the generalizability to domains beyond mathematical reasoning tasks remains unclear.
A: This is a valuable point. In this paper, we primarily focus on the mathematical reasoning domain because verifying the correctness of answers to mathematical questions is straightforward and objective. Nevertheless, we have noticed similar "overthinking" phenomena in general reasoning tasks, such as repeatedly asking the same questions or recurring assumptions. A deeper and more comprehensive analysis of such phenomena in more general tasks is warranted, which we plan to explore in future work.
---
Q: Could the authors clarify how they categorize this type of reflective or "aha moment" behavior in their framework?
A: The "aha moment" described in R1-Zero refers to a training-time behavior, in which a model autonomously acquires reasoning strategies via reinforcement learning. In contrast, our framework specifically targets inference-time behaviors in models executing long reasoning processes. In our work, a "solution" explicitly denotes portions of the model-generated outputs containing a final answer. Therefore, the concept of the "aha moment" aligns more closely with reflective behaviors (e.g., reflection, self-verification, or backtracking) that bridge two solutions. When the model continues reasoning beyond obtaining an answer within a solution — whether through reflection, self-verification, or backtracking — we characterize this behavior as "overthinking".
---
Q: Should a solution that emerges from a reflective or backtracking process be retained during the self-distillation process, or would it be considered redundant and thus removed?
A: Whether a reflective or backtracking-derived solution should be retained during self-distillation depends on the specific training data construction policy described in Section 3.2. For example, our "First Correct Solution" (FCS) policy retains only the first solution generated, whereas the "FCS+Reflection" policy retains one additional reflective solution alongside the initial FCS solution.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification :D ! I enjoy reading your paper!
Q1:
I agree.
Q2:
Not fully agree.
a) I do not think the "aha moment" only appears at training time.
b) The "aha moment" happens when a model backtracks from an incorrect answer to a correct one.
Therefore, should the "aha moment" be treated as "overthinking"?
Q3:
Your answer is self-consistent. (no problem!)
---
Reply to Comment 1.1.1:
Comment: Thank you for your constructive feedback. Below, we address Q2 with additional detail and clarification.
To align with standard terminology, we will adopt the term "reasoning strategies" (e.g., reflection, self-verification, backtracking) in place of the "aha moment" in this discussion. These reasoning strategies do exist in both training-time and inference-time.
We define "overthinking" as a phenomenon where a long reasoning model repeatedly evaluates and revisits solutions within the CoT reasoning process, rather than converging promptly to a final answer. Crucially, this differs from scenarios where the model backtracks from an incorrect answer to a correct one—a valid corrective process. Overthinking specifically arises when the model redundantly applies strategies like reflection, self-verification, or backtracking after already arriving at the same answer in successive iterations.
We appreciate your engagement with this topic. Your feedback has deepened our conceptual framing of overthinking and will help refine its articulation in subsequent work. | Summary: This paper studies the over-thinking problem of o1-like models. The overthinking refers to the scenario where unnecessary compute resource is used for simple question.
It first compares the average number of tokens needed by traditional LLMs and o1-like reasoning models. It finds that o1-like models spend many more tokens on simple problems with minimal benefit and lack diversity in reasoning strategies. Based on this observation, it proposes several metrics to evaluate the efficiency of LLM reasoning models.
It then adopts a self-training paradigm to mitigate this issue by removing redundant solutions. Results on GSM8K, MATH500, GPQA, and AIME demonstrate its effectiveness and robustness.
Claims And Evidence: This paper claims that o1-like reasoning models have the overthinking issue. It investigates this issue based on the proposed Outcome Efficiency Metric, distinctness ratio, and Process Efficiency Metric. Their claims are supported by the comparisons between traditional LLM and o1-models, which shows that their longer responses are less efficient in improving accuracy and solution diversity.
Methods And Evaluation Criteria: To investigate overthinking, the paper proposes new evaluation metrics (the Outcome Efficiency Metric, the distinctness ratio, and the Process Efficiency Metric) to evaluate the efficiency and diversity of solutions. One concern about the diversity metric is that it is independent of correctness: high diversity with an incorrect final response may not be very meaningful.
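To make the correctness concern concrete, here is a purely hypothetical sketch of what an outcome-efficiency-style metric could look like, measuring the share of generated tokens spent up to the first correct solution (the paper's exact formula is not reproduced in this review; all names and the token-based definition are illustrative):

```python
def outcome_efficiency(token_counts, first_correct_idx):
    """Hypothetical: fraction of generated tokens spent up to and including
    the first correct solution; 0.0 if no solution is correct."""
    if first_correct_idx is None:
        return 0.0
    total = sum(token_counts)
    useful = sum(token_counts[: first_correct_idx + 1])
    return useful / total
```

A process-efficiency analogue could count distinct reasoning strategies across solutions, which, as noted above, would be independent of whether any of them is correct.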
To mitigate overthinking, it finetunes the model towards short and correct solutions. It also finetunes the model on simplified responses.
Theoretical Claims: There is no theoretical claim in the paper.
Experimental Designs Or Analyses: The experiments in Section 3 lack the necessary details
• What are the sizes of different kinds of training examples in Table 2 and Table 3?
• What are the training details (such as hyperparameters) for models in Section 3?
The descriptions in Sections 3.1 and 3.2 are confusing. Which part is the method proposed by this paper?
• Section 3.1 lists several self-training methods such as SFT and DPO, but these methods are not particularly designed for Length Preference Optimization. What does Length Preference Optimization refer to here? Is there a specific dataset for length optimization?
• The purpose of Streamlining Responses in Section 3.2 is unclear. Are they used at inference time to select the best response, or are they strategies for constructing self-training datasets?
Why are the experimental settings different across datasets? For example, MATH500 has 7 settings, while the other datasets have only one. Similarly, Qwen2.5-Math-72B-Instruct is one of the baselines for GPQA and AIME24, but its performance on ASDIV, GSM8k, and MATH500 is not reported.
What does "ours" mean in SimPO FCS+Reflection (Ours)? It looks like this is the final method proposed by the paper, and the other settings are just baselines or ablation studies. If that is the case, it should be explicitly stated.
Supplementary Material: No supplementary material is provided
Relation To Broader Scientific Literature: The overthinking issue is interesting and important for analyzing the performance of current o1-like models. It provides insights on how many of the tokens in the extremely long chain of thought contribute to the final accuracy.
Essential References Not Discussed: Necessary references have been discussed in their related work
Other Strengths And Weaknesses: Strength: The detailed and comprehensive investigations on the overthinking scenario of o1-like models provide more insight into the performance of o1-like models.
Weakness: The methods and experiments to mitigate the overthinking issue are not convincing.
Other Comments Or Suggestions: Section 3 should be reorganized to make it clearer. Necessary experimental details should be added for reproduction.
Questions For Authors: In the Outcome Efficiency Metric and Process Efficiency Metric, why does the first consider correctness and the second one independent of correctness?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your questions. Below we address each point mentioned in your review.
Since most of your questions pertain to Section 3, we provide additional clarifications on our methods as follows:
1. Our proposed methods focus primarily on constructing suitable training data. Specifically, Section 3.2 describes how we generate paired training data for preference optimization algorithms.
2. Section 3.1 describes the preference optimization algorithms employed in our experiments. As you pointed out, these algorithms were not originally designed for length optimization. Nevertheless, we explore their potential effectiveness in addressing the length optimization task.
3. We evaluated the performance of various preference optimization algorithms across different training datasets. Ultimately, we chose SimPO trained with the FCS+Reflection dataset as our final method, marked as "ours" in Table 4.
4. The total number of training examples is approximately 11,000. Both SFT and preference optimization methods were trained using a learning rate of 1e-7, with a cosine learning rate scheduler, and a batch size of 16. We trained the SFT models for four epochs and the preference optimization models for one epoch. We will include these details in the appendix of the next version.
5. For simplicity and clarity, we first compared multiple training algorithms using the "Shortest Response" dataset, finding SimPO performed the best. Subsequently, we tested SimPO on a broader range of training datasets and selected "FCS+Reflection" as our best performing dataset configuration. The complete experimental results will be provided in the appendix in our revised manuscript.
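For reference, the training hyperparameters listed above can be summarized in a small config sketch (illustrative, framework-agnostic field names):

```python
# Hyperparameters as stated in the rebuttal; field names are illustrative.
train_config = {
    "num_examples": 11_000,      # approximate training set size
    "learning_rate": 1e-7,
    "lr_scheduler": "cosine",
    "batch_size": 16,
    "sft_epochs": 4,
    "preference_opt_epochs": 1,  # e.g., SimPO
}
```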
Thank you again for your insightful questions and valuable suggestions. We will improve the clarity and presentation of Section 3 accordingly. | Summary: This paper investigates the phenomenon of "overthinking" in recent "o1-like" large language models (LLMs), where models expend excessive computational resources (generating many tokens and solution steps) on simple problems with minimal benefit to accuracy or reasoning diversity. The authors provide the first comprehensive analysis of this issue, introducing novel outcome and process efficiency metrics to quantify the inefficiency.
# update after rebuttal
Since most of my concerns remain and significant revisions are still needed, I strongly suggest another round of revision. Thanks.
Claims And Evidence: Fine
Methods And Evaluation Criteria: Fine
Theoretical Claims: Fine
Experimental Designs Or Analyses: Fine
Supplementary Material: Fine
Relation To Broader Scientific Literature: Fine
Essential References Not Discussed: Fine
Other Strengths And Weaknesses: Hope the feedback is valuable for the authors and helps improve the quality in the revision or camera ready. I am happy to update my score after rebuttal if necessary. Thanks!
Pros:
1. Addresses a pertinent and relatively unexplored inefficiency ("overthinking") emerging in the latest generation of powerful reasoning LLMs.
2. Effectively demonstrates and quantifies the overthinking issue through clear examples (Fig 1, Fig 2) and quantitative analysis (solution distributions, token usage vs. difficulty).
3. Proposes novel "Outcome Efficiency" (ξo) and "Process Efficiency" (ξp) metrics that provide a more nuanced evaluation of LLM reasoning beyond simple accuracy, considering resource usage and solution diversity.
4. Develops and evaluates concrete self-training based methods (preference optimization + response simplification) that demonstrably improve efficiency without significant accuracy loss.
Cons:
1. It is suggested that the authors provide the code for reproduction.
2. The central concept relies on "o1-like" models, but this term isn't precisely defined beyond citing specific models (QwQ, DeepSeek-R1, Kimi). The paper could benefit from explicitly stating the defining characteristics of this model class (e.g., explicit multi-solution generation, specific internal reflection mechanisms cited by the model providers) that lead to the observed overthinking, rather than just relying on the "o1" name recognition.
3. It is not very clear how the authors set hyperparameters such as the temperature. It is suggested that the authors provide more details about all the experimental settings. Although temperature is briefly mentioned in "We generated 10 samples for each instance in the training dataset with a temperature of 1.0" (line 312), more explanation of how the temperature was chosen, and of the temperature settings used in the other parts of the study, is desired.
4. The impact of hyperparameters such as the temperature on the "overthinking" issue is not very clear. It is suggested that the authors conduct a more detailed analysis of the sensitivity to these hyperparameters.
5. Table 4 is hard to interpret. "+SimPO FCS+Reflection (Ours) 92.8 1.9 1330.7 80.0% 89.5%" does not differ much from "+SimPO First-Correct Solution 91.0 1.4 1016.0 88.7% 98.1%" and "+SimPO Greedily Diverse Solutions 91.8 1.7 1286.1 84.3% 93.6%". It is unclear whether the proposed method has a significant advantage.
6. The insight is limited and the overall observations are somewhat superficial. Why do some LLMs have the "overthinking" issue? Do all reasoning LLMs have it? What is its fundamental cause? Does the proposed method address that fundamental cause?
7. There is no formal definition of the "overthinking" issue. It is suggested that the authors provide one; otherwise the concept remains unclear.
8. The proposed solution seems incremental; it is mostly based on SimPO and does not appear to address the fundamental cause of the "overthinking" issue.
Other Comments Or Suggestions: Typo:
1. Figure 1 is referred to as "Figure 1(a)" in the text (line 041), but the figure itself doesn't have an "(a)" label.
Questions For Authors: See Weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thank you very much for your insightful questions and suggestions.
Q1: Provide the code for reproduction.
A1: We will make our code publicly available soon.
---
Q2: The term "o1-like models" isn't precisely defined.
A2: Thank you for pointing this out. We acknowledge that our original description of “o1-like” models lacks precision. A clearer term would be “Long Reasoning Models”, which refers to models that generate detailed CoT reasoning by iteratively producing intermediate reasoning steps and sequentially refining solutions until reaching a final conclusion.
---
Q3&Q4:More details about the temperatures.
A3&A4: We set the temperature to 1.0 for generating training data to promote diversity. For benchmark evaluations, we used a temperature of 0 to ensure reproducibility.
We followed your suggestion and conducted a more detailed analysis of the impact of temperature settings. The table below summarizes the results of QwQ-32B-Preview evaluated on MATH Levels 1 and 5. Clearly, the temperature hyperparameter has only a marginal effect on the number of generated solutions, further supporting our assertion that "overthinking" is a behavior inherent to these models.
| Level | temperature | solution_num |
|--------:|--------------:|---------------:|
| 1 | 0 | 3.7 |
| 1 | 0.3 | 3.9 |
| 1 | 0.5 | 3.6 |
| 1 | 0.7 | 3.8 |
| 1 | 1 | 3.6 |
| | | |
| 5 | 0 | 4 |
| 5 | 0.3 | 3.9 |
| 5 | 0.5 | 3.9 |
| 5 | 0.7 | 4 |
| 5 | 1 | 3.9 |
---
Q5: Table 4 is hard to interpret. It is not sure whether the proposed method has a big advantage.
A5: Both approaches demonstrate advantages in simplifying long CoT processes and mitigating overthinking. Nonetheless, considering benchmark performances and their suitability for challenging problems, we prefer "SimPO+FCS+Reflection" as our final recommended approach.
---
Q6:The insight is limited and the overall observations are superficial.
A6: Our results indicate that "overthinking" mainly happens in Long Reasoning Models, which learn advanced reasoning methods (e.g., reflection and backtracking) through reinforcement learning. When they fail to stop reasoning at the right time, they start overthinking.
We confirmed our findings by evaluating two additional models, DeepSeek-R1 and QwQ-32B, which were released after our original submission, further proving that our results are robust. One important observation was that the formal (full-release) versions of the models provided higher accuracy than their preview versions, but exhibited much stronger overthinking behaviors. These results further demonstrate that the overthinking problem is a fundamental behavior inherent to these models.
|Model|Acc|#Solutions|Length|
|---|---|---|---|
|R1-Preview|93.4|2.8|2168.6|
|R1|96.4|4.3|2704.3|
|QwQ-Preview|93.0|3.2|2407.9|
|QwQ|96.4|7.1|4799.4|
However, more in-depth research is still needed to fully understand the root causes of overthinking. While our current method effectively reduces this phenomenon, it doesn't completely solve the underlying problem. We plan to investigate more fundamental solutions in future work.
Finally, it is worth noting that our primary contribution is presenting the first comprehensive study that clearly explains the overthinking issue. We show that Long Reasoning Models frequently waste computational effort by generating unnecessary or repeated reasoning steps that add very little benefit. This notable contribution has been recognized positively by the other reviewers.
---
Q7: There is no formal definition of the "Overthinking" issue.
A7: Formally speaking, we define "overthinking" as a phenomenon where a long reasoning model repeatedly evaluates and revisits solutions within the CoT reasoning process, rather than converging promptly to a final answer. We will incorporate this formal definition explicitly in the Introduction section of our forthcoming manuscript revision.
---
Q8: The proposed solution seems incremental.
A8: We acknowledge that our proposed method does not completely resolve the fundamental cause of the "overthinking" phenomenon. Our primary contribution instead lies in providing the community’s first systematic analysis and explicit characterization of the overthinking problem, introducing quantitative metrics to assess efficiency specifically in Long Reasoning Models, and proposing viable mitigation strategies. We aim for our findings and benchmarks to provide a valuable framework to enhance the community’s overall understanding of these models' behavior and guide future research toward addressing overthinking more comprehensively. | null | null | null | null | null | null |
Wasserstein Flow Matching: Generative Modeling Over Families of Distributions | Accept (poster) | Summary: The paper shows how Riemannian Flow Matching (RFM) (Chen & Lipman, 2023) is applied to the Wasserstein metric space, the space of distributions endowed with the Wasserstein metric. For RFM to work, a parametric vector field that such that when marginalizing it over the [0,1] time interval, turns a source, Gaussian distribution to a target distribution, an appropriate metric $g$ and a family of conditional flows. The paper shows that, for the Wassserstein metric space, we can use McCann interpolation, its time derivative naturally matches as the desired parametric vector field, and the Wasserstein metric acts as the Riemannian metric. The authors coined the approach as Wasserstein Flow Matching (WFM).
The paper further discusses a special subspace of the Wasserstein space, the subspace of non-degenerate Gaussian distributions, called the Bures-Wasserstein space, where the authors show that the McCann interpolation, its time derivative, and all other key entities can be computed in closed form, exploiting the analytical form of Gaussian distributions. The Bures-Wasserstein space has an application in single-cell genomics, where cellular microenvironments or fine-grained clusters can be captured with just means and covariances.
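To illustrate the closed forms available on the Bures-Wasserstein space, here is a small NumPy sketch of the standard formulas for the simplified case of Gaussians with diagonal covariances (where matrix square roots reduce to element-wise ones); this is an illustration, not the paper's implementation:

```python
import numpy as np

def bw_distance_diag(m0, v0, m1, v1):
    """W2 distance between N(m0, diag(v0)) and N(m1, diag(v1)); v* are variances."""
    mean_term = np.sum((np.asarray(m0) - np.asarray(m1)) ** 2)
    cov_term = np.sum((np.sqrt(v0) - np.sqrt(v1)) ** 2)  # Bures term, diagonal case
    return np.sqrt(mean_term + cov_term)

def bw_geodesic_diag(m0, v0, m1, v1, t):
    """McCann interpolation at time t: means and standard deviations interpolate linearly."""
    m_t = (1 - t) * np.asarray(m0) + t * np.asarray(m1)
    s_t = (1 - t) * np.sqrt(v0) + t * np.sqrt(v1)
    return m_t, s_t ** 2
```

In the general (non-diagonal) case the same formulas hold with matrix square roots, which is what makes ground-truth geodesics and their time derivatives cheap to compute during WFM training on this subspace.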
For the general case of arbitrary distributions, since optimal transport (OT) maps are intractable, the authors propose to rely on entropic OT (Cuturi, 2013), which approximates OT and can be computed efficiently on GPUs via Sinkhorn's algorithm (Sinkhorn, 1964).
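For illustration, a minimal NumPy sketch of the Sinkhorn iteration for entropic OT between two discrete histograms (a toy version, not the authors' GPU implementation; `eps` is the entropic regularization strength):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    """Entropic-OT transport plan between histograms a and b with cost matrix C."""
    K = np.exp(-C / eps)      # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)     # rescale to match column marginals
        u = a / (K @ v)       # rescale to match row marginals
    return u[:, None] * K * v[None, :]
```

Each iteration alternately rescales rows and columns, so the returned plan approximately has marginals `a` and `b`; smaller `eps` approaches exact OT but requires more iterations to converge.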
Experiments were conducted over the family of Gaussians and the family of general distributions using synthetic and real data, where WFM was shown to be competitive against existing baseline approaches.
## Update after rebuttal
My review is already *positive* for the paper. The additional clarifications from the authors are helpful. I've kept my score.
Claims And Evidence: The main claim, the instantiation of RFM on the Wasserstein metric space, is supported by clear and convincing evidence.
There is one small claim (end of Section 1) that "...current approaches cannot scale to high-dimensional distributions and fail when distributions are realized as point-clouds with variable sizes. Conversely WFM succeeds in these challenging settings, enabling generative modeling in new domains like synthesizing tissue microenvironments...". I acknowledge that the authors have a very interesting application. However, I am not totally convinced by the evidence. There is nothing in the Wasserstein geometry or RFM that deals with point-cloud-related computational costs. What I have seen in the paper is that every point cloud under consideration is sampled at 1,000 points. This is too small to represent a realistic point cloud in industry. In addition, WFM in this case has to rely on the approximate approach of entropic OT, where the relationship between point-cloud size and computational cost is not made clear in the paper.
Methods And Evaluation Criteria: The methods and evaluation criteria are reasonable to me. In the experiments over the family of Gaussians, the results make sense.
My concern is with the experiments over the general family of point clouds where, if I understand correctly, all point cloud sample sizes are 1,000 points, which in my view is too low in practice. There is no study of the impact of point cloud size on overall WFM performance in terms of both accuracy and speed. This seems to be a substantial weak point because WFM in this case has to rely on entropic OT due to the intractability of the true OT.
Theoretical Claims: I did check the proposed proofs. They appear valid to me and they are not that hard to derive.
Experimental Designs Or Analyses: N/A
Supplementary Material: I checked appendices A, B, E, F, G.
I do not understand the need for Lemma B.1. The Wasserstein distance function is a metric (Villani, 2009). That means it satisfies the 4 metric axioms: identity of indiscernibles, positivity, symmetry and triangle inequality. A premetric only requires 2 axioms: identity of indiscernibles and positivity. By definition, every metric is a premetric. RFM only requires a premetric. Thus, why do we need Lemma B.1 here?
Relation To Broader Scientific Literature: From a theoretical point of view, the contribution here is interesting to branches dealing with synthesizing 2D and 3D point clouds. It might be good to relate to Dirichlet distributions as well for classification applications.
From a practical point of view, applications that can stem from this work like modelling tissue biology and cellular microenvironment distributions can benefit from the work.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: I have some concerns above but this is a good paper overall because it has interesting applications and its theoretical contribution is sound, although not very technically difficult.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Response to Reviewer 4 (**euNm**)
We thank the reviewer for their insightful feedback and positive assessment of our work. We address each point below.
**"...What I have seen in the paper is that every point cloud in consideration are sampled with 1,000 points. This is too small to represent a realistic point cloud in industry."**
In our revised **Table 3** (see response to reviewer 3 - **ToSj**), we now apply WFM to point-clouds with 2048 particles, which is the standard size used by other generative models for shapes in the literature (see [PVD](https://arxiv.org/abs/2104.03670) and [PSF](https://arxiv.org/abs/2212.01747)). In this regime, WFM performs on par with or better than other methods, while being significantly more computationally efficient. We note that while training is performed on 2048-sized realizations, inference can be done at any scale. For biological applications like cellular microenvironments, niche sizes are typically on the order of tens of cells, so our methodology is more than sufficient for these important use cases.
**"...In addition, WFM in this case has to rely on the approximate approach of entropic OT, in which the relationship between point cloud size and computational cost is not clear in the paper."**
As we discuss in **Appendix A**, the complexity of entropic OT between point-clouds with $n$ particles is $O(n^{2}/\varepsilon)$, where $\varepsilon$ is the entropic regularisation strength. We also discuss the trade-offs between true and entropic OT and how the regularisation parameter $\varepsilon$ provides a straightforward way to balance computational efficiency and OT accuracy.
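For concreteness, the $O(n^2)$ factor comes from the dense cost matrix that Sinkhorn iterates over; a plain (non-log-domain) sketch is below. This is my own illustration, not the authors' code; note that, unlike a Euclidean displacement $X_1 - X_0$, the resulting coupling is well-defined even when the two point clouds have different sizes.

```python
import numpy as np

def sinkhorn_plan(X, Y, eps=0.5, n_iters=2000):
    """Entropic OT coupling between uniform empirical measures on X (n, d)
    and Y (m, d). Cost is squared Euclidean distance; complexity is driven
    by the dense n x m kernel matrix K."""
    n, m = len(X), len(Y)
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = (1.0 / n) / (K @ v)   # match row marginals
        v = (1.0 / m) / (K.T @ u)  # match column marginals
    return u[:, None] * K * v[None, :]

def entropic_map(P, Y):
    """Barycentric projection: average target location of each source point."""
    return (P @ Y) / P.sum(axis=1, keepdims=True)
```

Larger $\varepsilon$ makes the iterations converge faster (the $1/\varepsilon$ factor) at the cost of a blurrier, less accurate plan, which is the accuracy/efficiency trade-off mentioned above.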
**"I do not understand the need for Lemma B.1. The Wasserstein distance function is a metric (Villani, 2009). That means it satisfies the 4 metric axioms: identity of indiscernibles, positivity, symmetry and triangle inequality. A premetric only requires 2 axioms: identity of indiscernibles and positivity. By definition, every metric is a premetric. RFM only requires a premetric."**
The reviewer correctly points out that the classical definition of a premetric requires only positivity and identity of indiscernibles, both of which are satisfied by any metric. However, the RFM framework (Chen & Lipman, 2023) uses a slightly different definition that includes a third necessary condition:
1. Positivity: $d(x,y)\geq 0$ for all $x,y$
2. Identity of indiscernibles: $d(x,y)=0$ iff $x=y$
3. Non-degeneracy: $\nabla_x d(x, y) \neq 0$ iff $x\neq y$
Indeed, if a (squared-)metric is differentiable, then it is easy to sketch out that condition (3) holds. We found it pedagogically useful to explicitly write out this third condition, which introduced the logarithmic and exponential maps. | Summary: The paper addresses the problem of learning generative models of high-dimension distribution, i.e., where each sample from the model is a distribution itself. The authors propose Wasserstein flow matching (WFM), which builds on top of recent advances in the Riemannian flow matching (RFM) framework (Chen & Lipman, 2023) and extend it to the Wasserstein geometry (Ambrosio et al. 2008). The authors show that WFM can generate high-dimensional distribution either represented analytically as Gaussians or empirically as point clouds and derive valid flow-matching objectives for these cases. The authors then apply the method to single-cell and spatial transcriptomics datasets. Additionally, the method is applied to 3d shape generation on point cloud data (ShapeNet & ModelNet), on which it performs similarly, although not better than other methods.
Claims And Evidence: The authors claim to introduce WFM, extending the Flow-matching framework to the space of probability distributions. To my knowledge, this is a novel contribution. The authors provide a sound theoretical foundation for the method and demonstrate its effectiveness on various tasks. Empirical results suggest that it outperforms other approaches (Table 2) in its intended application and is on par with current state-of-the-art models for 3d shape generation (Table 3) while allowing flexible choice of number of particles (Table 4).
Methods And Evaluation Criteria: The proposed method is well-motivated, and the evaluation is thorough.
Theoretical Claims: The authors prove the validity of the WFM objective in Appendix B. The proof is sound and largely builds upon previous work on FM (Lipman et al., 2022, Chen & Lipman, 2023).
Experimental Designs Or Analyses: Experimental design and analysis are well thought out and the results are presented clearly and concisely. The experiments are well motivated and the results are convincing.
Supplementary Material: NA
Relation To Broader Scientific Literature: Related work is discussed in Section 2.2 and is, to my knowledge, comprehensive.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
- Very well written and structured.
- Extensive experiments on toy datasets, standard benchmarks, and fitting scientific applications in genomics.
Weaknesses:
- While Tab 3 reveals that WFM performs similarly to other methods on ShapeNet and ModelNet, it never outperforms them.
Other Comments Or Suggestions: - Page 3, line 110 typo "moodal data".
Questions For Authors: - Can the authors discuss the comparatively lower performance in Tab 3?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Response to Reviewer 3 (**ToSj**)
We are grateful to the reviewer for their positive assessment of our work.
**"While Tab 3 reveals that WFM performs similarly to other methods on ShapeNet and ModelNet, it never outperforms them."**
All reviewers expressed concerns about our performance in 3D shape generation experiments (**Table 3**). We've addressed this by making the following improvements:
1. **Fixed a minor inference bug**: We corrected a time-stepping loop that previously ran from $t=0$ to $t=1+\Delta t$ (i.e., $i=0:N$), instead of the correct $t=0$ to $t=1$ ($i=0:N-1$).
2. **Standardized evaluation metrics**: We now use the same approximate EMD implementation as other benchmark methods. Our original results used true EMD calculations, creating an unintended evaluation discrepancy. Using consistent benchmarking code provides a fairer comparison.
3. **Matched point cloud size**: We've increased our point cloud size from 1000 to 2048 points to align with other methods' evaluations. This directly addresses reviewer concerns about scalability while enabling more direct comparisons.
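For reference, the corrected inference loop in item 1 corresponds to standard Euler integration of a learned velocity field over the unit interval; a minimal sketch with hypothetical names (the actual model and step count are the authors'):

```python
def integrate_flow(x0, velocity, n_steps=100):
    """Euler-integrate dx/dt = velocity(x, t) from t = 0 to t = 1.
    The loop index runs i = 0, ..., N-1, so the last time queried is
    t = (N-1)/N and the state ends at t = 1; iterating i = 0, ..., N
    instead steps the state past t = 1 (the bug described above)."""
    x, dt = float(x0), 1.0 / n_steps
    for i in range(n_steps):
        x = x + dt * velocity(x, i * dt)
    return x
```

With a constant velocity field the loop moves the state by exactly that velocity over the unit interval; the extra iteration would overshoot by one step of size $\Delta t$.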
These changes have significantly improved our results:
| | Airplane | | Chair | | Car | |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| | CD ↓ | EMD ↓ | CD ↓ | EMD ↓ | CD ↓ | EMD ↓ |
| PointFlow | 75.68 | 70.74 | 62.84 | 60.57 | 58.10 | 56.25 |
| SoftFlow | 76.05 | 65.80 | 59.21 | 60.05 | 64.77 | 60.09 |
| DPF-Net | 75.18 | 65.55 | 62.00 | 58.53 | 62.35 | 54.48 |
| Shape-GF | 80.00 | 76.17 | 68.96 | 65.48 | 63.20 | 56.53 |
| PVD | 73.82 | 64.81 | **56.26** | **53.32** | 54.55 | **53.83** |
| PSF | 71.11 | **61.09** | 58.92 | 54.45 | 57.19 | 56.07 |
| WFM (ours) | **69.88** | 64.44 | 57.62 | 57.93 | **53.41** | 58.10 |
WFM now achieves state-of-the-art CD performance on 2/3 datasets while maintaining competitive performance on all metrics. Importantly, WFM achieves these results with substantially lower computational requirements (~120 GPU hours on an 80GB GPU versus ~400 GPU hours for PSF and PVD).
**"Page 3, line 110 typo 'moodal data'."**
Thank you for catching this typo. It has been fixed to 'multi-modal data'.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and for addressing my concern.
I will maintain my already positive evaluation of the work. | Summary: This paper proposes Wasserstein Flow Matching, a method for building a generative model where the datapoints themselves are distributions. The basic idea is to treat the Wasserstein space as a manifold and use Riemannian flow matching techniques. The model is studied in two settings: (1) the Bures-Wasserstein space, and (2) empirical measures / point clouds. The method is evaluated on synthetic data, 3D point clouds, and some applications in single-cell genomics and spatial transcriptomics.
## update after rebuttal
After the authors' rebuttal, I will maintain my positive score of weak accept. I think the core idea of the paper is interesting and novel, but stronger theoretical and/or empirical results would be necessary to further strengthen the paper and raise my score.
Claims And Evidence: There are two claims that I feel are not entirely justified.
- The authors claim that existing approaches for 3D point cloud generation cannot handle point clouds of variable sizes. However, a method like PSF for example (https://arxiv.org/abs/2212.01747) should in principle be able to do this, as long as a suitable architecture is selected. The ability to handle variable-size point clouds seems to have less to do with the proposed methodology than with the specific architectural choice (in this case, just a transformer).
- The use of Equation 11 / Proposition 3.1 do not seem to be properly justified, at least in the general setting. In particular, Riemannian flow matching makes an important assumption that the manifold is **finite dimensional**. In this setting one is thus generally justified in using densities, e.g., those appearing in Equation 19 of this submission. However, in the setting of this paper, the Wasserstein manifold is in general an infinite dimensional space, where the derivations in Appendix B no longer work without significant extra steps.
- While this is fine in the Bures-Wasserstein case (as this is a finite dimensional submanifold), this paper does not seem to have solved the problem in as much generality as it claims.
- See, e.g., recent works on diffusions and flows in infinite-dimensional spaces [[1](https://arxiv.org/abs/2302.07400)] [[2](https://arxiv.org/abs/2305.17209)] [[3](https://arxiv.org/abs/2302.10130)], amongst others.
Methods And Evaluation Criteria: The methods and evaluation criteria make sense.
Theoretical Claims: See previous comment
Experimental Designs Or Analyses: The experimental analyses seem appropriate to me.
Supplementary Material: I reviewed the appendix but not the code.
Relation To Broader Scientific Literature: - In relationship to the existing literature, this work can be viewed as an extension of [Riemannian Flow Matching](https://arxiv.org/abs/2302.03660) to the case of manifolds of distributions. This involves taking known results about the geometry of Wasserstein space and combining these tools with RFM to obtain their model.
- Prior work on flow matching for point cloud generation (that the authors cite) develop similar approaches, but do not provide the nice formalism the authors here do.
- The relationship between the submission and [Meta Flow Matching](https://arxiv.org/abs/2408.14608) could be described in more detail.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: # Strengths
- The general framework of FM on Wasserstein space is very clear and compelling, and lends itself nicely to the methodology developed.
- The synthetic experiments demonstrate the importance of appropriately accounting for the geometry of the space.
# Weaknesses
- Some of the claims are not entirely justified (see above).
- The proposed method has a fairly high computational cost, as each training step requires solving an OT problem.
- While this is fine if it leads to better models, I am not convinced that it does. On the 3D point cloud experiments for instance, the proposed method seems to do a bit worse than PSF, which is essentially the same technique as the one proposed in this work, except without using an inner optimal transport solver.
Other Comments Or Suggestions: - Equation 11 is critical to the proposed method; dedicating more space to the derivation and justification of this in the main paper might be useful.
- There appear to be some typos in Section 2.3.1 -- namely the use of a, b, A, B to (I think?) represent means/covariances.
Questions For Authors: See above comments on claims
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Reviewer 2 (**C2Yo**)
Thank you for your review and insightful comments. We address each of your concerns below:
**"The authors claim that existing approaches for 3D point cloud generation cannot handle point clouds of variable sizes. However, a method like PSF for example should in principle be able to do this, as long as a suitable architecture is selected."**
PSF fundamentally relies on 'Euclidean' distance between point clouds ($X_1 - X_0$), which only exists for fixed-size point clouds. Such an interpolation between point-clouds of different sizes is not possible. While one might hypothetically modify PSF to handle variable sizes, such modification would likely require implementing some form of optimal transport to align points between differently realised distributions, precisely what WFM does by design.
**"The use of Equation 11 / Proposition 3.1 do not seem to be properly justified, at least in the general setting. In particular, Riemannian flow matching makes an important assumption that the manifold is finite dimensional."**
This is an important assumption, which we do indeed make. In the papers you mentioned (such as [Functional Flow Matching](https://arxiv.org/pdf/2305.17209)), the authors simply assume that a notion of continuity equation holds at the level of the flow defined over functionals. We readily adopt this approach and assume the continuity equation holds in the Wasserstein space as well. Another possible interpretation can be inferred from **Appendix B** --- we are simply averaging the original flow matching loss (not the conditional flow matching loss) over pairs of measures.
**"The proposed method has a fairly high computational cost, as each training step requires solving an OT problem"**
The reviewer is correct that there is a computational complexity incurred due to the OT solver. However, as **Table 3** shows (see response to reviewer 3 - **ToSj**) , WFM achieves competitive performance while requiring significantly less compute than competing methods such as PVD and PSF.
**"The relationship between the submission and Meta Flow Matching could be described in more detail."**
We expand upon the comparison here. The main difference is that Meta Flow Matching (MFM) requires paired couplings of distributions on the Wasserstein space while WFM operates with unpaired couplings. WFM is a general generative framework for creating new samples from a learned distribution of distributions, while MFM specifically learns transformations between paired distributions.
**"Equation 11 is critical to the proposed method; dedicating more space to the derivation and justification of this in the main paper might be useful."**
We provide a complete derivation of **Eq 11** in **Appendix B**. We now dedicate more background on its motivation in the main text to make it clearer to the reader.
**"There appear to be some typos in Section 2.3.1...**
Fixed, we thank the reviewer for spotting this typo.
---
Rebuttal Comment 1.1:
Comment: Thanks for these detailed responses.
> variable size point clouds...
Yes, I agree with you here in the sense that a given source point cloud and target point cloud must have the same number of points for PSF -- but this can vary across different clouds. I had originally interpreted your comments (e.g., Section 4.2.1) in the sense of "all clouds have the same, fixed number of points".
While your method can allow for variable sizes in the source/target, it is still somewhat unclear to me where this is beneficial -- if you are using a noise distribution as your source, you can freely choose the number of points to be equal to that of the target point cloud. Otherwise, simply downsampling one of the clouds might be fairly effective.
> finite dimensional...
I think with this assumption everything should work (at least, for the Bures-Wasserstein and point cloud examples, assuming there is an upper bound on the number of points). However, to avoid over-claiming, I think it is important to spell out that the derivations are only informal in the general setting which the paper considers.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful follow-up comments.
On variable-sized point clouds: You are right that PSF might be able to handle point clouds of different sizes across examples through downsampling/matching techniques. We'll soften our claims about this aspect of novelty. Still, WFM's ability to handle different-sized source/target pairs can offer benefits where maintaining original densities matters or where downsampling would lose structural information.
Regarding the finite-dimensional assumption: We did not intend to over-claim. Our revision will explicitly mention that our derivations and thus results are informal. | Summary: This paper proposes Wasserstein flow matching, a generative model which can be treated a variant of Riemannian Flow Matching on Wasserstein manifold. This model allows for working with the families of distributions and is tested in a variety of experiments.
## **After the rebuttal.**
I kept my initial positive score and explained the reasons in my answer to the authors.
Claims And Evidence: Overall, the paper is well-written and most of the claims are well-supported.
Methods And Evaluation Criteria: Overall, the proposed method and evaluation criteria make sense. However, the paper lacks an experiment showing that the proposed approach beats its competitors in a non-Gaussian and at least moderately high-dimensional setting.
Theoretical Claims: I skimmed through the provided proofs. The main issue here is the gap between the established theoretical results, where the classic OT problem is considered, and the practical implementation of the approach, which utilizes solutions of the *entropy-regularized* OT problem (in the experiments with general distributions). While the authors show that the error of estimating OT maps with entropic Brenier maps converges to zero as the number of samples increases (Appendix A.3), this issue should be directly stated in the main text, possibly in a limitations subsection.
Experimental Designs Or Analyses: Experiments are valid but have some issues:
- Practically, the method demonstrates strong performance mainly for distributions which can be approximated by Gaussian families. In point cloud experiments, the method actually shows a fair performance in low dimensions (3d), while in high dimensions there is a lack of baseline comparisons which makes it difficult to assess its effectiveness. Experiments related to biology also do not clarify this point for me since I am not an expert in this field.
Supplementary Material: I skimmed through the appendix session which is well-structured.
Relation To Broader Scientific Literature: The paper applies the Riemannian flow matching approach to the Wasserstein space. The theoretical results are strongly connected with the original paper (Chen, R. T. and Lipman, Y, 2023).
Chen, R. T. and Lipman, Y. Riemannian flow matching on general geometries. arXiv preprint arXiv:2302.03660, 2023.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths.** The paper is well-written, has a nice structure and combines theoretical and experimental results. It proposes an approach which can be treated as an application of Riemannian flow matching to Wasserstein space, is justified theoretically and has interesting practical properties.
**Weaknesses.** I reviewed this paper at one of the previous conferences. Overall, the paper has become much better since many important theoretical aspects are now clarified. Previously, I expressed concerns regarding the lack of tractable experiments in moderate or high dimensions where the proposed approach shows superior results. These concerns still hold since no new experiments were added.
Other Comments Or Suggestions: N/A
Questions For Authors: - Is it possible to conduct the experiments with point-clouds in higher dimensions and assess the method's performance w.r.t. the other baselines? I understand that many baselines can not operate in high dimensions but maybe such kind of comparison is possible for moderate dimensions (>2d,3d)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to Reviewer 1 (**8NGf**)
Thank you for your thoughtful review and constructive feedback.
**"Overall, the proposed method and evaluation criteria make sense. However, the paper is lacking an experiment which shows that the proposed approach beats its competitors in the case of non-Gaussian and at least moderate-dimensional experiment."**
Our updated **Table 3** (see response to reviewer 3 - **ToSj**) now demonstrates that WFM outperforms competing methods on 2/3 ShapeNet datasets according to the CD metric. Importantly, WFM achieves these results with approximately 70% lower computational requirements (~120 GPU hours vs. ~400 GPU hours for PSF and PVD), making it substantially more efficient.
**"The main issue here is the gap between the established theoretical results which where classic OT problem is considered and practical implementation of the approach which utilizes the solutions of the entropy-regularized OT problem (in the experiments with general distributions)."**
We acknowledge this important point. Computing exact OT maps between general distributions is not possible in most circumstances. Outside of univariate measures, product measures, and multivariate Gaussians, closed-form expressions of optimal transport maps are not known to us. Entropic OT provides the only feasible approach with both computational efficiency and statistical guarantees; see [Pooladian and Niles-Weed (2021)](https://arxiv.org/abs/2109.12004). We added a clarifying comment in the main text to address this approximation gap. In short, an approximation scheme (like entropic OT) is consistent with other geometric methods (including RFM) which rely on numerical approximations when exact solutions aren't available for complex geometries (see **Algorithm 1** in [Chen & Lipman, 2024](https://arxiv.org/pdf/2302.03660)).
The new text now reads (line 248, revision in **bold**):
The condition of "inner continuity" is fairly mild, as this is ensured for any distribution $\mu ∼ p_0$ with density. For Gaussian distributions, inner continuity holds naturally. For general distributions, we assume continuity but work with point-clouds as empirical realizations to approximate OT maps with statistical guarantees (see Appendix A.3) and computational efficiency (Flamary et al., 2021; Cuturi et al., 2022). **We note there exists a gap between our theoretical results which consider classic OT and our practical implementation which uses entropy-regularized OT approximations. This approach aligns with other geometric methods (see Algorithm 1 in Chen & Lipman, 2024) that rely on numerical approximations when exact solutions are not tractable.** The "outer continuity" condition is purely technical, and it serves the same role as in prior work. Our training algorithm is described in Algorithm 1, and Appendix E contains precise details on our neural network design.
**"Is it possible to conduct the experiments with point-clouds in higher dimensions and assess the method's performance w.r.t. the other baselines? I understand that many baselines can not operate in high dimensions but maybe such kind of comparison is possible for moderate dimensions (>2d,3d)?"**
We appreciate this thoughtful suggestion. We explored adapting other methods to moderate dimensions (4-10D), but the computational requirements proved prohibitive as even our most efficient implementations would require several weeks of computation time per experiment. Instead, we've focused on demonstrating WFM's effectiveness through **Table 4**, which shows ~60% 1-NN accuracy across all metrics (with 50% being optimal), and **Figures 4-6**, which showcase realistic generation of variable-sized and high-dimensional distributions.
**"Experiments related to biology also do not clarify this point for me since I am not an expert in this field"**
We would like to elucidate the biological application of WFM in spatial transcriptomics, a technique that combines molecular profiling of gene-expression with spatial localization of cells within tissues. In these applications, cellular microenvironments are naturally represented as distributions in gene-expression space based on neighboring cells within a tissue. WFM enables generative modeling of cellular niches, synthesizing biologically plausible microenvironments for analysis and study.
By modeling the relationship between cell phenotypes and their environments, WFM could help researchers better understand potential tissue-level responses to cellular changes. WFM addresses the inherently distributional nature of microenvironments by lifting Flow Matching to the Wasserstein space, making it well-suited for this application.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answers and provided improvements in Table 3. According to the new results, the approach is now providing better performance in small dimensions (3D point clouds). Still, 1) in updated Table 3, it beats competitors only w.r.t. the CD metric and on 2 of 3 datasets; 2) its performance in high dimensions remains not entirely clear due to the absence of comparison with other approaches (while I see that there are computational obstacles for performing comparison).
Thus, I think that my current **positive** score represents a fair assessment of the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging the improvements in our 3D point cloud results and for your positive evaluation. We appreciate your constructive feedback throughout the review process. | null | null | null | null | null | null |
Risk-Sensitive Theory of Mind: Coordinating with Agents of Unknown Bias using Cumulative Prospect Theory | Accept (poster) | Summary: This paper presents a new method for learning agent strategies that support human decision-making under risk. The agent has the ability to learn strategies in situations where it does not know how risk-averse its partner is. The paper adopts prospect theory as a model for human decision-making and proposes a framework for determining strategies using reinforcement learning.
Claims And Evidence: The research is based on the assumptions presented in previous literature, methods that have been proven effective in the field of agents, and evaluation experiments conducted in previous studies. As a whole, it is very convincing.
Methods And Evaluation Criteria: It is a reasonable fusion of human decision-making theory (prospect theory) and reinforcement learning. The evaluation experiment in this paper is based on the well-known Overcooked environment of Carroll et al., with the game extended so that decision-making involves risk. I think it is appropriate as the most basic simulation experiment for analyzing decision-making under risk. On the other hand, as the authors mention in the Limitations, the method has only been verified in a few simple scenarios. In addition, evaluation is limited to simulation, so it has not been verified whether the method is useful in real decision-making situations.
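For readers less familiar with cumulative prospect theory: the model is typically instantiated with the Tversky-Kahneman (1992) value and probability-weighting functions. The sketch below uses their published parameter estimates purely for illustration; the exact parameterization used in the paper may differ.

```python
def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """CPT value function: concave over gains, convex and loss-averse
    (steeper by factor lam) over losses, relative to a reference point of 0."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def cpt_weight(p, gamma=0.61):
    """Inverse-S probability weighting: small probabilities are overweighted,
    moderate-to-large probabilities underweighted."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)
```

Together these let an agent evaluate a risky outcome as $w(p)\,v(x)$ rather than $p\,x$, which is what allows the proposed agent to reason about partners with different risk attitudes by varying the parameters.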
Theoretical Claims: The main proof is included in the reference materials from page 11, but I have not verified the accuracy of the proof.
Experimental Designs Or Analyses: (Kwon et al., 2020; Sun et al., 2019; Danis et al., 2023; Ferreira et al., 2021)
What is the reason why these methods cannot be applied?
Even if they cannot be applied straightforwardly, I would also like to see the results of comparing them with methods that have been simply customized from these conventional methods.
Supplementary Material: I have not carefully examined the supplementary materials.
Relation To Broader Scientific Literature: In this research, behavioral economics (prospect theory) is used to model the human decision-making process. In other words, the basic behavior of agents and partners is based on these findings. This research can be positioned as interdisciplinary research in the fields of economics, the social sciences, artificial intelligence, and computer science related to human behavior.
Essential References Not Discussed: I believe the papers essential to this paper are included in the references.
Other Strengths And Weaknesses: - The structure of the paper is excellent. It was easy to understand the content.
- I can appreciate the interdisciplinary approach of economics and computer science.
Other Comments Or Suggestions: I don't understand why the methods of (Kwon et al., 2020; Sun et al., 2019; Danis et al., 2023; Ferreira et al., 2021) cannot be applied to the current experimental setting, so I would like an explanation.
Questions For Authors: I have questions mainly about experimental methods and comparative methods.
See "Other Comments Or Suggestions" and "Experimental Designs Or Analyses".
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your comments and feedback. We have addressed them below.
> **C1:** The reviewer asked “(Kwon et al., 2020; Sun et al., 2019; Danis et al., 2023; Ferreira et al., 2021) What is the reason why these methods cannot be applied? Even if they cannot be applied straightforwardly, I would also like to see the results of comparing them with methods that have been simply customized from these conventional methods.”
There are two main reasons discussed at the beginning of Sec. 3 but we have provided some additional elaboration below:
1. (Kwon et al., 2020; Sun et al., 2019) generate diverse risk-sensitive behaviors by relying on data priors collected from a human partner. In a simulation study, we simply do not have this option but, more importantly, a key contribution of our algorithm is that we do not need a data prior to generate adaptation to the risk preferences of different human partners, because we have access to a cognitively valid decision model (i.e., CPT).
2. (Danis et al., 2023; Ferreira et al., 2021) rely on tabular methods like Q-learning, which simply do not scale to large state-action spaces. We actually tried this and it did not go well because the Q-table must allocate memory for every state-joint-action pair. Other works have used low-level motion planners to simplify this problem and solve over high-level sub-tasks, but this is not available to us because the “risk-sensitivity” of an agent is present in its low-level navigation policy (see Appendix A.2 for details). Let's look at the smaller of the two layouts (RCR) shown in our paper as a motivating example:
- 2 players have 5 actions each for a total of 5^2 = 25 actions
- 2 players can each have 6 x and 6 y coordinates, 4 orientations, and 4 holding states {empty, onion, dish, soup} leading to a total of (6\*6\*4\*4)^2 = 331,776 combinations.
- Each of the 7 reachable counter spaces has 4 possible states {empty, has onion, has dish, has soup} for a total of 4^7=16,384 combinations
- Each of the 2 pots has 5 variations {nothing, 1 onion, 2 onion, 3 onion, finished soup} for a total of 5^2=25 combinations
- This brings us to a 25 \* 331,776 \* 16,384 \* 25 = 3.39e12 element array.
- With 16-bit float precision, this requires roughly 6.8 terabytes (about 5.4e13 bits) for each of the 3 models
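This arithmetic can be double-checked with a short script (our own back-of-the-envelope sketch; the layout constants are copied from the enumeration above). Note that 3.4e12 16-bit entries works out to roughly 6.8 terabytes per Q-table, still far beyond what a tabular method can manage:

```python
# Back-of-the-envelope size of a lossless joint Q-table for the RCR layout.
# All constants are taken from the enumeration above.
joint_actions = 5 ** 2                    # 2 players with 5 actions each -> 25
player_states = (6 * 6 * 4 * 4) ** 2      # (x, y, orientation, holding)^2 -> 331,776
counter_states = 4 ** 7                   # 7 counters, 4 states each -> 16,384
pot_states = 5 ** 2                       # 2 pots, 5 variations each -> 25

table_entries = joint_actions * player_states * counter_states * pot_states
terabytes = table_entries * 2 / 1e12      # 16-bit (2-byte) floats

print(f"{table_entries:.2e} entries")     # 3.40e+12 entries
print(f"~{terabytes:.1f} TB per model")   # ~6.8 TB per model
```

With three candidate models to pre-train, the memory requirement multiplies accordingly, which is the motivation for the function-approximation approach described next.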
As you can see, this quickly becomes infeasible to manage, which necessitates a function approximation method when using this lossless state. We spent significant effort developing this novel deep learning algorithm, including the novel replay memory, which is one of our contributions to the AI community.
We plan on adding “there is a lack of feasible baseline algorithms …[because] state of the art MARSRL algorithms (Danis et al., 2023; Ferreira et al., 2021) are not tractable in complex tasks like Risky Overcooked due to their tabular formulation creating infeasible memory requirements under the high number of permutations in the state space.” into Sec. 3 to remove this ambiguity in the revised paper.
> **C2:** “it is limited to simulation testing, and it has not been verified whether it is useful in actual decision-making scenes.”
Reviewer 3GLy raised a similar concern; we recommend reading our response to their C1 for a full description of the summary we provide here. There is also a slight clarification about the claim of this paper that we intend to make clearer when listing the contributions.
While testing “...whether it is useful in actual decision-making scenes” is the end goal of this research, that is not exactly the claim we are making in this particular paper. The claim we are making is that *we can overcome the challenges in state-of-the-art methods that already show adaptation to human risk-preferences improves human-robot/AI interaction*. Specifically, our approach overcomes:
1. the limited task complexity afforded by tabular RL methods (Danis et al., 2023; Kwon et al., 2020) through our “Deep MARSRL with CPT” algorithm.
2. the requirement for data priors to enable personalization (Kwon et al., 2020) through our generation of risk-sensitive candidate policies offline.
We believe the simulations support this claim as we are able to generate diverse risk-sensitive behaviors in more complex coordination tasks (i.e., infeasible to solve with tabular methods) offline and correctly adapt to an agent of unknown risk-sensitivity online. Consequently, the algorithmic contributions are independently significant from human studies, fill an important gap in the existing literature, and fit the scope of ICML well. We are conducting these human experiments in an ongoing study that warrants its own paper. This way, we can maintain the focus on AI in the current ICML paper and shift towards a more human-centered perspective in the next work.
The proposed method consists of three components: first a CPT-transformation is applied on the state-action values and transition probabilities. Second, Nash equilibria under these transformed values are found. Finally, Bayesian inference is performed to identify which equilibrium the other agent is going for.
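For concreteness, a minimal sketch of the CPT transformation referred to here, using the functional forms and median parameter estimates from Tversky & Kahneman (1992) (the function names and code are our illustration, not the paper's implementation):

```python
# Median CPT parameters from Tversky & Kahneman (1992).
ALPHA, BETA = 0.88, 0.88    # value-function curvature for gains / losses
LAMBDA = 2.25               # loss aversion coefficient
GAMMA, DELTA = 0.61, 0.69   # probability-weighting curvature for gains / losses

def cpt_value(x):
    """Subjective value: concave for gains, convex and steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** BETA

def cpt_weight(p, gamma):
    """Inverse-S probability weighting: small probabilities are overweighted,
    large probabilities are underweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
```

Roughly, applying `cpt_value` to state-action values and `cpt_weight` to transition probabilities yields the transformed quantities over which the equilibria described above are then computed.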
Claims And Evidence: It is not clear to me if this algorithm easily extends to n>2 agents. In the current setup, one of the agents is modeled as the biased agent (whose behavior can be modeled via CPT) and the other agent adapts to it. This makes sense for human-robot collaboration scenarios. However, what if there are multiple humans? In such cases, different humans may go for different equilibria, and it is not clear if the robot(s) will be able to decide how to coordinate with them. So maybe claiming this is a multi-agent algorithm is a little misleading, and it should be framed as a human-robot collaboration algorithm where there is only one human and one AI agent.
Similarly, the problem formulation implicitly assumes the agents share the reward function, meaning this is a completely collaborative scenario. While I am fine with this assumption, I think this should be noted earlier, because multi-agent systems would normally include competitive or mixed-sum settings, too.
While the main claim is to better model humans' decisions under CPT-related biases which will then lead to better collaboration, there is no evidence about it in the paper due to the lack of human experiments. The simulation experiments with artificial agents do not provide enough evidence, because they are controlled experiments that are already expected to work ("designed to work" in a sense).
Methods And Evaluation Criteria: The proposed method makes sense but some assumptions should be stated more explicitly.
- One of them is the assumption about n=2 agents as I mentioned before.
- Another one is the shared rewards assumption, again as I mentioned before.
- It seems to me that approximating the Nash equilibria and the procedure described after line 165 (right column) require access to the transition model, i.e., access to the probability distribution over the next state given the current state and the action is needed. This should be stated earlier.
Also, while the paper starts by defining an MDP, it includes multiple agents that are decentralized, so it should really be a multi-agent MDP.
Theoretical Claims: There is no theoretical claim in the main paper -- some "somewhat theoretical findings" are in the Appendix which I didn't review.
Experimental Designs Or Analyses: While I like the paper a lot in general, the lack of human experiments prevents me from giving an accept score. Because the target biases are human specific and it is impossible to evaluate the value of the paper without having human experiments.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: While talking about the prior work, the paper states some of them "train policies under the median CPT parameter estimates from (Tversky & Kahneman, 1992) which generally describe risk-averse behaviors." I am skeptical about this statement. As far as I know (but I am happy to be convinced otherwise), the median parameters model both the risk-averse and risk-seeking behaviors depending on outcomes being negative (e.g., people buy insurance to avoid large losses) or positive (e.g., people buy lottery tickets to earn lots of money).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: There are several typos in the paper. Some of them are:
1) "a RS-ToM" --> "an RS-ToM"
2) "in in a zero-shot" --> "in a zero-shot"
3) "in this a deep multi-agent setting" --> "in this deep multi-agent setting"
4) "Figure. 2" --> "Figure 2"
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback.
> **C1:** The reviewer is concerned that “the claim is to better model humans' decisions under CPT-related biases which will then lead to better collaboration, there is no evidence about it in the paper due to the lack of human experiments”
We would like to point out that (Smith & Zhang, 2025) report “similar transfer between simulation and human studies,” showing a side-by-side comparison between simulation and human studies to support external validity. (Kwon et al., 2020) shows that modeling risk-sensitivity with CPT leads to improved performance and perceptions of the robot. Thus, there is existing support for modeling human decisions with CPT leading to better collaboration in simple settings, but this is not directly our claim.
The claim we are making in this paper is that *we can overcome AI challenges in state-of-the-art methods that already show adaptation to human risk-preferences improves interaction*. Specifically, we are addressing:
1. the limited task complexity available with tabular RL methods (Danis et al., 2023; Kwon et al., 2020): our “Deep MARSRL with CPT” algorithm affords scaling to scenarios more aligned with real-world resolutions.
2. the requirement for data priors to enable personalization (Kwon et al., 2020): our offline generation of risk-sensitive candidate policies avoids costly data collection when human risk is present.
The simulations show the ability to generate diverse risk-sensitive behaviors in more complex coordination tasks (i.e., infeasible to solve with tabular methods) offline and correctly adapt to an agent of unknown risk-sensitivity online, which supports this claim. Consequently, we believe the algorithmic contributions are independently significant from human studies, fill an important gap in the existing literature, and fit the scope of ICML well.
We agree on the importance of supporting the “better model humans' decisions” claim that you mentioned. This paper paves the way for subsequent human studies. We have already started human experiments in which we intend to recruit 60 subjects and employ more sophisticated experimental designs to characterize human behavior in response to RS-ToM and Rational agents. We believe that this next study warrants its own paper because combining these works would muddle the contributions and decrease the depth we can explore in each under the page-limit constraints.
> **C2:** The reviewer is not clear how this algorithm extends to n>2 agents and believes it should be framed as a human-robot collaboration instead of a multi-agent problem.
This work is motivated by the human-robot interaction paradigm. However, the MARSRL algorithm could be applied to generate risk-sensitive multi-robot interactions if desired. Additionally, the RS-ToM framework can work for n>2 agents by factorizing the state and action spaces (e.g., $a_1 \times a_2 \times \dots \times a_n$) in Algorithm 1 and extending the equilibrium solution (QRE with level-k reasoning) so that a level-1 player (see Appendix A.5) is initialized to assume all other agents have random policies; the remaining procedure stays the same. We will add an explanation of how to extend to n>2 agents at the end of Sec. 2.2.
You also make an important point about how to align with humans “with different equilibria”. This is an ongoing and active research field called multi-value alignment where more sophisticated multi-objective optimization methods, such as the MAP algorithm (Wang et al., 2024), must be applied. We will discuss this at the end of Sec 2.3.
>**C3:** The reviewer suggested stating the assumptions about (a) requiring access to the transition model and (b) a completely collaborative scenario earlier.
We will add (a) to the beginning of Sec. 2 and (b) to the beginning of Sec. 3.1.
> **C4:** The reviewer mentions that Def. 2.4 should be a multi-agent MDP.
We agree and will state that Def. 2.4 defines a multi-agent MDP when the joint action and state spaces are factored out.
> **C5:** The reviewer is skeptical about our statement “median CPT parameters …generally describe risk-averse behavior”
You are correct that both averse and seeking behaviors emerge from the median parameters depending on context. We say “generally” to refer to the fact that, under random prospects, we expect the loss to be disproportionately represented in the choice since $\ell = 2.25$. Therefore, these parameters generate risk-averse preferences in most cases. We refer you to Appendix B.1 for additional discussion.
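A small worked example of this point (our construction, reusing the Tversky & Kahneman functional forms; the $\pm 10$ fair gamble is purely illustrative): with the median parameters, a 50/50 prospect over an equal-magnitude gain and loss has negative CPT value, so it is rejected relative to the status quo, which is the "generally risk-averse" tendency described above.

```python
# Median T&K parameters; the loss-aversion coefficient 2.25 dominates the choice.
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25
GAMMA, DELTA = 0.61, 0.69

def w(p, g):
    """Inverse-S probability weighting."""
    return p ** g / (p ** g + (1 - p) ** g) ** (1 / g)

# CPT value of a fair 50/50 gamble over +10 and -10.
gain_part = w(0.5, GAMMA) * (10 ** ALPHA)            # weighted gain utility
loss_part = w(0.5, DELTA) * (-LAMBDA * 10 ** BETA)   # weighted loss utility

print(gain_part + loss_part)  # negative: the gamble is rejected
```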
> **References**
Li, C., Wang, T., Wu, C., Zhao, Q., Yang, J., and Zhang, C. Celebrating diversity in shared multi-agent reinforcement learning. In Advances in Neural Information Processing Systems, volume 34, pp. 3991–4002. 2021.
Wang, X., Le, Q., Ahmed, A., Diao, E., Zhou, Y., Baracaldo, N., Ding, J., and Anwar, A. Map: Multi-human-value alignment palette, arXiv preprint arXiv:2410.19198, 2024.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses. I am happy to see they will update the paper based on the comments I made earlier. However, I still see the lack of human experiments as a critical issue. I understand that previous papers already showed the CPT model better explains human decisions, but as the paper itself acknowledges, those were limited to simple, tabular settings. Showing that incorporating CPT models increases performance in more complex settings via simulations, without first showing that humans still take their decisions following the CPT model, is questionable. If the latter is not verified/correct, that would mean the paper is just solving a problem that does not exist in reality.
So I still hold my belief that the paper must report human subject study results. It seems I am not the only reviewer who raised this issue (3 of the 4 reviewers did it), which I hope, further highlights the importance of such studies for this paper.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response but respectfully disagree that this paper requires human studies for publication in ICML. We believe the critique that prevents the reviewer from recommending acceptance is that *we need human studies to validate that our approach has value in real world settings*. We have responded to this below:
# 1. Support of Prior Work:
We disagree that extending tabular to deep learning invalidates support from prior findings showing CPT can model humans. In regards to “the paper itself acknowledges, those were limited to simple, tabular settings,” we are pointing out that these approaches have practical limitations when learning policies in complex settings, not that the approach is inherently invalid if learning were possible.
> **Example:** neglecting the computational issues of tabular methods finding optimal policies, it is reasonable to expect that previous findings continue to apply as complexity is scaled up. Since deep learning is a function approximator of this optimal tabular policy, it follows that we would recover similar findings given our approximation is good.
Thus, we strongly disagree that this paper is possibly “solving a problem that does not exist in reality” as it is well motivated by prior works such as (Kwon et al., 2020; Ferreira et al., 2021).
Empirical validation of CPT in *explicitly* complex settings simply does not exist due to the gap in algorithms that enable learning (i.e., our contribution). [(Gao et al., 2022)](https://doi.org/10.1016/j.ijdrr.2022.102904) mentions a similar challenge: “studies on applying CPT to transport-related problems have been limited to examining the choice of strategies or alternatives, and no study has used CPT to drive the movements [with finer resolution].” They do not address complexity at our scale but do show that their CPT model with finer-resolution actions matched *real human data*. This implies CPT holds when extending to higher-resolution policy spaces and addresses the reviewer's concern about whether “humans still take their decisions following the CPT model” in more complex settings.
# 2. Must Contain Human Experiments:
We disagree that this work must report human subject study results to have sufficient value for ICML.
**2.1 AI Contributions:** We are planning to conduct human experiments but, in order to do that, we must *first develop the fundamental AI approaches that enable future human studies* (i.e., without first having a model, we cannot validate it in human studies). During development of RS-ToM, we were able to make the aforementioned algorithmic contributions that are independently valuable to the AI community. From the reviewer’s initial comments, it seems they support these contributions up until the need for human studies. However, we show sufficient evidence in simulation that we can overcome the mentioned AI challenges and strongly believe that human studies warrant a separate paper for the reasons mentioned in the original rebuttal.
**2.2 Simulations are a Common Paradigm:** Using simulations to validate human(-like) modeling is a common paradigm in this field. We have provided 3 works published in reputable AI conferences that follow our paradigm:
1. The ICML paper in [(Prashanth et al., 2016)](https://proceedings.mlr.press/v48/la16.html) asserts that “CPT realistically captures the attitude of the road users towards delays” and shows that their CPT algorithm better optimizes simulated humans’ subjective utility than “traditional expected delay optimizing algorithms”
2. The AAAI paper in [(Tian et al., 2021)](https://doi.org/10.1609/aaai.v35i7.16750) shows the model’s ability to generate CPT behaviors relative to risk-neutral baseline in simulation. They remark that “our solution provides an interpretable and heterogeneous human behavioral model.”
3. The AAMAS paper in [(Ghaemi et al., 2024)](https://dl.acm.org/doi/10.5555/3635637.3663134) shows generation of diverse CPT behaviors in simulation and connect to human modeling by remarking “…[is] a suitable framework to show the tangible effect of loss aversion in human-like agents.” and “A potential application of the proposed framework is calculating CPT risk-sensitive policies of human agents in real-world settings”
**2.3 Other Reviewers:** We agree with all 3 reviewers that human experiments are an important step toward deploying this work in realistic settings. However, we do not believe the other reviewers agree that the lack of human studies renders the contribution insufficient for ICML or that this research problem is unsupported by the current literature. While we acknowledge that there may be different opinions on scoping this work, it is an important and well-motivated step towards human modeling in complex settings with a significant contribution to ICML.
> **Final Remark:** We again thank the reviewer for their comment and hope that this evidence justifies the motivation and current scope of this paper | Summary: This paper proposes a new model of risk-sensitive multi-agent coordination towards better aligning autonomous agents with human utilities. The authors define a risk-sensitive ToM that affords adaptation to a partner with unknown risk-sensitivity in a zero-shot
fashion by pre-training several policies with different risk preferences, and online matching observed behavior to these policies.
Claims And Evidence: Claim: The proposed model can collaborate with humans better than an AI partner without RS-ToM
Supported, within the scope of this task.
It might not generalize beyond toy scenarios; in general, the same behavior could arise not only from risk sensitivity but also from other approximations to sequential decision-making (see suggested refs).
Methods And Evaluation Criteria: Yes. I am very well familiar with Overcooked domain for Cooperation.
Theoretical Claims: Skimmed the math, seems fine.
Experimental Designs Or Analyses: Yes
Supplementary Material: Did not see SI
Relation To Broader Scientific Literature: The lit review is mostly good, but a few references are missing.
1. The authors should discuss the advantages and the reasons for choosing an RL approach over the Bayesian ToM previously used with Overcooked. The authors claim to achieve zero-shot understanding of the human partner without prior collection of human data; however, the downside to this method is the need for excessive pre-training. How would this scale?
2. The authors take the assumption of noisy rationality as a baseline; however, in the field of modelling human planning, it is well established that people may be using a variety of approximations that are more elaborate than injecting noise, most notably limited planning horizons, heuristics, and simplified space representations.
In the original BToM papers (e.g., Baker et al. 2017), high-temperature noise is simply a convenient catch-all for non-alignment with the optimal policy, given that they use trivial scenarios that do not allow differentiating between the various approximations. I do not suggest reviewing this entire field of human planning models, but I suggest carefully framing the limitations.
Essential References Not Discussed: Overcooked as a model of Cooperation with ToM:
https://onlinelibrary.wiley.com/doi/full/10.1111/tops.12525
This paper evaluates prospect theory in human sequential decision making, alongside limited planning horizon, and a model of numerosity in distance/area perception, showing that prospect theory may indeed generalize to spatial planning, yet is not an exclusive cause of such deviations.
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1012582
Other Strengths And Weaknesses: The paper is well written and original.
I'm leaning to accept, but I'd like to see a better discussion of limitations and related work.
Other Comments Or Suggestions: Edit: I have increased my rating by a point, to Accept.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your comments.
> **C1:** The reviewer commented on the generalization of our algorithm beyond these toy scenarios and that alternative models may give rise to the same behavior. They suggest that we mention this in limitations.
We generally agree that the observed behaviors could arise from alternative models (e.g., limited planning horizon). However,
1. it is unlikely that alternative models would consistently reproduce the differentiated risk-averse/seeking policies we observe, so the novelty of our approach retains distinguishable value across use-cases and in more complex tasks.
2. As for generalizing beyond toy scenarios, we chose to extend Overcooked for this very reason: it is a well accepted baseline for coordination tasks that is simple enough for validation but rich enough to allow diverse coordination strategies. Once validated, we can scale to more complex domains where higher differentiability is available. In the second paragraph of Limitations and Future Work, we mentioned “increasing the resolution of the candidate policies may be helpful for scaling to tasks with a more diverse set of risky-decisions available (e.g., continuous control problems affording continuous variation in risk-sensitive strategies)”.
To address these two components, we add the following discussion to the Limitations section:
- “Due to the limited number of strategies (e.g., traverse puddle or detour), alternative models, like limited planning horizon and heuristics, could potentially give rise to the observed behaviors. However, these similarities are likely to fade as decisions become more complex and afford more variation in behaviors. While we believe that modeling risk-sensitivity is an essential factor in risky settings, humans are subject to many biases and heuristics. Therefore, a multimodal approach (Kryven et al. 2024) that captures risk-sensitivity along with other human planning characteristics will be pursued in future work.”
> **C2:** The reviewer suggested discussing why we took our RL approach instead of the “Bayesian ToM previously used with Overcooked.”
As you pointed out, the Bayesian ToM approach was used in (Wu et al. 2021) where they formulate their ToM as reasoning over intentions and sub-tasks. However, we reason over risk-sensitive preferences in more complex tasks and our RL-approach is more generalizable since it
1. Defines continuous variation over a larger latent space (i.e. CPT parameters)
2. Does not need manual re-definition of mental states (i.e. sub-tasks) when working on a new task
3. Applies deep RL to overcome state space complexity issues that exist in their tabular bounded real-time dynamic programming (BRTDP) approach to policy solutions.
We will add discussion of this article when introducing ToM and Overcooked.
> **C3:** The reviewer mentioned that the focus on pre-training is a downside and asked "How would this scale?”
We view the focus on pre-training as an advantage where we address excessive pre-training and scaling in the following ways:
1. Pre-training is more affordable/scalable than current approaches that require collection of human data priors for personalization (Kwon et al., 2020), especially when risk to the human is present. Additionally, we can aggregate an increasingly fine resolution of the RS-ToM over time, which affords arbitrary complexity to meet the specific needs of more sophisticated tasks.
2. We do not need to continue training to adapt to more humans. Once we have completed the pre-training, we have a fully deployable model for all types of human partners. It is very scalable in this sense.
3. The future work section includes an approach to reducing pre-training requirements by using model interpolation methods but we will also add discussion of transfer learning (Weiss, 2016) to better address the challenge of scaling.
4. This algorithm can also scale to teams with more than two agents. See Reviewer 3GLy’s response to C2 for details.
> **C4:** The reviewer suggested references of (Wu et al. 2021) and (Kryven et al. 2024).
We have incorporated discussion of these papers in the sections mentioned in the previous response.
> **C5:** “Did not see SI”
We included the SI in the same PDF as the main paper.
> **References**
Houlihan S. D., Kleiman-Weiner M., Hewitt L. B., Tenenbaum J. B. and Saxe R. Emotion prediction as computation over a generative theory of mind, Philosophical Transactions of The Royal Society. 2023.
Kryven, M., Yu, S., Kleiman-Weiner, M., Ullman, T., and Tenenbaum, J. Approximate planning in spatial search. PLOS Computational Biology, 20(11):1–21, 11 2024.
Weiss, K., Khoshgoftaar, T.M. & Wang, D. A survey of transfer learning. Journal of Big Data, 3, 9. 2016.
Wu, S. A., Wang, R. E., Evans, J. A., Tenenbaum, J. B., Parkes, D. C., and Kleiman-Weiner, M. Too many cooks: Bayesian inference for coordinating multi-agent collaboration. Topics in Cognitive Science, 13(2):414–432, 2021. | Summary: The authors consider the problem of a two-agent cooperative sequential decision task, in which one of the agents is a human, and address the problem of learning to coordinate behavior in the second agent. The idea is to model human behavior as following cumulative prospect theory, i.e. scaling probabilities and values according to a specific family of functions. By inferring which such function from a set of predefined parameters most closely describes human behavior, the agent can leverage this model to solve the multi-agent choice problem using DDQN. This involves computing the expected action of the human agent with level-k reasoning. Transitions are stored in a buffer for computational efficiency. The algorithm is applied to synthetic data with different simulated human agents in a new variant of the overcooked task and it is shown that performance in coordination is close to optimal as quantified by an oracle baseline and significantly improved with respect to a model of the other agent assuming noisy optimality.
Claims And Evidence: The main motivation for the proposed model is that it captures human behavior better than alternative methods, as it models human choices with cumulative prospect theory. However, the evaluation is done exclusively on synthetic data. Human experiments would have been more convincing. For the synthetic data, the claims are convincing, both the inference of the agent's risk attitude as well as performance.
Methods And Evaluation Criteria: The newly devised risky overcooked task makes sense for evaluating cooperation with potentially nonneutral risk attitudes.
Theoretical Claims: The proofs involve standard contraction mapping arguments. Adapting to cumulative prospect theory, one can leverage the fact that the utility weighting functions are monotonically non-decreasing. However, I am not sure how moving from quantal response equilibrium to an approximate action from the agent using level-k reasoning affects the argument.
Experimental Designs Or Analyses: I could not check any code or simulations but the evaluations in terms of risk attitude and performance seems sound.
Supplementary Material: I have skimmed the supplementary material and paid attention to A.1, A.5-A.6.1, and B.1.
Relation To Broader Scientific Literature: The paper sits somewhere between multi-agent RL, cognitive science, inverse RL, and alignment in human-machine interaction.
Essential References Not Discussed: Maybe citing some work on inverse RL may be adequate, particularly IRL work that allowed the agent to be suboptimal with respect to the known rewards such as work involving soft policies or work connecting IRL to preference elicitation?
Other Strengths And Weaknesses: Definition 2.3 could be easier to read by writing the bounds on the sums in the initial definitions in terms of l instead of two different ks, especially because the expectation then uses l.
Other Comments Or Suggestions: Typos:
“Humans like are said to have“
“they are holding where rationale for how values for pρ can be found “
“risk-sensitivity We also”
Questions For Authors: How does moving from quantal response equilibrium to an approximate action from the agent using level-k reasoning affect the proof of convergence?
Why did you add a decaying weight to the likelihood P(O|π) of observations further in the past?
Why did you not include at least a limited human study, given that the motivation is to better model human behavior?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your effort in providing comments and respond to them below.
> **C1:** The reviewer asked why we did not include a limited human study.
We recommend you check the response to Reviewer 3GLy’s C1 for a full description of the summary we provide here.
While “...capturing human behavior better than alternative methods” is the end goal of this research, that is not exactly the claim we are making in this particular paper. The claim we are making is that *we can overcome the challenges in state-of-the-art methods that already show adaptation to human risk-preferences improves human-robot/AI interaction*. Specifically, our approach overcomes:
1. the limited task complexity afforded by tabular RL methods (Danis et al., 2023; Kwon et al., 2020) through our “Deep MARSRL with CPT” algorithm.
2. the requirement for data priors to enable personalization (Kwon et al., 2020) through our generation of risk-sensitive candidate policies offline.
We believe the simulations support this claim. Consequently, the algorithmic contributions are independently significant from human studies, fill an important gap in the existing literature, and fit the scope of ICML well. We are conducting these human experiments in an ongoing study that warrants its own paper. This way, we can maintain the focus on AI in the current ICML paper and shift towards a more human-centered perspective in the next work.
> **C2:** The reviewer asked how the level-k reasoning affects the convergence proof.
Please check out Proposition A.6 and the supporting Definition A.4 and Definition A.5. Here, we leverage the fact that this is a common-payoff game to show there exists a globally optimal point to every stage game (Assumption A.3) and that the quantal response equilibrium converges to this globally optimal game solution as $\lambda\rightarrow\infty$.
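A minimal sketch of the logit quantal response underlying this limit (illustrative only; the level-k construction over the partner's policy in Appendix A.5 is omitted here): as the rationality parameter $\lambda$ grows, the softmax over joint-action values concentrates on the argmax, i.e., the globally optimal stage-game solution in a common-payoff game.

```python
import math

def quantal_response(q_values, lam):
    """Logit choice rule: the probability of each action is proportional to
    exp(lam * Q). As lam -> infinity this recovers the greedy (optimal) action."""
    exps = [math.exp(lam * q) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

print(quantal_response([1.0, 2.0, 3.0], lam=0.1))   # near-uniform
print(quantal_response([1.0, 2.0, 3.0], lam=50.0))  # mass on the best action
```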
> **C3:** “Maybe citing some work on inverse RL may be adequate, particularly IRL work that allowed the agent to be suboptimal with respect to the known rewards such as work involving soft policies or work connecting IRL to preference elicitation?”
While our related work is focused on RL to support the contribution of our RL algorithm, we acknowledge the relevance of IRL and did previously include a CPT-IRL paper that related to this work (Sun et al., 2019). This was to contextualize an alternative to our approach and illustrate a major hurdle that our work seeks to address: personalized adaptation to risk-preferences (e.g., learned from IRL) requires a human data prior which may be costly or infeasible to collect, especially when human risk is present. This is one of the primary advantages of using a cognitively valid model of human behavior (i.e., CPT) in that we do not need a data prior to achieve human models in risky settings.
To address this comment, we added review of additional work on IRL to better contextualize the problem:
1. (Rothkopf & Dimitrakakis, 2011) A more standard approach to Bayesian IRL for preference elicitation when observing suboptimal demonstrations.
2. (Cheng et al., 2023) Applies a more interactive, risk-sensitive approach to IRL through interactive querying.
3. (Bergerson, 2021) The authors identify several approaches to multi-agent IRL that account for suboptimal human traits (i.e., noise, biases, and heuristics), but existing work pays limited attention to integrating cognitive models like prospect theory.
> **C4:** The reviewer suggested notational edits to Definition 2.3 and pointed out a few typos.
We have updated these items.
> **C5:** The reviewer asked what the purpose of the decaying weight of previous observations in the belief update was for.
This is intended to accommodate human agents shifting their risk preferences online where newer observations are more representative of their risk preferences at the current moment. Using a forgetting factor like this is a common approach in recursive inference to ensure stability and representativeness of the inferred parameter (Liu et al., 2016). While the simulated agents in this paper are fixed, the decay is intended to emulate settings with real humans to maintain external validity. We will add an explanation of this into Sec. 2.3.
> **References**
Bergerson, S. Multi-agent inverse reinforcement learning: Suboptimal demonstrations and alternative solution concepts. Computing Research Repository, abs/2109.01178, 2021.
Cheng, Z., Coache, A., and Jaimungal, S. Eliciting risk aversion with inverse reinforcement learning via interactive questioning, arXiv preprint arXiv:2308.08427, 2023.
Liu, C., Zhang, W., and Tomizuka, M. Who to blame? learning and control strategies with information asymmetry. In 2016 American Control Conference, pp. 4859–4864, 2016.
Rothkopf, C. A. and Dimitrakakis, C. Preference elicitation and inverse reinforcement learning. In Machine Learning and Knowledge Discovery in Databases, pp. 34–48, 2011. | null | null | null | null | null | null |
BackSlash: Rate Constrained Optimized Training of Large Language Models | Accept (poster) | Summary: This work proposes Rate-Constrained Training (RCT), a novel training method that allows for training LLMs in a way that allows for effective compression of their weights at the end of training. The main idea is to derive a weight regularizer by assuming a specific distribution over the model weights, the generalized Gaussian (GG) distribution (with a shape parameter that is adapted during training). The authors argue for this choice empirically by showing how the training model weights have a better fit under that distribution. This regularizer essentially becomes an $L_p^p$ norm for the weights of the network, with the $p$ being dynamically updated during training. After model training, the authors use exp-Golomb (EG) codes in order to compress the resulting weights. The argument for this choice is that the EG codes can achieve almost the entropy limit for GG sources. The authors experiment with RCT and various LLM architectures and simple classification tasks.
Claims And Evidence: The paper claims are moderately supported by the evidence. While the authors do show improvements upon "normal training" with RCT, I think what is missing is a more convincing evaluation against the Gaussian model (or, alternatively, the GG model with a shape parameter of two) and the Laplace model (i.e., the GG model with a shape parameter of one), which are the two most important baselines. Furthermore, it is unclear what the "normal training" is for the models presented in, e.g., Table 4, which makes the interpretation of the results more difficult. For example, since the authors assume that the Gaussian model is widely used, I would expect an evaluation against a "normal training" run that uses weight decay for the parameters (along with codes that are appropriate for such a distribution).
Methods And Evaluation Criteria: The evaluation criteria make sense, but, given that LLMs are more or less used as general purpose models, I would have expected some other tasks, besides classification, to be used (such as language modeling with next-token prediction).
Theoretical Claims: I checked the mathematical details and they were mostly ok, albeit with some assumptions that do not necessarily hold true in practice (e.g., for the transition from Eq. 3 to Eq. 4). More specifically, the argument that $\delta$ is typically small only makes sense when we have a large bit-width. When that is small, e.g., 2, that $\delta$ can be quite large in practice. As a result, the approximation at Eq. 4 might be off in such cases. Why did the authors not use the GG CDF for proper discretization into a quantization grid?
Experimental Designs Or Analyses: The experimental design is ok but this work is missing critical baselines as mentioned above. More specifically:
* A "normal training" run where a model is trained with an L2 regularizer (i.e., weight decay)
* A more standard "sparse training run" where a model is trained with an L1 regularizer
Besides those results, I am also wondering a couple of other things:
* Is the adaptivity of the shape parameter necessary for good compression? Since you train the model from scratch, the parameters can adapt to the specific $L_p^p$ penalty you employ and thus can be easily compressed via appropriate codes for that norm.
* The authors apply entropy coding to the indices of quantized values (page 5, line ~240 second column), however it is unclear if those indices actually follow the desired discretized GG distribution. I would have expected some analysis there.
Supplementary Material: I did not check the supplementary material.
Relation To Broader Scientific Literature: The two main novel contributions of this work against the broader literature are
* Proposing an $L_p^p$ norm as a regularizer for the weights during training of the model with a $p$ that is automatically updated during training. This is a relatively minor contribution but a novel one nonetheless.
* Using EG codes for compression after training. This is the main novel contribution of the work relative to prior art.
Essential References Not Discussed: Nothing as far as I am aware of.
Other Strengths And Weaknesses: Overall, I found the paper interesting and the combination of the GG distribution and the EG codes in the LLM setting is novel and yielded good results in the experiments shown. The authors also considered several architectures during the evaluation, which is another bonus.
One more weakness that the authors could work on is (besides what mentioned above) the argument for the extensive application of the Gaussian distribution. It is based on old references about Gaussian initialization of neural network weights. One critical point here is that in those works the weights are only **initialized to be Gaussian** and do not assume that they are Gaussian after training. In fact, the weights after training become more heavy-tailed (which also acts a supporting argument as to why GG distributions with shape < 2 makes sense) [1]. It would be good if the authors provide more supporting evidence for this claim.
[1] Fortuin et al., Bayesian neural network priors revisited, https://openreview.net/pdf?id=xkjqJYqRJy
Other Comments Or Suggestions: * It is unclear what the bit-width is for the experiments at Fig.7; this would allow for better understanding the effects of the quantization steps.
* While I agree that training a model for a specific RD will yield better benefits at the end, it is a bit impractical for large scale model training as one would need to retrain the (expensive) large model multiple times for finding the desired tradeoff (e.g., to pick an appropriate $\lambda$). This is why in practice, post-training compression is more desirable.
* It would be good to show the Gaussian fit at Figure 1 for comparison purposes.
* What is the performance if you fix the shape parameter at the specific value you found at the end of training and then train again? This would highlight whether the adaptivity of the shape parameter is necessary.
Questions For Authors: No other questions besides the above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Comment #1: Evaluation against the Gaussian model and the Laplace model.**
**Response:** We thank the reviewer for this valuable suggestion. We have conducted experiments with L0.5, L1, and L2 regularization; the latter two correspond to the Laplace and Gaussian models. As can be seen, RCT with EG achieved the best tradeoff between accuracy and compression. **We also found that the shape parameter of the latest deepseek-7B is 0.85, also significantly different from 0.5, 1, or 2.**
| | Final Shape Parameter | Accuracy | EG Code | FL Code |
| ---- | --------------------- | -------- | ------- | ------- |
| DGGR | 0.13 | 91.18% | 1.37 | 10.00 |
| L0.5 | 0.22 | 91.88% | 2.90 | 10.00 |
| L1 | 0.15 | 90.65% | 1.52 | 10.00 |
| L2 | 0.10 | 88.29% | 1.16 | 10.00 |
**Comment #2: It is unclear what the "normal training".**
**Response:** We thank the reviewer for the comment. “Normal training” refers to fine-tuning models with only the cross-entropy loss.
**Comment #3: I would have expected some other tasks to be used.**
**Response:** Many thanks to the reviewer for this advice. **If accepted, we will also add results from a text generation experiment using the latest deepseek-7B, evaluating performance by next-token accuracy.** Results show 74% compression using RCT and EG0 over an already highly optimized model. In addition, EG0, the same code used for BERT, Llama, and GPT, achieved nearly identical coding efficiency to Huffman, which requires higher complexity. Additionally, the measured shape parameter (**0.85**) of deepseek-7B **further validates the GG distribution assumption.**
| | **FL** | **EG** | **Huffman** | **EG Compress** | **Huffman Compress** | **Accuracy** |
| ------------------- | ------ | ------ | ----------- | --------------- | -------------------- | ------------ |
| **Normal Training** | 11.00 | 5.93 | 4.70 | 46% | 57% | 99.97% |
| **RCT** | 11.00 | 2.90 | 2.81 | 74% | 74% | 99.97% |
**Comment #4: $\delta$ can be quite large in practice.**
**Response:** We appreciate the feedback. $\delta$ represents the quantization step, and Fig. 1 shows that almost all parameters are concentrated in $[-0.2,0.2]$. In practice, $\delta$ is therefore much smaller than 2 (e.g., $2^{-8}$ or $2^{-16}$). We will clarify this in the paper if accepted.
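To illustrate the role of the quantization step $\delta$, here is a minimal uniform scalar quantizer sketch (illustrative only; the function name is my own, not from the paper):

```python
import numpy as np

def quantize(w, delta=2 ** -8):
    """Uniform scalar quantization with step delta: weights are mapped to
    integer indices (which are then entropy coded) and reconstructed as
    idx * delta, so the per-weight error is at most delta / 2."""
    idx = np.round(w / delta).astype(np.int64)
    return idx, idx * delta

w = np.array([0.013, -0.2, 0.0007])
idx, w_hat = quantize(w)
print(np.max(np.abs(w - w_hat)) <= 2 ** -9)  # True: error bounded by delta/2
```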
**Comment #5: Is the adaptivity of the shape parameter necessary for good compression?**
**Response:** We are profoundly thankful to the reviewer for the comment. As shown in Comment #1, experiments with L0.5, L1 and L2 and the DGGR show that adapting shape parameter did lead to performance and compression gains.
**Comment #6: It is unclear if those indices actually follow the desired discretized GG distribution.**
**Response:** Many thanks to the reviewer for this advice; it is very valuable to us. We checked the distribution of indices and found that it does follow the GG distribution, although its shape is slightly different due to value mapping. For example, the shape parameters of the parameters and indices of BERT are 1.36 and 1.47 under normal training, and 0.26 and 0.30 under RCT. This does not affect the entropy coding because the “**EG code is robust to GG sources of various shapes**”.
**Comment #7: The references do not assume that weights are Gaussian after training.**
**Response:** We thank the reviewer for the feedback. We revised the description in Section 3 “**Most research assumes that LLM parameters follow the Gaussian distribution in the initialization and rarely discussed the distribution after training.**”, and cited reference [1] in paper. Thank you again for such valuable argument.
**Comment #8: It is unclear what the bit-width is for the experiments at Fig.7.**
**Response:** Many thanks to the reviewer for this advice. We sincerely apologize for the omission and provide the bit-widths below.
| **Quantization Step** | $2^{-4}$ | $2^{-8}$ | $2^{-12}$ | $2^{-16}$ | $2^{-20}$ | $2^{-24}$ | $2^{-28}$ | $2^{-32}$ | $2^{-36}$ |
| --------------------- | -------- | -------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- |
| **Normal Training** | 7 | 10 | 13 | 16 | 19 | 23 | 26 | 26 | 26 |
| **RCT** | 7 | 10 | 13 | 16 | 19 | 22 | 25 | 25 | 26 |
**Comment #9: One would need to retrain the (expensive) LLMs multiple times for finding the desired tradeoff.**
**Response:** We greatly appreciate your helpful suggestion. Indeed, the setting of $\lambda$ needs more research. However, retraining (expensive) LLMs many times is not necessary, since $\lambda$ can be **adjusted** during training: specifically, if the shape of the distribution remains large as training proceeds, $\lambda$ can be adjusted. We also found that the same empirical value achieved the best results across our experiments.
**Comment #10: It would be good to show the Gaussian fit at Figure 1 for comparison purposes.**
**Response:** We are profoundly thankful to the reviewer for bringing this issue to our attention and we have added the Gaussian fit and GG fit to Fig. 1. | Summary: The paper introduces Rate-Constrained Training (RCT), which can integrate compression during the training phase using rate-distortion optimization. The authors observed that LLM parameters follow a generalized Gaussian distribution (GG) with shape parameters less than 2. Thus, the proposed idea is to use rate-distortion theory during training, model parameters with GG distributions, and then apply exp-Golomb coding. The authors conducted experiments on various models and downstream tasks with LLMs. They demonstrate RCT can reduce memory usage by 60-90\% without accuracy loss and is better than post-training compression.
Claims And Evidence: The main claims are largely supported by theoretical and empirical results.
Weaknesses:
1. Authors claim "EG codes is robust with regard to parameter mismatches." in lines 61-62, there is no analysis or experimental results on it.
2. The authors mentioned that "different regulations during training may impact parameter distribution" in lines 110-112. I would like to see any experimental support or observations.
3. The soft gradient clipping($\epsilon$) is not evaluated.
Methods And Evaluation Criteria: Yes, but there are weaknesses:
1. Limited discussion on convergence time
2. Only evaluate classification tasks; it is better to run some experiments on generation tasks.
Theoretical Claims: Equations and algorithms are sound.
Experimental Designs Or Analyses: Weaknesses:
1. No comparison to other entropy coders in the same setup.
2. No ablation study on EG's contribution and RCT solely
3. The selection of $\lambda$ may be sensitive to different tasks, but there are no experiments and analysis.
Supplementary Material: Yes, GolombCode.py
Relation To Broader Scientific Literature: The paper synthesizes ideas from information theory, statistics, and deep learning into a novel framework. It can be a bridge between classical compression theory and LLMs optimization.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: Comments on writing-related:
1. The contributions are too long; it is better to summarize the main contributions in very few sentences.
2. Line 85 (section 2.2), "other works" are missing citations.
3. line 164: tick -> trick, line 206: difficult -> different?
4. Figure 6 is hard to interpret. Please consider enlarging the critical part or normalizing the values for better visualization.
5. I think it is better to explicitly describe the ν estimation step in Algo 1.
Questions For Authors: See the above weaknesses.
1. why omit $1/N_p$ in Eq7?
2. is the $\alpha$ in line 320 the same as $\alpha$ in Eq2? If not, please check the notions carefully.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Comment #1: Analysis of "EG codes is robust with regard to parameter mismatches."**
**Response:** We are sincerely grateful to the reviewer for providing such valuable advice. Table 3 demonstrates the 0-order EG code's optimality across models with varying shapes, confirming its robustness. A theoretical analysis can be found in Wen & Villasenor, https://ieeexplore.ieee.org/document/761289.
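For concreteness, here is a minimal order-0 exp-Golomb codec sketch (the standard EG0 construction; illustrative, not the authors' GolombCode.py): each non-negative integer $n$ is written as $k$ leading zeros followed by the $(k{+}1)$-bit binary form of $n+1$, so small values, which dominate under a sharply peaked GG distribution, get the shortest codewords.

```python
def eg0_encode(n: int) -> str:
    """Order-0 exp-Golomb code for a non-negative integer n."""
    b = bin(n + 1)[2:]               # binary string of n + 1
    return "0" * (len(b) - 1) + b    # k leading zeros, then k+1 bits

def eg0_decode(bits: str) -> int:
    k = bits.index("1")              # number of leading zeros
    return int(bits[k:2 * k + 1], 2) - 1

print([eg0_encode(n) for n in range(5)])  # ['1', '010', '011', '00100', '00101']
```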
**Comment #2: Analysis of** **"different regulations during training may impact parameter distribution".**
**Response:** We greatly appreciate your helpful suggestion and agree with the advice. We added L0.5, L1, and L2 regularization to BERT fine-tuning on the IMDB dataset, as shown below. The original BERT has shape 1.36 and was fine-tuned to 0.22, 0.15, and 0.10 under L0.5, L1, and L2 regularization, with differences in compressed size and performance. This shows the importance of shape parameter estimation. The shape parameter of the latest deepseek-7B model is **0.85**, also significantly different from 0.5, 1, or 2.
| | Shape | Accuracy | EG | FL |
| ---- | ----- | -------- | ---- | ----- |
| DGGR | 0.13 | 91.18% | 1.37 | 10.00 |
| L0.5 | 0.22 | 91.88% | 2.90 | 10.00 |
| L1 | 0.15 | 90.65% | 1.52 | 10.00 |
| L2 | 0.10 | 88.29% | 1.16 | 10.00 |
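The shape parameters reported above can be estimated by matching moments of the generalized Gaussian; a hedged sketch using the classic ratio $E|x|/\sqrt{E[x^2]} = \Gamma(2/\nu)/\sqrt{\Gamma(1/\nu)\Gamma(3/\nu)}$ (a standard estimator, not necessarily the one used in the paper; function names are my own):

```python
import numpy as np
from math import lgamma, exp, sqrt

def gg_ratio(nu):
    # E|x| / sqrt(E[x^2]) for a zero-mean generalized Gaussian; increasing in nu
    return exp(lgamma(2 / nu) - 0.5 * (lgamma(1 / nu) + lgamma(3 / nu)))

def estimate_shape(x, lo=0.05, hi=4.0, iters=60):
    """Bisect on the moment ratio to recover the shape parameter nu."""
    r = np.mean(np.abs(x)) / sqrt(np.mean(x ** 2))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gg_ratio(mid) < r else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
print(round(estimate_shape(rng.laplace(size=200_000)), 2))  # close to 1.0
```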
**Comment #3:** **The soft gradient clipping ($\epsilon$) is not evaluated.**
**Response:** We are sincerely grateful for your valuable suggestion regarding $\epsilon$. $\epsilon$ prevents gradient explosion when $\nu<1$: $1/\epsilon$ is the clipping threshold, and $\epsilon=1$ was an empirical choice. Our experiments with $\epsilon=0.1,1,10$ show that $\epsilon=0.1$ caused unstable gradients, $\epsilon=10$ led to slow compression, and $\epsilon=1$ was a good compromise.
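One plausible reading of this soft clipping, as a sketch (the exact clipping form and the function name are my assumptions, not taken from the paper): the raw gradient of the $|w|^\nu$ penalty, $\nu\,\mathrm{sign}(w)|w|^{\nu-1}$, diverges as $w\to 0$ when $\nu<1$, so its magnitude is capped at $1/\epsilon$.

```python
import numpy as np

def soft_clipped_grad(w, nu=0.85, eps=1.0):
    """Gradient of sum |w|**nu with magnitude capped at 1/eps (assumed form)."""
    w = np.asarray(w, dtype=float)
    mag = nu * np.abs(np.where(w == 0.0, 1.0, w)) ** (nu - 1)  # avoid 0**(nu-1)
    return np.sign(w) * np.minimum(mag, 1.0 / eps)             # sign(0) = 0

g = soft_clipped_grad(np.array([2.0, 0.01, 0.0, -0.01]))
print(np.max(np.abs(g)) <= 1.0)  # True: small weights hit the 1/eps cap
```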
**Comment #4:** **Limited discussion on convergence time.**
**Response:** Thank you for pointing out this problem. The number of rounds needed for RCT to converge depends on the selection of $\lambda$. We trained for 10 epochs in Fig. 2 and Fig. 3; $\lambda=1000$ converges at 700 steps (3 epochs), while smaller $\lambda$ took longer.
**Comment #5:** **Run some experiments on generation tasks.**
**Response:** Thank you for the suggestion. **We added a text generation experiment with the latest deepseek-7B and evaluated it by next-token accuracy below.** Huffman and EG codes reach the same code length under RCT, but the implementation of the EG code is more efficient. Furthermore, the shape parameter of deepseek-7B is **0.85.**
| | **FL** | **EG** | **Huffman** | **EG Compress** | **Huffman Compress** | **Accuracy** |
| ------------------- | ------ | ------ | ----------- | --------------- | -------------------- | ------------ |
| **Normal Training** | 11.00 | 5.93 | 4.70 | 46% | 57% | 99.97% |
| **RCT** | 11.00 | 2.90 | 2.81 | 74% | 74% | 99.97% |
**Comment #6:** **No comparison to other entropy coders.**
**Response:** We found that although Huffman coding shows a small efficiency advantage over EG when compressing model parameters obtained by conventional training, there is no advantage to using Huffman coding with RCT. At the same time, a Huffman code must be designed for each round of training and for each model, with $O(k\log k)$ complexity that cannot be parallelized, while EG has very simple software and hardware implementations and can be applied to all models and tasks. **In all our experiments, the same EG0 code was used.**
| Model | EG | Huffman | FL |
| ----- | ---- | ------- | ----- |
| BERT | 2.64 | 2.42 | 10.00 |
| GPT | 2.46 | 2.25 | 11.00 |
| Llama | 1.72 | 1.66 | 10.00 |
| Gemma | 1.16 | 1.15 | 11.00 |
| Task | EG | Huffman | FL |
| --------- | ---- | ------- | ----- |
| Sentiment | 2.64 | 2.42 | 10.00 |
| Spam | 2.42 | 2.19 | 10.00 |
| Topic | 3.61 | 3.18 | 10.00 |
| Q-A | 2.90 | 2.81 | 11.00 |
**Comment #7:** **No ablation study on EG's contribution and RCT solely.**
**Response:** We thank the reviewer for the feedback. We will incorporate more comparisons of RCT against Huffman coding, conventional training (no entropy coding), and L0.5/L1/L2 regularization to demonstrate the usefulness of EG.
**Comment #8: (1) Contributions are too long. (2) "other works" misses citations. (3) tick -> trick, difficult -> different. (4) Figure 6 hard to interpret. (5) Describe the ν estimation in Algo.1. (6) why omit $1/N_p$ in Eq7? (7) Is the** $\alpha$ **in line 320 the same as it in Eq2?**
**Response:** We are extremely grateful to the reviewer for catching these issues. We will fix these typos and clarify throughout the paper if it is accepted.
**Comment #9:** **The selection of $\lambda$ may be sensitive to different tasks, but there are no experiments and analysis.**
**Response:** We are sincerely grateful to the reviewer for the comment. We agree that the optimal setting of $\lambda$ needs more research, which we are currently doing. In our extensive experiments it was set to the same empirical value to achieve the best results. | Summary: The paper presented Rate-Constrained Training (RCT) for Large Language Models (LLMs), exploring model compression in the training stage. The paper showed that parameters of representative LLMs typically followed generalized Gaussian instead of vanilla Gaussian. The paper further enforced the distribution constraints (DGGR) for model training, which eased the parameter entropy encoding with exp-Golomb (EG) codes. RCT demonstrated promising compression performance with different model architectures and parameter scales.
## Update after rebuttal
The reviewer appreciated the authors’ rebuttal that resolved some of the concerns, e.g. comparisons to vanilla training + vanilla huffman coding, L1/L2 regularization. The reviewer therefore raised the score.
Claims And Evidence: Please refer to the strengths and weaknesses
Methods And Evaluation Criteria: Please refer to the strengths and weaknesses
Theoretical Claims: Please refer to the strengths and weaknesses
Experimental Designs Or Analyses: Please refer to the strengths and weaknesses
Supplementary Material: The supplementary material contains only code, which was not verified by the reviewer
Relation To Broader Scientific Literature: The literature review looks sufficient
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths
i) The exploration of compression rate aware LLM training is interesting.
ii) The paper provided various analysis, e.g. shape parameter changes in training, analysis with different lambda, etc
iii) The paper is easy to follow
Weaknesses
The main concern of the reviewer is on the evaluations
i) Baselines. While RCT was claimed to consider compression during training, the parameter encoding was applied after the model was trained, similar to vanilla solutions, e.g. huffman coding. In the paper, fixed length coding was listed as the only baseline, which was insufficient. The authors are encouraged to include baselines like, vanilla training + vanilla huffman coding, etc
ii) Ablations.
The readers would be of interest to understand the advantages of several experimental designs, e.g. dynamic shape parameter vs vanilla L1/L2 regularization
iii) Model architectures.
Most of the experiments/analysis were performed on BERT and classification tasks. The readers may wonder if RCT scales/generalizes well
iv) Presentation
Providing diagrams like Figure 2-5, while helping with the interpretation, is not sufficient. For example, it wont be easy for followup works to do the comparison. The authors are encouraged to provide also the actual numbers shown in Figures.
Other Comments Or Suggestions: L205, right column, "difficult" -> "different" ?
Questions For Authors: N.A.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Comment #1: Baseline: include baselines like, vanilla training + vanilla Huffman coding, etc.**
**Response:** We greatly appreciate the suggestion and will incorporate Huffman coding as a baseline. We found that although Huffman coding shows a small efficiency advantage over EG when compressing model parameters obtained by conventional training, there is no advantage to using Huffman coding with RCT. At the same time, a Huffman code must be designed for each round of training and for each model, with $O(N)$ complexity that cannot be parallelized, while EG has very simple software and hardware implementations and can be applied to all models and tasks. **In all our experiments, the same EG0 code was used.**
| Model | EG | Huffman | FL |
| ----- | ---- | ------- | ----- |
| BERT | 2.64 | 2.42 | 10.00 |
| GPT | 2.46 | 2.25 | 11.00 |
| Llama | 1.72 | 1.66 | 10.00 |
| Gemma | 1.16 | 1.15 | 11.00 |
| Task | EG | Huffman | FL |
| --------- | ---- | ------- | ----- |
| Sentiment | 2.64 | 2.42 | 10.00 |
| Spam | 2.42 | 2.19 | 10.00 |
| Topic | 3.61 | 3.18 | 10.00 |
| Q-A | 2.90 | 2.81 | 11.00 |
**Comment #2: Ablations: dynamic shape parameter vs vanilla L1/L2 regularization.**
**Response:** We are grateful for your valuable suggestion and added L0.5, L1, and L2 regularization to BERT fine-tuning on IMDB, as shown below. In principle, L0.5, L1, and L2 regularization assume that the parameters follow GG distributions with shapes 0.5, 1, and 2, while in reality the fitted shape parameters were below 0.22. The latest deepseek-7B model has a shape parameter of **0.85**. Such differences call for taking the shape parameter into consideration during RCT, which achieved good results.
| | Final Shape Parameter | Accuracy | EG | FL |
| ---- | --------------------- | -------- | ---- | ----- |
| DGGR | 0.13 | 91.18% | 1.37 | 10.00 |
| L0.5 | 0.22 | 91.88% | 2.90 | 10.00 |
| L1 | 0.15 | 90.65% | 1.52 | 10.00 |
| L2 | 0.10 | 88.29% | 1.16 | 10.00 |
**Comment #3: Architectures: The readers may wonder if RCT scales/generalizes well.**
**Response:** We thank the reviewer for pointing this out. **Since submitting the paper, we conducted experiments with the latest DeepSeek model and obtained similar results**, which we will include in the final manuscript if accepted. **Below is the result of text generation using deepseek-7B**, which further confirms the viability of RCT. Again, Huffman coding has an advantage over EG when used to compress models that have already been trained, at additional training and implementation cost; for RCT training, there is virtually no difference. Additionally, the measured shape parameter (**0.85**) of deepseek-7B **further validates the necessity of shape parameter estimation.**
| | **FL** | **EG** | **Huffman** | **EG Compress** | **Huffman Compress** | **Accuracy** |
| ------------------- | ------ | ------ | ----------- | --------------- | -------------------- | ------------ |
| **Normal Training** | 11.00 | 5.93 | 4.70 | 46% | 57% | 99.97% |
| **RCT** | 11.00 | 2.90 | 2.81 | 74% | 74% | 99.97% |
**Comment #4: Presentation: The authors are encouraged to provide also the actual numbers shown in Figures.**
**Response:** We thank the reviewer for the suggestion and will annotate each node in Figs. 3-4 with specific data values for clarity as shown below.
| **Lagrange multiplier** | **0** | **1** | **10** | **100** | **1000** |
| ----------------------- | ------ | ------ | ------ | ------- | -------- |
| **EG Code** | 7.31 | 6.66 | 5.78 | 4.03 | 2.64 |
| **Huffman Code** | 5.47 | 5.25 | 4.81 | 3.59 | 2.42 |
| **Fixed Length Code** | 10.00 | 10.00 | 10.00 | 10.00 | 10.00 |
| **Train Dataset** | 99.54% | 99.69% | 99.56% | 99.47% | 99.52% |
| **Test Dataset** | 93.63% | 93.85% | 93.93% | 93.68% | 92.59% |
**Comment #5: L205, right column, "difficult" -> "different"?**
**Response:** We thank the reviewer for catching the typo and will proof-read the paper very thoroughly if accepted. | Summary: The paper introduce Rate-Constrained Training (RCT), a method integrating rate-distortion optimization into the training process of Large Language Models (LLMs). RCT leverages a generalized Gaussian (GG) distribution to accurately model LLM parameters and uses exp-Golomb (EG) coding for entropy-efficient parameter encoding. The method dynamically optimizes both model performance and compression rate during training, and reduces model size (60%-90% memory reduction) while maintaining or slightly improving accuracy.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: no
Relation To Broader Scientific Literature: A novel approach to integrate rate distortion optimization methodology to the LLM training.
Essential References Not Discussed: no
Other Strengths And Weaknesses: strengths:
1. Introduces a novel rate-distortion optimization framework integrated directly into the training phase, addressing a significant gap in existing compression techniques.
2. Experiments show that LLM parameters follow a generalized Gaussian distribution (shape parameter v < 2), significantly improving modeling accuracy over traditional Gaussian assumptions.
3. Employs exp-Golomb codes which closely approach entropy limits, are robust to parameter distribution changes, and simplify implementation on hardware and software platforms.
weakness:
1. The optimal selection of the Lagrange multiplier (λ) is pretty empirical, there is no theoretical guidance or any adaptive strategies to systematically select λ across different tasks.
2. Incorporating iterative shape parameter estimation and rate calculations within every training step introduces non-trivial computational overhead, more discussions would be needed.
3. although partially addressed via soft gradient clipping, instability in gradient computation remains a concern, especially for the cases when v is close to 1.
Other Comments Or Suggestions: update after rebuttal: I appreciate the responses from the author. I'd like to keep my score.
Questions For Authors: Could the authors elaborate on the computational complexity introduced by the required "value mapping" before entropy coding? and provide insights into practical implementations
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Comment #1: The optimal selection of the Lagrange multiplier (λ) is pretty empirical, there is no theoretical guidance or any adaptive strategies to systematically select λ across different tasks.**
**Response:** We thank the reviewer the valuable feedback. Indeed the theoretical foundation for optimal setting of $\lambda$ is critical and currently under investigation. In our extensive experiments however we found that the same empirical value could be used for different models and tasks to achieve the best results.
**Comment #2: Shape parameter estimation and rate calculations within every training step introduces non-trivial computational overhead:**
**Response:** We thank the reviewer for the comment. In our experiments, statistics for estimating the shape parameter are collected during training of the previous round, incurring little additional overhead. Overall, current implementation of RCT fine-tuning of BERT on A100 GPUs using the IMDB dataset required 8.63 min/epoch, compared with conventional training at 7.92 min/epoch. This overhead stems mainly from DGGR gradient descent, as shape parameter evaluation added negligible time. Further optimization is possible, such as storing quantized DGGR gradients and replacing explicit computations with lookup tables.
**Comment #3:** **Although partially addressed via soft gradient clipping, instability in gradient computation remains a concern, especially for the cases when v is close to 1.**
**Response:** We thank the reviewer for highlighting potential gradient instability in soft gradient descent. Our retraining experiments with/without RCT show that soft gradient clipping effectively mitigates the impact of unstable gradients. For instance, with clipping, RCT achieves **mean=1.70 (std=2.18)** versus normal training's **mean=1.89 (std=1.23)**. Extensive preliminary experiments confirm that gradients did not cause significant performance fluctuations. Should soft gradient clipping fail, other means such as fixed-value gradient clipping exist.
**Comment #4:** **Could the authors elaborate on the computational complexity introduced by the required "value mapping" before entropy coding? and provide insights into practical implementations.**
**Response:** Value mapping converts parameters to sorted indices via hash tables, with O(N) time/space complexity and the process can be highly parallelizable. In our serial CPU implementation, the "value mapping" step for 110M parameters takes 0.16s, while the entire encoding process consumes 3.20s in total. | null | null | null | null | null | null |
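The value-mapping step described in the response can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: distinct parameter values are ranked once, then a hash table maps each parameter to its rank in a single pass (for quantized parameters the number of distinct values is small, so the one-off sort is negligible next to the O(N) mapping pass).

```python
def value_mapping(params):
    # Rank the distinct values once, then map each parameter to its
    # sorted index via a hash table -- one O(1) lookup per parameter.
    rank = {v: i for i, v in enumerate(sorted(set(params)))}
    return [rank[v] for v in params]

print(value_mapping([0.5, -1.2, 0.5, 3.0]))  # [1, 0, 1, 2]
```

The resulting index stream is the kind of integer sequence that would then be handed to an entropy coder.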
Exact risk curves of signSGD in High-Dimensions: quantifying preconditioning and noise-compression effects | Accept (poster) | Summary: The paper studies the precise risk curves of signSGD in high dimensional limit for quadratic loss with Gaussian data under certain assumptions on the label noise. It contrasts the risk curves with SGD and quantifies the differences in terms of four effects - effective learning rate, noise compression, preconditioning and gradient noise reshaping. The exact risk curves are numerically verified for various examples.
Claims And Evidence: The claims in the work are theoretical and proofs are provided in the Appendix.
Methods And Evaluation Criteria: N/A
Theoretical Claims: I had a brief look at Appendix A.1, B and C, which seem mostly correct to me.
Experimental Designs Or Analyses: N/A
Supplementary Material: I went through Appendix A.1, B and C.
Relation To Broader Scientific Literature: The work is well placed among the recent works related to analyzing preconditioned gradient algorithms.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The main strength of the paper is the rigorous extension of HSGD to signSGD and then using the result for the clear discussion of the differences between the risk curves of the two.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Can the authors explain the reasoning behind why they expect $K_\sigma$ to have a power-law spectra when $K$ has power-law spectra?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the review; we’re glad that you appreciate our contributions. Our expectation that $K_\sigma$ inherits a power-law spectrum from $K$ is mostly speculative. We should qualify that this claim is basis dependent: if $K$ is diagonal then $K_\sigma$ is the identity, which collapses any power-law structure. So this is more about 'very non-diagonal' matrices.
This is probably best defined as having eigenvectors which are very spread out over the space, such as in CIFAR-10 or in an i.i.d. random matrix. In the case of CIFAR-10, the map appears to preserve the power-law structure but change the power. We've included an additional plot to illustrate this effect:
https://anonymous.4open.science/r/revision-2025-stuff-F275/cifar_eigenvalues.pdf
We don't know of an existing theorem demonstrating this effect.
We’ll revise this section to make ourselves clearer. | Summary: This paper studies SignSGD, which can be viewed as Adam without the moment accumulator, in the high-dimensional limit. The main goals of the paper are to quantitatively understand the observed preconditioning and "noise compression" effects of SignSGD in practice. Toward this end, a limiting SDE (SignHSGD) and ODE (SignODE) are derived for SignSGD on square-loss linear regression. Notably, this requires non-trivial analysis due to the discontinuous sign operation. The authors then compare SignSGD to SGD in various ways, demonstrating four key differences: 1. effective learning rate, 2. noise compression, 3. diagonal preconditioning, 4. gradient noise rescaling, each of induces regimes where SignSGD may be favorable over SGD or vice versa. Most notably, SignSGD is expected to provide benefits for many non-Gaussian or heavy-tailed noise classes, supporting folk knowledge.
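The rebuttal does not spell out the $K \to K_\sigma$ map, but one form consistent with the stated facts (diagonal $K$ collapses to the identity) is the covariance of $\mathrm{sign}(g)$ for $g \sim N(0, K)$, which by the Gaussian arcsine law is $(2/\pi)\arcsin$ applied entrywise to the correlation matrix of $K$. A sketch under that assumption:

```python
import numpy as np

def K_sigma(K):
    # Assumed form: covariance of sign(g), g ~ N(0, K); by the Gaussian
    # arcsine law this is (2/pi) * arcsin of the correlation matrix.
    d = np.sqrt(np.diag(K))
    corr = K / np.outer(d, d)
    return (2.0 / np.pi) * np.arcsin(np.clip(corr, -1.0, 1.0))

# Diagonal K collapses to the identity, as stated in the rebuttal.
K_diag = np.diag([3.0, 1.0, 0.25])
print(np.allclose(K_sigma(K_diag), np.eye(3)))  # True
```

Under this reading, any non-diagonal structure of $K$ survives only through the off-diagonal correlations, which is why the effect is basis dependent.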
Claims And Evidence: The claims in this paper are well-supported by rigorous theorem statements and proofs, and supporting results in the appendix. Numerical experiments are presented to support the theoretical trends.
Methods And Evaluation Criteria: The numerical experiments are helpful for contextualizing the theory, and are documented in the appendix.
Theoretical Claims: I did not check all the proofs entirely. However, I checked the proof strategies of the main Theorems and they make sense to me. I also checked the comparisons between SGD and SignSGD; these are correct by my verification.
Experimental Designs Or Analyses: A sufficient description of experiment set-ups is contained in the main paper, with additional details contained in Appendix D and H.
Supplementary Material: I checked through most of the supplementary material to understand the proof structure of the major claims. I also checked through the additional section (Appendix E) regarding a partial extension to Adam.
Relation To Broader Scientific Literature: This work belongs to the general category of literature concerned with capturing the practical behavior of optimization algorithms in deep learning settings. In particular, this paper takes the approach of deriving the corresponding continuous-time SDE/ODE of its algorithm of interest, SignSGD, and deriving some key properties therein. On the other hand, most of prior works along this line study SGD. However, it is well-understood in practice that vanilla SGD experiences a host of problems in deep learning, for which various practical adjustments have been introduced. I think this work references the important prior work leading to this point, and provides some interesting tools and insights toward closing the "insight gap" between SGD and more practical methods like SignSGD and Adam. In particular, the results formalizing the unique interaction of SignSGD and noise are potentially a fruitful way of studying this type of adaptive gradient methods. Furthermore, I think the "case-by-case" results, such as when diagonal preconditioning hurts or helps (Section 4.3) are also potentially fruitful, as they may serve as a basis for understanding the relative benefit of other families of non-diagonal preconditioning methods in deep learning, such as KFAC and Shampoo.
Essential References Not Discussed: None as far as I can tell.
Other Strengths And Weaknesses: I think this paper is quite well-written and understandable. The main insights and arguments are likely of interest to the machine learning optimization community. I have described some of the strengths that stood out to me earlier. I have a few minor things to point out / ask:
- The analysis in this paper is restricted to linear regression and MSE loss (which is acknowledged in the Discussion), and already seems quite involved. It would be helpful to highlight to what degree the broader class of noise considered here can expand expressivity of the problem set-up. Additionally, can the "linearized" analysis here potentially extend to the NTK regime (despite the expressivity gap the NTK regime itself suffers)?
- In the main theorems (Thm 1 and 2), the gap between the discrete descent method and the flow suffers an $\exp(T)$ factor from an application of Gronwall's inequality. This implies for fixed dimension $d$, the drift blows up exponentially in time, which doesn't seem to practically be the case. What would be required to make this bound less conservative, if possible?
Other Comments Or Suggestions: Minor comments:
- The authors should remember to put in the Impact Statement.
- Line 304 first column: "very SignSGD" -> "SignSGD is very".
(Possibly errant) suggestions:
- As previewed earlier, some other works have considered departing from diagonal preconditioning. The most relevant algorithm in this class is Shampoo (or more recently still, Muon). The base curvature matrix between SignSGD and Shampoo I believe are the same (AdaGrad), but Shampoo does a layer-wise "Kronecker-Factorization" rather than the diagonal. I think it would be extremely interesting to see if this type of preconditioning has similar beneficial interaction with noise, and whether it broadens the class of covariances where preconditioning is helpful.
Questions For Authors: None beyond the above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your comments, we address some key questions here.
**NTK Regime**
If we consider the mean squared loss for a general neural network $f(\theta, x)$ in the NTK regime, such that $f(\theta, x) = f(\theta_0, x) + \nabla_{\theta} f(\theta_0, x)^T (\theta_k - \theta_*)$, then the risk, in a student-teacher setup, is given by
$$
\mathcal{R}(\theta_k) = E [ (f(\theta_k, x) - f(\theta_*, x))^2 ] = E[ \langle \nabla_{\theta} f(\theta_0, x), \theta_k - \theta_*\rangle^2 ] = (\theta_k - \theta_*)^T E[\Theta(x,x)] (\theta_k - \theta_*)
$$
where $\Theta$ is the NTK. Therefore, in order to describe the NTK regime, we need to understand the expected signSGD updates where the “data” is given by $\nabla_{\theta} f(\theta_0, x)$. Right now, we must assume this is Gaussian; if the model is such that $\nabla_{\theta} f(\theta_0, x)$ is Gaussian, then our results hold as written. In the more likely case that $\nabla_{\theta} f(\theta_0, x)$ isn’t Gaussian but is sufficiently regular (such as subgaussian), then if we can formulate our results for this larger class of data distributions, we can capture the behaviour of this NTK regime.
Quite possibly our equations, if not our method of proof, already hold for subgaussian data given the good agreement with the CIFAR-10 and IMDB datasets which are not themselves Gaussian.
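The identity above — that the linearized risk is a quadratic form in the expected NTK — is an exact algebraic rearrangement, which can be checked numerically. The feature matrix below is a hypothetical stand-in for samples of $\nabla_\theta f(\theta_0, x)$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 5000
# Hypothetical "NTK features": rows play the role of grad_theta f(theta_0, x).
Phi = rng.standard_normal((n, d)) @ rng.standard_normal((d, d))
theta, theta_star = rng.standard_normal(d), rng.standard_normal(d)

# Risk as an average of squared linearized residuals ...
risk_mc = np.mean((Phi @ (theta - theta_star)) ** 2)
# ... equals the quadratic form with the empirical kernel E[phi phi^T].
K_hat = Phi.T @ Phi / n
risk_quad = (theta - theta_star) @ K_hat @ (theta - theta_star)
assert np.isclose(risk_mc, risk_quad)
```

In the population limit $K_\hat{}$ becomes $E[\Theta(x,x)]$, matching the displayed equation.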
**Exponential blowup in T**
This $e^T$ term appears because we allow for very flexible learning rate scheduling, including schedules where the risks diverge. With an additional restriction on the smallness of the learning rate, one could improve the exponential explosion in $T$. It will not go away entirely, however -- optimistically one could put a $polylog(T)$. On very long time-frames there will always be a chance for the stochastic optimizer to explode.
We don't have a full picture of how a proof like this would go, but roughly you would want to show that the projection of the errors in each eigendirection of $\bar{K}$ are $O(1/\sqrt(d))$ and crucially remain bounded that way in time due to the contractive nature of the dynamics in that eigendirection. This is also the reason the small stepsize is needed -- to ensure the dynamics tends to contract in each eigendirection, and hence the approximation errors are shrunk as well. (Our current proof uses a resolvent argument, which is an average over all eigendirections; this is technically simpler, but would not be good as the contractive-ness of the optimizer dynamics is lost). | Summary: The paper studies the dynamics of signSGD in a linear regression setting with Gaussian covariates $x \sim N(0,K)$ and noisy labels $y = \langle x,\theta_\ast \rangle + \epsilon$. The authors derive a limiting SDE (signHSGD) that describes the dynamics of $\theta$ as the dimension $d \to \infty$ which depends on both the covariance $K$ and the distribution of $\epsilon$. After transforming this process onto a process on the residual $r$, they derive a deterministic equivalent (signODE) which approximates the loss of of signSGD as $d \to \infty$. Finally, they compare the SDE for signHSGD to the corresponding SDE for SGD to compare the behavior of the two algorithms and to predict when signSGD should outperform vanilla SGD.
**Update after Rebuttal:** I am satisfied with the author's responses and have updated my score.
Claims And Evidence: This paper is focused on theoretical understanding and the technical claims are supported by the proofs in Appendices A, B.
There is a minor claim/conjecture which is unsupported (lines 416-424). It would be good to include a few words about why the authors expect this to hold.
Methods And Evaluation Criteria: This paper is focused on developing theoretical understanding in a linear regression setting, and the experiments on more realistic datasets support the idea that Gaussian universality may allow the analysis to extend beyond the Gaussian setting. However, this is not a central claim of the paper.
Theoretical Claims: I skimmed the proofs in appendices A, B but did not verify their correctness.
Experimental Designs Or Analyses: n/a
Supplementary Material: I skimmed the proofs in appendices A, B but did not verify their correctness.
Relation To Broader Scientific Literature: This paper is attempting to understand the effectiveness of coordinate-wise adaptivity (as in Adam) in a concrete theoretical setting. The approach in this paper is most closely related to (Collins-Woodfin & Paquette 2023) which performed a similar analysis for SGD.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: Strengths:
- The paper is generally well written and includes an extended discussion of the different terms in eq. 8
- The theory seem to match experiments incredibly well (in the linear regression setting)
- The derivation of the SDE for signHSGD is highly nontrivial
Weaknesses:
- As acknowledged by the authors, this paper focuses on the simple setting of linear regression with Gaussian covariates. However, even in this setting, we have a very poor understanding of coordinate-wise adaptive optimizers like signSGD or Adam.
- The paper focuses on online SGD with batch size 1. It would be interesting to extend these results to the more general case.
- There is limited discussion about the effect of the noise reshaping $K \to K_\sigma$.
Other Comments Or Suggestions: - The use of $\eta_t'$ for the signSGD learning rate and $\eta_t := \eta(t)$ for the rescaled learning rate is a bit confusing
- The discussion for the effects of $\epsilon$-compression (4.2) and preconditioning (4.3) are clear but the noise reshaping (section 4.4) is a bit handwavy. In particular, the conjectures in the last paragraph are difficult to follow and could merit additional explanation in the Appendix, even if there is no rigorous theory to back them up. It could also be useful to run some ablations with and without the reshaped noise covariance to check if it's actually beneficial or just a side-effect of the preconditioning.
- In Assumption 5, $z$ is not defined. Is the assumption that this holds for all $z$? Could this assumption simply be replaced with the assumption that $v^T (\theta_0 - \theta^\ast)$ is sub-Gaussian for any $v$? Also, why does this assumption hold for a deterministic $\theta_0,\theta_\ast$? If $v = R(z;\overline{K})_i$ is aligned with $\theta_0 - \theta^\ast$ then Assumption 5 forces $\|\theta_0 - \theta^\ast\| \lesssim d^{-1/2}$?
Questions For Authors: - This paper focuses on the batch-size $1$ setting. How are both signHSGD and the strength of the SDE approximation affected by the batch size? Would your analysis recover the "square root" scaling rule in Malladi et al. for Adam? Additionally, would the analysis extend to batch sizes that grow with $d$?
- The analysis restricts to learning rates $\eta_t' = \Theta(1/d)$ (eq. 6). Is the issue that if $\eta_t' \gg 1/d$ the noise will dominate and the SDE becomes degenerate in the limit $d \to \infty$?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the author for their detailed review and address weaknesses and questions below.
## Weaknesses
**Noise reshaping**
For the $K \to K_\sigma$ mapping, there is strong dependence on the eigenbasis of $K$. For example if $K$ is diagonal then $K_\sigma = Id$. Otherwise, if the eigenbasis is relatively random (such as what is seen in CIFAR and in Figure 4), the operation can preserve some spectral properties of $K$. For example, in CIFAR-10, $K_\sigma$ appears to preserve the power-law structure of $K$, albeit with a different power (plots [here](https://anonymous.4open.science/r/revision-2025-stuff-F275/cifar_eigenvalues.pdf)).
One major point (discussed below in response to the learning-rate question) is that the trace of $K_\sigma$ can be much larger than the trace of $K$, which will cause signSGD gradient noise to be a major slow-down.
Rigorously saying more is probably difficult. But, we'll rewrite this section in order to be clearer.
**Assumption 5**
Thank you for catching this. We like your suggestion to instead assume $v^T(\theta_0 - \theta_*)$ is subgaussian with constant $O(d^{-1/2})$. You're right that general deterministic initialization and targets do not fit the assumption. For the record, the assumption is for all $z \in B(||K||)$.
## Questions
**Batch sizes**
This is a very good question -- thanks for bringing it up. We have worked out some of the mathematics for batch sizes small compared to $d$, and plan to add the following results to an appendix.
So first, just to be clear, we are following usual optimization (and standard package) conventions, and we are applying the minibatch average prior to applying the sign-function.
We will take a fixed batch size $b$ independent of the dimension of the problem. One could also look at other scaling regimes, and we expect (based on the fixed-batch-size case) that the behavior continues to hold for batch sizes satisfying $b/d \to 0$, but we’re only fully confident for fixed $b$ and $d \to \infty$. The good news is that even in this limited setting, the story is already interesting.
The bias-term (or descent-term) is the only term affected by minibatching. The gradient noise term (the Brownian term in homogenized sgd) is unaffected.
The homogenized-SGD equation is changed to
$$ \mathrm{d}\Theta_t = -\eta_t N_b \frac{\varphi^{(b)}(\mathcal{R}(\Theta_t))}{\sqrt{2 \mathcal{R}(\Theta_t)}} \overline{\mathbf{K}} \left(\Theta_t-\theta^{*}\right) \mathrm{d}t + \eta_t \sqrt{\frac{2 \mathbf{K}_\sigma}{\pi d}} \, \mathrm{d} \mathbf{B}_t $$
Here $N_b$ is a numerical constant: the ratio of the mean of a chi variable with $b$ degrees of freedom to the mean of a chi variable with $1$ degree of freedom. It is asymptotic to $\sqrt{b}$, consistent with what is shown in Malladi et al.
The $\varphi^{(b)}$ is the $\varphi$ of a new convolved noise. It is the $\varphi$ that results from looking at $\sum_{i=1}^b \epsilon^{(i)} \omega_i$ where $\epsilon^{(i)}$ are iid copies of the label noise and where $\omega$ is an independent random vector, drawn uniformly from the sphere of dimension $b$.
We have run some simulations implementing this (code is available in the anonymous code repository for the paper). Simulations at fixed learning rate, varying batch size are available [here](https://anonymous.4open.science/r/revision-2025-stuff-F275/rademacher_batch_comparison_dimension_500_evals_logspace_num_runs_10.pdf) and with $\sqrt{b}$ scaling [here](https://anonymous.4open.science/r/revision-2025-stuff-F275/rademacher_batch_comparison_dimension_500_evals_logspace_num_runs_10_rescaled_lr.pdf).
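The constant $N_b$ defined above has a closed form via $E[\chi_k] = \sqrt{2}\,\Gamma((k+1)/2)/\Gamma(k/2)$. A sketch computing the ratio and checking its $\sqrt{b}$-type growth (this is our reading of the definition, not code from the paper's repository):

```python
from math import gamma, sqrt

def chi_mean(k):
    # Mean of a chi random variable with k degrees of freedom.
    return sqrt(2.0) * gamma((k + 1) / 2) / gamma(k / 2)

def N(b):
    # Ratio of chi means, per the rebuttal's definition of N_b.
    return chi_mean(b) / chi_mean(1)

print(N(1))  # 1.0
print([N(b) / sqrt(b) for b in (16, 64, 256)])  # settles near a constant
```

The ratio $N_b/\sqrt{b}$ approaches a constant, i.e. $N_b$ grows like $\sqrt{b}$ up to a constant factor, consistent with the square-root scaling noted above.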
**Learning rate scaling**
Indeed, the choice of $\eta’ = O(1/d)$ is needed to prevent (gradient) noise domination. See the Hessian term of equation 94 where the Hilbert-Schmidt inner product is $O(Tr(K_\sigma))$. In this setting, if $\eta’$ decays slower than $1/d$ this noise term would dominate. Had $\eta’$ decayed faster, the noise term would vanish and we would be performing gradient flow in the high-dimensional limit.
For SGD it is possible to instead rescale by the “intrinsic dimension” of the data, defined as the trace of the covariance normalized by its operator norm ($Tr(K) / \|K\|$). We think a similar change could be made for signSGD, but some care has to be taken since signSGD reshapes the gradient covariance to $K_\sigma$. This connects to our discussion about the reshaping of gradient noise. Given a diagonal $K$ with intrinsic dimension $O(\sqrt{d})$, signSGD still has intrinsic dimension $O(d)$ since $K_\sigma = Id$. This leads to signSGD requiring learning rates a factor of $O(d^{1/2})$ smaller than vanilla SGD to guarantee convergence, leading to slow dynamics.
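The $O(d^{1/2})$ gap described above can be made concrete with an illustrative diagonal example (our own numbers, not the paper's): eigenvalues $\lambda_i = i^{-1/2}$ give $K$ an intrinsic dimension of order $\sqrt{d}$, while the corresponding $K_\sigma = Id$ has intrinsic dimension $d$.

```python
import numpy as np

d = 10_000
lam = np.arange(1, d + 1) ** -0.5        # eigenvalues of a diagonal K
intrinsic_K = lam.sum() / lam.max()      # Tr(K)/||K|| ~ 2*sqrt(d)
intrinsic_Ksig = float(d)                # K_sigma = Id for diagonal K
print(intrinsic_K, intrinsic_Ksig / intrinsic_K)  # ~198.5, ~50x gap
```

The roughly $\sqrt{d}/2$ ratio between the two intrinsic dimensions is what forces the correspondingly smaller signSGD learning rate.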
Let us know if we can clarify anything else; if we have addressed your concerns satisfactorily, we hope that you will consider revisiting your review score. | Summary: The authors analyze Sign-SGD for linear regression in high dimensions by deriving limiting differential equations that describe its behavior. Their analysis quantifies four main effects: learning rate adjustment, noise compression, diagonal preconditioning, and gradient noise reshaping. The paper includes theoretical proofs and experimental validation on both synthetic and real datasets.
Claims And Evidence: Main Claims and Evidence:
Claims:
1. Sign-HSGD and its deterministic equivalent ODE are good models for the risk dynamics induced by Sign-SGD
2. The risk curves of Sign-SGD are well approximated by the risk curves of Sign-HSGD and this approximation improves as dimension grows.
3. Convergence to a neighbourhood of the solution (?) under a fixed learning rate
4. The condition number affects whether Sign-SGD or SGD should be favoured
Evidence:
1. Figure 1
2. Theorem 1 / Theorem 2
3. Theorem 3
4. Theorem 4
The evidence to me was convincing and followed from my intuition of the algorithm, however I did not review the appendices in detail. I especially appreciated the mixture of theoretical analysis with a grounded set of experiments which verified the results derived seemed to make sense.
Methods And Evaluation Criteria: Benchmarks were a mixture of toy problems like linear regression under varying noise and preconditioned assumptions which more closely resembled the theoretical setting analyzed, as well as a set of more practical experiments on common ML benchmarks like CIFAR10 and IMDB.
Theoretical Claims: I did not verify the claims in detail but based on my experience they seem reasonable. I would like to double check that the result from theorem 3 roughly align with standard results from stochastic convex optimization regarding SGD with a fixed step-size. If they do not would the authors explain in more detail the differences.
Experimental Designs Or Analyses: The experimental designs at there face seem sound, and tailored to fit the larger narrative of the paper.
Supplementary Material: I did not have time to review the appendix in detail.
Relation To Broader Scientific Literature: I think that this work seems to address a unique and important problem posed by modern optimization research, and I believe a better understanding of the optimization dynamics of algorithms like sign decent would benefit the larger machine learning community.
Essential References Not Discussed: Possibly might want to look at "Heavy-Tailed Class Imbalance and Why Adam Outperforms Gradient Descent on Language Models"
Other Strengths And Weaknesses: Strengths:
1. Comprehensive theoretical framework:
- Proves convergence to limiting equations
- Derives exact formulas for limiting risk
- Quantifies specific effects (learning rate, noise compression, preconditioning, noise reshaping)
2. Strong empirical validation:
- Theory matches experiments even at moderate dimension (d=500)
- Works on both synthetic and real datasets
- Demonstrates convergence under different noise conditions
3. Clear practical implications:
- Shows when Sign-SGD might outperform SGD (based on condition numbers)
- Explains behavior under different noise distributions
Weaknesses:
1. Limited scope:
- Theory only covers linear regression
- Strongest results require specific noise conditions (C² density near zero)
- Extensions to non-smooth noise only work above risk threshold
2. Some findings lack theoretical foundation:
- Success on real (non-Gaussian) data isn't fully explained
- Authors note this is left for future work
3. Some practical aspects not addressed:
- Doesn't analyze mini-batch settings
- Doesn't fully connect to practical deep learning scenarios / analyze non-convex settings.
Other Comments Or Suggestions: - The "In a nonconvex setting, identifying ...." paragraph is broken and needs to be revised.
Questions For Authors: - Can you explain: "But we expect the conclusion of (27) remains mostly true in well-conditioned settings."
- From the above: I would like to double check that the result from theorem 3 roughly align with standard results from stochastic convex optimization regarding SGD with a fixed step-size. If they do not would the authors explain in more detail the differences.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, and address some key points below.
## Minibatching:
We have added more details regarding batch sizes $b$ in our response to Reviewer ugE8. One particular consequence of minibatching is that our condition on the behaviour of the noise near $0$ relaxes significantly. Averaging multiple sources of noise effectively conditions the noise even for $b = 2$. Numerical results can be found [here](https://anonymous.4open.science/r/revision-2025-stuff-F275/rademacher_batch_comparison_dimension_500_evals_logspace_num_runs_10.pdf) for fixed learning rate, varying batch size.
## Questions:
**First question**
(27) is comparing the risk obtained by the optimally scheduled signSGD to optimally scheduled SGD when data is isotropic. We don’t know the optimal learning rate schedule in non-isotropic settings but what we expect is that whether or not signSGD outperforms SGD depends generally on the magnitude of $\psi$.
The argument can be sketched as follows: for any signSGD learning rate $\eta^{S}(t)$, run SGD with a learning rate of $\eta(t) =\eta^{S}(t)\phi(R_t) / \sqrt{R_t}$. Now the descent terms are the same up to the $K$ vs $\overline{K}$ distinction. With this learning rate we see the function $\psi$ in the variance term of the SGD risk equation. If the various matrices involved ($K, \overline{K}, K_\sigma$) are similar enough (as they are when $K=Id$), then whether or not signSGD is beaten by SGD with this learning rate is again determined by the magnitude of $\psi$. This “similar enough” statement is what we mean by well-conditioned settings.
**Second question**
For SGD in the same setting, the limiting risk can be found in Equation (254) and is
$$
\frac{\eta v^2 Tr(K) }{2(2d-Tr(K)\eta)}.
$$
For sufficiently small $\eta$, the limiting SGD risk is $O(\eta v^2 Tr(K) / d)$. This is consistent with 'neighborhood' convergence for fixed step-size SGD when $v > 0$ (e.g. Theorem 5.5 of Garrigos-Gower '23, Handbook of Convergence Theorems) and convergence with fixed step-size in the case $v=0$.
In contrast we can use Theorem 3 to approximate the limiting signSGD risk as up to constants:
$$
\frac{\eta Tr(\overline{K})}{d} \max \left (\frac{\eta Tr(\overline{K})}{d}, v \right)
$$
Notably, even if $v = 0$ this limiting risk is not $0$. The simple explanation is that with SGD, the average magnitude of the gradient naturally decreases with the risk and so we effectively take smaller steps; however for signSGD the magnitude of the gradient is always $\sqrt{d}$ and does not change with the risk. To achieve similar results with signSGD the stepsize needs to be decreased. This aligns with what is seen in practice for algorithms like Adam.
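This qualitative difference is easy to reproduce in a toy simulation (illustrative parameters, not the paper's setup): with $K = Id$, noiseless labels ($v = 0$) and fixed step sizes, online SGD drives the risk toward zero while signSGD stalls at a step-size-dependent floor.

```python
import numpy as np

rng = np.random.default_rng(1)
d, steps = 50, 5000
theta_star = rng.standard_normal(d)
theta_star /= np.linalg.norm(theta_star)
th_sgd, th_sign = np.zeros(d), np.zeros(d)
eta_sgd, eta_sign = 0.01, 0.005  # illustrative fixed step sizes

for _ in range(steps):
    x = rng.standard_normal(d)
    g_sgd = (x @ th_sgd - x @ theta_star) * x    # noiseless labels: v = 0
    g_sign = (x @ th_sign - x @ theta_star) * x
    th_sgd -= eta_sgd * g_sgd
    th_sign -= eta_sign * np.sign(g_sign)        # steps never shrink

def risk(th):
    return 0.5 * np.sum((th - theta_star) ** 2)

print(risk(th_sgd), risk(th_sign))  # SGD near 0; signSGD at a nonzero floor
```

Since the sign step always has norm $\eta\sqrt{d}$ regardless of how close the iterate is to the optimum, signSGD cannot settle below a step-size-scale neighbourhood, matching the explanation above.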
## Other:
Regarding the "In a nonconvex setting, identifying ...." paragraph: we have attempted to improve the wording for clarity, but we’d be happy for additional guidance here. Our intention is to say that dividing by the norm of the gradient could be reasonable in vanishing-gradient situations, such as near saddles.
Pointwise Information Measures as Confidence Estimators in Deep Neural Networks: A Comparative Study | Accept (poster) | Summary: This paper presents a comparative analysis of three point-wise information measures—PMI, PVI, and PSI—from both theoretical and empirical perspectives. The study theoretically analyzes the sensitivity of these measures to margin effects and intrinsic dimensionality, along with their convergence rates. Empirical validation is conducted through numerical experiments using benchmark computer vision models and datasets, evaluating the measures' performance in failure prediction and confidence calibration tasks.
## Update after rebuttal
I keep my score unchanged, and I think the authors addressed my concerns well.
Claims And Evidence: The claim that PVI is the "most well-rounded" measure cannot be justified solely by analyzing invariance and margin sensitivity. By definition, PVI critically depends on the predictive family $\mathcal{V}$. Improper specification of $\mathcal{V}$ may significantly degrade PVI's performance, necessitating rigorous theoretical and empirical investigation into $\mathcal{V}$'s role. While Tables 2-3 demonstrate PVI's superiority over post-hoc baselines, two critical questions remain unresolved:
1. How severely does misspecification of $\mathcal{V}$ impact PVI's effectiveness?
2. What principles should guide the selection of $\mathcal{V}$ for optimal PVI performance?
Methods And Evaluation Criteria: While the proposed experimental framework appears methodologically sound, three critical enhancements are required:
1. Systematic investigation of PVI's dependence on $\mathcal{V}$ across parameter configurations.
2. Quantitative benchmarking against baseline methods under controlled $\mathcal{V}$ variations.
3. Causal analysis elucidating why PVI outperforms post-hoc alternatives in specific $\mathcal{V}$ regimes.
Theoretical Claims: I have reviewed the proofs provided in the Supplementary Materials. The proofs demonstrate scientific validity under the specified theoretical assumptions outlined in this study.
Experimental Designs Or Analyses: The experimental design and analytical validity were checked. The experiments in the paper properly evaluated PMI, PSI, PVI, and other post-hoc methods under controlled experimental parameters. Detailed supplementary documentation delineates performance variations across operational conditions for PMI, PSI, and PVI. Despite this, further controlled ablation studies remain necessary to elucidate the relationship between PVI and the predictive family $\mathcal{V}$.
Supplementary Material: I conducted a comprehensive review of all texts in the Supplementary Materials, with particular attention to Sections B and C.
Relation To Broader Scientific Literature: The study systematically compared three uncertainty measures (PMI, PSI, and PVI) through theoretical analysis and empirical validation, demonstrating PVI's promising performance in uncertainty quantification for deep learning tasks. These findings not only advance key applications like failure prediction and misclassification detection, but also provide novel insights for enhancing model robustness against adversarial attacks.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths:
1. The comparison of point-wise versions of MI, SI, and VI provides an insightful and significant contribution to quantifying uncertainty in deep learning models. The paper offers both theoretical and empirical comparisons of these measures.
2. The paper demonstrates that PVI outperforms other post-hoc methods in most failure prediction and confidence calibration scenarios. This suggests PVI could be a promising approach for exploring adversarial robustness.
Weaknesses:
1. The theoretical framework requires further development by relaxing the stringent assumption in Section 3.1 that requires $\mathcal{V}$ to be a fully connected neural network.
2. The role of $\mathcal{V}$ warrants more rigorous theoretical analysis and empirical ablation investigation.
Other Comments Or Suggestions: Typos:
1. Lines 066-069 in right column: By doing so, .... in the data.
2. line 1212: sa1me
Questions For Authors: 1. To what extent does misspecification of $\mathcal{V}$ influence the efficacy of PVI?
2. What guiding principles should inform the selection of $\mathcal{V}$ to optimize PVI performance?
3. In Table 2's third numerical column, why does PVI exhibit consistently larger confidence intervals when AUPR$_{f,error}$ serves as the evaluation metric?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive remarks and valuable suggestions on the paper. Below, we address the concerns and questions raised by the reviewer.
**Weakness 1 (On the Assumption for PVI's Theoretical Properties)**
This is a valid point, and we will explicitly specify the limitations of this assumption for our theoretical results relating to PVI. That said, our results can be extended to networks that are not fully connected by treating the input as the feature layer that eventually feeds into a fully connected architecture. For instance, in a CNN we can take the feature output after the convolutional layers, which is fed into a fully connected set of layers before yielding the final output; our results remain applicable in such cases.
**Question 1 (On the Misspecification of $\mathcal{V}$)**
Yes, misspecification of $\mathcal{V}$ can reduce the applicability of both the theoretical and empirical results in our work. However, we assume that knowledge of the model’s architecture is available; when the model is entirely a black box, our theoretical and empirical results no longer apply. To that end, we are including additional experiments where we either over- or under-specify the complexity of $\mathcal{V}$ and observe how this affects the results.
**Question 2 (On the Selection of $\mathcal{V}$)**
Ideally, $\mathcal{V}$ should match the architecture of the model used to generate the predicted outputs. This alignment ensures that the PVI estimation accurately reflects the confidence characteristics of the original model.
**Question 3 (On the $AUPR_{f,error}$)**
Since $AUPR_{f,error}$ emphasizes the accurate ranking of rare error cases (i.e., treating incorrect predictions as positives and using low confidence scores as indicators), methods that can precisely distinguish which specific samples are likely to be errors will naturally perform better. It essentially asks, “How well can we rank and detect errors using the model’s uncertainty/confidence score?” That PVI outperforms the others on this metric reflects its ability to identify misclassifications, even when errors are rare. We hypothesize that this may be attributed to PVI's incorporation of both the network architecture and the prior probabilities of the output classes, which helps mitigate overconfidence on misclassified samples. | Summary: The authors propose a new indicator for quantifying the confidence of a neural network: the pointwise information between an input’s features and the corresponding output. When the information is high (corresponding to a large reduction in the entropy of the true label when conditioned on the input), we expect that the input features are especially useful for making predictions (thus confidence should be higher).
Three measures of pointwise information are compared: mutual information, V-information, and sliced mutual information. The authors first investigate these measures along the dimensions of transformation invariance, sensitivity to geometric properties of the feature distribution, and the convergence rates of their estimators. They then conduct a series of experiments that demonstrate the usefulness of these PI measures for failure/selective prediction and confidence calibration.
### Update after rebuttal
My main concerns about this work still remain — namely, that the empirical results are not strongly motivated or justified by the theoretical discussion (as currently presented), and that the experiments in their current form are not convincing enough for practitioners to consider adoption of this method.
This paper would be significantly strengthened by the use of more-realistic datasets (not toy ones like MNIST, Fashion-MNIST, CIFAR-10, etc.), especially if such experiments can demonstrate a clear and consistent advantage to PI estimation methods over existing UQ techniques. I maintain my score.
Claims And Evidence: The authors claim that pointwise information (PI) is a useful indicator of predictive uncertainty and that PVI outperforms other PI estimation methods and UQ benchmarks. While experiments offer partial support, the connection between empirical evidence and theoretical analysis is poorly demonstrated.
In Section 3, invariance is emphasized as crucial for confidence estimation, yet PMI—the most invariant method—does not perform best. The justification for this (that excessive invariance may be counterproductive) is unconvincing and inconsistent. Remark 1 states that “it is important to be invariant to bijective transformations T in the context of confidence estimation”, and that the “ideal scenario is when the above is true for any invertible, and thus information-preserving transformation T”. However, Remark 9 suggests “the fact that PMI is invariant to a much larger degree of non-linear homeomorphisms may not always be advantageous”. These claims directly contradict each other. If invariance matters for confidence estimation, why does it not lead to observably better empirical performance? I would have appreciated an explicit demonstration of this principle, perhaps in an experiment where each method is applied to intentionally transformed data and results are compared.
The discussion of sample-wise margin (which seems to refer to class separation in feature space) lacks a unifying theory across PMI, PSI, and PVI, making Propositions 4 & 5 and Theorem 1 difficult to contextualize. Consequently, takeaway T3 appears unconvincing. Moreover, the correlation-to-margin experiment, combined with the results of Section 4, suggests that margin sensitivity does not directly drive confidence estimation performance, calling into question its relevance to the main analysis. This section might be better placed in the appendix.
Takeaway T4, on convergence rates, is the most practically relevant, as a PI estimator’s performance should correlate with the usefulness of its information scores for UQ. However, the theoretical discussion is buried in the appendix—it should be in the main body.
Section 4 presents mixed empirical results, with no method universally superior (as evidenced by the plethora of statistical ties in Tables 2 and 3). While PVI is the most consistent top performer, it is not dominant. The claim that PVI is “the most well-rounded” across invariance, margin sensitivity, and convergence rate is unconvincing—these factors are neither comprehensive nor clearly predictive of empirical success. Theoretical takeaways seem driven by empirical results rather than the other way around, weakening the paper’s core claims.
Overall, while the paper presents interesting theoretical insights, the lack of a clear and consistent connection between theory and empirical results weakens its central claims. Key theoretical principles, such as invariance and margin sensitivity, are not convincingly linked to empirical performance, and in some cases, the arguments appear contradictory. Additionally, the empirical results do not strongly support the claim that PVI is the best method, as its advantages seem marginal rather than definitive. A more thorough justification of the theoretical takeaways, along with targeted experiments to validate key assumptions, would significantly strengthen the paper’s contributions.
Methods And Evaluation Criteria: I am concerned with the authors’ use of ECE for calibration, given its known issues with discontinuity [1] and binning dependence [2].
The absence of transformer-based models leaves open the question of whether the reported PI method improvements hold for more modern architectures. This is particularly relevant since transformers' softmax probabilities have been shown to calibrate better than those of ConvNets [3].
The dataset choices are also underwhelming. The lack of standard-resolution (224x224) images raises concerns about scalability, as results on toy datasets like MNIST and F-MNIST may not generalize to realistic settings. It is disappointing that the largest datasets (Tiny-ImageNet, DS-ImageNet) were relegated to the appendix, with DS-ImageNet seemingly missing benchmarks for several algorithms.
[1] Błasiok, J., Gopalan, P., Hu, L., & Nakkiran, P. (2023, June). A unifying theory of distance from calibration. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing (pp. 1727-1740).
[2] Nixon, J., Dusenberry, M. W., Zhang, L., Jerfel, G., & Tran, D. (2019, June). Measuring Calibration in Deep Learning. In CVPR workshops (Vol. 2, No. 7).
[3] Minderer, M., Djolonga, J., Romijnders, R., Hubis, F., Zhai, X., Houlsby, N., ... & Lucic, M. (2021). Revisiting the calibration of modern neural networks. Advances in neural information processing systems, 34, 15682-15694.
Theoretical Claims: I did not check the correctness of any proofs.
Experimental Designs Or Analyses: The experimental designs appear sound, but the analysis in Section 4 offers little beyond summarizing tables in words. Several claims also warrant scrutiny:
- For F-MNIST, the authors attribute PVI’s superiority over PMI and PSI to its “well-rounded” nature, citing invariance and margin sensitivity. However, it’s unclear why this explanation applies specifically to F-MNIST but not to CIFAR-10, where PVI and PSI are statistically tied, or to MNIST, where PVI and PMI perform similarly.
- In Section 4.2, the claim that “PVI significantly outperforms [all benchmarks] when assessing the average ECE… by a large amount” is questionable. While the means in Table 3 support this, the large standard deviations undermine the statistical significance of the difference.
- On page 8, the authors assert that “for average ECE, it seems that the improvement [of PVI over PMI and PSI] for more complex datasets and architectures is more significant.” This claim would be far more convincing if PMI/PSI results were reported for Tiny-ImageNet and DS-ImageNet. Without them, the argument for PVI’s superiority on complex datasets remains unsubstantiated.
Supplementary Material: I carefully reviewed appendices A and C, and lightly skimmed Appendix D.
Relation To Broader Scientific Literature: This study builds directly on Zhu et al. (2022), which demonstrated that common calibration techniques—such as label smoothing, mixup, focal loss, and temperature scaling—can inadvertently exacerbate the confidence gap between correct and incorrect predictions. Since these methods primarily operate by adjusting softmax probabilities, the authors explore whether alternative confidence estimation approaches, such as pointwise information (PI), can mitigate this issue. While the paper does not introduce new methods for estimating PI, its key contribution lies in the novel application of these techniques to confidence calibration and uncertainty quantification.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: The application of pointwise information (PI) for confidence estimation is a novel and promising research direction. I commend the authors for this innovative approach and hope future work will further validate their hypothesis.
However, the manuscript would benefit from improved clarity and coherence, particularly in its argumentative structure. The theoretical and empirical sections feel somewhat disjointed, with the developed theory not clearly informing or justifying the experimental results. As a result, the theoretical contributions appear underutilized. Since the empirical results are middling, this lessens the overall significance of the current contribution.
A key limitation not yet discussed is the computational cost of the proposed approach. The authors frame their method as a post-hoc alternative, emphasizing the practical challenges of modifying network architectures or retraining models. While technically post-hoc (in that it does not require retraining the original model), the proposed approach is highly computationally expensive. This is evident from the study’s restriction to smaller datasets and lower-resolution images: “we do not report the results [on Tiny -ImageNet and DS-ImageNet] for PMI and PSI as they are very computationally expensive for large-scale datasets”. If my understanding is correct, each PI estimate requires training a separate neural network—sometimes with the same architecture, which may not always be accessible in black-box settings. Given the mixed empirical results, the computational overhead may not be justified, raising concerns about the method’s practicality.
Overall, while the paper introduces an interesting application of PI for confidence estimation, its practical viability remains uncertain due to the high computational cost and the lack of a clear theoretical-to-empirical connection. Strengthening the coherence between these aspects and further justifying the trade-offs involved would significantly enhance the paper’s impact.
Other Comments Or Suggestions: It is atypical to see the ECE reported in (what I think are) 100x amounts. ECE is typically bounded by [0, 1].
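For reference, a minimal sketch (ours, not the paper's code) of standard equal-width-bin ECE; it makes both points above concrete: the value always lies in [0, 1], and it depends on the bin count. The function name `ece` and the toy numbers are our own.

```python
# Minimal equal-width-bin ECE sketch (our illustration, not the paper's code).
# The result is a sample-weighted average of |accuracy - confidence| per bin,
# so it always lies in [0, 1]; tables reporting values like 5.41 are
# presumably showing 100 * ECE.

def ece(confidences, correct, n_bins=15):
    n = len(confidences)
    total = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        total += len(idx) / n * abs(acc - avg_conf)
    return total

conf = [0.95, 0.9, 0.8, 0.6, 0.55]
corr = [1, 1, 0, 1, 0]
coarse = ece(conf, corr, n_bins=2)   # 0.16 with these toy numbers
fine = ece(conf, corr, n_bins=15)    # 0.22 with these toy numbers
```

With these made-up confidences the two bin counts give 0.16 versus 0.22, a direct illustration of the binning dependence criticized in [2].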
Table 9 reports exceptionally high error rates for ResNet101 trained on DS-ImageNet (80% train error, 85-86% generalization error). Are you sure that your models are adequately trained here? If not, we should question the validity of any confidence calibration results obtained on this dataset. I am also concerned about the 52% test error rate for DenseNet121 trained on Tiny ImageNet (while the validation error is 11%).
I noticed a few typos / grammatical errors while reading. Here is a list:
- Item 3 in “Motivation”, first paragraph: “We note that this measure is rooted in probability, and estimates priors and posterior probability measures” (did you mean to put prior here? The plural doesn’t make as much sense)
- Item 3 in “Motivation”, first paragraph: “This is unlike the typical neural network output, which, although *it* is supposed to model the conditional probabilities of each class p(y|x), often turn*s* out to be not a good indicator of the true uncertainty” (changes marked with stars)
- Item 3 in “Motivation”, first paragraph: “By doing so, *they* can potentially reduce inherent bias…” (changes marked with stars)
- Item 1 in “Contributions”: “We found that PVI outperforms PMI and PSI”. Did you mean to switch into past tense here?
- In equation 9, I believe you replaced $psi$ with $\psi$. You do not define $\psi$ anywhere in the paper.
- Item T3 in Section 3.3: “we see PMI to be invariant to hard margin, and yet PSI being sensitive to hard margin” (the italicized portion should be replaced with something like “while PSI is sensitive to hard margin”).
Questions For Authors: 1. The authors discuss the potential counterproductivity of excessive invariance, yet the theoretical importance of invariance is emphasized. Could the authors elaborate on how invariance relates to predictive performance?
2. In Section 4, the authors claim that the superior performance of PVI over PMI and PSI on F-MNIST is due to PVI’s “well-roundedness.” Why do the authors believe this characteristic is particularly beneficial for F-MNIST, and not for datasets like CIFAR-10 or MNIST, where PVI does not show the same degree of superiority?
3. The concept of "sample-wise margin" appears to be inconsistent across the methods (PMI, PSI, PVI). Can the authors provide further clarity on how this property should be interpreted or applied in practice, especially in light of the conflict between the correlation-to-margin results and Section 4?
4. Given the computational cost of PI methods, do the authors see any feasible ways to reduce the complexity of PI estimation while retaining its benefits?
5. Does PVI assume access to the model architecture, and if so, how does that align with the authors’ proposed use in black-box settings?
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive remarks and valuable suggestions on the paper. Below, we address the major concerns and questions raised by the reviewer. We will address all other points in the revision.
**Question 1 (On Invariance)**
We would like to clarify that we meant bijective linear transformations (in Remark 1). If the PI measures are not invariant to linear transformations, this could pose an issue. We highlight this point with the following example. Consider PMI between a neural network layer $T$ and the output labels $Y$, and let $T’$ denote another rendition of $T$ that carries the same information but arises from a different initialization of the network. If the relationship between $T$ and $T’$ is linear, then PMI’s invariance is helpful. However, if the relationship is non-linear, PMI’s invariance to non-linear invertible and continuous transformations means it outputs the same degree of uncertainty even when $T’$ is related to $T$ in a highly non-linear manner. In that case, the estimated label for $T’$ should have a different level of confidence than for $T$, since the network’s remaining layers are limited in the ways they can transform the features $T’$ (considering finite networks). Therefore, the heavily invariant nature of PMI can be counterproductive for reflecting the uncertainty of predictions. Note that if PMI’s invariances were limited to linear transformations, this would not be an issue.
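The invariance at issue can be checked numerically. Below is a toy sketch (ours, not the authors') showing that plug-in discrete mutual information is unchanged under a bijective, highly non-linear relabeling of the feature values, which is precisely why a PMI-style score cannot distinguish $T$ from such a $T'$.

```python
import math
from collections import Counter

# Toy numerical check (ours, not from the paper): plug-in discrete mutual
# information is unchanged by any bijection of the feature values, so a
# PMI-style score cannot tell T apart from a heavily warped T'.

def mutual_info(pairs):
    n = len(pairs)
    pt, py, pty = Counter(), Counter(), Counter(pairs)
    for t, y in pairs:
        pt[t] += 1
        py[y] += 1
    return sum((c / n) * math.log((c / n) / ((pt[t] / n) * (py[y] / n)))
               for (t, y), c in pty.items())

pairs = [(0, 0), (0, 0), (1, 1), (1, 1), (2, 0), (2, 1)]
warped = [(t ** 3 - 5 * t, y) for t, y in pairs]  # a bijection on {0, 1, 2}
```

The warp t ↦ t³ − 5t maps {0, 1, 2} to {0, −4, −2}, yet `mutual_info(pairs)` and `mutual_info(warped)` coincide exactly.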
**Question 2 (On PVI Superiority)**
For MNIST, where classifiers achieve very high accuracy and most examples are correctly classified, AUROC can give an overly optimistic view of performance, resulting in smaller observed differences across methods. For STL-10, we plan to run additional repetitions to reduce performance variability. In the case of CIFAR-10, when considering confidence intervals rather than standard deviations, PVI clearly outperforms all other methods for AUROC. Furthermore, PVI consistently achieves the best results in terms of AUPR (error), except for STL-10, which, as noted, may require more repetitions for stable evaluation. This indicates that PVI reliably assigns lower confidence to misclassified examples.
**Question 3 (On Sample-Wise Margin)**
A direct, 1-to-1 comparison with a unified sample-wise margin framework is challenging due to the significantly different nature of each measure. However, we believe there is much to be learned from their individual contributions, and we will emphasize how each result adds value to understanding uncertainty and confidence estimation. Prop. 4 shows that PMI is insensitive to margin when the classes are well-separated. This observation is motivated by prior work (e.g., Grønlund et al., 2020), which connects margin to generalization and prediction confidence. Thus, PMI’s inability to encode margin under such conditions is particularly relevant in the context of confidence estimation. In contrast, PSI and PVI retain meaningful notions of sample-wise margin. These more general measures can naturally extend to the case where class-conditional distributions $P(x|y=0)$ and $P(x|y=1)$ are non-overlapping—for example, by setting $\epsilon =0$ in Eq. (9), Prop. 5 holds without further assumptions.
Empirically, we observe that PSI correlates most strongly with margin (Table 1), aligning with this theoretical intuition. However, it's important to distinguish between tasks: in the correlation-to-margin experiment, confidence reflects a sample’s sensitivity to decision boundaries, regardless of correctness. In contrast, tasks like misclassification detection, selective prediction, and calibration emphasize predictive reliability, where confidence is tied to accuracy. This difference in interpretation explains why PSI excels in margin-based correlation while PVI outperforms in accuracy-driven tasks. The results are not inconsistent but instead highlight the differing emphases of these evaluation criteria.
**Question 4 (On Computational Cost)**
While training additional models required for PI estimation can be computationally expensive, this training is performed only once. Inference, on the other hand, is efficient. For PVI, for example, the inference time is comparable to standard label prediction. Moreover, if multiple models from different runs are available, they can be directly reused for PVI estimation without retraining. We hope our findings can motivate future research toward developing more computationally efficient approaches for pointwise information estimation.
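To make the cost asymmetry concrete, here is a hedged sketch of PVI scoring at inference time, in the spirit of the V-information definition (Xu et al., 2020), assuming the null model (trained on empty inputs) reduces to the class prior; the numbers below are purely hypothetical.

```python
import math

# Sketch of PVI scoring at inference (our reading of the V-information
# definition; assumes the null model, trained on empty inputs, reduces to
# the class prior).  Training happens once; scoring costs two log-lookups
# plus one forward pass of the already-trained predictive model.

def pvi(p_prior_y, p_model_y):
    """PVI(x -> y) = -log2 p_null(y) + log2 p_model(y | x), in bits."""
    return -math.log2(p_prior_y) + math.log2(p_model_y)

# Hypothetical 10-class problem with a uniform prior of 0.1:
confident = pvi(0.1, 0.8)   # model beats the prior: high confidence, ~3 bits
uncertain = pvi(0.1, 0.05)  # model worse than the prior: negative PVI
```

This is why inference time is comparable to standard label prediction once the auxiliary models are trained.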
**Question 5 (On PVI & Model Architecture)**
While PVI assumes access to the model architecture, it does not require any modifications to the network architecture (as in MC Dropout) or to the training procedure (as in focal loss). Such alterations change the model’s prediction behavior, whereas PI methods operate post hoc and do not alter the original predictive outputs.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the thorough response. I will begin my reply by going through each enumerated point.
### Question 1
This is a helpful clarification, and I would encourage the authors to further refine their manuscript to ensure this confusion does not arise with other readers.
### Question 2
If the results on MNIST are truly not to be trusted / will not adequately indicate separation between methods, I wonder if they should be included in the main body of the paper. As it stands, multiple methods on the MNIST task statistically tie for all but AUPR, which weakens any claims you make about the superiority of PVI.
For CIFAR-10, the confidence intervals (+- 1 sd) still indicate that PVI is in a series of statistical ties across metrics (unless I am missing something obvious).
If STL-10 may require more repetitions for stable evaluation, I would encourage the authors to run those experiments and report the revised results. As it stands, the reader is still presented with a series of statistical ties across metrics.
### Question 3
I appreciate the additional clarification regarding "sample-wise margin" as it is used to mean various things for each method. My initial opinion regarding this analysis still stands: as presented, sensitivity to margin does not appear to correlate with better uncertainty quantification. This discussion, while informative, does not belong in the main body of the paper and may be distracting from your overall claims.
### Question 4
I agree with the authors that training, not inference, is the computationally expensive component of PI estimation. I also hope that more computationally efficient approaches can emerge to make PI estimation more tractable. However, there is not a lot of signal from the empirical results suggesting that such a direction would be worthwhile for researchers, since other, far cheaper methods of uncertainty quantification perform similarly.
### Question 5
Thank you for the clarification.
## Overall
My main concerns about this work still remain — namely, that the empirical results are not strongly motivated or justified by the theoretical discussion (as currently presented), and that the experiments in their current form are not convincing enough for practitioners to consider adoption of this method.
This paper would be significantly strengthened by the use of more-realistic datasets (not toy ones like MNIST, Fashion-MNIST, CIFAR-10, etc.), especially if such experiments can demonstrate a clear and consistent advantage to PI estimation methods over existing UQ techniques. I maintain my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer and present additional results.
**Q2 and Q4:** We have re-trained VGG-16 on STL-10 and ResNet-50 on CIFAR-10 with added regularization to ensure low test errors (8.94% and 4.78%). Furthermore, as recommended, we have incorporated more realistic datasets: ResNet-101 on CIFAR-100, InceptionV3 on Stanford Dogs, and DenseNet121 on TinyImageNet. Additionally, we include further commonly used metrics for failure prediction (Zhu et al., 2023): FPR at 95% TPR and E-AURC, along with four robust calibration metrics (Nixon et al., 2020): Static Calibration Error (SCE), Adaptive SCE (Ada-SCE), Class-Conditional Adaptive SCE (CC-Ada-SCE), and RMS Class-Conditional Adaptive SCE (CC-Ada-SCE-RMS).
We report only MSP, PVI, and the top-performing benchmark method. 95% confidence intervals are provided in brackets (10 trials each). Note that the STL-10 results show overlapping intervals due to higher standard deviations, a consequence of the very small dataset size.
Additional benchmarks are available at: https://shorturl.at/CWIqp.
|Model|Dataset|Method|AUROC $\uparrow$|AUPR (succ) $\uparrow$|AUPR (err) $\uparrow$|FPR@95TPR $\downarrow$|AURC$\downarrow$|E-AURC$\downarrow$|
|-|-|-|-|-|-|-|-|-|
|VGG16|STL10|MSP|**91.43 (0.26)**|**99.07 (0.06)**|49.19 (1.13)|49.64 (1.46)|**12.72 (0.79)**|**8.59 (0.52)**|
|||SM|**91.89 (0.25)**|**99.13 (0.05)**|48.70 (1.63)|48.92 (1.72)|**12.22 (0.76)**|**8.09 (0.48)**|
|||**PVI**|**92.33 (0.81)**|**99.16 (0.08)**|**55.09 (3.46)**|**42.79 (3.28)**|**11.95 (0.67)**|**7.73 (0.84)**|
|ResNet50|CIFAR10|MSP|93.33 (0.32)|99.62 (0.03)|42.21 (1.18)|37.70 (1.34)|4.83 (0.31)|3.67 (0.27)|
|||LM|93.80 (0.23)|99.65 (0.02)|42.74 (1.23)|35.85 (1.23)|4.54 (0.26)|3.38 (0.21)|
|||**PVI**|**94.82 (0.39)**|**99.70 (0.02)**|**60.45 (1.87)**|**26.86 (1.59)**|**4.08 (0.20)**|**2.92 (0.22)**|
|ResNet101|CIFAR-100|MSP|84.98 (0.97)|95.02 (0.36)|59.93 (1.68)|64.09 (1.39)|66.32 (3.08)|41.08 (3.00)|
|||LM|86.27 (0.28)|**95.65 (0.14)**|59.40 (0.55)|65.23 (0.94)|**61.13 (1.82)**|**35.89 (1.07)**|
|||**PVI**|**87.74 (0.69)**|**95.72 (0.23)**|**69.71 (1.80)**|**49.65 (2.14)**|**60.13 (2.13)**|**34.89 (1.91)**|
|InceptionV3|Stanford Dogs|MSP|81.54 (0.34)|92.88 (0.33)|57.22 (0.69)|68.58 (1.10)|87.73 (3.61)|57.74 (2.42)|
|||LM|**83.51 (0.44)**|**94.05 (0.32)**|56.73 (0.69)|70.46 (1.44)|**78.31 (3.55)**|**48.33 (2.40)**|
|||**PVI**|**84.32 (0.75)**|**93.80 (0.27)**|**64.86 (2.14)**|**59.89 (2.28)**|**79.88 (2.11)**|**49.87 (2.33)**|
|DenseNet121|TinyImageNet|MSP|86.56 (0.47)|94.03 (0.31)|70.25 (0.65)|58.97 (1.17)|89.21 (2.80)|45.88 (2.26)|
|||LM|86.50 (0.35)|**94.37 (0.18)**|66.60 (0.83)|65.44 (1.33)|**86.91 (1.68)**|43.58 (1.38)|
|||**PVI**|**88.83 (0.58)**|**94.78 (0.34)**|**76.87 (0.93)**|**47.95 (0.83)**|**83.00 (2.52)**|**39.67 (2.62)**|

|Model|Dataset|Method|SCE|Ada-SCE|CC-Ada-SCE|CC-Ada-SCE-RMS|
|-|-|-|-|-|-|-|
|VGG16|STL10|MSP|0.59 (0.03)|0.52 (0.04)|**1.15 (0.06)**|**8.11 (0.18)**|
|||**PVI**|**0.55 (0.00)**|**0.44 (0.01)**|**1.20 (0.00)**|**8.19 (0.00)**|
|ResNet50|CIFAR10|MSP|0.35 (0.02)|0.28 (0.02)|0.60 (0.02)|5.41 (0.10)|
|||**PVI**|**0.33 (0.00)**|**0.25 (0.00)**|**0.58 (0.00)**|**5.25 (0.02)**|
|ResNet101|CIFAR-100|MSP|0.20 (0.00)|0.17 (0.00)|0.24 (0.00)|4.04 (0.05)|
|||**PVI**|**0.19 (0.00)**|**0.16 (0.00)**|**0.23 (0.00)**|**3.98 (0.01)**|
|InceptionV3|Stanford Dogs|MSP|0.20 (0.01)|0.18 (0.00)|**0.21 (0.01)**|3.75 (0.05)|
|||**PVI**|**0.19 (0.00)**|**0.17 (0.00)**|**0.21 (0.00)**|**3.66 (0.01)**|
|DenseNet121|TinyImageNet|MSP|**0.12 (0.00)**|**0.11 (0.00)**|**0.14 (0.00)**|**3.09 (0.04)**|
|||**PVI**|**0.12 (0.00)**|**0.11 (0.00)**|**0.14 (0.00)**|**3.10 (0.01)**|
**Theoretical Concerns (Q3)**: Yes, from a theory perspective, convergence rates and invariance analyses are more relevant for explaining the empirical observations, as noted in T4 and T2. As such, we will move the sample-wise margin analysis to the Appendix.
Furthermore, after reviewing margin bounds in generalization theory, we can now clarify why margin sensitivity doesn’t correlate well with performance.
Intuitively, while a larger margin $d$ (distance to the decision boundary) for a sample $X$ may initially suggest higher confidence, the prediction's reliability also depends on how rapidly the classifier's output changes at $X$. If it changes quickly, a large $d$ does not guarantee safer, more confident predictions; likewise, a smaller $d$ does not immediately imply lower confidence if the rate of change is slow. This variability in the rate of function change limits the connection between sample-wise margin and true confidence.
Furthermore, margin bounds in generalization theory reinforce this (Theorem 1.1 of [1]), showing that the generalization gap depends on both the margin and the network's Lipschitz constant (rate of change). Thus, confidence and generalization are influenced by both factors, not margin size alone.
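A one-dimensional toy example (our own, with made-up scorers) of the point above: two classifiers with the same input-space margin to the boundary $f(x)=0$ but very different Lipschitz constants, so an identical perturbation moves their outputs by very different amounts.

```python
# Two toy 1-D scorers with decision boundary at x = 1 (f(x) = 0), so a sample
# at x = 2 has the same input-space margin d = 1 under both.  Their Lipschitz
# constants differ, so the output swing under the same perturbation differs
# too; margin alone does not pin down how fragile the prediction is.

def flat(x):
    return 0.5 * (x - 1.0)   # Lipschitz constant 0.5

def steep(x):
    return 10.0 * (x - 1.0)  # Lipschitz constant 10.0

x, eps = 2.0, 0.5
swing_flat = abs(flat(x + eps) - flat(x))     # small output change
swing_steep = abs(steep(x + eps) - steep(x))  # 20x larger change
```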
[1] Bartlett, P. L. et al. Spectrally-normalized margin bounds for neural networks. Advances in neural information processing systems, 30 (2017). | Summary: This paper proposes to use three information-theoretic measures—Pointwise Mutual Information, Pointwise V-Information, and Pointwise Sliced Mutual Information, as post-hoc confidence estimators for neural network predictions. It theoretically analyzes their invariance properties, sensitivity to geometric features, and convergence rates, finding that PVI performs best for confidence calibration and failure prediction tasks
Claims And Evidence: No problematic claims made.
Methods And Evaluation Criteria: Some of the results are presented unclearly (for example, the bold notation in Table 3).
However, if I understand the results correctly, the empirical evaluation supports the made claims.
Theoretical Claims: Definitions and assumptions regarding the information theoretic measures are clearly presented.
The theoretical analysis (invariance, sensitivity to margin, intrinsic dimensionality, and convergence) is well-reasoned and appears mathematically sound.
Experimental Designs Or Analyses: The experimental design seems convincing, except two issues:
- Results are reported only over 5 repetitions
- Comparison to prominent approaches of uncertainty estimation, such as Bayesian neural networks, is missing
Supplementary Material: I reviewed section B (theoretical results).
Relation To Broader Scientific Literature: Connections and comparisons to multiple lines of work on uncertainty estimation are missing:
- Ensemble-based methods [e.g., Lakshminarayanan et al.]
- Methods based on noise addition [e.g., Maddox et al., 2019]
- Bayesian neural networks [e.g., Tishby et al., 1989, Denker and LeCun, 1990, Ovadia et al., 2019, Kingma et al., 2015, Gal and Ghahramani, 2016, Graves, 2011, Louizos and Welling, 2016]
Essential References Not Discussed: No critical previous works seem to be omitted except the aforementioned lines of work.
Other Strengths And Weaknesses: The paper provides a theoretically solid and interesting contribution that enhances confidence estimation using information-theoretic measures. It provides thorough theoretical analysis, however empirical evaluation lacks comparison to the aforementioned lines of work.
I am willing to raise my score if the authors provide sufficient evidence that their method at least provides comparable results to these previous methods.
Other Comments Or Suggestions: See above.
Questions For Authors: It seems that PMI and PSI work well only on simple tasks. My guess is that this is due to estimation complexities. Could the authors clarify whether PMI and PSI indeed encounter those and why?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive remarks and valuable suggestions on the paper. Below, we address the concerns and questions raised by the reviewer.
> Empirical evaluation lacks comparison to the aforementioned lines of work.
The aforementioned lines of work are not post-hoc calibration methods. These methods do not compute confidence level on the original network outputs and thus will provide unfair comparison. Instead, we are expanding our comparisons to include additional post-hoc calibration methods, such as Ensemble Temperature Scaling (Zhang et al., 2020), Parameterized Temperature Scaling (Tomani et al., 2022), Class-based Temperature Scaling (Frenkel et al., 2021), Group Calibration (Yang et al., 2024), and Consistency Calibration (Tao et al., 2024). We are currently running these experiments and will include the results as soon as they become available.
> It seems that PMI and PSI work well only on simple tasks. My guess is that this is due to estimation complexities. Could the authors clarify whether PMI and PSI indeed encounter those and why?
The performance gap can be partially explained by the convergence properties of PMI and PSI. Specifically, the convergence rate of PMI depends inversely on the minimum of $p(x)$ and $p(x|y)$ (refer to Eq. (43) in Appendix B.3). When either of these probabilities is low, estimation becomes unreliable due to increased variance. In simpler datasets such as MNIST, class-conditional distributions are more concentrated, resulting in higher values of $p(x)$ and $p(x|y)$, which in turn leads to better convergence and performance for PMI. In contrast, complex datasets often exhibit more dispersed and overlapping class distributions, making these probabilities smaller and estimation more difficult. The same reasoning applies to PSI, which relies on similar terms across projected spaces. This explains why PMI and PSI perform better on simpler datasets and lag behind on more complex ones.
References:
- Zhang et al. (2020). Mix-N-Match: Ensemble and Compositional Methods for Uncertainty Calibration in Deep Learning.
- Tomani et al. (2022). Parameterized Temperature Scaling For Boosting the Expressive Power in Post-hoc Uncertainty Calibration.
- Frenkel et al. (2021). Network Calibration By Class-Based Temperature Scaling.
- Yang et al. (2024). Beyond Probability Partitions: Calibrating Neural Networks With Semantic Aware Grouping.
- Tao et al. (2024). Consistency Calibration: Improving Uncertainty Calibration via Consistency Among Perturbed Neighbors. | Summary: The paper explores the use of information-theoretic measures—specifically pointwise mutual information (PMI), pointwise V-information (PVI), and pointwise sliced mutual information (PSI)—to estimate prediction confidence in deep neural networks (DNNs) post-hoc, without modifying network architecture or training. It investigates their theoretical properties (invariance, margin sensitivity, convergence rates) and evaluates their effectiveness in failure prediction and confidence calibration using benchmark computer vision datasets (e.g., MNIST, CIFAR-10, STL-10). Main findings include PVI outperforming PMI, PSI, and existing baselines (e.g., maximum softmax probability) in both tasks, attributed to its balanced invariance and margin sensitivity. Key contributions are: (1) a comparative analysis of PMI, PSI, and PVI, with PVI showing superior performance; (2) theoretical insights into their invariance and geometric properties; (3) sensitivity analysis to sample-wise margin, with PSI correlating most with margin; and (4) convergence rate derivations, suggesting PSI’s advantage over PMI. Experiments also demonstrate PSI’s effectiveness in generating localized saliency maps.
Claims And Evidence: Claims are well-supported by experiments (e.g., Tables 2, 11) and proofs (Appendix B), showing PVI’s superiority and PSI’s margin sensitivity. The PVI convergence claim lacks empirical backing beyond theory.
Methods And Evaluation Criteria: Using PMI, PSI, and PVI post-hoc is practical; evaluation metrics (AUROC_f, ECE) and datasets (MNIST, CIFAR-10) are appropriate for confidence estimation.
Theoretical Claims: Checked Propositions 1-5 and Theorem 1 (Appendix B); they’re correct, though Theorem 1’s spherical assumption simplifies real-world applicability.
Experimental Designs Or Analyses: Failure prediction (4.1) and calibration (4.2) experiments are sound, with robust metrics and baselines. Saliency map design (D.3) is valid.
Supplementary Material: Reviewed Appendices A-D; they support claims with proofs, experiment details, and extra results (e.g., Tables 16-18).
Relation To Broader Scientific Literature: It contributes to the understanding of pointwise confidence-calibration estimation based on information theory.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. Thorough study of pointwise confidence estimation.
Weaknesses:
1. Does not include an OoD experiment, as is usual in uncertainty estimation papers.
2. Despite the thorough study of PI measures, the paper lacks recommendations or suggestions for readers on how to apply these measures. I see some improvement on some of the tasks, but it does not look consistent. I am familiar with the calibration task; the improvement in results seems marginal, and the MSP results do not align well with my experience.
3. A recent paper [1] also discusses a way to compute sample-wise confidence; it would be better to compare against it.
4. Comparison to many other post-hoc calibration methods is missing in Table 3.
[1] Consistency Calibration: Improving Uncertainty Calibration via Consistency among Perturbed Neighbors
Other Comments Or Suggestions: 1. Line 764 has a missing reference.
Questions For Authors: 1. Since the paper normalizes these PI measures using a softmax function, does the softmax lose PI's ability to express uncertainty?
2. For Table 1, I wonder what the correlation is between the vanilla neural network output (softmax value) and the margin. How does it compare to these PI measures?
3. Can you give more details about how the PVI between X and Y is estimated in the experiments?
4. I am a calibration person; for Table 3, a normal ECE for a temperature-scaled MSP (ResNet for the CIFAR-10 column) is around 2, but your result is 10, which is confusing to me. Maybe using a more standard, well-accepted baseline [1] can give a fair comparison.
[1] Calibrating Deep Neural Networks using Focal Loss
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive remarks and valuable suggestions on the paper. Below, we address the concerns and questions raised by the reviewer.
**Weakness 1 (On OoD Experiment)**
The primary motivation of our work stems from the observation that existing calibration methods can adversely affect failure prediction performance. This motivates our focus on both calibration and failure prediction as key evaluation tasks. We may explore the extension to OoD in future work.
**Weakness 2 (On Takeaway of Results)**
Our results demonstrate that PVI consistently provides strong performance across both failure prediction (Table 2 and Table 11) and calibration (Table 3 and Table 15) tasks.
**Weakness 3 and 4 (On Comparison to Other Methods)**
We will be expanding our comparisons to include additional post-hoc calibration methods, such as Ensemble Temperature Scaling (ETS), Parameterized Temperature Scaling (PTS), Class-based Temperature Scaling (CTS), Group Calibration (GC), and Consistency Calibration (CC) as suggested by the reviewer. We are currently running these experiments and will include the results as soon as they become available.
**Question 1 (On Softmax Scaling)**
The softmax function is applied to re-normalize the PI measures for interpretability as probabilities. Unlike standard logits, PI measures are derived from a principled probabilistic framework and thus inherently capture pointwise relationships in the data. In contrast, logits are unnormalized scores lacking direct probabilistic meaning. Importantly, applying softmax to PI values does not remove the uncertainty signal they encode - it preserves the relative structure of uncertainty while transforming the values into confidence-like scores. High PI values (reflecting high confidence or low uncertainty) remain high after softmax normalization, and low values (reflecting low confidence or high uncertainty) remain low.
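The ordering-preservation point in this response can be checked directly: softmax is a strictly increasing transformation of its inputs, so ranking samples by raw PI value and by softmax-normalized value coincides. A minimal pure-Python sketch (the PI values below are hypothetical, for illustration only):

```python
import math

def softmax(scores):
    # Subtract the max before exponentiating for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical PI values for four samples (higher = more confident).
pi_values = [2.3, -0.5, 1.1, 0.4]
probs = softmax(pi_values)

# Softmax is strictly monotonic, so the confidence ranking is preserved:
# the most (least) confident sample before normalization stays most
# (least) confident after it.
rank_before = sorted(range(4), key=lambda i: pi_values[i])
rank_after = sorted(range(4), key=lambda i: probs[i])
assert rank_before == rank_after
```

This is exactly the sense in which the normalization "preserves the relative structure of uncertainty" while yielding probability-like scores.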
**Question 2 (On Softmax Correlation to Margin)**
Below are the correlation between softmax value and margin:
| Method | MLP, MNIST | CNN, F-MNIST | VGG16, STL-10 | ResNet50, CIFAR-10 |
|---|---|----|---|----|
| softmax | 0.954±0.011 | 0.935±0.026 | 0.913±0.004 | 0.882±0.009 |
Softmax output is most sensitive to the gap between the top two logits (assuming all other logits are much smaller), which is what the margin approximation is also capturing. Naturally, it will have the highest correlation with margin. That being said, the goal of this experiment is to empirically verify the properties of PI measures.
Also note that a measure can have high correlation with margin because it closely tracks the model's logit gap - indicating how "close" a sample is to the decision boundary - but this doesn’t guarantee good performance in failure prediction or calibration. Margin-based measures like softmax may appear confident even on out-of-distribution or adversarial inputs, leading to overconfidence. In contrast, other measures might be less tightly coupled to margin but are more sensitive to ambiguity, distributional shifts, and model failures, making them more reliable for detecting errors and producing well-calibrated confidence.
**Question 3 (On PVI Estimation)**
We provided more details on PVI estimation in Appendix A.3.3. Following the approach of Ethayarajh et al. (2022), we estimate the PVI between $X$ and $Y$ by training an auxiliary model with the same architecture. This measures how well $Y$ can be predicted from $X$ using $\mathcal{V}$. Thus, in a way it’s capturing the confidence of the model $\mathcal{V}$.
**Question 4 (On ECE Results)**
In Mukhoti et al. (2020), data augmentation was used to improve the performance of ResNet-50 on CIFAR-10, resulting in a test error of 4.95. In contrast, our test error is 13.72 (Table 9), which accounts for the observed difference in ECE. Nevertheless, we are currently running experiments to reduce the test errors and will share the updated results once available.
References:
- Ethayarajh, K., et al. (2022). Understanding Dataset Difficulty with V-Usable Information.
- Mukhoti, J., et al. (2020). Calibrating Deep Neural Networks using Focal Loss. | null | null | null | null | null | null |
Retrieval Augmented Zero-Shot Enzyme Generation for Specified Substrate | Accept (poster) | Summary: This paper introduces a novel method for de novo enzyme design using retrieval augmentation for the generative process. The core of this method is to take a given substrate for the enzyme, search in an enzyme database for protein sequences that are enzymes of similar substrates, align these protein sequences together, and provide these multiple sequence alignments to the generator model, which designs a sequence. Finally, a discriminator will predict how well the substrate and the generated sequence fit together. In addition, the authors construct a new enzyme-substrate database for training and retrieval.
Claims And Evidence: This method generates enzyme protein sequences that it claims are superior to those of other methods. This is done by comparing the UniKP-predicted kcat of their generated sequences with that of other methods. However, this is not a reliable metric, as UniKP is a simple regression model trained on protein language model embeddings, and although it is state of the art, it is not accurate enough to deem one model better than another. This is doubly true in this case, since the difference between this model's kcat values and those of other methods is small.
There are also claims made that do not seem to be true even if the UniKP model’s outputs are treated as ground truth. For example, the results in Figure 2. It is claimed in section 4.5 that “Fig. 2(c) shows the necessity of guidance to generate the enzymes with high kcat.” I do not agree with this interpretation of the figure’s data. In Fig. 2c, one can only see a slight improvement in the kcat, which is unremarkable when taking into account the distribution of the predicted kcat’s. On the other hand, the only significant result of the guidance seems to be decreasing the pLDDT by a large amount in Fig. 2d.
Methods And Evaluation Criteria: It is difficult to evaluate in silico whether generated enzymes achieve high catalytic efficiency, which is the main issue of this paper. Even if this model is very performant, the evaluation criteria are not predictive or sensitive enough to support the claims that underpin this paper.
On the other hand, I do like the use of AutoDock Vina for the docking results, as that is a structure-based method that has been widely validated. This could be interesting for further exploration, although the low confidence AlphaFold2 structures generated by this model currently will make docking unreliable as well.
Theoretical Claims: The paper does not focus on formal theoretical claims.
Experimental Designs Or Analyses: The experiments regarding foldability and designability of the generated sequences are conventional and have no issues. However, the central claims of this paper rely upon prediction of enzymatic kcat’s by UniKP. This is a computational model, relying upon Prot-T5 embeddings. Since these are only predictions, driven only by the sequence, it is hard to interpret the differences in kcat, especially minor ones such as those in this paper. In other words, the claims of this paper are too strong to merely be based on the kcat predictions of UniKP.
Supplementary Material: The authors provide detailed dataset descriptions, related work and limitation in supplementary.
Relation To Broader Scientific Literature: This paper has designed and implemented a novel, creative, and highly promising method for protein sequence generation using retrieval augmentation. I think that this type of method will be very important for the advancement of the field. Furthermore, development of the substrate-enzyme database will accelerate enzyme research on its own.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: See above
Other Comments Or Suggestions: NA
Questions For Authors: Have the authors considered using AlphaFold3’s support for ligands to filter their predictions and score them against other models?
How does the sequence similarity of the binding pocket of the ligand look compared to the retrieved sequences from the database? Is the catalytic site stable or does the model make mutations here?
How sensitive are the model’s generations to the size of the database? Would a smaller database, particularly with less ligands available, make the outputs noticeably worse? How much more data is needed for the database to improve the model’s outputs?
The average log kcat is reported for Table 1. However, when designing enzymes de novo, it is likely that the sequences with max kcat will be tested. Could you also report max kcat of different methods for these tasks so that the shape of the distribution of the outputs becomes clear?
Have the authors considered distillation of the UniKP model into the discriminator used here, letting the predictions of UniKP for the model’s designs train the discriminator further? Would this improve results?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear Area Chair and Reviewers,
We sincerely thank you for your thoughtful and thorough evaluation of our paper.
# `UniKP for evaluation`
Thanks for your comment. Using a predictive model for scoring is what we do in practice to filter better proteins. Before wet-lab experiments, proteins are scored by the predictive model, and proteins with higher scores are tested with higher priority. Improvements in predicted scores therefore translate into higher priority for wet-lab testing. We plan to test the designed enzymes in the wet lab, with selection based on these scores.
The evaluation of protein properties depends on machine learning model predictions, particularly when no traditional methods are available as alternatives. Direct *in silico* evaluation of catalytic capability means adopting a predictive model, and it is also a necessary filtering step before wet-lab experiments. The same process is adopted in protein design tasks targeting other properties, with predictive models tailored to each property.
# `Guidance for kcat improvement`
In this manuscript, we want to show how high the $k_{cat}$ can reach, because a higher predicted $k_{cat}$ value gives designed proteins a higher priority for wet lab experiments. In real-world enzyme design, experts can decide the preference between predicted $k_{cat}$ and pLDDT, and then use a version of the model trained with or without guidance. We show the comprehensive effect of the module to facilitate experts' selection among model versions.
# `AlphaFold3’s support for ligands to filter`
Thank you for your suggestion. We will use more structure prediction models in evaluation in future versions of this manuscript.
# ` Sequence similarity of the binding pocket in retrieved and the catalytic site`
Thanks for your suggestion. The binding sites of an enzyme are usually not adjacent in sequence, so their sequence similarity cannot be compared. In UniProt annotations, the **binding sites** of an enzyme are usually discrete positions that are not adjacent to each other in the protein sequence. Therefore, sequence similarity cannot be computed from several unconnected positions.
There is not enough annotation for every enzyme's **catalytic site**. Our dataset is created from Rhea, an expert-curated knowledgebase of chemical and transport reactions of biological interest; however, it contains no pocket or catalytic-site annotations for the enzymes.
# `Influence of database size`
Thanks for your suggestion. We will use the full enzyme dataset in real-world enzyme design tasks. Naturally, the number of substrates in the database determines the quality of retrieved proteins. Therefore, real-world tasks require using the largest reliable enzyme knowledge base as the retrieval database.
# `Max kcat for Table 1`
The max $k_{cat}$ value in Table 1 is in the table below.
| Model | Sepiap-terin | Propylene oxide | Levo- glucosan | cGMP | L-Pro | Pyridoxine | leukotriene A4(1-) |
| ------------ | ------------ | --------------- | -------------- | ----- | ----- | ---------- | ------------------ |
| Ground Truth | 0.703 | 0.785 | 0.736 | 0.288 | 0.159 | 0.573 | 0.676 |
| Random | 0.050 | 0.303 | 0.734 | 0.072 | 0.311 | 0.616 | 0.056 |
| Mutation | 0.898 | 0.838 | 0.837 | 0.452 | 0.155 | 0.527 | 0.666 |
| Retrieved | 0.409 | 0.955 | 0.981 | 0.401 | 0.156 | 0.639 | 0.856 |
| ProtGPT2 | 0.816 | 0.910 | 1.028 | 0.621 | 0.724 | 0.769 | 0.555 |
| ProGen2 | 1.244 | 0.771 | 0.965 | 0.833 | 0.789 | 0.810 | 0.800 |
| ZymCTRL | 0.004 | 0.604 | 0.774 | 0.348 | 0.454 | 0.728 | 0.538 |
| NOS | 0.349 | 0.590 | 0.488 | 0.341 | 0.412 | 0.711 | 0.447 |
| LigandMPNN | 0.653 | 0.778 | 1.174 | 0.251 | 0.497 | 0.786 | 0.976 |
| Ours | 1.102 | 1.117 | 1.065 | 0.642 | 0.909 | 0.934 | 1.470 |
# `Distillation of the UniKP into the discriminator`
Thanks for your suggestion. In real-world tasks, we will consider utilizing other predictive models, including UniKP, as the discriminator as long as it is end-to-end and preserves the gradient. We will also try to use predictions for further rounds of training. The reason we do not adopt it in the current manuscript is that the evaluation relies on UniKP, which should not be seen by modules in training and generation. | Summary: The paper introduces SENZ, a substrate-specified enzyme generator, a RAG-based method to retrieve known enzymes and generate new enzymes based on a substrate. The authors define the task as generating a protein that serves as an enzyme for a given small molecule target. As a first step, the authors design a retrieval algorithm that retrieves a protein for a given query molecule based on catalyzing properties. Then a discrete diffusion model generates new enzymes conditioned on the retrieved enzymes and substrate.
## Update After Rebuttal
I thank the authors for addressing my review. I have decided to stay with my score of weak accept.
Claims And Evidence: * The authors claim that SENZ can generate enzymes for a particular substrate.
* The retrieval quality is not measured.
The claim that the model is superior to the others is not fully convincing. There are essentially only three comparisons across the models: the average magnitude of the enzyme turnover number over 7 tasks, the average property scores, and the docking score on one target. The second is fine. The first only reports each model's average generation performance but does not include a comparison of the generation performance distributions, which I think is equally important. The third seems to be cherry-picking, as only one target is compared. I have detailed what I'd like to see in the questions section below.
Methods And Evaluation Criteria: Section 4.2: The authors assess the capability of SENZ to generate new proteins by generating new enzymes given a substrate and computing the turnover number
The values in Table 1 are a little strange. Random enzymes are significantly worse than unconditionally generated enzymes, even though both are randomly generated. Standard deviations are not provided along with the mean scores for each task, and possibly more than 10 sequences are needed to get valid evaluation results.
Shouldn’t the retrieved enzymes match the ground truth proteins? Is the retrieval from a different set of enzymes?
Line 297: How are unconditionally generated enzymes different from natural enzymes?
In general, the proposed metric is not a strong indicator of the efficacy of the method.
Theoretical Claims: N/A
Experimental Designs Or Analyses: See Evaluation Criteria.
Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The retrieval component of the pipeline is very similar to fuzzy search algorithms and semantic search algorithms and is not particularly novel.
Substructure similarity is measured with the Tanimoto distance of Morgan fingerprints, which does not capture 3D information.
Other Comments Or Suggestions: Line 143 Col 1: Unclear what “(with an identity exceeding 30%)” means
The notation in equation 6 is non-standard and a bit confusing. In equation 5, enzymes are presented as x but then enzymes are represented as P(m_i). Is P(m_i) a set of enzymes as there may be multiple protein matches given a substrate?
Lines 157-163, Col 2: The following paragraph is difficult to parse.
Questions For Authors: How is the diversity and novelty of your model’s generation? For example, for a given substrate generated enzymes, how different are they, and how many are previously not seen in the database or from the testing set?
For the generation results you have in Experiment 4.1, what’s the average docking score for each to its target?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Area Chair and Reviewers,
We sincerely thank you for your thoughtful evaluation.
# `Retrieval quality`
The quality of retrieved enzymes is reflected in the baseline method named '**Retrieved**' in Table 2. That row is as follows:
| Method | $k_{cat}$ | pLDDT |
| --------- | --------- | ----- |
| Retrieved | 0.351 | 85.9 |
# `Random versus unconditionally generated enzymes`
1. 'Random' are randomly generated amino acid sequences. They are not randomly selected enzymes.
2. Unconditional generated proteins reflect the distribution of the training set of the baseline method. These models are trained to generate natural-like proteins.
The comparison reflects the behavior of the $k_{cat}$ prediction model used in evaluation: a more natural-like protein obtains a higher score than a purely random amino acid sequence.
# `Retrieved enzymes versus ground truth`
The retrieval is from a different set which contains no ground truth enzymes. The retrieval module can never see the ground truth proteins.
# `Line 297: Natural versus unconditionally generated enzymes`
1. Natural enzymes are ground truth, which are the enzymes for target substrates.
2. Unconditionally generated enzymes are generated by baseline models. They do not exist in nature.
# `Novelty of retrieval and substrate 3D information`
Our novelty lies in searching for similar substrates and using their enzymes as references for the generation model, not in the molecule-similarity metric itself. We use a well-recognized method to calculate the similarity between molecules in the retrieval database: the Morgan fingerprint (MF) is a simple, widely recognized way to map molecules into a shared vector space, and the Tanimoto distance is designed to measure similarity between such vectors.
MF represents structure without relying on 3D coordinates. It encodes atom environments in a molecule, capturing local atomic neighborhoods. It can indicate whether certain functional groups or motifs are present in a molecule, and it allows molecules to be compared based on shared substructures.
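As a concrete illustration of this similarity computation (a pure-Python sketch with toy bit sets, not the paper's code — a real pipeline would typically compute 2048-bit Morgan fingerprints with RDKit), Tanimoto similarity over fingerprint bit sets is simply intersection over union:

```python
def tanimoto(fp_a, fp_b):
    # fp_a, fp_b: sets of "on" bit indices from a binary fingerprint.
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

# Toy bit sets standing in for Morgan fingerprints of two molecules.
query_fp = {3, 17, 42, 101, 512}
db_fp = {3, 17, 42, 200, 512, 900}
sim = tanimoto(query_fp, db_fp)  # 4 shared bits / 7 total bits
```

Shared "on" bits correspond to shared local atom environments, which is why this score tracks shared substructures without any 3D coordinates.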
# `Identity exceeding 30%`
It means using `mmseqs2` with `--min-seq-id 0.3`. In Line 143, it means any two protein sequences from different subsets (training, validation, or testing) share at most 30% sequence identity. This restriction avoids data leakage in the protein sequence dataset split; proteins below this threshold are considered unrelated.
# `Eq 5,6 and Line [157-163 Col 2]`
Notation for protein:
1. $x$ is a single protein.
2. $\mathbb{P}^{(m)}$ is a set of proteins related to molecule $m$. If the molecule is in the retrieval database, these proteins are the enzymes of the molecule; If the molecule is not in the retrieval database, these proteins are the enzymes of similar molecules.
Equation 6 is a definition relying on equation 5: if the target molecule is not in the retrieval dataset, the proteins for some molecules in the dataset will be collected and serve as the proteins for the target molecule.
Lines 157-163, Col 2 describe the process of retrieving proteins for a target molecule:
1. Calculate the similarity between the target molecule and each molecule in the database.
2. Order the database molecules by descending similarity.
3. Collect the enzymes of each molecule in that order until the desired amount is obtained.
4. If a molecule has too many enzymes, only part of them is selected, to prevent all retrieved proteins from being enzymes of a single molecule.
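The four retrieval steps above can be sketched as follows (an illustrative sketch with hypothetical names and a hypothetical per-molecule cap, not the paper's implementation):

```python
def retrieve_enzymes(similarity, enzymes_by_mol, k=8, per_mol_cap=3):
    """similarity: {molecule_id: similarity to the target substrate};
    enzymes_by_mol: {molecule_id: list of known enzymes of that molecule}.
    Returns up to k enzymes, visiting molecules in descending similarity
    and capping each molecule's contribution (step 4)."""
    retrieved = []
    # Steps 1-2: order database molecules by descending similarity.
    for mol in sorted(similarity, key=similarity.get, reverse=True):
        # Steps 3-4: take this molecule's enzymes, at most per_mol_cap.
        for enzyme in enzymes_by_mol.get(mol, [])[:per_mol_cap]:
            retrieved.append(enzyme)
            if len(retrieved) == k:
                return retrieved
    return retrieved
```

The cap ensures the retrieved set is not dominated by the enzymes of a single highly similar substrate, which matters when one database molecule has many annotated enzymes.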
# `Diversity and novelty`
1. The **diversity** of generated enzymes for the same substrate.
It is measured by the metric **#clusters**. Enzymes for the same target substrate are generated first, then clustered at a 30% identity threshold, so that any two proteins from different clusters share at most 30% identity, and the number of clusters is recorded. Ground truth enzymes of the same substrate are clustered in the same way. The absolute difference between the two cluster counts measures how closely the diversity of generated enzymes resembles the natural distribution: if the generated enzymes yield approximately the same number of clusters as natural ones, they match the natural distribution in terms of diversity.
2. The **novelty** of generated enzymes.
It is measured by the metric **BLASTp**, which can be regarded as the distance from a protein to its nearest neighbor among all other known natural proteins: the identity of the generated enzyme with the closest distinct known protein, obtained by BLASTp against the SwissProt database. We also compute this value for natural enzymes and take the absolute difference between it and the value for generated proteins. If the distance of generated enzymes is similar to that of natural ones, they have a natural distribution in novelty.
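The #clusters computation described in point 1 can be sketched with a toy greedy clustering (an illustrative stand-in for mmseqs2-style clustering, with a naive positional-identity function assumed for equal-length toy sequences; real pipelines use alignment-based identity):

```python
def seq_identity(a, b):
    # Naive identity: fraction of matching positions. A toy stand-in for
    # alignment-based identity as computed by mmseqs2 or BLAST.
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b)) if a or b else 0.0

def count_clusters(seqs, threshold=0.3):
    # Greedy clustering: a sequence joins the first representative it
    # matches at >= threshold identity; otherwise it founds a new cluster.
    reps = []
    for s in seqs:
        if not any(seq_identity(s, r) >= threshold for r in reps):
            reps.append(s)
    return len(reps)

# Two near-identical sequences collapse into one cluster; "GGGG" is its own.
n = count_clusters(["AAAA", "AAAT", "GGGG"])  # -> 2
```

Comparing this count between generated and ground-truth enzymes for the same substrate gives the diversity metric described above.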
# `Average docking score `
We will calculate the docking scores in the next version of the manuscript. Running docking requires manually adjusting the protein structure files together with the substrates'.
Claims And Evidence: The authors review existing protein generation methods and assert that "existing work on conditional generation cannot be repurposed directly to generate desired enzymes that catalyze specific substrates represented as small molecules" (lines 42-45). However, this claim overlooks several existing approaches that address enzyme generation with a focus on catalysis, such as ProCALM[1], GENZYME[2], and EnzyGEN[3]. Additionally, in the context of enzyme sequence generation, there are notable datasets and benchmarks that should be considered, including ReactZyme[4], CARE[5], EnzymeMap[6], and EnzyBench[3].
**Reference:**
[1] Yang J, Bhatnagar A, Ruffolo J A, et al. Conditional enzyme generation using protein language models with adapters[J]. arXiv preprint arXiv:2410.03634, 2024.
[2] Hua C, Lu J, Liu Y, et al. Reaction-conditioned De Novo Enzyme Design with GENzyme[J]. arXiv preprint arXiv:2411.16694, 2024.
[3] Song Z, Zhao Y, Shi W, et al. Generative enzyme design guided by functionally important sites and small-molecule substrates[J]. arXiv preprint arXiv:2405.08205, 2024.
[4] Hua C, Zhong B, Luan S, et al. Reactzyme: A benchmark for enzyme-reaction prediction[J]. Advances in Neural Information Processing Systems, 2025, 37: 26415-26442.
[5] Yang J, Mora A, Liu S, et al. CARE: a Benchmark Suite for the Classification and Retrieval of Enzymes[J]. arXiv preprint arXiv:2406.15669, 2024.
[6] Heid E, Probst D, Green W H, et al. EnzymeMap: curation, validation and data-driven prediction of enzymatic reactions[J]. Chemical Science, 2023, 14(48): 14229-14242.
Methods And Evaluation Criteria: In the "Problem definition" part, the authors formalize the generative function as $\textbf{x}=G(\textbf{m},s_{EC},\textbf{C}_m,\textbf{C}_x)$, where they consider $\textbf{m}$ as a single condition during training. However, this raises the question of whether their approach effectively addresses the problem outlined in the Introduction, namely that "the catalytic capability of an enzyme is not solely determined by how it structurally interacts with the substrate molecule." Does the current setting fully capture the complexity of enzyme catalysis beyond just the structural interactions?
The authors also use $k_{cat}$ as an evaluation criterion, which is derived from a predictive model known to have significant limitations. This raises concerns about the reliability of the results. The authors should provide a justification for the reliability of the selected predictive model. Additionally, since the model is designed to generate enzyme sequences that can bind to the target substrates, it would be important to also compute the binding affinity as part of the evaluation.
Please clearly indicate which contributions are novel to the authors and which have been taken from other papers and combined. For example, it seems that most of the generative model is based on an existing diffusion model rather than introduced by the authors. Is the approach for substrate conditioning developed by you or by others? The substrate-indexed retrieval method, for example, is (to the best of my knowledge) a novel contribution of the authors.
As discussed in Section 2, there are several enzyme-substrate datasets available. What differentiates your dataset from others in terms of quality and coverage? For example, EnzyBench contains over 100,000 enzyme-substrate pairs, which is a large and diverse dataset. Given that sequences are typically much easier to gather than structures for proteins or enzymes, can the authors ensure that their model is capable of learning meaningful patterns with a more limited dataset?
Additionally, the authors claim in line 50 that learning the interactions between substrates and enzymes is challenging. However, their method simply concatenates the embedding of the target substrate with the noisy enzyme sequence during training. Does this approach effectively address the complexity of learning substrate-enzyme interactions, or does it oversimplify the problem?
Theoretical Claims: There is no theoretical proof.
Experimental Designs Or Analyses: I have some concerns regarding the validity of the $k_{cat}$ results. The authors claim that their model performs better than the baselines, and even outperforms the ground truth. However, in the case study, the reported Vina score is -3.075, which indicates that binding with the target substrate is unlikely. How can the authors reconcile these discrepancies in their results? Additionally, they should compare their docking results with the ground truth for a more reliable assessment.
The authors use pLDDT to evaluate sequence validity, which is an appropriate initial step. However, once the validity of the generated sequences is confirmed, further evaluations should follow, such as computing $k_{cat}$ and binding affinity. It would be helpful if the authors reported the $k_{cat}$ for the proportion of valid sequences as part of this process.
The proposed approach retrieves sequences based on substrate similarity rather than protein similarity, which distinguishes it from other methods. Can the authors provide evidence to demonstrate the significance of this design choice?
Finally, the authors only assess the influence of substrate guidance as part of the contrastive learning loss. However, the role of the concatenation operation for the generator remains unclear. Could the authors elaborate on the rationale behind this design choice and its impact on model performance?
Supplementary Material: No
Relation To Broader Scientific Literature: This paper introduces a method for zero-shot substrate-specific enzyme generation, making a contribution to the field of machine learning for protein design. The proposed approach has potential applications in biomaterial synthesis and industrial biocatalysis, with possible implications for sustainable chemistry and pharmaceutical development.
Essential References Not Discussed: Please see the section of **Claims And Evidence**.
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: They should provide more details for the selected seven tasks.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear Area Chair and Reviewers,
We sincerely thank you for your thoughtful and thorough evaluation of our paper.
# `Mentioned reference`
Rebuttal: We will include these six references in the related work section of the final version, since modifications to the manuscript are not currently allowed. We also plan to test our model on the mentioned dataset.
# `Complexity of catalysis`
Our approach to capturing the complexity of enzyme catalysis beyond structural interactions is **generation based on reference enzymes**. Catalytic capability is not fully determined by binding affinity or docking pose; it also depends on features that remain unclear and cannot be modeled directly. We use the enzymes of similar molecules as references because these enzymes are known to function on similar substrates and thus carry the necessary features, even when those features themselves are unclear. The discriminator guidance, serving as a loss, provides clues to these necessary features.
# `Predictive model and Binding affinity`
The reliability of the predictive model can be checked in its original paper. In our paper, the predicted $k_{cat}$ values for the rule-based baselines follow a reasonable ordering: totally randomly generated proteins << retrieved proteins $\approx$ mutations of the ground truth < the ground truth. The rule-based baselines thus serve to verify that the predictive model ranks values in a reasonable order.
To reflect binding affinity, Vina score is provided in the case study. We are still implementing massive automated evaluation by AutoDock-Vina, since there are frequent cases unsuitable for the input check.
# `Novelty`
Our approach to expressing substrate conditions to the model is novel and includes two novel parts.
1. **Retrieval based on substrates**
The idea of using a substrate to find related enzymes, and using these related enzymes to facilitate new enzyme generation, is novel, as is the approach of aligning the related enzymes to serve as the basis of generation. This approach makes zero-shot generation practical.
2. **Adopt discriminator in training**
The approach of using a discriminative model to describe the generation target of catalytic capability is novel.
# `Dataset`
Our dataset is created based on Rhea, which is an expert-curated knowledgebase of chemical and transport reactions of biological interest - and the standard for enzyme and transporter annotation in UniProtKB. Our dataset inherits the quality and coverage from Rhea.
To control the quality, we carefully designed the **dataset split**.
1. Sequence identity threshold
EnzyBench partitioned training, validation, and testing sets based on a sequence identity threshold of 50%, but ours is 30%.
2. Distribution of enzymes in split
Unlike EnzyBench, which splits enzymes based on their third-level categories, we split the enzymes by substrates. We guarantee that no two enzymes in different splits share the same substrate.
There are 34,982 entries, so the dataset cannot be regarded as limited, given that high-quality annotated enzyme data is precious.
# `Substrate-enzyme interaction complexity`
Our model does not aim to model substrate-enzyme interactions directly; instead, it uses retrieved reference enzymes to facilitate generation. We use the target substrate to retrieve enzymes that contain the desired properties, so the model only needs to learn generation from references rather than modeling substrate-enzyme interactions directly.
# `Vina score`
We provide the **docking results with the ground truth** here. The Vina score of the ground truth enzyme is **-3.036**, which is very close to our generated enzyme. It means that our generated enzymes bind the substrate slightly better than the ground truth with no discrepancy. We will add a subfigure in Figure 3.
Regarding the values -3.075 and -3.036: the ligand has only five atoms and is thus very small, and it is hard for a small ligand to achieve a strongly favorable (more negative) Vina score.
# `$k_{cat}$ for valid sequences`
Thanks for your suggestion. With a pLDDT threshold of 70%, the proteins generated by our model that pass the threshold have an average predicted $k_{cat}$ of **0.362**.
# `The significance of retrieval`
Thank you for recognizing the novelty of this design. It aims to provide indirect substrate information in the form of protein sequences. Substrate-based retrieval is adopted for two significant reasons.
1. Without any protein as input, there is no anchor protein sequence to search for similar sequences.
2. With only a target substrate as input, the retrieval method has to use this molecule as the key.
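To make the substrate-keyed retrieval idea concrete, here is a toy sketch (our own illustration, not the authors' implementation: the fingerprint encoding, the Tanimoto measure, and all names are assumptions). Enzymes are indexed by their known substrates' fingerprints and retrieved by similarity to the target molecule:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints encoded as sets of 'on' bit indices."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def retrieve_reference_enzymes(target_fp, substrate_index, top_n=3):
    """Rank (substrate_fp, enzyme_id) pairs by substrate similarity to the target."""
    ranked = sorted(substrate_index,
                    key=lambda entry: tanimoto(target_fp, entry[0]),
                    reverse=True)
    return [enzyme for _, enzyme in ranked[:top_n]]

# Toy fingerprints: sets of bit indices, purely illustrative.
index = [
    ({1, 2, 3, 4}, "ENZ_A"),
    ({1, 2, 9},    "ENZ_B"),
    ({7, 8},       "ENZ_C"),
]
refs = retrieve_reference_enzymes({1, 2, 3}, index, top_n=2)  # most substrate-similar first
```

In practice one would use chemistry-aware fingerprints (e.g., Morgan fingerprints) rather than toy bit sets; the point is only that the molecule, not a protein, serves as the retrieval key.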
# `Substrate concatenation`
It provides direct information on the target substrate by serving as a single-token prompt that persists throughout the diffusion process. The sequence to be generated then has known positions along two dimensions:
1. The rows of aligned sequences.
2. The column of the target molecule's embedding.
# `7 tasks`
Please refer to the response to Reviewer MkMZ. | Summary: This manuscript proposes the SENZ method for zero-shot substrate-specified enzyme generation. Its key points include: (1) defining the task and constructing a substrate-enzyme dataset; (2) retrieving relevant enzymes based on substrate similarity; (3) generating new enzymes via a diffusion model guided by a classifier for optimization; and (4) validating on diverse substrates with superior performance over existing methods.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: All good
Essential References Not Discussed: No
Other Strengths And Weaknesses: This manuscript has several notable strengths:
1. The SENZ method achieves zero-shot substrate-specified enzyme generation, innovatively proposing a way to generate novel enzymes for specific target substrates without direct supervision data.
2. The method integrates retrieval-augmented generation technology, addressing the lack of positive samples in zero-shot scenarios by retrieving enzymes with substrates structurally similar to the target as prompting signals.
3. The generated enzymes are comprehensively evaluated across multiple dimensions, including catalytic capability (kcat), foldability (pLDDT), and similarity to known enzymes (BLASTp), with experimental results demonstrating the effectiveness of the method.
However, the manuscript has the following issues:
1. The 7 substrates used in the experiments (Sepiapterin, Propylene oxide, Levo-glucosan, cGMP, L-Pro, Pyridoxine, leukotriene A4(1-)) listed in Table 1—where do they originate from? Please specify the source and explain why these particular substrates were selected for the experiments.
2. It is suggested that the authors consider removing structural elements such as signal peptides and transmembrane domains, which may affect expression and activity. This would enhance the feasibility and efficiency of the generated enzymes in practical applications, particularly in in vitro expression systems.
Other Comments Or Suggestions: It is recommended that when the "Uncond" class methods are first introduced in the experimental section, a brief explanation of how unconditional generation models (such as ProtGPT2 and ProGen2) are utilized to generate enzyme sequences for specific substrates should be provided.
Questions For Authors: Same to Other Strengths And Weaknesses
Ethical Review Concerns: No
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Area Chair and Reviewers,
We sincerely thank you for your thoughtful and thorough evaluation of our paper. Below are our detailed responses to your comments:
## **`Source of the 7 substrates used in the experiments and reason of selection`**
Thanks for your comment. We selected these substrates because they are common and important in enzymatic reactions, making it worthwhile to design new enzymes for them.
1. Sepiapterin [1]
**Reason for Designing Enzymes**: Enhancing the efficiency and specificity of sepiapterin reductase can improve tetrahydrobiopterin (BH4) production, which is crucial for neurotransmitter synthesis and nitric oxide production. [1]
[1] Thöny B, Auerbach G, Blau N. Tetrahydrobiopterin biosynthesis, regeneration and functions. Biochem J. 2000 Apr 1;347 Pt 1(Pt 1):1-16.
2. Propylene Oxide [2]
**Reason for Designing Enzymes**: Engineering epoxide hydrolases or monooxygenases can provide higher enantioselectivity and stability under industrial conditions for the production of chiral intermediates in pharmaceuticals. [2]
[2] Erik J de Vries, Dick B Janssen, Biocatalytic conversion of epoxides. Current Opinion in Biotechnology. Volume 14, Issue 4, 2003, Pages 414-420.
3. Levoglucosan [3]
**Reason for Designing Enzymes**: Developing specific glucosidases can enable efficient hydrolysis of levoglucosan into fermentable sugars, facilitating biofuel production from biomass pyrolysis products. [3]
[3] Donovan S. Layton, Avanthi Ajjarapu, Dong Won Choi, Laura R. Jarboe. Engineering ethanologenic Escherichia coli for levoglucosan utilization. Bioresource Technology, Volume 102, Issue 17, 2011, Pages 8318-8322.
4. cGMP [4]
**Reason for Designing Enzymes**: Engineering guanylyl cyclases or cGMP-specific phosphodiesterases can modulate cyclic GMP signaling, which regulates physiological processes such as vascular tone and visual transduction. [4]
[4] Lucas KA, Pitari GM, Kazerounian S, Ruiz-Stewart I, Park J, Schulz S, Chepenik KP, Waldman SA. Guanylyl cyclases and signaling by cyclic GMP. Pharmacol Rev. 2000 Sep;52(3):375-414.
5. L-Proline (L-Pro) [5]
**Reason for Designing Enzymes**: Engineering proline racemase or proline dehydrogenase can enhance the production of D-proline, a valuable chiral building block in pharmaceutical synthesis. [5]
[5] Tanner JJ. Structural biology of proline catabolism. Amino Acids. 2008 Nov;35(4):719-30.
6. Pyridoxine [6]
**Reason for Designing Enzymes**: Designing enzymes for pyridoxine is important for enhancing its role in oxidative stress resistance by modulating its singlet oxygen quenching properties, which can be applied to improving fungal resilience, developing antioxidant therapies, and advancing fluorescence-based imaging techniques. [6]
[6] Bilski P, Li MY, Ehrenshaft M, Daub ME, Chignell CF. Vitamin B6 (pyridoxine) and its derivatives are efficient singlet oxygen quenchers and potential fungal antioxidants. Photochem Photobiol. 2000 Feb;71(2):129-34.
7. leukotriene A4(1-) [7]
**Reason for Designing Enzymes**: Developing selective leukotriene A4 hydrolase inhibitors can lead to new anti-inflammatory drugs with fewer side effects.
[7] Haeggström JZ. Structure, function, and regulation of leukotriene A4 hydrolase. Am J Respir Crit Care Med. 2000 Feb;161(2 Pt 2):S25-31. doi: 10.1164/ajrccm.161.supplement_1.ltta-6.
## **`To remove structural elements such as signal peptides and transmembrane domains`**
Thanks for your suggestion. This process can provide the model with clearer sequence patterns by eliminating undetermined sections. In a future version, we will test the performance after removing signal peptides and transmembrane domains across the whole dataset.
## **`Detail of unconditional generation baselines`**
ProGen2 and ProtGPT2: We utilized the pre-trained weights for both ProGen2 and ProtGPT2 to directly generate sequences with a maximum length of 1024. No information related to target substrates is provided to either model. These models serve as baselines for the capability of protein language models to generate sequences without specific functional guidance. | null | null | null | null | null | null |
MAPLE: Many-Shot Adaptive Pseudo-Labeling for In-Context Learning | Accept (poster) | Summary: The paper proposes an effective framework for enhancing many-shot ICL performance in scenarios with limited labeled data. By selecting unlabeled samples for pseudo-labeling based on their influence on labeled data and by adaptively selecting demonstrations tailored to each query, the method significantly reduces the reliance on costly labeled data while improving in-context learning performance.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I skimmed through it, but I didn’t carefully examine the proof.
Experimental Designs Or Analyses: Yes. I checked the experimental designs including baselines, implementations and ablations.
Supplementary Material: Yes, including Appendix B/C/D.
Relation To Broader Scientific Literature: The paper proposes selecting the most valuable demonstrations for many-shot pseudo-label learning.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The paper is well-written and well-structured, making it easy to follow.
2. The ablation study is comprehensive, providing useful insights into the contributions of different components.
Weaknesses:
1. Practicality Concerns: The proposed method is overly complex and impractical for real-world applications. The framework requires constructing two influence graphs and selecting different demonstrations for each test query, resulting in significant computational costs. However, the performance improvements do not appear substantial enough to justify the high cost of this approach.
2. Unstable Gains from Pseudo-Labeling: As shown in Figure 3, increasing the number of pseudo-labeled samples does not consistently lead to performance improvements across most tasks. This introduces an additional challenge of tuning the pseudo-labeling budget. In practical scenarios, there is no reliable way to verify the correctness of pseudo-labels, which may lead to worse performance than a simple few-shot approach.
3. Limited Experimental Scope: The experiments are somewhat narrow in scope. The authors only evaluate the framework using the Gemini model and on relatively simple tasks. To strengthen the empirical validation, the paper should include experiments with additional models, such as LLaMA and Qwen. Moreover, the selected tasks should be more representative and general, such as those from the SuperGLUE and MATH benchmarks.
4. Lack of Discussion on Theoretical Upper Bound: The paper does not discuss the upper-bound performance of the proposed method. For instance, if many-shot data were fully annotated with golden labels, how would the proposed approach compare against a retrieval-augmented generation (RAG) baseline? A discussion on this aspect would provide a clearer perspective on the fundamental limitations of the method.
5. Comparison to More Direct Labeling Approaches: With the cost of large model inference decreasing, a straightforward alternative would be to use a sota model (like GPT4) to label all data directly. The paper does not clearly articulate the advantages of the proposed method over this simpler and more practical alternative. A thorough comparison is necessary to justify the additional complexity introduced by the framework.
Other Comments Or Suggestions: No.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: >**W1.** Practicality Concerns.
>
**Response**: Thank you for raising concerns regarding computational complexity. To address practicality, our framework incorporates strategies to improve efficiency significantly:
By employing a KV cache, we reduce computational costs by fixing labeled and pseudo-labeled demonstrations across queries, allowing caching of demonstrations within the LLM prior to inference. As demonstrated in Table 2, this approach effectively enhances efficiency without loss of accuracy, making MAPLE more practical for real-world scenarios. A detailed analysis is provided in Appendix D.
Additionally, the influence graph construction process is precomputed. In this way, we ensure that its computational cost does not scale with the number of queries, further enhancing efficiency and practical feasibility.
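To make the caching argument concrete, here is a toy token-count model (our own illustration, not the paper's accounting): with a fixed demonstration prefix, the prefix is prefilled once and reused across queries, so its cost no longer scales with the number of queries.

```python
def prefill_tokens(num_queries, demo_tokens, query_tokens, kv_cache):
    """Total tokens prefilled across all queries under a toy cost model."""
    if kv_cache:
        # Fixed demonstrations are encoded once; only the query part repeats.
        return demo_tokens + num_queries * query_tokens
    # Without caching, the full prompt is re-encoded for every query.
    return num_queries * (demo_tokens + query_tokens)

no_cache = prefill_tokens(num_queries=500, demo_tokens=8000, query_tokens=200, kv_cache=False)
cached = prefill_tokens(num_queries=500, demo_tokens=8000, query_tokens=200, kv_cache=True)
# no_cache = 4,100,000 tokens; cached = 108,000 tokens
```

With many-shot prompts the demonstration block dominates the prompt length, which is why fixing it across queries yields large savings.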
>**W2.** Unstable Gains from Pseudo-Labeling.
>
**Response**: We would like to clarify that in our experiments, MAPLE **consistently outperforms other baselines**. MAPLE is primarily compared quantitatively against 5 baseline methods on 8 datasets with 5 settings (i.e., the number of pseudo-labeled samples). The results are shown in Figure 1. Among the 8 datasets, MAPLE performs the best on 5 datasets in all settings. On the other 3 datasets, MAPLE performs the best in 4 out of 5 settings. Therefore, these results can validate MAPLE's strong performance.
>**W3.** Limited Experimental Scope.
>
**Response**: We appreciate your suggestion regarding the experiments. Our current experiments include both Gemini 1.5 Pro and Gemini 1.5 Flash, which are widely adopted and representative models for many-shot ICL [1,2]. Due to limited rebuttal time, we were unable to include additional models like LLaMA and Qwen, but we plan to explore them in future work. To broaden task diversity, we have added results on a math benchmark GSM8K, and the results demonstrate the superiority of MAPLE on math tasks. We will expand to more datasets in the next version.
[1] Agarwal et al. Many-Shot In-Context Learning. NeurIPS 2024.
[2] Baek et al. Revisiting In-Context Learning with Long Context Language Models. arXiv 2024.
|Method|20|60|100|
|-|-|-|-|
|Random|90.0|91.0|91.5|
|RAG|90.5|92.0|93.0|
|MAPLE|92.5|94.0|95.0|
>**W4.** Lack of Discussion on Theoretical Upper Bound: The paper does not discuss the upper-bound performance of the proposed method. For instance, if many-shot data were fully annotated with golden labels, how would the proposed approach compare against a retrieval-augmented generation (RAG) baseline? A discussion on this aspect would provide a clearer perspective on the fundamental limitations of the method.
>
**Response**:
We appreciate your point. While deriving a theoretical upper bound for many-shot ICL is impractical due to the complexity of LLMs like Gemini 1.5 Flash, we provide an empirical upper bound by comparing MAPLE to RAG using 40, 80, and 120 fully labeled examples. MAPLE consistently outperforms RAG even under full annotation, highlighting its effectiveness and robustness beyond limited-label settings.
|Method|GPQA|Banking77|
|-|-|-|
|RAG+Golden|36.8/37.8/40.4|78.0/81.7/83.3|
|MAPLE+Golden|38.3/42.4/44.9|79.3/81.7/86.2|
>**W5.** Comparison to More Direct Labeling Approaches: With the cost of large model inference decreasing, a straightforward alternative would be to use a sota model (like GPT4) to label all data directly. The paper does not clearly articulate the advantages of the proposed method over this simpler and more practical alternative. A thorough comparison is necessary to justify the additional complexity introduced by the framework.
>
**Response**: Thank you for raising this important point. We agree that directly using a state-of-the-art model like GPT-4 (or Gemini) to label all data is a natural and increasingly viable alternative. In fact, our zero-shot baseline involves using Gemini to label the entire dataset directly. As shown in Figure 3, MAPLE significantly outperforms this zero-shot approach across all tasks.
Moreover, MAPLE only requires labeling a very small portion of the data. For instance, XSum contains over 2 million training examples, yet MAPLE achieves strong performance with at most 100 pseudo-labeled samples—representing a reduction of over 99.99% in labeling cost, even with decreasing inference costs.
To further highlight MAPLE’s advantage, we include results on GPQA using Gemini 1.5 Pro. In Table 1, MAPLE achieves 44.9% accuracy, while—as reported in Figure 8 in [1]—even fully using all labeled data yields less than 44% accuracy. This clearly demonstrates the effectiveness and efficiency of our method.
[1] Agarwal et al. Many-Shot In-Context Learning. NeurIPS 2024.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors’ detailed rebuttal and the additional experiments, especially the inclusion of new results on GSM8K and comparisons under fully labeled settings.
While the authors make a reasonable case for improved efficiency through caching and precomputing influence graphs, my primary concern around the practicality remains, i.e., the proposed method still involves multiple parts—pseudo-labeling, influence estimation, adaptive demonstration selection. In many realistic settings where simplicity and interpretability are crucial, this level of complexity may be difficult to justify, especially given the relatively modest performance gains in some scenarios.
In light of the new evidence, I will raise my score to weak reject, acknowledging the merits of the empirical updates and clearer discussion.
---
Reply to Comment 1.1.1:
Comment: We thank you for your further comments and appreciate the opportunity to respond. While we understand your concerns regarding practicality and perceived complexity, we would like to emphasize that the **performance improvements are meaningful**, and **the modular design does not render the method impractical**. Our clarifications are as follows:
1. **Performance improvements are meaningful.** To demonstrate simplicity and interpretability, we directly compare MAPLE with the zero-shot labeling baseline, which you've described in Weakness 5 as “more practical and straightforward” (i.e., using a SoTA model to label all data directly). We report average performance across **different task types** and find that **MAPLE significantly outperforms the practical baseline**, particularly on classification tasks, where it achieves a relative improvement of 50%. These substantial gains underscore that, **despite involving multiple components, MAPLE delivers performance improvements that justify the added complexity.**
|Method|Summarization|Reasoning|Classification|Question Answering|
|-|-|-|-|-|
|Zero-shot|16.1|49.5|42.5|34.3|
|MAPLE|20.1|53.7|66.9|37.7|
2. **The modular design does not render the method impractical**.
- First we would like to claim that **each component in MAPLE contributes meaningfully to the final performance**. We provide detailed evidence through breakdowns and ablation studies, as referenced in our responses to Reviewer K6CP (W2) and Reviewer qNQK (Q3).
- Second, the computational cost of MAPLE remains manageable.
- (1) **Pseudo-labeling**: we enhance efficiency by **selectively** pseudo-labeling only the top-P nodes with the highest influence scores, instead of pseudo-labeling all train data, significantly reducing API calls.
- (2) **Influence estimation**: The graph construction requires the computation of the relevance score $r$ among any pair of nodes, which will be $\mathcal{O}(|\mathcal{V}|^2)$. To compute shortest paths, we use breadth-first search for each node, and the cost is $\mathcal{O}(|\mathcal{V}|+|\mathcal{E}|) = \mathcal{O}(|\mathcal{V}|)$ as $\mathcal{E}=\mathcal{O}(k|\mathcal{V}|)$. Therefore, the whole shortest path computation cost is $\mathcal{O}(|\mathcal{D}_L||\mathcal{V}|) = \mathcal{O}(|\mathcal{V}|)$ as $|\mathcal{D}_L|\ll |\mathcal{D}|=|\mathcal{V}|$. Notably, the above cost is only required **once** before inference, and **does not scale with the number of test-time queries**. With more queries involved during the test, the computational cost of the graph becomes more negligible.
- (3) **Adaptive demonstration selection**: We emphasize that adaptive demonstration selection is an **optional component** that offers a **trade-off between efficiency and performance**, as discussed in Sec. 4.5. In MAPLE, we incorporate personalized demonstrations for each query, which incurs additional cost but effectively filters out unhelpful examples, leading to better performance. As shown in Figure 3, this adaptive strategy also improves performance in RAG settings. To accommodate efficiency-focused scenarios, we also provide a variant of MAPLE with fixed demonstration selection and KV caching (Sec. 4.5). This variant enables faster inference with only a mild sacrifice in performance, offering a flexible solution based on deployment needs. The complexity comparison of MAPLE and the KV cache variant is provided in Appendix D.
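The cost breakdown in (2) can be sketched end-to-end (a minimal sketch under our own assumptions: unit edge weights, a plain adjacency-list graph, and influence approximated as the minimum hop distance to any labeled node, which simplifies the paper's actual score):

```python
from collections import deque

def bfs_hops(adj, source):
    """Hop distances from `source` to every reachable node; O(|V| + |E|) per call."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def top_p_by_influence(adj, labeled, unlabeled, p):
    """Score unlabeled nodes by closeness to the labeled set; pick the top p."""
    best = {u: float("inf") for u in unlabeled}
    for s in labeled:                      # one BFS per labeled node
        dist = bfs_hops(adj, s)
        for u in unlabeled:
            if u in dist and dist[u] < best[u]:
                best[u] = dist[u]
    # Smaller hop distance -> larger (toy) influence.
    return sorted(unlabeled, key=lambda u: best[u])[:p]

# Toy k-NN graph: node 0 is labeled; 2 is two hops away, 3 is three, 4 is isolated.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2], 4: []}
picked = top_p_by_influence(adj, labeled=[0], unlabeled=[2, 3, 4], p=2)
```

The total work is one BFS per labeled node over a precomputed graph, consistent with the one-time, query-independent cost described above.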
In summary, **each component of MAPLE is either lightweight or designed to offer a meaningful trade-off between performance and efficiency**. We also provide practical variants to accommodate different deployment scenarios. Therefore, we sincerely hope that our detailed responses can help clarify the practical aspects of our framework and address your concerns. Thank you so much for your effort in reviewing our work!
Sincerely,
Authors of Submission 14586 | Summary: This paper presents MAPLE, a method for pseudo-labeling in many-shot ICL settings. Key innovation includes similarity-based selection for pseudo-labeling and demonstration example selection.
Claims And Evidence: 1. It’s interesting to study many-shot ICL under pseudo-label settings, which has practical value.
2. The authors claim “strong performance” for MAPLE, but do not specify the baselines or quantitative results for the comparison.
Methods And Evaluation Criteria: 1. The baselines are reasonable, and the datasets are up-to-date and commonly used.
2. It seems the embedding module is important for MAPLE, and an ablation on that is important.
Theoretical Claims: NA
Experimental Designs Or Analyses: See methods
Supplementary Material: Yes
Relation To Broader Scientific Literature: It's an interesting extension towards both many-shot ICL and pseudo-labeling.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: 1. **IMPORTANT** Most figures are not rendered correctly. Labels for legend and axes are missing.
Questions For Authors: 1. Do you have any explanation why MAPLE works well for GPQA, given the diversity of topics in GPQA?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: >**Claim** The author claims “strong performance” for the MAPLE, but didn’t specify the baselines nor quantitative results for comparison.
>
**Response**: In our experiments, MAPLE is primarily compared quantitatively against 5 baseline methods on 8 datasets with 5 settings (i.e., the number of pseudo-labeled samples). The results are shown in Figure 3. Among the 8 datasets, MAPLE performs the best on 5 datasets in all settings. On the other 3 datasets, MAPLE performs the best in 4 out of 5 settings. Therefore, these results validate MAPLE's strong performance.
>**Exp** It seems the embedding module is important for MAPLE, and an ablation on that is important.
>
**Response**: Thank you for the suggestion. We have conducted ablations using Sentence-BERT (SBert) [1] and DeBERTa [2] as alternative embedding models, evaluating MAPLE with 20, 60, and 100 pseudo-labeled examples. While performance varies across models, MAPLE consistently outperforms baseline, demonstrating its robustness and effectiveness regardless of the specific embedding choice.
|Embed|Date|GoEmotion|
|-|-|-|
|RAG+SBert|51.4/52.4/54.4|31.3/32.7/33.3|
|MAPLE+SBert|52.7/54.0/55.2|34.7/36.7/37.3|
|RAG+DeBERTa|52.0/53.6/55.2|32.7/33.7/34.4|
|MAPLE+DeBERTa|54.4/55.2/57.6|37.3/37.2/39.3|
[1] Reimers N, Gurevych I. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.
[2] He P, Liu X, et al. DeBERTa: Decoding-enhanced BERT with Disentangled Attention.
>**Comm1.** IMPORTANT Most figures are not rendered correctly. Labels for legend and axes are missing.
>
**Response**: We sincerely appreciate your feedback on figure clarity. In the revision, we will ensure all figures are correctly formatted and rendered.
>**Q1.** Do you have any explanation why MAPLE works well for GPQA, given the diversity of topics in GPQA?
>
**Response**: Thank you for pointing this out. MAPLE's strong performance on GPQA can be attributed to its adaptive demonstration selection, which tailors pseudo-labeled demonstrations specifically for each test query. This adaptability allows MAPLE to effectively handle the topic diversity in GPQA by selecting demonstrations that are contextually relevant to each individual query. Consequently, MAPLE can leverage pseudo-labeled samples to improve performance even when the topics are diverse.
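The per-query selection described above can be sketched minimally (our own simplification: the embeddings, the cosine measure, and top-k truncation are assumptions, not the paper's exact mechanism):

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def select_demonstrations(query_emb, pool, k=2):
    """Pick the k (embedding, demonstration) pairs most similar to the query."""
    ranked = sorted(pool, key=lambda item: cosine(query_emb, item[0]), reverse=True)
    return [demo for _, demo in ranked[:k]]

# Toy 2-d embeddings standing in for topic representations.
pool = [
    ([1.0, 0.0], "physics demo"),
    ([0.9, 0.1], "chemistry demo"),
    ([0.0, 1.0], "biology demo"),
]
chosen = select_demonstrations([1.0, 0.05], pool, k=2)
```

Because each test query re-ranks the demonstration pool, topically distant examples are filtered out, which is the intuition behind the robustness to GPQA's topic diversity.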
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. You can consider putting the ablation results in the Appendix. I don't have any other comment, and I'll keep my current rating.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer y2DN,
Thank you for your thoughtful review and constructive feedback. We appreciate your time and effort, and to strengthen the quality of our work, we will include the ablation results in the Appendix in the final version.
Best regards,
Authors | Summary: The paper considers semi-supervised many-shot in-context learning setting, i.e., having small labeled and large unlabeled support sets to perform in-context learning with long-context LLMs. Authors argue that within this problem setting it would be beneficial to (i) identify the most impactful unlabeled samples to pseudo-label them, and, subsequently (ii) use adaptive example selection mechanism to select examples for each test query from the union of the pseudo-labeled examples along with labeled ones. To reach both, authors leverage the concept of node influences in graphs. In particular, first, the graph is built with nodes representing examples from both labeled and unlabeled sets, and edges are assigned according to the similarity of the examples in some embedding space. Subsequently, top-p nodes from the set of unlabeled examples are selected according to the score that is lower bound to the node influence. Adaptive demonstration selection is built in the similar fashion, relying on node influence of the labeled and pseudo-labeled samples on the test query. Authors evaluate their approach of the diverse set of problems and show that it outperforms the considered baselines.
Claims And Evidence: See Questions for Authors.
Methods And Evaluation Criteria: The proposed method is evaluated using challenging datasets and with the recent models.
Theoretical Claims: I checked Theorem 3.2 and briefly checked the proof of Lemma A.1.
Experimental Designs Or Analyses: Overall, seems valid.
Supplementary Material: I checked the proof of Theorem 3.2 and briefly checked the proof of Lemma A.1. The rest consists of the dataset details and the prompts used, along with KV Cache analysis section which was briefly checked as well.
Relation To Broader Scientific Literature: Many-shot in-context learning is very recent and promising approach to perform adaptation of large-context LLMs. Given the high data labeling cost, it is important to consider the semi-supervised setting, thus I believe that the paper studies an important topic.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: See Questions for Authors.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Do you really need to cut Top-K edges for each node? Since you will still only be selecting top-P nodes after all, and graph construction is a one-time cost, could you just run Top-P selection over the fully connected graph? In other words, is it correct that Top-K only controls speed and does not affect the performance, or is there some interplay between Top-K and Top-P that affects the performance?
2. Given Figure 6 shows that, mostly, it is more beneficial to put pseudo-labeled examples at back and examples with ground truth labels closer to the query, could you also run the baseline when you just put questions for unlabeled samples (without pseudolabels). It is basically Unsupervised ICL of [1], which was shown to improve upon few-shot baseline that employs only labeled set.
3. What I am currently missing is the ablation of what part of the proposed approach is actually the most important or the evidence that both are important.
4. There are some inconsistencies in set of baselines for different tables and Figures. For example, Table 3 misses zero-shot, few-shot and RAG-Adapt (compared to Figure 1). Similarly, Figure 4 misses RAG-Adapt.
5. Maybe I missed it somewhere, but what was used as the embedding model $f$ to construct graphs?
6. Can we compute node influence directly and not rely on the lower-bound? Does lower bound have some benefits? How the performance will be different if the node influence would be computed directly?
[1] Agarwal et al. Many-Shot In-Context Learning. NeurIPS 2024.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: >**Q1.** Do you really need to cut Top-K edges for each node? Is it correct that Top-K only controls speed and does not affect the performance, or there is some interplay between both Top-K and Top-P that affect the performance?
>
**Response**: Thank you for the question. We want to clarify that Top-K pruning is essential and does influence selection. Our influence score in Eq. (8) depends on the shortest path and its count. In a fully connected graph, all node pairs are directly connected with only one shortest path of length 1, which removes meaningful structural differences and reduces influence estimation to near-random. Moreover, computing shortest paths on a fully connected graph incurs $\mathcal{O}(|\mathcal{V}|^2)$ complexity. Thus, Top-K is important for both efficiency and effective selection.
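The effect of Top-K pruning on shortest-path structure can be sketched in a few lines of Python. This is a minimal illustration under our own assumptions — the toy similarity matrix and `k` values below are placeholders, not the paper's Contriever embeddings or its Eq. (8):

```python
from collections import deque
from heapq import nlargest

def topk_graph(sim, k):
    """Keep only each node's k most similar neighbors (symmetrized kNN graph).

    `sim` is a symmetric similarity matrix given as a list of lists.
    """
    n = len(sim)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in nlargest(k, (j for j in range(n) if j != i), key=lambda j: sim[i][j]):
            adj[i].add(j)
            adj[j].add(i)
    return adj

def shortest_paths(adj, src):
    """BFS from `src`: distance and number of shortest paths to each reachable node."""
    dist, count = {src: 0}, {src: 1}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:              # first time v is reached
                dist[v] = dist[u] + 1
                count[v] = count[u]
                q.append(v)
            elif dist[v] == dist[u] + 1:   # another shortest path into v
                count[v] += count[u]
    return dist, count
```

With `k = 1` the four toy nodes below form a chain with informative distances 0–3, whereas with `k = n - 1` (a complete graph) every pair sits at distance 1 with exactly one shortest path, so the structural signal that path-based influence estimation relies on disappears — which is the degeneracy the response describes.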
>**Q2.** Given Figure 6 shows that, mostly, it is more beneficial to put pseudo-labeled examples at back and examples with ground truth labels closer to the query, could you also run the baseline when you just put questions for unlabeled samples (without pseudolabels). It is basically Unsupervised ICL of [1], which was shown to improve upon few-shot baseline that employs only labeled set.
>
**Response**: Thank you for the suggestion, and we have added results using only unlabeled questions without pseudo-labels, following the unsupervised ICL setup (#labeled=20, #unlabeled=100). Consistent with Figure 6, we observe that placing labeled examples closer to the query still leads to better performance. However, without pseudo-labels, the benefit from unlabeled examples is limited (compared to Fig. 6), highlighting the importance of label information—even if approximated—for effective many-shot ICL.
|Method|Salient|GPQA|
|-|-|-|
|Rand w/w.o. R|65.6/66.0|33.3/34.3|
|RAG w/w.o. R|68.8/68.2|33.8/34.8|
|MAPLE w/w.o. R|70.4/70.8|35.3/36.1|
>**Q3.** What part of the proposed approach is actually the most important?
>
**Response**: We appreciate the reviewer’s question. The RAG-Adapt baseline in our paper can be seen as a variant of MAPLE without the graph structure, as both rely on Contriever for relevance score. Without pseudo-labeling, as in our response to Q2, performance drops due to the lack of label information. Further, in our response to Reviewer K6CP W2, we detail how removing individual components of the influence score degrades performance. Together, these results demonstrate that each part of MAPLE is crucial to its overall effectiveness.
>**Q4.** There are some inconsistencies in set of baselines for different tables and Figures. For example, Table 3 misses zero-shot, few-shot and RAG-Adapt (compared to Figure 1). Similarly, Figure 4 misses RAG-Adapt.
>
**Response**: Thank you for pointing this out. We believe the reference is to Table 1 (rather than Table 3) since Table 3 is a list of prompts. Due to rebuttal time constraints and API budget limits, we have now additionally run Gemini 1.5 Flash for zero-shot, few-shot, and RAG-Adapt on Table 1, as well as RAG-Adapt for Figure 4. We will include these updated results in the revised version of the paper.
|Dataset|0-shot|few-shot|20|40|60|80|100|
|-|-|-|-|-|-|-|-|
|Banking77|75.1|76.9|77.0|76.9|76.5|76.7|78.5|
|Date|49.1|52.9|53.6|53.5|53.1|55.4|56.1|
|GPQA|34.3|35.8|37.1|36.2|35.7|32.5|34.0|
|Dataset|50|100|150|200|
|-|-|-|-|-|
|Banking77|79.3|81.3|81.3|81.7|
|Date|56.4|59.2|59.4|63.4|
>**Q5.** What was used as the embedding model to construct graphs?
>
**Response**: Thank you for the question. We use Contriever as the embedding model. This is mentioned in the right column of line 115: “...the relevance score r(vi, vj), as defined in Contriever...” For clarity, we will also explicitly restate this in the Implementation section in the revised version.
>**Q6.** Can we compute node influence directly and not rely on the lower-bound? Does lower bound have some benefits? How the performance will be different if the node influence would be computed directly?
>
**Response**: We claim that it is computationally impractical to directly calculate the node influence. As stated in Eq. (17) in Appendix A, the node influence between any pair of nodes, $v_i$ and $v_j$, is calculated from the iterative expansion of the neighboring nodes of $v_i$. This involves all the nodes that exist in any path between $v_i$ and $v_j$, and their corresponding derivatives regarding the embedding of $v_j$. This can be computationally prohibitive when the number of such nodes is massive due to the long distance between $v_i$ and $v_j$.
Therefore, we propose to rely on the lower bound to compute the influence score instead of directly computing the node influence. With our proposed Theorem 3.2, the influence score is computed based on the shortest path distance and the number of shortest paths. Thus, it is much easier to compute, compared to the massive computation of derivatives in the original node influence. | Summary: This work develops a semi-supervised in-context learning framework by exploiting a small amount of labeled data and a large unlabelled dataset. A kNN graph is built upon the labeled and unlabelled datasets. The unlabelled samples (nodes) that are similar to the labeled ones are selected for pseudo-labelling. Finally, demonstrations that are highly relevant to the test query are selected from the combined dataset for prediction.
Claims And Evidence: The claims made in this paper are mainly supported with empirical results.
Methods And Evaluation Criteria: The methodology and benchmarking datasets are mostly appropriate.
Theoretical Claims: Not thoroughly checked.
Experimental Designs Or Analyses: The experimental designs are mostly appropriate. However, evaluation on higher numbers of pseudo-labeled samples is missing, which could demonstrate the potential upper bound of the proposed method.
Supplementary Material: No supplementary is submitted.
Relation To Broader Scientific Literature: Semi-supervised learning has been thoroughly investigated. The proposed method exploits a graph, and its influence score shares similarity with graph-based SSL and label propagation. However, integrating SSL into ICL is novel.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strength:
1. Exploiting graph for relevance calculation could exploit the data manifold and potentially leads to better results.
2. Semi-supervised in-context learning can alleviate the reliance on excessive human annotation.
Weakness:
1. Building a graph for selecting relevant unlabelled samples itself induces additional computation overhead. There is no discussions on the computation cost for graph construction.
2. The design for influence score seems arbitrary. It would be good to see if keeping only the shortest path or number of shortest paths is worse.
3. It is worth noting that the impact of increasing the number of pseudo-labeled samples is indeterministic. For GoEmotion, Banking77, and Date, the performance may still go up with more pseudo-labels, while Tracking 7 does not seem to benefit from any pseudo-labeled samples. A deeper analysis is necessary.
Other Comments Or Suggestions: It is unclear why the x axis starts with 20 in figure 3. With 0 pseudo-labeled samples, would this be equivalent to few-shot?
Questions For Authors: - Please further explain why increasing the number of pseudo labels may harm certain datasets.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: >**W1.** Building a graph for selecting relevant unlabelled samples itself induces additional computation overhead. There is no discussions on the computation cost for graph construction.
>
**Response**: Thank you for bringing up this point. The graph construction requires the computation of the relevance score $r$ among any pair of nodes, which will be $\mathcal{O}(|\mathcal{V}|^2)$. To compute shortest paths, we use breadth-first search for each node, and the cost is $\mathcal{O}(|\mathcal{V}|+|\mathcal{E}|) = \mathcal{O}(|\mathcal{V}|)$ as $|\mathcal{E}|=\mathcal{O}(k|\mathcal{V}|)$ with $k$ a constant. Therefore, the whole shortest path computation cost is $\mathcal{O}(|\mathcal{D}_L||\mathcal{V}|)$. Notably, the above cost is only required **once** before inference and thus is independent of the number of queries. With more queries involved during the test, the computational cost of the graph becomes more negligible. Moreover, as shown in Table 2 in our paper, the adaptive demonstration selection component does not incur much computational cost.
Thank you so much for your suggestion. We will include this discussion in the appendix.
>**W2.** The design for the influence score seems arbitrary. It would be good to see if keeping only the shortest path or number of shortest paths is worse.
>
**Response**: We appreciate your suggestion and have added results (#labeled=20, #p-labeled=100) using only the shortest path and only the number of shortest paths. While the shortest path captures how quickly information can travel, it overlooks robustness—relying on a single path can be fragile to noise or minor data variations. On the other hand, using only the number of shortest paths captures redundancy but disregards distance; many long paths may not imply a strong influence. Our influence score is designed to capture both efficiency (via short paths) and robustness (via multiple paths), resulting in a more reliable and informative demonstration selection for many-shot ICL.
|Dataset|Banking77|GoEmotion|GPQA|
|-|-|-|-|
|len(shortest path)|75.3|37.6|36.4|
|# \|shortest path\||78.6|37.2|36.9|
|Influence score|80.8|38.1 |37.4|
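As a toy illustration of why combining the two signals can change the ranking, consider a multiplicative proxy score. This is our own hypothetical formula for illustration only — the paper's actual influence score is its Eq. (8), which is not reproduced in this rebuttal, and `gamma` is a made-up decay parameter:

```python
def influence_proxy(dist, count, gamma=0.5):
    """Hypothetical influence proxy rewarding short distances (efficiency)
    and many shortest paths (robustness). Not the paper's Eq. (8)."""
    if dist is None:  # unreachable node exerts no influence
        return 0.0
    return count * (gamma ** dist)

# A node two hops away via three shortest paths can outrank a node
# one hop away via a single path:
near_single = influence_proxy(dist=1, count=1)    # 0.5
far_redundant = influence_proxy(dist=2, count=3)  # 0.75
```

Distance alone would rank `near_single` first, and count alone would ignore path length entirely; a combined score captures both effects, matching the efficiency-plus-robustness intuition above.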
>**W3.** It is worth noting that the impact of increasing the number of pseudo-labeled samples is indeterministic. For GoEmotion, Banking77, and Date, the performance may still go up with more pseudo-labels, while Tracking 7 does not seem to benefit from any pseudo-labeled samples. A deeper analysis is necessary.
>**Q1.** Please further explain why increasing the number of pseudo labels may harm certain datasets.
>
**Response**:
Thank you for highlighting this point. The inclusion of additional pseudo-labeled samples can sometimes harm performance because pseudo-labels generated by LLMs are not always accurate. Incorrect pseudo-labels may introduce misleading information when used as demonstrations, negatively influencing LLM predictions for specific datasets [1]. However, our method addresses this issue by adaptively selecting pseudo-labeled samples based on their relevance to each test query. This approach mitigates the negative impact of inaccurate pseudo-labeling, as demonstrated by consistent improvements across most tasks. We note that the performance decline is limited to certain datasets, likely due to the higher difficulty or inherent ambiguity of the samples. Nevertheless, MAPLE consistently outperforms baselines even though the number of pseudo-labeled samples is not optimal.
[1] Agarwal et al. Many-Shot In-Context Learning. NeurIPS 2024.
>**Comm1.** It is unclear why the x axis starts with 20 in figure 3. With 0 pseudo-labeled samples, would this be equivalent to few-shot?
>
**Response**: Thank you for the observation. Yes, when the number of pseudo-labeled samples is 0, Random, RAG, and MAPLE all reduce to the few-shot setting. To avoid redundancy, we omit x=0 from the x-axis and instead include the few-shot performance as a green horizontal dashed line in Figure 3 for comparison. We will clarify this explicitly in the revised version. | null | null | null | null | null | null |
Impossible Videos | Accept (poster) | Summary: The paper introduces IPV-VID, a dataset designed to evaluate video understanding models on "impossible videos", which depict scenarios that violate commonsense. The study evaluates from two perspectives: video understanding and video generation. For video understanding, benchmark tasks such as VideoQA and video-text alignment reveal that state-of-the-art models struggle significantly on "impossible videos," exposing their limitations in temporal reasoning and commonsense knowledge. For video generation, the paper assesses the ability of text-to-video (T2V) models to generate high-quality "impossible videos" and proposes IPV-Score to evaluate their semantic consistency and visual quality. The findings highlight the challenges and opportunities for advancing both video understanding and generation in counterfactual and out-of-distribution scenarios.
Claims And Evidence: **Claim1: Existing video understanding models struggle with "impossible videos" due to their reliance on commonsense and reasoning beyond real-world scenarios.**
**Evidence**:
Experimental results show that state-of-the-art video understanding models perform significantly poor on the IPV-VID dataset, suggesting their limitations in understanding "impossible videos."
**Comment**:
While the results indicate poor performance on "impossible videos," this may not necessarily be attributed to the "impossibility" itself. The performance drop could also stem from the gap between synthetic and real-world data, such as differences in visual quality or semantic consistency.
The lack of comparative experiments on similarly scaled "normal synthetic videos" makes it difficult to isolate the effect of "impossibility" as the primary cause of the performance drop. Drawing such a conclusion without addressing this gap might be risky.
**Suggestions for Improvement**:
To better validate the unique impact of "impossibility" on model performance, future studies could include a control group with normal synthetic videos. This would help disentangle the influence of data quality from that of "impossibility."
**Claim2: The IPV-VID dataset provides high-quality "impossible videos" that are semantically consistent and visually realistic**.
**Evidence**:
The dataset was generated using T2V models and manually filtered to ensure semantic consistency and visual quality. The authors also proposed the IPV-Score as a metric to evaluate the quality of generated videos.
**Comment**:
Despite the reported quality control, generating high-quality videos remains a significant challenge for current T2V models. For instance, many models struggle with action coherence (e.g., unnatural movements and violations of physical laws), and only a few models, such as Hunyuan and Hailuo, perform reasonably well in this regard.
Since the dataset has not been fully released, it is unclear whether all "impossible videos" in the dataset strictly meet the claimed quality standards. Issues such as motion coherence and physical consistency may directly affect the validity of downstream tasks, but these aspects were not thoroughly addressed in the paper.
**Suggestions for Improvement**:
Increasing the dataset's transparency by fully opening sources would enhance its credibility.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Based on the provided document, there is no explicit mention of formal mathematical proofs or theoretical validations for the claims made. The paper primarily focuses on the construction of the IPV-BENCH benchmark, categorization of impossible video types, and empirical evaluations of existing video understanding and generation models. These aspects are more experimental and empirical in nature rather than relying on theoretical proofs.
Experimental Designs Or Analyses: The experimental designs and analyses in the paper are well-constructed and align with the goal of evaluating video understanding and generation models using the IPV-BENCH benchmark. For video understanding, the results demonstrate that existing models struggle with "impossible videos," as evidenced by their poor performance on the IPV-VID dataset. This supports the claim that these models face challenges when reasoning beyond real-world scenarios. However, while the findings highlight a significant performance drop, it is worth noting that this may not solely be attributed to the "impossibility" of the videos. The gap between synthetic and real-world data, such as differences in visual quality or semantic consistency, could also play a role. Without comparative experiments on similarly scaled "normal synthetic videos," it remains difficult to isolate the unique impact of "impossibility" on model performance. This limitation in the experimental design underscores the need for a control group to disentangle these factors in future studies.
For video generation, the IPV-VID dataset is presented as a high-quality resource of "impossible videos," with semantic consistency and visual realism ensured through manual filtering and evaluation metrics like the IPV-Score. While these efforts are commendable, challenges inherent to current T2V models, such as issues with action coherence and physical consistency, suggest that the dataset's quality may not fully meet the claimed standards. Additionally, since the dataset has not been fully released, it is difficult to independently verify the quality of the videos or their impact on downstream tasks. A more transparent approach, including full dataset release and quantitative analyses of video quality, would strengthen the dataset's credibility and its utility for benchmarking.
Supplementary Material: Yes, I have read through the supplementary meterial.
Relation To Broader Scientific Literature: The paper’s contributions align with ongoing research in video understanding and generation, extending prior work on real-world datasets like Kinetics and Something-Something by introducing "impossible videos" that challenge models to reason beyond physical laws. This builds on ideas from benchmarks like CLEVRER but applies them to dynamic video data. In video generation, the paper advances text-to-video (T2V) research by focusing on generating semantically consistent "impossible videos," addressing limitations of current T2V models in temporal coherence and complex prompts. The proposed IPV-Score complements existing evaluation metrics like FVD and CLIP alignment. Additionally, the IPV-BENCH benchmark fills a gap in AI evaluation by targeting counterfactual reasoning, contributing to the trend of specialized benchmarks. These efforts position the paper as a significant step in addressing underexplored challenges in video reasoning and generation.
Essential References Not Discussed: Up to me, it seems to be sufficient.
Other Strengths And Weaknesses: **Strengths**
1. Easy to follow for the readers
2. Interesting.
**Weakness**
1. The experimental design and analyses could be improved if the authors compared the performance between synthetic normal and impossible videos. More analysis could be offered on the differences between synthetic videos and real ones.
2. The dataset is not fully open-source, and the "impossible videos" data may not reach the claimed quality standard.
Other Comments Or Suggestions: 1. To better validate the unique impact of "impossibility" on model performance, future studies could include a control group with normal synthetic videos. This would help disentangle the influence of data quality from that of "impossibility."
2. Increasing the dataset's transparency by fully opening sources would enhance its credibility.
Questions For Authors: 1. For the synthetic data generated by current T2V models, is there a significant performance difference in downstream video understanding tasks between "normal" videos and "impossible" videos? Similarly, for text-guided video generation, are there notable differences in visual quality and prompt-following between "normal" and "impossible" videos?
Clarifying this would help determine whether the challenges observed are truly due to the "impossibility" of the videos or are influenced by general limitations in synthetic data quality. This could validate or challenge the paper's claims about the unique challenges of "impossible videos."
2. Can "impossible videos" serve as negative sample guidance to significantly improve the performance of Video-LLM models in reasoning tasks or enhance T2V video generation quality?
If "impossible videos" can effectively guide reasoning or generation improvements, it would strengthen the practical value of the dataset and benchmark, highlighting its utility beyond evaluation.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for the encouraging review and valuable suggestions. We appreciate your acknowledgment of the paper’s novelty and will carefully address your concerns to improve clarity and rigor.
**Q1: Disentangling Impossibility vs. Synthetic Data Quality**
Disentangling impossibility from synthetic data quality is crucial for strengthening the rigor of our study. We leverage the Multi-choice QA task to investigate this. Due to the limited time during rebuttal, we were unable to generate and annotate **a large-scale set** of synthetic videos; instead, we conducted a preliminary study on a smaller dataset:
1. Data Collection: We collect 420 synthetic videos from an existing work [1], generated using HunyuanVideo.
2. Annotation: We label each video with an action description.
3. Filtering: Videos were excluded if they:
- Lacked an explicit event (e.g., landscape scenes).
- Contained counterfactual content (to ensure distinction from impossible videos).
After filtering, 200 videos remained.
4. Task Construction: We instructed GPT-4o to generate multi-choice questions and answers, adapting prompt from the impossible video setting.
5. Evaluation: Several popular VideoLLMs were evaluated and compared to impossible videos.
|Model|Acc. (Normal Vid)|Acc. (Impossible Vid)|
|-|-|-|
|Video-LLaVA|70.0|25.8|
|NVILA|89.5|62.2|
|LLaVA-NEXT|90.0|83.4|
The results show that **models consistently perform better on normal synthetic videos** than on impossible videos, indicating that **"impossibility" introduces a distinct challenge** for video understanding.
Recent studies [2] suggest that the visual quality gap between synthetic and real-world videos is rapidly shrinking. This further underscores the unique challenges posed by impossible videos, which are independent of synthetic data quality.
We appreciate this valuable suggestion. We will conduct a more comprehensive study in the revised version.
**Q2: Code/Data Release**
Upon acceptance, we will publicly release all **code and data**, including the IPV-Bench taxonomy, IPV-TXT prompts, IPV-VID videos, and evaluation protocols. To ensure accessibility, we will also include a detailed **Data Usage Instruction** in the Appendix.
Regarding concerns about video quality, e.g., action coherence, our human annotation filters out low-quality videos:
- If the action forms a semantically meaningful impossible phenomenon, we consider it a valid sample.
- If the action exhibits inconsistencies, artifacts, or unnatural distortions, we classify it as low-quality data and exclude it.
For a detailed data filtering criteria, please refer to **Response 2 of Reviewer F9mi**. Besides, we provide sufficient video examples on the anonymous website in the paper abstract for further reference.
**Q3: For T2V, any difference in visual quality and prompt-following between normal and impossible videos?**
Current T2V models are primarily optimized for normal videos, while impossible videos are often overlooked. Our assumption is that creating impossible videos is more challenging than creating normal ones. To explicitly verify this, we conducted a human evaluation on normal text prompts:
1. We collected 420 synthetic videos from an existing work [1], generated using the HunyuanVideo model.
2. We annotated the visual quality and prompt-following of these videos.
3. During annotation, we excluded 83 prompts that describe impossible phenomena and evaluated the remaining 337 normal prompts.
||Visual Quality|Prompt Following|
|-|-|-|
|Normal|92.6|66.5|
|Impossible|88.9|37.2|
We observe that 1) normal visual quality is slightly better than impossible ones; 2) normal prompt following is significantly better than impossible prompt following, which further underscores the unique challenges posed by impossible videos.
We appreciate this insightful suggestion. In the revised paper, we will conduct a more comprehensive study to further strengthen our findings.
**Q4: Can impossible videos serve as negative samples to improve Video-LLM in reasoning tasks or enhance T2V generation quality?**
We recognize the potential in these directions:
- For Video-LLMs, impossible videos can serve as high-quality training samples to enhance reasoning capabilities. Since these videos introduce "novel" counterfactual knowledge beyond real-world data, they may help models develop stronger reasoning and generalization skills.
- For T2V models, while the community has focused heavily on physical law adherence, there is limited understanding of how to explicitly improve this. Impossible videos could serve as negative samples, reinforcing a more structured comprehension of physical plausibility.
Both directions present exciting and meaningful research opportunities. We hope the release of Impossible Videos will inspire further exploration in these areas.
[1] The Dawn of Video Generation: Preliminary Explorations with SORA-like Models. arXiv 2024.
[2] Cosmos World Foundation Model Platform for Physical AI. arXiv 2025.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the sufficient response. I am pleased with the contribution of this work. Therefore, I will increase my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer AP4m,
Thank you so much for your positive feedback and for helping improve our paper!
Best Regards | Summary: This paper introduces the novel concept of "impossible videos" as a challenging testbed for advancing video understanding and generation models. It proposes the IPV-Text benchmark composing of many anti-reality videos for evaluating the video LLM models in the understanding task. Results in the paper reveal that although the video LLM models excel at processing real-world scenarios, they struggle with anti-reality content which need deep reasoning rather than simple memorization. Besides, it also propose a IPV-VID benchmark, which has many anti-reality text prompts. These prompts can be used to prompt the T2V models to generate the corresponding videos. Results in the paper find that today's T2V models also struggle to generate the aligned videos, highlighting their reliance on pattern matching rather than true understanding. The benchmarks and findings not only identify crucial shortcomings in existing approaches but also establish promising directions for future research aimed at developing more robust and generalizable video AI systems.
## update after rebuttal
I appreciate that the authors took the time to explain the implementation details and the metrics. These responses have addressed my concerns. I think the benchmarks proposed by the paper are promising for future research on more robust and generalized video AI systems. I will increase my score and support its acceptance.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A, no proofs in this paper.
Experimental Designs Or Analyses: Yes, the experimental designs are valid.
Supplementary Material: Yes, all of the supplementary materials.
Relation To Broader Scientific Literature: This paper proposes two benchmarks for evaluating understanding and generation models in the video domain. These benchmarks focus on impossible videos, a field previous benchmarks ignored. They are very valuable for practitioners to probe whether their models truly understand real-world laws or simply memorize the training set.
Essential References Not Discussed: This paper should discuss some other benchmarks regarding both video understanding and generation.
Other Strengths And Weaknesses: Strength:
1. The motivation of this paper is compelling and thoughtful.
2. The paper is easy to follow.
3. Results in the paper are valuable for the research community to better understand current video understanding and generation tasks.
Weakness:
1. How is the score in the Open-ended QA task calculated?
2. Calculating the Prompt Following metric requires human involvement, which is time-consuming and costly. Is there any method to calculate this metric without humans?
Other Comments Or Suggestions: No
Questions For Authors: See the weakness in the Other Strengths And Weaknesses section.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your encouraging review and valuable suggestions. We appreciate your acknowledgment of the paper’s novelty and will carefully address your concerns to improve clarity and rigor. Below are our detailed responses:
**Q1: Better to explain more clearly about how to calculate the score in the Open-ended QA task.**
We appreciate the reviewer’s suggestion and provide a more detailed explanation below.
For evaluating the Open-ended QA task, we employ an LLM-based evaluator that compares model responses against the annotated text explanations in the benchmark. However, we empirically observed that directly instructing the LLM to assign scores led to instability. To address this, we propose a **justification-then-score** approach:
1. **Justification Step:** The evaluator first analyzes key matches or mismatches between the model’s response and the ground truth, providing a textual justification.
2. **Scoring Step:** Based on this justification, the evaluator assigns a semantic alignment score on a scale from 0 to 1:
- **1.0** – Perfect alignment
- **0.8-0.9** – Good alignment
- **0.5-0.7** – Partial alignment
- **0.1-0.4** – Weak alignment
- **0.0** – No alignment
This justification step is crucial for ensuring fair and stable score assignment. In this work, we employ GPT-4o as the evaluator.
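For illustration, the two-step protocol above can be sketched in a few lines; the prompt templates and the `stub_llm` stand-in below are hypothetical, not the actual GPT-4o prompts used in the paper:

```python
# Sketch of the justification-then-score protocol. `call_llm` stands in for a
# real evaluator such as GPT-4o; here it is a deterministic stub so the
# two-step flow can be run end to end.

JUSTIFY_PROMPT = (
    "Compare the model response to the ground-truth explanation and list "
    "the key matches and mismatches.\nResponse: {response}\nGround truth: {truth}"
)
SCORE_PROMPT = (
    "Given this justification, output a single alignment score in [0, 1]:\n"
    "{justification}"
)

def justify_then_score(call_llm, response, truth):
    # Step 1: textual justification of matches/mismatches.
    justification = call_llm(JUSTIFY_PROMPT.format(response=response, truth=truth))
    # Step 2: numeric score conditioned on the justification, clamped to [0, 1].
    raw = call_llm(SCORE_PROMPT.format(justification=justification))
    score = min(max(float(raw), 0.0), 1.0)
    return justification, score

def stub_llm(prompt):
    # Deterministic stand-in used only for illustration.
    if prompt.startswith("Given this justification"):
        return "0.8"
    return "Key facts match; minor temporal details differ."

justification, score = justify_then_score(
    stub_llm, "The ball floats upward.", "The ball defies gravity by rising."
)
print(score)  # 0.8
```

Conditioning the scoring call on the written justification is what stabilizes the numeric output relative to asking for a score directly.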
In the current version of the paper, we briefly mention the use of GPT-4o in **Section 4, line 306**. We will include this detailed explanation in the revised version (or supplementary materials) to improve clarity.
We sincerely thank the reviewer for highlighting this point.
**Q2: Are there any method to calculate the Prompt Following metric without human?**
This is an insightful question, and we fully agree that an **automatic evaluation strategy** would enhance the scalability of our benchmark.
In our **main paper (Table 4)**, we report human-annotated results to provide reliable insights into the performance of current T2V models. Additionally, in **Appendix B.2**, we introduce an **automatic evaluation strategy** for impossible video generation. Specifically, the Prompt Following metric can be assessed using state-of-the-art Vision-Language Models (e.g., GPT-4o) in conjunction with a carefully designed prompting strategy. Experimental results clearly demonstrate the consistency between human evaluation and automatic evaluation, as shown in Table 6 and Figure 6 in the Appendix.
We appreciate the reviewer’s insightful question and will further emphasize this discussion in the revised version.
**Q3: This paper should discuss some other benchmark regarding both the video understanding and generation.**
We appreciate the reviewer’s comment. In the Related Work section and Table 1, we have comprehensively discussed the relationships and distinctions between IPV-Bench and existing benchmarks across three key areas: video understanding, video generation, and AIGC video detection. This comparison highlights the unique contributions of IPV-Bench and its role in bridging gaps that are not addressed by prior benchmarks.
We will ensure that this discussion is clearly emphasized in the revised version. Thank you for your valuable feedback. | Summary: The paper introduces IPV-BENCH, a novel benchmark designed to evaluate video understanding and generation models from the perspective of impossible videos. It categorizes scenarios that violate physical, biological, geographical, and social laws. The main experimental results reveal that current models have difficulty understanding and generating such videos, pointing potential developments in video models.
## update after rebuttal
I appreciate the author's response and I maintain positive.
Claims And Evidence: The claims made in the paper are largely supported by the empirical results. However, some claims regarding the models' limitations in understanding impossible videos could be enhanced by more detailed quantitative metrics. For example, the authors claim most video models fall short on impossible videos, specific metrics across categories would strengthen this statement.
Methods And Evaluation Criteria: The methods, including the construction of the IPV-BENCH benchmark and the associated taxonomy, make sense. The evaluation criteria, encompassing the Judgment, Multi Choice, and Open-ended QA tasks, effectively assess the model's capabilities in understanding impossible scenarios. The diverse sources of video data, including synthetic, real, and community-generated content, enhance the robustness of the evaluation.
Theoretical Claims: The paper does not present formal proofs for any theoretical claims but relies on empirical evaluations.
Experimental Designs Or Analyses: The experimental designs are generally thorough, with well-defined tasks and clear criteria for measuring model performance. Whereas, providing more information about the filtering criteria for selecting videos would enhance the dataset's integrity.
Supplementary Material: The supplementary material presents abundant visualizations of the impossible videos.
Relation To Broader Scientific Literature: The paper highlights the gaps in existing benchmarks that do not address impossible or counterfactual videos. It may also relate to the literature studying how humans react to impossible or counterfactual information.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The paper provides an interesting perspective of evaluating video model's ability.
The experimental results and analysis provide insights in how to develop video understanding and generation models.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and constructive suggestions. We are grateful for your recognition of our work’s novelty and will incorporate your recommendations to further strengthen the paper. Below, we address your comments in detail.
**Q1: Claims about model limitations of understanding impossible videos could be strengthened with category-specific quantitative metrics.**
We appreciate the reviewer’s suggestion.
Table 2 of the paper presents impossible video understanding metrics (Multi-choice QA and Open-ended QA) **across the four categories of IPV-Taxonomy**: Physical, Biological, Social, and Geographical. Notably, the **Physical** category proves to be the most challenging, yielding the lowest scores across both tasks.
Upon further analysis, we observed that videos in the **Physical** category often exhibit complex **impossible temporal dynamics**, requiring sophisticated temporal reasoning. In contrast, videos in other categories can be largely addressed through **world knowledge reasoning**, which aligns well with the capabilities of large language models (LLMs).
To further investigate this, **Table 3 reports an experiment** where videos are classified into two groups via human annotation:
- **Spatial** – Impossible phenomena identifiable from a single frame.
- **Temporal** – Impossible phenomena requiring cross-frame temporal reasoning.
Results indicate that models perform significantly worse on Temporal videos than on Spatial ones, highlighting temporal reasoning as a major bottleneck in understanding impossible videos.
**Q2: It is better to provide more information about the video filtering criteria.**
We appreciate the reviewer’s valuable suggestion. The goal of video filtering is to ensure that the selected videos: 1) Maintain high visual quality; 2) Clearly demonstrate impossible phenomena.
**Visual Quality Criteria:**
- **Accepted:** Clear, sharp videos with high aesthetic value and smooth temporal motion.
- **Rejected:**
- Videos with jitter, flicker, blur, large-scale artifacts, or indistinct/distorted foreground objects.
- Completely static videos with no visible changes.
- Videos that lack logical coherence and appear visually chaotic.
**Impossible Semantics Criteria:**
- The video must clearly depict an impossible, counterfactual phenomenon that cannot occur in the real world. The impossibility should be a salient event, rather than minor visual details that are difficult to perceive.
- The video should be in a photo-realistic style. Non-realistic styles (e.g., cartoon-style videos) are excluded to avoid confusion in video understanding.
We will include these detailed filtering criteria in the revised paper.
Thank you again for your insightful review. Your feedback is helpful on enhancing the rigor and clarity of our work. We are happy to address any further questions. | Summary: The paper introduces IPV-BENCH, a benchmark for evaluating video understanding and generation models using "impossible videos". It includes a taxonomy, a prompt suite (IPV-TXT), and a video dataset (IPV-VID). Evaluations reveal limitations in current models, highlighting the need for improved reasoning and generalization in non-real-world scenarios.
Claims And Evidence: The claims in the submission are not fully supported by clear and convincing evidence. The paper lacks a detailed release of code and benchmark datasets, which are crucial for reproducibility and validation. Additionally, focusing on "impossible videos" is niche and may not attract widespread adoption, limiting the benchmark's impact. These issues weaken the overall credibility and practical utility of the claims.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria, including the IPV-BENCH benchmark, are relevant for assessing video models on impossible videos.
Theoretical Claims: The paper does not present any theoretical claims or proofs that require verification. It focuses on empirical evaluation and benchmarking of video understanding and generation models.
Experimental Designs Or Analyses: The paper lacks detailed experimental designs and analyses, particularly in the evaluation of video generation models. The IPV-Score metric is introduced but not thoroughly explained or validated. Additionally, the absence of released code and datasets undermines the reproducibility and soundness of the experiments.
Supplementary Material: Yes. However, the supplementary material contains no useful information.
Relation To Broader Scientific Literature: No.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. Introduces a novel concept of "impossible videos" to challenge video models.
2. Constructs a comprehensive taxonomy and benchmark (IPV-BENCH) for evaluation.
Weaknesses:
1. Lacks release of code and benchmark datasets, limiting reproducibility.
2. Focuses on non-mainstream scenarios, potentially reducing broad interest and adoption.
Other Comments Or Suggestions: 1. Provide detailed supplementary materials, including code and benchmark datasets, to enhance reproducibility.
2. Highlight practical applications of impossible videos to justify their significance in the research community.
Questions For Authors: How do you plan to encourage broader adoption of IPV-BENCH as a standard benchmark in the video understanding and generation community?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback. We greatly appreciate your insights on our paper. Below, we outline our responses to each point.
**Q1: Reproducibility: Code/Dataset Release**
We appreciate the reviewer’s concern regarding reproducibility. Upon acceptance, we will publicly release all **code and data**, including the IPV-Bench taxonomy, IPV-TXT prompts, IPV-VID videos, and evaluation protocols. Additionally, we will provide a detailed **Data Usage Instruction** in the Appendix to enhance accessibility. We believe that this comprehensive release will maximize the impact of Impossible Videos and foster further research in this area.
**Q2: Significance and Applications of Impossible Videos.**
We appreciate the reviewer’s valuable suggestion, as it will further enhance the impact of Impossible Videos.
**Significance:**
- As highlighted in our paper, this benchmark fills a critical gap in evaluating counterfactual video generation and understanding—an area currently absent in the community. This importance has also been **recognized by Reviewers F9mi and AP4m**.
- Evaluating **AI robustness and generalization** in out-of-distribution scenarios is a well-established challenge, particularly for autonomous systems encountering rare events. As noted by **Reviewers F9mi and JPJB**, "Impossible Videos" serves as a **robustness and generalization benchmark** for video understanding and generation models, addressing an overlooked yet crucial aspect of AI evaluation.
- As **Reviewers F9mi and AP4m** acknowledged, our work is closely related to broader topics in **counterfactual and causal reasoning in AI**. "Impossible Videos" provides a valuable case study for counterfactual reasoning in the video domain, encompassing both video understanding and generation.
**Applications:**
- "Impossible Videos" can be leveraged to enhance video understanding and generation models by improving their robustness and generalization capabilities.
- Real-world applications include:
- **Creative industries** – Enhancing special effects, game design, advertising, and filmmaking, etc.
- **Industrial safety** – Assisting in anomaly detection and risk assessment.
- **Advanced AI assistants** – Equipping AI systems with stronger reasoning capabilities for more intelligent decision-making.
**Q3: Experimental Details: IPV-Score and Evaluation**
We appreciate the reviewer’s feedback on this issue.
The computation of the IPV-Score is based on the statistics of **Visual Quality** and **Prompt Following** and is defined as:
$$
\text{IPV-Score}=\frac{\big|\{\text{High Visual Quality}\}\cap\{\text{Good Prompt Following}\}\big|}{\text{Num. of All Videos}}
$$
This metric intuitively aligns with our design philosophy: it measures the percentage of videos that satisfy both high visual quality and strong prompt adherence.
To improve clarity, we will include this equation along with a detailed textual explanation in the revised paper.
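As a sketch of this definition, the metric reduces to a fraction over per-video boolean labels; the function and variable names below are illustrative, not from the paper:

```python
def ipv_score(quality_ok, prompt_ok):
    """Fraction of videos that satisfy BOTH criteria.

    quality_ok / prompt_ok: per-video boolean labels for high visual quality
    and good prompt following (illustrative names).
    """
    assert len(quality_ok) == len(prompt_ok)
    both = sum(q and p for q, p in zip(quality_ok, prompt_ok))
    return both / len(quality_ok)

# 4 videos: only the 1st and 4th satisfy both criteria.
print(ipv_score([True, True, False, True], [True, False, False, True]))  # 0.5
```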
**Q4: How to Encourage Broader Adoption?**
We appreciate this constructive question and will take the following steps to encourage broader adoption:
- **Public Release** – Make all data and evaluation code openly available to enhance accessibility.
- **Comprehensive Documentation** – Provide detailed instructions for data usage to facilitate easy adoption.
- **Community Engagement** – Organize competitions and workshops on Impossible Videos to attract more researchers to this domain.
- **Integration with Existing Toolkits** – Collaborate with established toolkits to incorporate IPV-Bench into widely used evaluation suites.
- **Ongoing Benchmark Maintenance** – Maintain a leaderboard and regularly update the benchmark to ensure its relevance.
We sincerely hope that these efforts will inspire further research and drive innovation in the video understanding and generation community. | null | null | null | null | null | null |
Understanding the Statistical Accuracy-Communication Trade-off in Personalized Federated Learning with Minimax Guarantees | Accept (poster) | Summary: In this paper, the authors study a personalized FL objective and showed its statistical accuracy under strongly convex, smooth model. The authors then propose a new algorithm to solve the problem. Empirical results show that as the personalization level changes, the model is able to interpolate between pure local training and pure global training.
Claims And Evidence: - Problem 2 is different from the objective studied in Hanzley & Richtarik et al. 2020. The latter proposed a mean-regularized objective, where Problem 2 is a global-regularized objective. The proposed objective is more similar to what is proposed in Li et al. 2021 [1].
- In Section 5, it is incorrect to claim "Our work is the first to quantitatively characterize how changing the personalization degree leads to the trade-off between communication cost and statistical accuracy." See section below for details.
[1] Li, T., Hu, S., Beirami, A., & Smith, V. (2021, July). Ditto: Fair and robust federated learning through personalization. In International conference on machine learning (pp. 6357-6368). PMLR.
Methods And Evaluation Criteria: - The proposed FedCLUP method seems to be exactly the same as the method is Li et al. 2021 [1]. I just read through that paper more carefully. It seems [1] has already performed convergence analysis and showed how personalization degree leads to tradeoff between communication and utility. Hence, the novelty and contribution of the proposed work is over claimed. In that case, I wonder whether the contribution is mainly an extended theoretical analysis of this prior work?
[1] Li, T., Hu, S., Beirami, A., & Smith, V. (2021, July). Ditto: Fair and robust federated learning through personalization. In International conference on machine learning (pp. 6357-6368). PMLR.
Theoretical Claims: - I did not check all the steps of the proof. But the convergence rate seems correct.
Experimental Designs Or Analyses: - Compared to the results in Li et al. 2021, I'm wondering whether the authors also observe the existence of an optimal lambda where the personalized model outperforms both pure global and pure local training. Currently, it looks like the evaluation does not outperform max{local, global}.
- Limited evaluation, the authors should perform experiments in settings with 1. varying level of heterogeneity, 2. datasets with natural partitions, such as CelebA, Sent140, etc.
Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: As mentioned earlier, the key contribution of this work has a non trivial overlap with Li et al 2021, in terms of the method and findings. Please see previous sections for a detailed discussions.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and constructive comments. Below, we provide our detailed responses to each point. We hope these clarifications help address your concerns.
**Comparison with Ditto [1]**
Our work is significantly different from Ditto (Li et al. 2021 [1]) in multiple aspects and is far from being a simple extension of prior work.
***Different objective*** The formulation considered in Ditto [1] is given by
$$\min _ {\\{w^{(i)}\\} _ {i = 1}^m } f _ i(w^{(i)}) + \frac{\lambda}{2} \\|w^{(i)} - w^{(g)}\\|^2, \quad \text{s.t.} \quad w^{(g)} \in \operatorname{argmin} _ {w} G(f_1(w), \ldots, f _ m(w)).$$
Ditto first solves the lower-level problem to obtain $w^{(g)}$, then uses it in the upper-level problem to solve for each $w^{(i)}$. This differs from our formulation, where the local models $w^{(i)}$ and the global model $w^{(g)}$ are optimized jointly. Accordingly, we adopt a completely different set of techniques for analyzing the problem.
***Different Trade-off*** More importantly, although Ditto also investigates a trade-off phenomenon, they study a different trade-off compared with ours. They primarily investigate the effect of personalization to balance the trade-off between robustness to adversarial clients and the generalization benefits gained from collaboration. As such, the trade-off they study focuses on the strategy to minimize the statistical error of the final solution, and does not explicitly incorporate optimization error. In contrast, our work investigates the trade-off between statistical accuracy and communication efficiency, aiming to optimize the total error, which includes both statistical error and optimization error.
While Ditto briefly mentions a communication-utility trade-off in Section 3.2, it is only proposed as a potential direction for future analysis. While they provide some experimental results in their paper that illustrate this phenomenon, the trade-off between communication and statistical accuracy hasn't been *quantitatively analyzed* in the paper. In contrast, our work is specifically centered on the theoretical understanding of this trade-off. We provide a comprehensive theoretical analysis that explicitly characterizes how both statistical accuracy and communication complexity depend on the personalization parameter $\lambda$. As a unique contribution, our theory gives a principled understanding of how to select $\lambda$ to optimize the overall performance in federated settings (see our response to the next question for more details).
***Different Theoretical Results*** Additionally, we note that the theoretical analysis in Ditto is restricted to a simplified linear regression setting, as acknowledged in their paper under a "simplified set of attacks and problem settings". In contrast, we provide a tight theoretical analysis for a general function class $f$. Our statistical error bound is proved to be minimax optimal and our optimization convergence is shown to be linear. Establishing both convergence and generalization bounds in this more general setting is technically non-trivial and constitutes a key contribution of our work.
**Observing the existence scenarios that FedCLUP outperforms GlobalTrain and LocalTrain**
Such scenarios do exist. As shown in the left panel of [Figure](https://postimg.cc/Dm6w5vdw), FedCLUP achieves a lower total error than both LocalTrain and GlobalTrain between 5 and 40 communication rounds. Moreover, to reach a given target total error, there exists a unique optimal $\lambda$ that minimizes the required number of communication rounds. Based on this, we propose a dynamic tuning strategy to approximate the optimal $\lambda$, as demonstrated in the right panel of the same figure. Please refer to our response to Reviewer Krd5 for further discussion.
**Perform experiments with varying levels of heterogeneity**
We indeed consider different levels of heterogeneity in Appendix D.2. Table 4 sets the data heterogeneity by varying the number of classes for the clients' data.
**Perform experiment with natural partitions**
Thanks for the reviewer's suggestion. We add more experiments to study the CelebA and Sent140 datasets. The experiment details follow [1] and [2] respectively. As an example, in [Figure](https://postimg.cc/H8r5drTd), we observe similar trends as in Figure 1 and Table 1. We will cover more results, elaborate on these findings, and include them in the final version of the paper.
[1] Li, T., Hu, S., Beirami, A., \& Smith, V. (2021, July). Ditto: Fair and robust federated learning through personalization. In International conference on machine learning (pp. 6357-6368). PMLR.
[2] Duan M, Liu D, Ji X, et al. Fedgroup: Efficient federated learning via decomposed similarity-based clustering. IEEE SustainCom, 2021: 228-237.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response.
- *Different objective*: Thanks for pointing this out. I agree the objective is slightly different. However, looking into Li et al. 2021, I thought their algorithm is essentially also doing joint optimization? I also checked algorithm 1 in Appendix B and found that the solver is indeed different. I think a more proper way to compare the two works is to say that the proposed work is using a different solver than prior work.
- *Different objective*: The update in Algorithm 1 is a bit hard to understand. What is the intuition behind setting the model update for the global model as the gradient of the regularization term? Plugging the second-to-last line into the last line, the update basically scales the global model by $(1-\gamma\lambda)$ and then subtracts a linear combination of the local models. What's the motivation that this update finds the best global model for the objective?
- *Different Trade-off*: Thanks for the clarification.
- *Additional experiments*: I appreciate the authors for performing new experiments. For those plots, I would love to see where the global, local, other personalized FL methods stand.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for acknowledging our previous response and for the follow-up questions. Below we provide clarifications accordingly, and hope they address your concerns.
**Difference in Algorithm Between Ditto and Our Work**
We would like to reiterate that the key difference lies in the difference in the trade-off being studied. Next, we discuss the difference in terms of algorithm:
$\bullet$ ***Ditto***: Although Ditto updates both local and global models within a single communication round, this is essentially *a merging of two sequential steps*. Conceptually, their objective does not involve an explicit coupling between local and global models; therefore, it can be solved *sequentially*: one can first solve for the global model (e.g., via FedAvg), and then solve each local model individually. The objective investigated by Ditto itself does not require a joint optimization approach.
$\bullet$ ***Our Work***: In contrast, our objective, which is fundamentally different from theirs, induces a *natural coupling* between the local and global models, which necessitates a joint optimization approach. This coupled structure is central to both our algorithmic design and theoretical analysis. As discussed in our main text and the previous response, this leads to a different algorithmic structure and analysis techniques.
**Understanding of the Model Update**
We thank the reviewer for raising this point. Due to space constraints, we refer to Section 4.2 of the main paper for full details and summarize the key idea here.
Problem (2) can be equivalently reformulated into the following bilevel form (Equation 10)
\begin{align*}
& \min \_{w^{(g)}} F(w^{(g)}):=\frac{1}{m} \sum\_{i=1}^m F\_i(w^{(g)}), \\
& \text { where } \quad F\_i(w^{(g)}):=\min \_{w^{(i)}} h\_i(w^{(i)}, w^{(g)}) .
\end{align*}
Thus the global and local models can be solved iteratively. For the local model, given the current global model, one can apply gradient descent directly on $h_i(\cdot)$ (Equation 11). Similarly, for the global model, as Lemma 2 states, we can also apply gradient descent on $F_i$, with the gradient given by $\nabla F_i(w^{(g)}) = \lambda(w^{(g)}-w_{\star}^{(i)}(w^{(g)}))$. Therefore, the global gradient indeed involves the regularization term, which arises naturally from the objective.
Now further notice that $\nabla F_i(w^{(g)})$ involves $w_{\star}^{(i)}(w^{(g)})$, the minimizer of the inner loop problem (defined in (11)), which is not directly available to use. Therefore, a natural approximation for $w_{\star}^{(i)}(w^{(g)})$ is the local model output in FedCLUP, yielding the approximating gradient $\hat{\nabla} F_i(w^{(g)})$. This is precisely the expression used in the second-to-last line of Algorithm 1. This explains why the local models are involved when updating the global model. Since our theoretical analysis considers a convex setting, applying gradient descent ensures convergence to the minimizer of the global model. We hope this clarifies the intuition and addresses the reviewer’s question.
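To make the two-level update concrete, here is a minimal numpy sketch of one communication round following the reasoning above: each client runs a few inner gradient steps on $h_i$ starting from the current global model, and the server then steps along the averaged approximated gradient $\lambda(w^{(g)} - \hat{w}^{(i)})$. The quadratic local losses and all step sizes are illustrative assumptions, not from the paper:

```python
import numpy as np

def fedclup_round(w_g, local_grads, lam, eta, gamma, inner_steps=10):
    """One communication round: inner GD on h_i(w, w_g) = f_i(w) + lam/2 ||w - w_g||^2,
    then a global step along the averaged approximated gradient lam * (w_g - w_i)."""
    w_locals = []
    for grad_f in local_grads:
        w = w_g.copy()
        for _ in range(inner_steps):
            w -= eta * (grad_f(w) + lam * (w - w_g))  # gradient of h_i in w
        w_locals.append(w)
    g_hat = lam * (w_g - np.mean(w_locals, axis=0))   # approximates grad F(w_g)
    return w_g - gamma * g_hat, w_locals

# Illustrative quadratic clients f_i(w) = 1/2 ||w - c_i||^2, so grad f_i(w) = w - c_i.
centers = [np.array([1.0]), np.array([-1.0])]
grads = [lambda w, c=c: w - c for c in centers]

w_g = np.array([5.0])
for _ in range(100):
    w_g, _ = fedclup_round(w_g, grads, lam=1.0, eta=0.1, gamma=0.5)
print(w_g)  # converges toward the average of the centers, here ~0
```

Substituting the inner-loop output into the global step reproduces the $(1-\gamma\lambda)w^{(g)} + \gamma\lambda\,\mathrm{mean}(\hat{w}^{(i)})$ form the reviewer observed.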
**Additional Experiments**
Upon the reviewer's request, we further conducted experiments of GlobalTrain, LocalTrain, and an additional PFL method, pFedMe on these two new datasets. As [Figure](https://postimg.cc/23ffpjMh) 6(a) and 7(a) show, the results are consistent with our findings in Figure 1 of the manuscript and the analysis presented in our previous response [Figure](https://postimg.cc/Dm6w5vdw). Similarly, in [Figure](https://postimg.cc/23ffpjMh) 6(b) and 7(b) we observe similar trends for another PFL method, pFedMe, further supporting the observed trade-offs. We will expand on these findings and incorporate the results into the final version of the paper.
We hope our responses have clarified the reviewer’s questions. We sincerely appreciate the thoughtful feedback and hope it supports a positive re-evaluation of our submission. | Summary: This paper studies the trade-off of accuracy and communication in personalized federated learning and presents the theoretical analysis of the effects of personalization degree. The theoretical findings are validated on synthetic and real-world datasets.
Claims And Evidence: The claims are supported by theoretical analysis and experimental validations.
Methods And Evaluation Criteria: The proposed method and evaluation make sense.
Theoretical Claims: The theoretical claims and proofs seem to be correct.
Experimental Designs Or Analyses: The experimental design and analysis are sound in general.
Supplementary Material: The supplementary material provides more details and looks good.
Relation To Broader Scientific Literature: This paper contributes to the general federated learning community.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: **Strength**
- The analysis of the trade-off between personalization and communication is useful for the federated learning community.
- The findings are well supported by experimental results.
**Weakness**
- The practical implication on the choice of \lambda is not clear from the corollary 1. It would be helpful to further discuss the potential solution of \lambda.
- The definitions of high, medium, and low personalization degrees are not clear.
Other Comments Or Suggestions: NA
Questions For Authors: Please see the weakness part.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and constructive comments. Below, we provide our detailed responses to each point. We hope these clarifications help address your concerns.
**The practical implication on the choice of $\lambda$**
For the practical implication of personalization in our experiments, we include an illustrative example on synthetic data (similar to that in Figure 1a of the manuscript) that demonstrates how the personalization degree, controlled by $\lambda$, influences convergence behavior and total error. Specifically, we compare several settings: (1) local training, corresponding to the limiting case of FedCLUP as $\lambda \to 0$, (2) global training, corresponding to FedCLUP as $\lambda \to \infty$, and (3) FedCLUP with three intermediate values of $\lambda$.
As Figure 1 (See detail in [Figure](https://postimg.cc/Dm6w5vdw)) shows, different values of $\lambda$ yield distinct convergence speeds and final total errors. In particular, a smaller $\lambda$ (i.e., stronger personalization) results in faster convergence in terms of communication rounds due to the algorithm relying less on communication among clients. However, this comes at the cost of a larger final total error, as reduced collaboration across clients limits the statistical generalization. Conversely, larger $\lambda$ (i.e., weaker personalization or more collaboration) leads to slower convergence but consistently achieves a lower final total error due to the benefit of collective learning.
As a direct implication, as shown in the left part of [Figure](https://postimg.cc/Dm6w5vdw), when aiming to reach a target total error, there exists a unique optimal $\lambda$ that minimizes the total number of communication rounds needed to achieve that error, compared with FedCLUP under other personalization degrees or GlobalTrain. This insight provides an important practical guideline derived from the theory established in Corollary 3.
**The potential solution of $\lambda$ in practice**
For the potential solution in practice, we propose a heuristic dynamic tuning strategy for practical guidance to approximate the optimal $\lambda$. As shown in the right part of [Figure](https://postimg.cc/Dm6w5vdw), the idea is to begin with a small $\lambda$ (i.e., close to LocalTrain) to leverage its communication efficiency. As soon as the validation performance plateaus or the statistical error stops improving significantly, we gradually increase $\lambda$. This allows the model to benefit from enhanced generalization through increased collaboration, at the expense of slightly higher communication cost. By progressively adjusting $\lambda$ in this way, one can finally identify the optimal personalization degree that balances the trade-off and minimizes the total communication cost required to meet a desired performance threshold.
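The heuristic above can be summarized as a simple schedule; the following sketch uses a hypothetical validation-error callback and illustrative thresholds, not the actual experimental setup:

```python
def dynamic_lambda_schedule(val_error, lam0=0.01, factor=2.0, tol=1e-3,
                            patience=3, max_rounds=100, max_lam=100.0):
    """Start near LocalTrain (small lam); whenever validation error plateaus
    for `patience` rounds, increase lam to trade communication for accuracy."""
    lam, best, stall, history = lam0, float("inf"), 0, []
    for t in range(max_rounds):
        err = val_error(lam, t)        # stand-in for one round of training + validation
        history.append((t, lam, err))
        if best - err > tol:           # still improving at this personalization level
            best, stall = err, 0
        else:
            stall += 1
            if stall >= patience and lam < max_lam:
                lam, stall = min(lam * factor, max_lam), 0  # more collaboration
    return history

# Toy error model: more collaboration (larger lam) lowers the achievable error floor.
def toy_error(lam, t):
    return max(1.0 - 0.1 * t, 1.0 / (1.0 + lam))

history = dynamic_lambda_schedule(toy_error)
print(history[0][1], history[-1][1])  # lam grows from lam0 as plateaus are detected
```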
For broader impact, the dynamic strategy could also be applied to other regularization-based personalized federated learning algorithms as tuning guidance, substantially reducing the communication cost required to achieve a target performance.
**The definition of personalization degrees**
In Appendix D.1 (Additional Experiment Details), for each experiment, a small $\lambda$ value represents a high personalization degree, a medium $\lambda$ represents a medium personalization degree, and a large $\lambda$ represents a low personalization degree. The specific choices of $\lambda$ are given in Appendix D.1.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and clarifications! I don’t have any further questions.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Krd5,
Thank you for your positive feedback and valuable suggestions. During the period of rebuttal, we have conducted more experiments as shown in [Figure](https://postimg.cc/23ffpjMh), where we used natural partition datasets with more complex models to verify our findings in [Figure](https://postimg.cc/Dm6w5vdw), giving practical guidance in selecting the personalization degree.
Once again, thank you for your review and positive evaluation, and we hope our response provides additional clarity and encourages a further positive assessment of our work.
Best regards,
The Authors | Summary: The authors theoretically address the accuracy-communication trade-off in personalized federated learning (FL). In other words, $\lambda$, which controls the regularization between global and local models, represents the accuracy-communication trade-off, and an analysis of this is conducted.
Claims And Evidence: - It is difficult to follow. While the theoretical explanation is well-structured, it would be helpful to summarize the key takeaways more clearly. The main claim is not entirely clear. Please clarify the distinct insights that this theoretical analysis offers beyond the broadly understood existing knowledge.
Methods And Evaluation Criteria: - I am unsure how Algorithm 1 differs from the existing algorithms. Additionally, is it correct that Algorithm 2 differs from Algorithm 1 only in terms of batch consideration?
Theoretical Claims: Yes.
Experimental Designs Or Analyses: - The models and datasets used in the experiment are all too outdated.
- The performance is lower compared to other algorithms used for comparison, raising concerns about its practicality.
Supplementary Material: Yes, only for algorithms.
Relation To Broader Scientific Literature: This study aims to theoretically analyze the trade-off between communication and accuracy, which is the fundamental aspect of FL.
Essential References Not Discussed: None
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: - While the theoretical aspects are important, I prioritize the practical aspects more. I wanted to clarify that this evaluation was made with that perspective in mind.
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and constructive comments. Below, we provide our detailed responses to each point. We hope these clarifications help address your concerns.
**The summarized key takeaway as our main claim**
From a statistical perspective (Section 4.1), we provide a tight generalization bound for Problem 2 under standard assumptions. Our analysis quantifies how the degree of personalization $\lambda$ affects the statistical accuracy. We show that when collaborative learning is beneficial, increasing personalization reduces collaboration across clients, which may degrade the statistical accuracy of the problem solution.
From an algorithmic perspective (Section 4.2), we analyze the convergence of FedCLUP. Our results characterize how $\lambda$ influences the number of communication rounds required for convergence. Increasing personalization implies less dependence on server-client communication, which reduces communication overhead and thus improves communication efficiency.
Combining these two parts, as summarized in Section 5, demonstrates a fundamental trade-off: higher personalization may improve communication efficiency but at the cost of statistical accuracy. This claim leads to a key practical insight: to achieve a specific target error, there exists an optimal choice of personalization degree that minimizes the total communication cost required to reach the desired performance. The claim is verified in Figure 1 (see [Figure](https://postimg.cc/Dm6w5vdw) and the response to reviewer Krd5 for a detailed description).
**Unsure how Algorithm 1 differs from the existing algorithms**
The existing algorithms solving Problem 2 are listed in Table 1, where most of the existing works either don't explicitly analyze the local convergence cost, like FedProx and pFedMe, or fail to demonstrate such a trade-off. For the designed algorithm, we quantitatively characterize how changing the personalization degree leads to the trade-off between communication cost and statistical accuracy. In contrast, most prior studies either do not explicitly analyze statistical convergence or fail to provide a fine-grained analysis linking optimization and statistical error, therefore lacking a theoretical guarantee for practical guidance.
**The Difference between Algorithm 1 and Algorithm 2**
Algorithm 2 is the stochastic version of Algorithm 1, accounting for the scenarios where the sample size is large and computing the full gradient is impractical. We also extended our analysis to this setting, accounting for the injection of stochastic noise into the algorithm.
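To make the relationship concrete, here is a minimal, hypothetical sketch (not the authors' code) of a local update for the regularized objective $f_i(w) + \frac{\lambda}{2}\|w - w^{(g)}\|^2$ of Problem 2. The only difference between the two variants is the gradient oracle: a full gradient of $f_i$ gives the deterministic version, a minibatch stochastic gradient gives the stochastic one.

```python
def local_update(w, w_global, grad_oracle, lam, lr, steps):
    """Run `steps` gradient steps on f_i(w) + (lam/2) * (w - w_global)^2.
    `grad_oracle(w)` returns either the full gradient of f_i (deterministic
    variant) or a minibatch stochastic gradient (stochastic variant)."""
    for _ in range(steps):
        w -= lr * (grad_oracle(w) + lam * (w - w_global))
    return w

# Toy 1-D quadratic f_i(w) = 0.5 * (w - 2)^2, so grad f_i(w) = w - 2.
# The regularized minimizer is (2 + lam * w_global) / (1 + lam) = 1 here.
w_hat = local_update(w=0.0, w_global=0.0, grad_oracle=lambda w: w - 2.0,
                     lam=1.0, lr=0.1, steps=200)
```

The proximal term $\lambda (w - w^{(g)})$ pulls the local iterate toward the global model; $\lambda = 0$ recovers pure local training.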
**The models and datasets are all too outdated**
Thank you for the suggestion. We have extended our experiments to include more recent datasets and models such as CelebA (image classification) and Sent140 (NLP). We also experiment with more diverse models from CNN to LSTM. Some preliminary results are available [here](https://postimg.cc/H8r5drTd), showing similar trends as in Figure 1 and Table 1. We will add more details on the experiments, elaborate on these findings, and include them in the final version of the paper.
**The performance is lower compared to other algorithms used for comparison, raising concerns about its practicality**
We would like to clarify that the primary goal of our experiments is not to demonstrate that the proposed algorithm statistically outperforms existing methods. Instead, our experiments are designed to validate the theoretical insights and provide practical tuning guidance. To this end, the experiments mainly focus on evaluating one algorithm across different levels of personalization. This design enables a clean and focused investigation of the trade-offs described in Corollary 3.
Beyond validating the trade-off, our analysis does provide practical guidance. First, based on the trade-off, there exists an optimal $\lambda$ for reaching the target error with minimal communication cost (see Figure 1 or the detailed description in the left part of [Figure](https://postimg.cc/Dm6w5vdw)). For practicality, when the optimal $\lambda$ is unknown, we provide tuning guidance (see the right part of [Figure](https://postimg.cc/Dm6w5vdw)): as soon as the validation performance plateaus or the statistical error stops improving significantly, we gradually increase $\lambda$. This dynamic strategy approximates the optimal $\lambda$, achieving the target error with minimal communication cost (see the response under reviewer Krd5 for details).
The paper studies in particular the trade-off between exploiting more shared knowledge (increasing the communication cost) and relying more on local data.
Claims And Evidence: The theorems are supported by proofs in the appendix.
The proofs are rather hard to follow
The paper contains experiments, which are illustrative and compare different levels of personalization, but they don't really support the claims and don't compare to existing methods. Table 2 does some comparison with pFedMe but does not seem to show significant differences.
Methods And Evaluation Criteria: The authors claim to investigate the communication cost, but in reality only study the number of iterations. The communication cost is assumed to be linear in the number of rounds, but this ignores approaches such as FedAvg and approaches which send only compressed gradients in every round.
The theoretical results are compared to other work in Table 1 but as the several results use different variables the comparison is difficult and it is unclear what we can conclude.
It would be interesting to see experiments that show how and when the results of the current paper are significantly better than existing work.
Theoretical Claims: I went through most proofs but they are hard to follow and confirming their correctness would require a huge amount of time
Experimental Designs Or Analyses: The experiments don't indicate which conclusions are statistically significant.
The experiments are illustrative, not aimed at proving claims, hence it is not critical that their design is sound.
Supplementary Material: I tried to read most of the supplementary material. A major part consists mainly of calculations not written in a way that is didactic to the reader.
Relation To Broader Scientific Literature: The work ignores alternative optimization algorithms and other strategies which may affect the communication cost per round. The paper analyzes a single (simple) algorithm.
The paper compares with earlier work analyzing convergence speed, but as said, Table 1 is not easy to interpret.
Essential References Not Discussed: --
Other Strengths And Weaknesses: My main concern is the significance of the work.
Other Comments Or Suggestions: * 115R : solution of i-th local model -> solution of the i-th local model
* Eq (5) the variable i in $\sum_{i=1}^m$ shadows the different variable i used among others in the preceding term $w_*^{(i)}$. Please use another index, e.g., j, instead
* 282R : a higher degree of collaboration will also results -> result
Questions For Authors: --
Ethical Review Concerns: --
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and constructive comments. Below, we provide our detailed responses to each point.
**The Structure of the Theoretical Proof**
We establish the statistical convergence rate (Theorem 1) in Section A.3. Specifically, Section A.3.1 derives the statistical rate for the global model $\tilde{w}^{(g)}$, and Section A.3.2 builds upon this result to establish the statistical rate for each local model $\tilde{w}^{(i)}$. We then derive the optimization convergence rate (Theorem 2) in Section B.2. The analysis proceeds in stages: we first establish the convergence rate for the global model in Section B.3, followed by the local convergence rate in Section B.5. We will further refine and reorganize the proof structure later.
**Significance of Experiment, Validation of Theoretical Claims, and Comparison with Existing Algorithms**
The primary goal of our experiments is *not* to demonstrate that the proposed algorithm statistically outperforms existing methods. Instead, our experiments are designed to validate the theoretical insights. To this end, the experiments focus on evaluating *one algorithm across different levels of personalization*, rather than making direct comparisons across algorithms. This design enables a clean and focused investigation of the trade-offs described in Theorems 1 and 2 and Corollary 3:
$\bullet$ Statistical Accuracy: As shown in Table 2 and Table 4 (Appendix), when collaborative learning is beneficial, decreasing the personalization degree improves generalization performance. This observation directly supports the theoretical result in Theorem 1.
$\bullet$ Communication Efficiency: Figure 1 and Figures 3, 4, and 5 (Appendix) show that decreasing the personalization degree leads to slower convergence in terms of communication rounds. More communication is required to achieve the same level of optimization error, which is consistent with the theoretical analysis in Theorem 2 and Corollary 2.
$\bullet$ Trade-off: the trade-off characterized in Corollary 3 is verified: increasing the personalization degree reduces communication cost but may hurt statistical accuracy.
For practical implications, there exists an optimal choice of $\lambda$ that uses minimal communication rounds to achieve a given target total error (see Figure 1 or the left part of [Figure 3](https://postimg.cc/Dm6w5vdw)). This provides tuning guidance for personalization to exploit the trade-off. As the right part of [Figure 3](https://postimg.cc/Dm6w5vdw) shows, as soon as the validation performance plateaus or the statistical error stops improving significantly, we gradually increase $\lambda$. The dynamic strategy approximates the optimal $\lambda$ and meets a desired error threshold with minimal communication rounds. Due to space constraints, we kindly refer the reviewer to our response to Reviewer Krd5 for a more detailed discussion.
**The communication cost and approaches with compressed gradients**
We define the communication cost as the total number of communication rounds. The vanilla FedAvg method, as far as we know, also measures the communication cost using this metric[1]. Compression schemes such as top-K pruning and quantization, on the other hand, are extra techniques one can integrate and apply to either the model or the gradient transmitted. This line of work is orthogonal to our study.
**Compare to other work in Table 1, but use different variables. So the comparison is difficult, and it's unclear for the conclusion**
The key takeaway from Table 1 is that most prior works omit statistical analysis, whereas we fill this gap by deriving a minimax-optimal statistical rate. Our communication cost is also nearly optimal (up to logarithmic factors [2]) and has a clear, interpretable dependence on the personalization parameter. This makes our work the first to quantitatively characterize the communication–accuracy trade-off. Indeed, different works adopt different notations and assumptions, making direct alignment difficult—we will add clarifications later.
**Focuses on a basic algorithm, omitting others that might influence communication costs per round**
The primary goal of this work is to provably show the trade-off and provide tuning guidance. The algorithm we propose is intentionally designed as a clean, simple, and principled instantiation for solving Problem 2. We have derived a minimax-optimal statistical bound and a sharp linear optimization convergence bound to clearly disentangle the role of the personalization degree in the trade-off.
**Typo error**
Thank you for pointing this out. We will carefully proofread the manuscript and correct all typos in the revised version.
We hope our responses address your concerns.
[1] Koloskova A, Loizou N, Boreiri S, et al. A Unified Theory of Decentralized SGD with Changing Topology and Local Updates[J].
[2] Hanzely F, Hanzely S, Horvath S, et al. Lower Bounds and Optimal Algorithms for Personalized Federated Learning[J]. 2020. | Summary: This paper proposed a personalized federated learning algorithm that captures the relationship between communication cost and the degree of personalization. Convergence theories are derived, showing that the total required gradient steps are irrelevant to the personalization degree $\lambda$, while the scaling of the communication cost w.r.t $\lambda$ is related to the conditional number $\kappa $. Numerical results further showed the convergence speed of their method under different personalization degree.
Claims And Evidence: Yes, the claims and results are all backed up with proofs or evidence.
Methods And Evaluation Criteria: Yes, the choice of the datasets makes sense, and the additional data heterogeneity experiments in the appendix showed the difference between various personalization settings.
Theoretical Claims: I didn't find the definition of $w_*$'s parameter space quite convincing. The parameter space is a ball built around the weighted average of all local ground-truth models. However, if the goal of this space is to capture the heterogeneity, shouldn't the ball measure the Euclidean distance to the global ground truth (the optimal model of the sum of all local models) instead of to the weighted average of the local ground truths? A further explanation of why such a measure is chosen to evaluate the statistical error would make the result more persuasive.
Experimental Designs Or Analyses: The theoretical results in this work are under strongly-convex settings, hence some experiments under strongly convex settings are expected, with discussion related to the condition number $\kappa$. However, it seems the experiments are all under weakly-convex/non-convex settings, hence it would be nice if additional discussion on strongly-convex settings (synthetic data should be enough) could be added.
Supplementary Material: The proof in Appendix A and B looks solid, and the additional experiments in Appendix D showed that their method can also be generalized onto other regularization-based PFL methods.
Relation To Broader Scientific Literature: The work provides a new algorithm and theoretical insight for personalized FL to trade-off between communication cost and personalization level.
Essential References Not Discussed: Based on my knowledge, the paper provides a reasonable overview of prior work relevant to its key contributions. I am not currently aware of any essential references that are missing.
Other Strengths And Weaknesses: The rate they derived assumes strong convexity. However, it is unclear whether the accuracy-communication trade-off they discover still holds in weakly-convex/non-convex cases, where a possibly non-unique set of optimal/stationary solutions exists. Further analysis would provide better insight for more real-world optimization problems.
Other Comments Or Suggestions: I am not currently aware of any typos or minor mistakes.
Questions For Authors: There are a few points I wish the authors to address:
1. The theoretical results are derived under strongly convex settings, but the experiments appear to be conducted in weakly convex or non-convex settings. It would be beneficial to include experiments under strongly convex settings, possibly with synthetic data, and discuss their relation to the condition number $\kappa$.
2. Shouldn't the definition of $w_*$ 's parameter space be centered around the global ground truth model instead? A clearer justification for this choice would strengthen the argument.
3. The derived rate assumes strong convexity, but it is unclear whether the accuracy-communication trade-off still holds in weakly convex or non-convex settings, where multiple optimal or stationary solutions may exist. Further analysis is needed to understand how these findings extend to real-world optimization problems.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and constructive comments. Below, we provide our detailed responses to each point. We hope these clarifications help address your concerns.
**The concern raises for the definition of parameter space**
A natural measure of the local models' heterogeneity could be the average of their mutual distances, given by $\frac{1}{m^2}\sum_{i = 1}^m \sum_{j = 1}^m \\|w_\star^{(i)} - w_\star^{(j)}\\|^2$, which is equivalent to $\frac{2}{m} \cdot \sum_{i = 1}^m \\| w_\star^{(i)} - w_\star^{(g)}\\|^2$ with $w_\star^{(g)} := \frac{1}{m} \sum_{i=1}^m w_\star^{(i)}$. Therefore, we define the data heterogeneity over the parameter space as $\frac{1}{m} \sum_{i = 1}^m \\| w_\star^{(i)} - w_\star^{(g)}\\|^2\leq R^2$. This definition is also used in prior works [1][2].
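For completeness, the claimed equivalence is standard algebra: expanding each pairwise distance around $w_\star^{(g)}$ and noting that the cross terms vanish because $\sum_{i=1}^m (w_\star^{(i)} - w_\star^{(g)}) = 0$ gives

```latex
\frac{1}{m^2}\sum_{i=1}^m\sum_{j=1}^m \big\|w_\star^{(i)} - w_\star^{(j)}\big\|^2
= \frac{1}{m^2}\sum_{i,j}\Big(\big\|w_\star^{(i)} - w_\star^{(g)}\big\|^2
+ \big\|w_\star^{(j)} - w_\star^{(g)}\big\|^2\Big)
= \frac{2}{m}\sum_{i=1}^m \big\|w_\star^{(i)} - w_\star^{(g)}\big\|^2 .
```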
**Experiments under strongly convex settings and discussion of the condition number (least-squares problem, varying the condition number, examining both communication and computation)**
We thank the reviewer for this valuable suggestion. In response, we conducted additional experiments on a strongly convex problem: an overdetermined linear regression task. We strictly follow the choices of local step size, local computation rounds, and global step size as specified in Corollary 2. Please refer to [Figure 1](https://postimg.cc/BLJmCB4b) and [Figure 2](https://postimg.cc/zbRG0K7m) for detailed experimental results.
As suggested, we study the effect of the condition number $\kappa$ on convergence behavior. Specifically, for a fixed personalization parameter $\lambda$, we observe that larger values of $\kappa$ result in slower convergence rates with respect to the number of communication rounds. This empirical trend is consistent with our theoretical prediction in Corollary 2, where the number of communication rounds required to achieve a given target error $\epsilon$ scales with $\mathcal{O}(\kappa \frac{\lambda + L}{\lambda + \mu} \log(1/\varepsilon))$. Moreover, we observe that the impact of increasing $\kappa$ becomes stronger as $\lambda$ increases. This phenomenon aligns with our theoretical analysis, which shows that the sensitivity of communication complexity to $\kappa$ is amplified for larger values of $\lambda$. Similarly, we observe that increasing the condition number $\kappa$ also leads to a higher total number of gradient evaluations required to reach a target error. This observation is in agreement with our theoretical results where the total number of local gradient evaluations scales as $\mathcal{O}(\kappa\log(1/\varepsilon))$.
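As a sanity check on the quoted scaling, one can evaluate the bound $\mathcal{O}\big(\kappa \frac{\lambda + L}{\lambda + \mu}\log(1/\varepsilon)\big)$ numerically. The sketch below is ours, not from the paper: it assumes $\kappa = L/\mu$ and drops all absolute constants, so the values are only comparative.

```python
import math

def comm_rounds_bound(lam, eps, L, mu):
    """Order-level communication-round bound from Corollary 2:
    kappa * (lam + L) / (lam + mu) * log(1/eps), with kappa = L / mu.
    Absolute constants are dropped, so values are only comparative."""
    kappa = L / mu
    return kappa * (lam + L) / (lam + mu) * math.log(1.0 / eps)

# A larger condition number yields more communication rounds (fixed lam, eps).
slow = comm_rounds_bound(lam=1.0, eps=1e-3, L=10.0, mu=1.0)
fast = comm_rounds_bound(lam=1.0, eps=1e-3, L=2.0, mu=1.0)
```

This only reproduces the stated order-level dependence on $\kappa$; the exact constants and the interplay with $\lambda$ follow the full analysis in Corollary 2.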
**The study focuses on strongly-convex problems, but the accuracy-communication trade-off may differ in weakly-convex or non-convex cases with non-unique solutions. Further analysis is needed**
Establishing the statistical rate of the solution relies on the strong convexity of the loss. The reason is that for strongly convex problems the minimizer is unique, which enables analyzing its properties using first-order optimality conditions. However, weakly convex and nonconvex problems can have multiple minimizers and stationary points. In such cases, first-order stationarity can no longer distinguish these points, let alone characterize their statistical accuracy. One may resort to methods such as considering losses with a special landscape (e.g., restricted strong convexity, one-point strong convexity) or algorithmic regularization to tackle some subclasses of nonconvex losses, yet such extensions are highly nontrivial and would require a different set of techniques. We acknowledge that analyzing the convergence of the algorithm to a stationary point for weakly convex objectives could be more tractable, but doing so alone, without characterizing the statistical accuracy, cannot reveal the communication-accuracy trade-off.
[1] Chen, S., Zheng, Q., Long, Q., and Su, W. J. (2023). Minimax Estimation for Personalized Federated Learning: An Alternative between FedAvg and Local Training? Journal of Machine Learning Research, 24(262), 1-59.
[2] Duan Y, Wang K. Adaptive and robust multi-task learning. The Annals of Statistics, 2023, 51(5): 2015-2039. | null | null | null | null |
Mirror, Mirror of the Flow: How Does Regularization Shape Implicit Bias? | Accept (poster) | Summary: The authors propose to analyze the effect of combining implicit and explicit regularization on training dynamics using the mirror flow framework. While their framework is more general, they specifically examine the impact of weight decay when it is turned off at a particular training step. To investigate this, the authors conduct several experiments to assess the influence of weight decay on both the sparsity of neural networks and their performance.
Claims And Evidence: The authors suggest that their theoretical framework provides insights for designing appropriate regularization strategies, and they conduct experiments to support this claim. The authors take the special case of turning off weight decay to illustrate the application of their framework and leave the design of temporal weight decay schedules to future work. In the experimental section, it would be beneficial to include an ablation study examining the impact of the time step at which weight decay is turned off. Additionally, deriving theoretical bounds for (T, weight decay) and comparing them with the experimental results would strengthen the analysis.
Methods And Evaluation Criteria: To validate their theoretical findings, the authors investigate a specific case where weight decay is switched off at a particular training step T. They analyze the impact on accuracy and the ratio of the Nuclear norm to the Frobenius norm across various architectures (e.g., LoRA, attention mechanisms) and datasets, including ImageNet and the Shakespeare dataset.
Theoretical Claims: I have reviewed the proofs of Theorems B.1 and B.2 and have a few questions:
1. **B.1 – L785:** I do not fully understand the justification for $\partial_y\partial_y R(x,y) > 0$. Given that the authors consider the loss function $ L(x,y) = f(x) + \alpha y $, the second derivative should be $\partial_y\partial_y R(x,y) = 0$. Could you clarify where my reasoning is incorrect?
2. **B.1 – L804:** I did not fully grasp your demonstration regarding why $\nabla^2_x R$ is positive definite. Could you provide additional explanation?
3. **B.2 – L843 to L846:** You state that $d \nabla_x R^T_{a_t}(x^* - x_t) = - d \nabla_x f(x_t)^T (x^* - x_t).$ However, in my understanding, it should be $d \nabla_x R^T_{a_t}(x^* - x_t) = - \nabla_x f (x^* - x_t).$ Consequently, I do not see why we can conclude that $\nabla_x f (x^* - x_t) \leq \nabla_x f (x_t)^T (x^* - x_t).$
Experimental Designs Or Analyses: Regarding the **experimental section**, I appreciate that the authors demonstrate, in the case of linear hyperparameterization, that turning off weight decay improves the error in this particular setting, validating the theoretical results (Figure 2). However, I have some concerns about the experiments on real datasets in terms of accuracy and sparsity metrics:
1. **Tables 2 and 3 in the appendix:** The performance differences between the best turn-off setup and the best weight decay configuration (which is slightly smaller) do not appear to be very significant. In my view, it is difficult to draw a strong experimental conclusion because an additional hyperparameter (the step at which weight decay is turned off) needs to be tuned, and similar results can be achieved by selecting an appropriate weight decay value.
2. **Figure 3:** While the experiment illustrates the impact of the turn-off mechanism, the advantage of turning off weight decay in terms of the nuclear norm to Frobenius norm ratio is not entirely clear.
Could you clarify whether I have misinterpreted the analysis of these experiments?
Supplementary Material: I have reviewed the proofs of B.1 and B.2 presented in the supplementary material. However, I was unable to thoroughly verify the proof of C.2, as it is less familiar to me in terms of mathematical background.
Relation To Broader Scientific Literature: I have no strong opinion on this matter, as I am not particularly specialized in this litterature.
Essential References Not Discussed: I have no strong opinion on this matter, as I am not particularly specialized in this litterature.
Other Strengths And Weaknesses: Strengths: The paper is well-written, clear, and easy to understand.
Weaknesses: The proofs are less accessible at times due to brief explanations between certain steps, making it harder to follow the reasoning.
Other Comments Or Suggestions: None
Questions For Authors: I would appreciate responses to my theoretical and experimental analysis questions outlined in the "Theoretical Claims" and "Experimental Designs Or Analyses" sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s time and valuable feedback. We’re happy to address any further questions or concerns.
**Turning-off at different points**
We are happy to provide an ablation on turning off the regularization at different points for the vision transformers:
We disable weight decay (WD) at different epochs during fine-tuning of Tiny-ViT on ImageNet. We train for 300 epochs with AdamW (lr=1e-4, WD=0.2) and disable WD at epochs [50, 100, 150, 200]. We report the epoch at which the nuclear-norm/Frobenius-norm ratio curves with and without WD turned off intersect, together with the validation accuracy. If no intersection occurs, we report the final validation accuracy and ratio. Results show better performance for similar ratios, supporting our theory. We will include this additional experiment in the revised manuscript.
| WD Off | Intersect | Val Acc (WD) | Val Acc (Off) | Norm Ratio |
|--------|----------|--------------|---------------|------------|
| 50 | 104 | 70.7% | 72.4% | 6.9 |
| 100 | 195 | 70.2% | 72.6% | 6.5 |
| 150 | 270 | 71.1% | 73.4% | 6.4 |
| 200 | None | 70.3% (6.3) | 72.5% (6.1) | - |
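The turn-off mechanism itself is trivial to implement; a minimal sketch assuming PyTorch-style optimizer `param_groups` (the function name and the standalone dicts below are illustrative, not our actual training code):

```python
def disable_weight_decay(param_groups, off_epoch, epoch):
    """Zero out weight decay in every parameter group once `epoch`
    reaches `off_epoch` (mirrors turning WD off at epochs 50/100/150/200)."""
    if epoch >= off_epoch:
        for group in param_groups:
            group["weight_decay"] = 0.0

# Illustrative AdamW-style parameter groups (WD = 0.2 as in the experiment).
groups = [{"lr": 1e-4, "weight_decay": 0.2}, {"lr": 1e-4, "weight_decay": 0.2}]
disable_weight_decay(groups, off_epoch=150, epoch=149)
still_on = groups[0]["weight_decay"]   # unchanged before the cutoff epoch
disable_weight_decay(groups, off_epoch=150, epoch=150)
```

Calling this once per epoch in the training loop realizes the schedule; with a real PyTorch optimizer, `optimizer.param_groups` would be passed in place of `groups`.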
**Theoretical bounds on $T$ and $\alpha$**
For underdetermined linear regression (Thm 3.6), we search for the schedule with the fastest convergence given a fixed desirable $a_T = \int_0^T \alpha_s ds < \infty$. This is equivalent to regularizing as much as possible and turning off regularization quickly. Yet, it is known that the saddle point near the origin slows down the training dynamics. This is also impacted by a finite learning rate inducing a discretization error. The table in our response to Reviewer Eyxr identifies a solution to the expected trade-off.
Yet, the optimal $T$ depends on intricacies of the loss landscape and the training dynamics. For example, for the goal of sparsifying a neural network, gradually moving the implicit bias from $L_2$ to $L_1$ works well [1]. Intuitively, we expect to train with regularization until we hit a loss basin to switch it off soon thereafter. A theoretical bound presents a great challenge that is beyond the scope of this paper with general framework character.
[1] Jacobs, T. and Burkholz, R. Mask in the mirror: Implicit sparsification. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=U47ymTS3ut.
**Details of main proofs**
$R(x,y)$ is unrelated to the loss function $L$. At the end of Theorem A.7, it is defined by the reparameterization, i.e. $(g,h)$, and the initialization. The second derivative is positive. This follows from the definition of a Legendre function (Definition A.1), which is strictly convex. We will refer to this explicitly.
We can express the Hessian of $R$ in terms of block matrices that are derivatives with respect to $x$ and $y$ and then apply the block matrix inversion lemma (see Proposition 2.8.7 in [1]). The first diagonal block matches exactly the inverse of the constructed Hessian for the time-dependent Legendre function. As the inverse of a positive definite matrix is positive definite, it follows from the definition of positive definiteness that also this diagonal block matrix is positive definite. To see this more precisely, denote the inverse of the Hessian matrix with $A$. $A$ is positive definite if and only if for all $z = (x,y) \in \mathbb{R}^{n+1}$, $z^T A z > 0$. Now restricting this to particular $z = (x,0)$ with $x \in \mathbb{R}^n$ yields that the diagonal block matrix that we are interested in is also positive definite.
We suppressed the dependency of $\nabla R_{a_t}$ on $x_t$. We agree that it would be more clear to use $\nabla R_{a_t}(x_t)$ instead. In addition, we will separately explain the application of the time-dependent mirror flow relation and the contraction property to highlight how the contraction property is used.
We will include these extra explanations in the revised manuscript. Moreover, we will provide the block matrix inversion calculation according to Proposition 2.8.7.
[1] Bernstein, D. S. (2005). Matrix Mathematics: Theory, Facts, and Formulas with Application to Linear Systems Theory. http://engineering.nyu.edu/mechatronics/Control_Lab/ToPrint.pdf
**Experiment interpretation**
We agree that the improvement for the LORA experiment is lower, although significant. However, for the vision transformers, the increase in validation accuracy is substantial for the same ratio. The additional experiments above support this claim. Furthermore, we would like to emphasize that our experiment is not designed to maximize performance but to confirm the prediction of the theory. The desirability of a particular ratio is problem-dependent. For example, a lower rank (smaller ratio) might be desirable in cases where efficiency or interpretability is a priority. Moreover, the framework could be used to inspire and help design new regularization schedules that are tuned for better performance. | Summary: Motivated by the fact that the inductive bias of a trained neural network depends on implicit biases of the training algorithm and explicit regularization such as weight decay, this paper studies how the two interact. Appealing to previous works, they adapt the mirror flow framework for objectives with explicit regularization and use their new framework to demonstrate how explicit regularization can influence implicit bias. The key insight is that the implicit bias of the optimization algorithm can be adapted and controlled during the optimization procedure via dynamic explicit regularization. The paper demonstrates how the insights are manifest in experiments on sparse coding, matrix sensing, attention and LoRA fine-tuning illustrating potential applications.
## Update after rebuttal
After reading the reviews of others and seeing the rebuttals of the authors I have a better understanding of the work and its significance. I have appropriately moved my score up to a 3 (weak accept).
Claims And Evidence: I am unsure whether the claims are well supported, in particular the claims about 3 distinct effects of explicit regularization mentioned in the introduction. Moreover, the statements of the claims are unclear and imprecise.
Methods And Evaluation Criteria: The paper is mainly a theoretical work but the evaluation criteria used in the experiments are appropriate. Although the applications are unclear.
Theoretical Claims: The paper relies heavily on the setup and results of previous papers connecting the implicit bias of gradient descent to mirror flow, largely building off [1]. Without this background it is difficult to understand the setup or the motivation behind it. Moreover, too many of the technical details are left to the Appendix, making the paper hard to read. In light of this, while I believe the theoretical claims are correct, I am not familiar enough with this line of literature to verify their correctness.
[1] https://arxiv.org/abs/2207.04036
Experimental Designs Or Analyses: The experiments are sound although their implications are not clear.
Supplementary Material: No.
Relation To Broader Scientific Literature: I am familiar with the idea of implicit bias and as far as I know it is typically studied independent of any explicit regularizers. Given that neural networks are often trained with explicit regularization, such as weight decay, I believe this paper brings insight to the important question of how explicit and implicit regularization interact and would be useful to the broader machine learning community.
Essential References Not Discussed: I am not familiar enough with this line of research to suggest any essential references not discussed.
Other Strengths And Weaknesses: ### Strengths
The topic being studied is very interesting, novel, relevant and worthy of study. Extending the previous mirror flow framework to account for explicit regularization could also be of independent interest.
### Weaknesses
The biggest issue with the paper is presentation. I am not familiar with past work on mirror flow, and this paper requires a lot of background knowledge about previous works connecting the implicit bias of gradient descent to mirror flow. This makes it very difficult to understand the results and interpret their significance. The readability of the paper is also affected, as many of the core details are left to the appendix and never formally defined. Moreover, the three different biases discussed in the introduction are stated quite informally (except for the first), making it hard to really understand what they mean.
Other Comments Or Suggestions: ### Minor
- Line 193, $R_a$ or $R_{a_t}$?
Questions For Authors: 1. By doing this dynamic explicit regularization, the objective function is changing for each $t$. Therefore, when the algorithm terminates, for which objective is $x_{*}$ a solution?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to express our gratitude for the reviewer’s time and effort in providing valuable comments on our manuscript and appreciate the acknowledgement of the work's relevance and novelty. We would be happy to discuss any open questions or concerns, if there remain any.
**Background knowledge**
We appreciate the reviewer’s observation regarding the significant background knowledge needed to fully understand the details. Given the initial page limit, we focused on presenting the main storyline.
In the current manuscript, we elaborate on the implications with concrete examples in Section 4.
Moreover, the main theorems are accompanied by detailed descriptions of the key proof steps, along with characterizations of the regularization and a geometric interpretation. These findings and concepts have been formally defined, highlighting important details of the analysis.
As we would be allowed to use an additional page for the main manuscript upon acceptance, we would be happy to make adjustments. To this end, we will include more definitions from the appendix into the main manuscript like:
1. The definitions of a regular and commuting parameterization in Section 3 (Introduction part).
1. The definition of the Legendre function in Section 3 (Introduction part).
1. The definition of the Bregman function in Section 3.2.
With these additions, we believe the main storyline would be accessible to a broader audience, with the key idea being that explicit regularization changes the implicit bias.
**The 3 effects**
We believe that the introduction provides a precise definition of the three effects without requiring the explicit introduction of the parameterized Legendre function. In Section 4, we present concrete theoretical examples of all three effects. However, we are happy to further clarify these effects by including the definition of the parameterized Legendre function at the beginning of Section 4:
1. Type of bias: The shape of $R_a$ changes with $a$.
1. Positional bias: The global minimum of $R_a$ changes with $a$.
1. Range shrinking: The range of $\nabla R_a$ can shrink due to a specific choice of $a$.
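To make the first effect concrete, here is a worked example of ours (not taken verbatim from the manuscript) using the known hyperbolic-entropy mirror potential for the quadratic parameterization $x = u\odot u - v\odot v$ with initialization $u_0 = v_0 = \alpha\mathbf{1}$; constants are indicative:

```latex
R_\alpha(x) \;=\; \alpha^2 \sum_{i=1}^{d} q\!\left(\frac{x_i}{\alpha^2}\right),
\qquad
q(z) \;=\; 2 - \sqrt{4+z^2} + z\,\operatorname{arcsinh}\!\left(\frac{z}{2}\right),
\qquad
q(z) \approx
\begin{cases}
  z^2/4, & |z| \text{ small} \quad (\ell_2\text{-type bias}),\\[2pt]
  |z|\ln|z|, & |z| \text{ large} \quad (\ell_1\text{-type bias}).
\end{cases}
```

Weight decay with accumulated strength $a_t = \int_0^t \lambda_s\,ds$ shrinks the effective initialization scale roughly as $\alpha\, e^{-2a_t}$ (our reading of the time-dependent framework), which changes the *type* of bias from $\ell_2$-like toward $\ell_1$-like, while for initialization at zero the minimizer of $R_\alpha$ stays at zero, so the *positional* bias is unchanged — illustrating that the effects are distinct.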
**Experimental implications**
The experiments are there to verify the theory and to show that the principles hold in more practical settings. A practical implication is the potential utility of dynamic regularization schedules.
For instance, in the case of quadratic reparameterizations, to get the best generalization accuracy for a desired sparsity ratio, it is beneficial to regularize more during the first half of training and then turn off weight decay in the second half. Moreover, our work provides a stepping stone for further research to derive better regularization schedules that take the implicit bias into account, as mentioned in the discussion.
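To illustrate this principle, here is a minimal toy simulation of ours (not the paper's experiment; the problem sizes, step size, and schedule are illustrative): gradient descent on an underdetermined sparse regression under the quadratic reparameterization, with weight decay active during the first half of training and then turned off.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 20
x_true = np.zeros(n)
x_true[0] = 1.0                                 # 1-sparse ground truth
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

def train(lam, steps=12000, lr=0.02, alpha=0.5, turn_off=6000):
    """GD on 0.5*||A(u*u - v*v) - b||^2 + lam(t)*(||u||^2 + ||v||^2),
    where lam(t) = lam before `turn_off` and 0 afterwards."""
    u = alpha * np.ones(n)
    v = alpha * np.ones(n)
    for t in range(steps):
        l = lam if t < turn_off else 0.0
        g = A.T @ (A @ (u * u - v * v) - b)     # gradient w.r.t. x = u*u - v*v
        u, v = u - lr * (2 * u * g + 2 * l * u), v - lr * (-2 * v * g + 2 * l * v)
    return u * u - v * v

x_plain = train(lam=0.0)    # no weight decay: an L2-flavoured interpolator
x_sched = train(lam=0.05)   # regularize first half, then turn off: L1-flavoured

print(np.sum(np.abs(x_plain)), np.sum(np.abs(x_sched)))  # the schedule run is typically sparser
```

Both runs interpolate the data at the end, but the run that was regularized and then had weight decay switched off typically lands on a solution with a noticeably smaller $\ell_1$ norm, matching the changed implicit bias described above.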
**Miscellaneous**
We agree that it is more clear to use $R_{a_t}$ than $R_a$. The intent was to make a more direct reference to the previous definition of the parameterized Legendre function.
In addition, $x_*$ is a solution of the objective $f$ as the regularization is turned off eventually. In other words, after turning off the regularization the iterates move to a solution of the objective $f$.
We revise our manuscript accordingly. | Summary: The paper investigates how external regularization influences the implicit bias of gradient flow. By leveraging the equivalence between parameterized gradient flow and mirror flow, the study provides a detailed analysis of how external regularization alters this mirror flow. Within the framework proposed by Li et al. (2023), when the parameterizing function $g$ and the regularization function $h$ are commuting re-parameterizations, the gradient flow on the regularized loss is shown to be equivalent to a mirror flow with a time-varying Bregman function. This Bregman function depends on regularization through the parameter $a$, which is defined as the integral of the regularization over time. The equivalence is used to study the convergence of regularized gradient flow and across different parameterizations or
Claims And Evidence: a) The paper claims three distinct effects of regularization, which it names positional bias, type of bias and shrinking range, and which it presents as the main contribution. However, the three effects feel essentially the same: the range shrinking of $a$ changes the bias from $L_2$ to $L_1$ in the quadratic parameterization.
Methods And Evaluation Criteria: N/A as the paper is majorly theoretical in nature.
Theoretical Claims: a) In Theorem 3.2, the authors need to clearly specify that $(g,h) : M \to \mathbb{R}^{n+1}$ should be a commuting parameterization. In the current form, it reads as if $g$ and $h$ are commuting parameterizations among themselves, and not necessarily $[\nabla g_i, \nabla h] = 0$.
Experimental Designs Or Analyses: a) In the experiment corresponding to Figure 2, it is unclear how the regularization coefficient for the $\ell_1$ penalty is chosen; for an appropriate choice I would expect it to match the loss with regularization before turning it off. The comparison is also unclear when both regularizations are turned off at the same iteration.
b) It is hard to interpret the experiment corresponding to Figure 3: if a higher nuclear-norm-to-Frobenius-norm ratio is desired, should the ideal method be to just choose a lower weight decay constant?
Supplementary Material: I have reviewed the proof of Theorem 3.2, 3.5 and 3.6
Relation To Broader Scientific Literature: The equivalence between mirror flow and reparameterized gradient flow is extensively used to argue about the implicit bias of gradient methods with various hyperparameters (Woodworth et al., 2020; Pesme et al., 2021). The work of Li et al. (2022) lays out the general framework for when such an equivalence is possible, and the current work extends this framework to regularized losses, highlighting the differences.
Essential References Not Discussed: To the best of my knowledge, the essential references have been sufficiently discussed.
Other Strengths And Weaknesses: Strengths :
- Theorem 3.2 is a nice addition to the literature on implicit bias and mirror flow, showing how weight decay modifies the mirror flow and revealing that the integral of the coefficient across time is a key determining factor.
Weakness :
- The framework is applied to different problems like matrix factorization and the attention framework; however, the analysis holds only when the matrices commute. Hence, under an appropriate choice of eigenbasis, it is equivalent to the quadratic parameterization; it is recommended that the authors explicitly mention this.
Other Comments Or Suggestions: N/A
Questions For Authors: a) Theorem 3.6 uncovers an intriguing property: the integral of the regularization parameter is the sole determining factor. It would be nice to make this more concrete with experiments across some architectures, for example with various schedules which keep the integral at a constant value.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to express our gratitude for the reviewer’s time and effort in providing valuable comments on our manuscript and appreciate the acknowledgement of the theoretical contributions. Below, we address the raised questions and concerns, but would be happy to extend the discussion on request.
**How the 3 effects are distinct**
We provide some examples that highlight how the 3 effects are distinct and can occur independently of each other:
1. The range shrinking means that the actual range of $\nabla R_a$ changes during training; this does not occur for quadratic reparameterizations, for example.
2. The positional bias is distinct. When we initialize a quadratic parameterization at zero, the positional bias stays zero, so it does not change, while the type of bias still changes from L2 to L1.
3. The type of bias can be different in all these scenarios and even changes dynamically in most of our examples.
**Commuting condition and eigenbasis**
We will restate the commuting condition such that it is clear that indeed $(g,h)$ is commuting in Theorem 3.2. Moreover, we will mention the connection between an appropriate eigenbasis choice and the commuting condition explicitly after the quadratic parameterization result.
**Figure 2, L1 setting**
The training dynamics for the linear parameterization are different from that of the quadratic reparameterization. Therefore, it is hard or impossible to match the same performance in the experiment.
The main takeaway is that the type of implicit bias of the linear parameterization has not changed. In general, what occurs when the L1 regularization is turned off is that the training dynamics go to the L2 interpolation solution. To alleviate any concern, we have run an additional experiment for the L1 setting with higher regularization ($0.2$) and turned it off at the same time; this also leads to similar performance, i.e., it goes to the L2 interpolator with a reconstruction error of $0.32$. Therefore, the amount of regularization does not change the type of implicit bias. We will include this additional experiment in Appendix C of the revised manuscript.
**Figure 3, ratio**
We do not make claims about the desirability of these results; this would depend on the specific application. A lower rank (smaller ratio) might be desirable in cases where efficiency or interpretability is a priority, for example. For the practitioner it is relevant to understand how regularization impacts the rank and thus the solution. This insight is the purpose of our theory.
The main message of Figure 3 is that we can obtain similar ratios by turning off weight decay. Appendix F furthermore demonstrates that this can achieve better generalization. Specifically, at similar ratios, the validation accuracy of a ViT on ImageNet can be improved by more than 1%.
This illustrates the utility of controllable implicit bias in practice, but the main purpose of our experiments is to verify our theory. We still believe our theory could inspire better algorithm design in future, as mentioned in the discussion.
**Different schedules for matrix sensing**
Consider the family of schedules with constant regularization strength $\alpha_i$ up to a specific time $T_i$ such that $\int_0^{T_i} \alpha_i\, ds = T_i \alpha_i = C$ for a constant $C > 0$. We choose $\alpha_i = [0.02, 0.2, 2, 20]$. In addition, we consider linear and cosine decay schedules for the regularization with the same total strength (i.e., the same integral), but where the regularization is switched off after half of the training time to ensure convergence. To compare with the effect of turning off (t-o) the regularization, we include the constant schedule with regularization strength $\alpha=0.01$. Note that this schedule is also reported in Appendix C. We observe that all schedules with decay or turn-off (t-o) converge to a solution with the same nuclear norm as the ground truth, confirming Theorem 3.6, while the constant schedule does not reach the ground truth. We would be happy to include this additional experiment in Appendix C.
| Schedule | Nuclear norm | Train loss | Rec error | Time to 1e-7 train loss |
|------------------------------|-------------|------------|-----------|--------------------------|
| Constant 0.01, no t-o | 0.93 | 7.2e-4 | 3.9e-2 | - |
| Linear decay | 1.00 | 1.8e-8 | 2.3e-4 | 661 |
| Cosine decay | 1.00 | 1.7e-8 | 2.1e-4 | 624 |
| Constant 0.02, t-o | 1.00 | 1.1e-8 | 1.7e-4 | 716 |
| Constant 0.2, t-o | 1.00 | 2.7e-10 | 2.7e-5 | 209 |
| Constant 2, t-o | 1.00 | 2.1e-10 | 2.4e-5 | 209 |
| Constant 20, t-o | 1.00 | 7.9e-13 | 1.4e-6 | 239 | | null | null | null | null | null | null | null | null |
Provably Mitigating Corruption, Overoptimization, and Verbosity Simultaneously in Offline and Online RLHF/DPO Alignment | Reject | Summary: Reinforcement learning from human feedback (RLHF) aims to align generative models with human preferences. However, the quality of alignment training can be compromised by corrupted preferences, reward overoptimization, and bias toward verbose outputs. This paper proposes RLHF-COV and DPO-COV, two algorithms designed to simultaneously address these issues in both offline and online settings.
Claims And Evidence: For each of the three issues, i.e., preference corruption, reward overoptimization, and output verbosity, the vanilla RLHF objective is augmented with a corresponding loss component to address the issue. To account for potential corruption in preference, a noise term is added to the loss based on the Bradley-Terry model. To address reward overoptimization, a "pessimistic MLE" is adopted where a baseline policy is used to reduce overestimation of low-quality out-of-distribution samples. Lastly, to control verbosity, a length-based penalty is incorporated into the objective.
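The three components could in principle be combined per example as sketched below. This is a hedged composition of our own following the cited lines of work (a robust-DPO slack $\xi$ for corruption, an SFT-style pessimism term, and an R-DPO-style length penalty), not the paper's exact DPO-COV objective; the function name and the coefficients `beta`, `omega`, `lam`, `eta` are illustrative:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def dpo_cov_style_loss(dlogp, logp_w, len_w, len_l, xi,
                       beta=0.1, omega=0.01, lam=1.0, eta=0.05):
    """Per-example loss combining the three components described above.

    dlogp  : DPO margin  log[pi(y_w)/pi_ref(y_w)] - log[pi(y_l)/pi_ref(y_l)]
    logp_w : log-probability of the preferred response under the policy
    xi     : per-example slack absorbing a possibly corrupted label
    """
    margin = beta * dlogp + xi - omega * (len_w - len_l)  # length-penalized margin
    bt_term = -math.log(sigmoid(margin))                  # (corrupted) Bradley-Terry fit
    corruption_penalty = lam * abs(xi)                    # keeps the slack small/sparse
    pessimism = -eta * logp_w                             # SFT-style anti-overoptimization term
    return bt_term + corruption_penalty + pessimism
```

The monotonicity is as one would expect: the loss decreases as the preferred response's margin grows, and increases when the preferred response wins merely by being longer.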
Each of these loss components, or similar variants, is conceptually sound and has been studied in prior works, albeit in isolation. This paper naturally integrates them into a single objective. While the proposed algorithms are well-motivated, the experimental evaluation remains limited. The study explores only a narrow set of tasks, primarily math and reasoning, with minimal analysis. Key questions remain: How does the algorithm perform on a broader range of tasks? How much corruption exists in the preference data, and to what extent can the proposed approach mitigate it? To what degree is reward overoptimization a concern, and why is tuning the beta parameter in vanilla DPO insufficient to address it?
Methods And Evaluation Criteria: The proposed methods make sense for addressing the issues of corrupted preferences, reward overoptimization, and length biases.
For evaluation, the paper uses length-controlled win rates in AlpacaEval 2.0 and task performance on math and reasoning benchmarks, including GSM8K and ARC. While the evaluation setup is reasonable, the experimental results and analysis are minimal, making it difficult to fully assess the extent to which the identified issues are present in the setup and how effectively the proposed algorithms and baselines address them. Please see above.
Theoretical Claims: Theoretical claims regarding the generalization error rate of the proposed algorithms have been reviewed at a high level but not to the extent of verifying the constants in the bounds.
Experimental Designs Or Analyses: The presented experimental results and analysis are sound, but to fully assess how well the proposed algorithms address the three issues in existing alignment methods, additional experimental results should be included. Please see above.
Supplementary Material: No supplementary material has been reviewed.
Relation To Broader Scientific Literature: The issues of preference corruption, reward overoptimization, and length biases are important challenges in alignment, typically studied in isolation. While more experiments are needed to assess whether addressing all of these issues in a single optimization is the most effective -- especially given that the extent to which each issue is present can vary -- the paper introduces algorithms to tackle them simultaneously and provides theoretical results on generalization error.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: None.
Questions For Authors: Q. How much more effective is the proposed pessimistic MLE compared to the regular KL-penalized objective in reducing overoptimization? What baseline policy is used in the experiments?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Claims And Evidence (1):** The study explores only a narrow set of tasks, primarily math and reasoning, with minimal analysis. Key questions remain: How does the algorithm perform on a broader range of tasks?
**A:** Thanks for your question. We are conducting experiments on the new tasks.
**Claims And Evidence (2):** How much corruption exists in the preference data, and to what extent can the proposed approach mitigate it?
**A:** Thanks for your questions. It is hard to count the number of corrupted labels in large datasets, but we are certain there is corruption in the AI-generated labels, so we use the performance gap (in LC win-rates and accuracies) between our algorithm and the 3 algorithms that do not directly tackle corruption (i.e., the pessimistic/optimistic DPO algorithm, the length-regularized DPO algorithm and the vanilla DPO algorithm) to roughly reflect the extent of corruption mitigation.
However, we are working on new experiments that corrupt x\% of the labels in the preference data for various values of x.
**Claims And Evidence (3):** To what degree is reward overoptimization a concern, and why is tuning the beta parameter in vanilla DPO insufficient to address it?
**A:** Thanks for your questions. Note that the robust DPO algorithm (corruption-only), length-regularized DPO algorithm (verbosity-only) and vanilla DPO algorithm in our experiments do not directly deal with over-optimization. Tables 1 and 3 show that the LC-win rates of these 3 algorithms are 0.57\%-1.68\% lower than those of our DPO-COV algorithms. Table 2 shows that these 3 algorithms are 0.38\%-3.16\% less accurate than our DPO-COV algorithm. These performance gaps show the effect of overoptimization.
Tuning beta alone can still cause overoptimization, because KL-regularization only controls the scale of the gradient per training example, while adding the pessimistic term can further modify the gradient direction [1].
[1] Liu, Z., Lu, M., Zhang, S., Liu, B., Guo, H., Yang, Y., ... \& Wang, Z. Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.
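The scale-vs-direction distinction can be spelled out with the standard DPO gradient (a sketch; here $\hat r_\theta(y)=\beta\log\frac{\pi_\theta(y|x)}{\pi_{\rm ref}(y|x)}$, and the $\eta$-weighted SFT term is our generic stand-in for the pessimistic regularizer of [1]):

```latex
\nabla_\theta \mathcal{L}_{\rm DPO}
   = -\beta\,\sigma\!\big(\hat r_\theta(y^\ell)-\hat r_\theta(y^w)\big)
     \big[\nabla_\theta\log\pi_\theta(y^w|x)-\nabla_\theta\log\pi_\theta(y^\ell|x)\big],
\qquad
\nabla_\theta \mathcal{L}_{\rm pess}
   = \nabla_\theta \mathcal{L}_{\rm DPO}
     \;-\;\eta\,\nabla_\theta\log\pi_\theta(y^w|x).
```

Changing $\beta$ only rescales the scalar in front of the fixed difference direction $\nabla_\theta\log\pi_\theta(y^w|x)-\nabla_\theta\log\pi_\theta(y^\ell|x)$, whereas the extra $-\eta\,\nabla_\theta\log\pi_\theta(y^w|x)$ term adds a component outside that direction — this is the sense in which pessimism modifies the gradient direction rather than merely its scale.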
**Questions For Authors:** How much more effective is the proposed pessimistic MLE compared to the regular KL-penalized objective in reducing overoptimization? What baseline policy is used in the experiments?
**A:** Thanks for your questions. The pessimistic MLE is proposed not by us but by (Liu et al., 2024c; Cen et al., 2024; Ji et al., 2024; Yang et al., 2024), which have empirically demonstrated that the pessimistic MLE significantly outperforms the regular KL-penalized objective (i.e., vanilla DPO) in reducing overoptimization. For example, in Table 1 of (Liu et al., 2024c), the win rate of pessimistic MLE VS vanilla DPO is 56\%. As to the baseline policy $\pi_{\rm base}$, Line 307 left side said "Algorithm 1 takes $\pi_{\rm base}(\cdot|x)$ as the distribution of the preferable responses $a_i^w$ given $x_i=x$, which is well covered by $\mathcal{D}$." | Summary: The paper studied corruption, overoptimization, and verbosity simultaneously in offline and offline LLM alignment problems (RLHF and DPO). The authors give both theoretical and empirical guarantees.
Claims And Evidence: The claims are mostly correct.
Methods And Evaluation Criteria: They make sense.
Theoretical Claims: The theoretical claims are mostly correct.
Experimental Designs Or Analyses: The experiments are limited, which does not quite demonstrate the overall effectiveness of the proposed algorithm, since the differences in win rates in Table 1 are small.
Supplementary Material: I went through the appendix for the theoretical part but didn't check the full details. Most of the statements seem correct.
Relation To Broader Scientific Literature: The problem studied in this paper is interesting.
Essential References Not Discussed: The paper missed the important related work [1].
[1].Chowdhury, Sayak Ray, Anush Kini, and Nagarajan Natarajan. "Provably robust dpo: Aligning language models with noisy feedback." arXiv preprint arXiv:2403.00409 (2024).
Other Strengths And Weaknesses: Strengths:
The writing is clear, and I like that they give both theoretical and empirical results.
Weaknesses:
1. The paper missed some important related work. See the Questions part for more details.
2. The motivation to tackle the three problems in LLM alignment is unclear.
3. The paper seems to be a combination of the techniques from the three areas.
4. There are too many hyperparameters to be set, which is unrealistic in the real world.
Other Comments Or Suggestions: The claimed equivalence of the proposed RLHF-COV and DPO-COV algorithms is incremental, since DPO is a direct preference version of RLHF and the equivalence of DPO and RLHF has been proved in previous work.
Questions For Authors: 1. Please discuss and compare your results with [1] for robustness.
2. What is the motivation to tackle the corruption, overoptimization, and verbosity simultaneously in the real world?
3. What is the reason for designing $y_i\xi_i$ instead of $\xi_i$ in the loss function in Lines 158-160?
4. Assumption 3 is all-policy coverage. Can your method be improved to single-policy coverage?
[1].Chowdhury, Sayak Ray, Anush Kini, and Nagarajan Natarajan. "Provably robust dpo: Aligning language models with noisy feedback." arXiv preprint arXiv:2403.00409 (2024).
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Questions For Authors (1):** Please discuss and compare your results with [1] for robustness.
[1] Chowdhury, Sayak Ray, Anush Kini, and Nagarajan Natarajan. "Provably robust DPO: Aligning language models with noisy feedback." ArXiv:2403.00409 (2024).
**A:** Thanks for bringing this important work to our attention. [1] proposes a robust DPO method that handles data corruption well by incorporating a fixed label flipping probability $\epsilon\in(0,1)$ into the likelihood function, but it does not explicitly address over-optimization and verbosity, while our work addresses all three issues simultaneously. As for theoretical results, both works achieve generalization error rates of $\mathcal{O}(1/\sqrt{n})$ with $n$ possibly corrupted samples, but [1] uses the expected true reward as the generalization error measure, while our generalization error measure (Eq. (20)) strikes a trade-off among expected true reward, distance to the reference policy (to address over-optimization) and length penalty (to address verbosity). The experimental results are not directly comparable as they involve different tasks and datasets. We will cite [1] in our revised work.
**Questions For Authors (2):** What is the motivation to tackle the corruption, overoptimization, and verbosity simultaneously in the real world?
**A:** Good question. All three issues exist and can have undesirable consequences in the real world, as elaborated below. This motivates us to tackle them simultaneously.
The "Corruption" paragraph in the introduction reveals the reason and dangers of corruption in the real world: "However, preference labels given by human may be corrupted due to inexperience, inattention, personal bias, unclear context, and even malicious falsification (Bukharin et al., 2024). For instance, when fine-tuning LLM for automated content moderation on social media, malicious annotators may mislabel harmful contents like misinformation and hate speech as preferable, which misleads the LLM to generate such harmful contents." Hence, we need to tackle corruption.
The "Overoptimization" paragraph in the introduction reveals the reason and undesirable consequence of overoptimization: "RLHF and DPO may overoptimize the reward model, yielding LLM responses of high estimated reward but low actual quality (Gao et al., 2023; Casper et al., 2023)." This contradicts the goal of RLHF and DPO to make LLM responses helpful, honest, and harmless. Hence, we need to tackle overoptimization.
The "Verbosity" paragraph in the introduction said "LLM aligned by vanilla RLHF and DPO is likely to prefer verbose but possibly low-quality responses". In the real world, verbose responses can waste users' time and thus lose users. This consequence will be added to the revision. Hence, we need to tackle verbosity.
**Questions For Authors (3):** What is the reason for designing $y_i\xi_i$ instead of $\xi_i$ in loss function in Lines 158-160?
**A:** Good question. As shown in the footnote on page 3, using $y_i\xi_i$ ensures that $\mathbb{P}(y_i|a_i^{(1)},a_i^{(-1)})=\sigma[r^*(x_i,a_i^{w})-r^*(x_i,a_i^{\ell})+y_i\xi_i^*], y_i\in\{-1,1\}$ is a valid probability measure satisfying $\sum_{y_i\in\{-1,1\}}\mathbb{P}(y_i|a_i^{(1)},a_i^{(-1)})=1$. To elaborate, since $y_i=1$ means $a_i^{w}=a_i^{(1)}$ and $a_i^{\ell}=a_i^{(-1)}$, we have $\mathbb{P}(y_i=1|a_i^{(1)},a_i^{(-1)})=\sigma[r^*(x_i,a_i^{(1)})-r^*(x_i,a_i^{(-1)})+\xi_i^*]$. Similarly, since $y_i=-1$ means $a_i^{w}=a_i^{(-1)}$ and $a_i^{\ell}=a_i^{(1)}$, we have $\mathbb{P}(y_i=-1|a_i^{(1)},a_i^{(-1)})=\sigma[r^*(x_i,a_i^{(-1)})-r^*(x_i,a_i^{(1)})-\xi_i^*]=1-\sigma[r^*(x_i,a_i^{(1)})-r^*(x_i,a_i^{(-1)})+\xi_i^*]$, so $\sum_{y_i\in\{-1,1\}}\mathbb{P}(y_i|a_i^{(1)},a_i^{(-1)})=1$.
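The normalization argument reduces to the identity $\sigma(z)+\sigma(-z)=1$, which a few lines verify numerically (the reward values and corruption below are illustrative):

```python
import math

def sigma(z):
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative values for r*(x, a^{(1)}), r*(x, a^{(-1)}) and the corruption xi*
r1, rm1, xi = 0.7, -0.4, 0.3

p_plus = sigma((r1 - rm1) + xi)    # P(y_i = 1  | a^{(1)}, a^{(-1)})
p_minus = sigma((rm1 - r1) - xi)   # P(y_i = -1 | a^{(1)}, a^{(-1)})
print(p_plus + p_minus)            # 1.0 (up to floating point)
```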
**Questions For Authors (4):** The assumption 3 is all-policy coverage. Can your method improve to single-policy coverage?
**A:** Assumption 3 only involves two fixed policies: the baseline policy $\pi_{\rm base}$ and the true optimal policy $\pi_{r^*}$. Why does the reviewer think it is all-policy coverage? In fact, our Assumption 3 is the same as Assumption 2 of [1], which [1] describes as single-policy coverage.
[1] Ji, X., Kulkarni, S., Wang, M., and Xie, T. (2024). Self-play with adversarial critic: Provable and scalable offline alignment for language models. ArXiv:2406.04274. | Summary: The authors identify three key challenges in LLM alignment: corruption, overoptimization, and verbosity. To address these issues holistically, they propose a unified approach through generalized formulations of RLHF and DPO called RLHF-COV and DPO-COV, respectively.
These formulations incorporate: noise modeling to mitigate corruption, optimistic/pessimistic regularizers to prevent overoptimization, and a length penalty to discourage verbosity. The authors present both offline and online versions of these objectives.
The authors provide win rates against GPT-4 on AlpacaEval and improvements on math and reasoning datasets to demonstrate the impact of the various improvements in their paper in the offline DPO-COV setting and have online results in the appendix.
Claims And Evidence: The main claim of this paper is that the they develop a unified algorithm to simultaneously address corruption, overoptimization, and verbosity issues in LLM alignment. This claim is supported through the extensive theoretical formulations and empirical results in the paper.
The only claim I find a bit problematic is the verbosity-reducing one. The main issue is that verbosity reduction is addressed through a universal length penalty, which does not take into account prompt-specific length requirements. The results don't include numbers on average response lengths, so it is unclear how well the length regularizer works. Another concern is that reducing length might hurt reasoning performance because of shortened reasoning chains, but the evals on reasoning and math benchmarks don't resolve this.
Methods And Evaluation Criteria: The base model and dataset choices are all reasonable. However, evaluations are a bit lacking. More details under experiments and analysis.
Theoretical Claims: This paper is quite dense in terms of theoretical proofs and claims. I checked all the RLHF-COV proofs and found no issues.
Experimental Designs Or Analyses: ## Experiment suggestions
### Generalizability
It is hard to judge the performance of DPO-COV with just win rates against GPT-4 on AlpacaEval. It would be nice to see a few more benchmarks (ArenaHard and MT-Bench) or different base models (llama3-8b).
### Effect of Corruption levels in data
It would be helpful to see how different levels of corruption in the preference data affects alignment. The authors speak about malicious actors deliberately corrupting preference data as a motivation, so it would be helpful to see how robust this method is to an issue like that.
### Overoptimizations
An analysis of the patterns of reward hacking observed in the vanilla DPO run vs. the DPO-COV run would also help illustrate how all these techniques work together to make a better model.
Supplementary Material: I did not review the supplementary material in detail.
Relation To Broader Scientific Literature: The paper proposed a unified approach to addressing very common issues in LLM alignment. Mitigating and finding good solutions to these problems is a priority in the broader research community. This paper shows that noise modeling to mitigate corruption, optimistic/pessimistic regularizers to prevent overoptimization, and a length penalty to discourage verbosity can all work together to create a better aligned model.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: All 3 mitigation strategies are proposed in previous works as cited in the paper. The main contribution of this paper is developing an objective that combines all three together. This paper also doesn't have adequate experiments to empirically study the effect of all three objectives applied together. Therefore, the novelty of this paper is limited.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Claims And Evidence:** The only claim I find a bit problematic is the verbosity-reduction one. The main issue is that verbosity reduction is addressed through a universal length penalty, which would not take into account prompt-specific length requirements. The results don't include numbers on average response lengths, so it is unclear how well the length regularizer works. Another concern is that reducing length might hurt reasoning performance because of shortened reasoning chains, but the evals on reasoning and math benchmarks do not address this concern.
**A:** Prompt-specific length is a good future direction, which may be tackled by using a length penalty coefficient $\omega(x)$ that depends on the prompt $x$: the larger the length required for $x$, the smaller $\omega(x)$ should be.
We are working on experiments to include the numbers on average response lengths. Thanks for your suggestion.
For reasoning, we could use a small length penalty coefficient to keep the necessary reasoning steps intact while trimming unnecessary steps and tokens. In the future, we may apply different levels of length penalty to different reasoning steps, to ensure a concise description of each step rather than to reduce the number of steps.
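To make the discussion concrete, here is a minimal, hedged sketch of how a length penalty can enter a DPO-style preference loss. This is a generic illustrative form with made-up names and coefficients, not our exact DPO-COV objective; a prompt-dependent $\omega(x)$ would simply replace the constant `omega` below.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def length_penalized_dpo_loss(logratio_chosen, logratio_rejected,
                              len_chosen, len_rejected,
                              beta=0.1, omega=0.01):
    # DPO-style preference logit minus a length penalty (illustrative form only);
    # with omega = 0 this reduces to the plain sigmoid preference loss
    margin = beta * (logratio_chosen - logratio_rejected) \
             - omega * (len_chosen - len_rejected)
    return -math.log(sigmoid(margin))
```

With equal lengths the penalty vanishes; a longer chosen response shrinks the margin and increases the loss, which discourages verbosity.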
**Generalizability:** It is hard to judge the performance of DPO-COV with just win rates against GPT-4 on AlpacaEval. It would be nice to see a few more benchmarks (ArenaHard and MT-Bench) or different base models (llama3-8b).
**A:** Thanks for your suggestion. We are working on the new experiments.
**Effect of Corruption levels in data:** It would be helpful to see how different levels of corruption in the preference data affects alignment. The authors speak about malicious actors deliberately corrupting preference data as a motivation, so it would be helpful to see how robust this method is to an issue like that.
**A:** Thanks for your suggestion. We are working on the new experiments by corrupting x\% of the labels in the preference data with various x.
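As an illustration of such a corruption protocol, one simple scheme is to swap the chosen and rejected responses in a random fraction of pairs; the function below is an illustrative sketch, not our exact experimental code.

```python
import random

def corrupt_preferences(pairs, frac, seed=0):
    # swap chosen/rejected labels in a random `frac` of preference pairs
    rng = random.Random(seed)
    return [(r, c) if rng.random() < frac else (c, r) for c, r in pairs]
```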
**Overoptimizations:** An analysis into the patterns of reward hacking that were observed in the vanilla DPO run vs. DPO-COV run would also help illustrate how all these techniques work together to make a better model.
**A:** Thanks for your suggestion. We are working on the new experiments. | Summary: This paper introduces RLHF-COV and DPO-COV to mitigate performance degradation caused by corrupted preferences, reward overoptimization, and bias toward verbosity.
To this end, the authors apply three techniques: a noise regularizer to enhance robustness against corrupted preferences, pessimistic MLE to handle reward over-optimization, and a length penalty to control the verbosity issue simultaneously.
The authors first derive the offline RLHF-COV objective by incorporating all techniques into the RLHF objective. Then, they derive the offline DPO-COV objective by solving the offline RLHF-COV objective analytically, as proved in Proposition 1.
Theoretically, the authors prove that RLHF-COV and DPO-COV are equivalent (Proposition 2).
Moreover, with a bounded reward and a sufficiently large dataset, they show that the performance gap between the solution of offline DPO-COV and the solution of the desirable objective (20) is bounded with high probability.
Additionally, the authors provide the objectives for online RLHF-COV and online DPO-COV.
Finally, the authors demonstrate that offline DPO-COV achieves promising performance on the Argilla-DPO-Mix-7K dataset.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Need more explanation.
Theoretical Claims: Yes
Experimental Designs Or Analyses: The experiments should be revised.
First, to show the LC-win rate, Argilla-DPO-Mix-7K is used, while performance is compared using reasoning tasks such as GSM8K and ARC.
Why are different tasks required?
In addition, in the experiments section, readers may expect that DPO-COV is empirically robust against corrupted preferences and reward overoptimization.
However, why does the performance on reasoning tasks reveal such desirable properties of DPO-COV?
Finally, among the proposed methods, only the performance of offline DPO-COV is reported.
Supplementary Material: Appendix A.
Relation To Broader Scientific Literature: Handling real-world preference datasets that include corrupted preferences.
Essential References Not Discussed: -
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: The equations have poor readability.
For example, in Equation (12), (18), (23), and (24), what is the role of $\lbrace$ and $\rbrace$?
In addition, the line breaks in the equations are very hard to follow.
Questions For Authors: **[Question about Analysis]**
I am confused by the gap between the theory and my intuition.
I do not see any assumptions about noise; therefore, I may assume very large noise.
In this case, preferences become random, making it impossible to guarantee the performance of any RLHF algorithm, including DPO-COV, since the given data contains no information about preferences.
However, Theorem 1 guarantees the performance gap, which suggests that DPO-COV can obtain a reasonable solution.
Could you clarify this confusion?
**[Question about objective (20)]**
The authors suggest that objective (20) is a desirable objective.
However, I am not sure that (20) is truly a desirable goal, as there are many ways to address the verbosity issue that are not aligned with objective (20) (for example, SimPO or GRPO).
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Experimental Designs Or Analyses (1):** To show the LC-win rate, Argilla-DPO-Mix-7K is used, while performance is compared using reasoning tasks such as GSM8K and ARC. Why are different tasks required?
**A:** Argilla-DPO-Mix-7K is the preference dataset we use for training, while GSM8K and ARC are benchmarks used for evaluation.
**Experimental Designs Or Analyses (2):** In addition, in the experiments section, readers may expect that DPO-COV is empirically robust against corrupted preferences and reward overoptimization. However, why does the performance on reasoning tasks reveal such desirable properties of DPO-COV?
**A:** Math and reasoning are two important categories of tasks in LLM evaluation [1]; the superior performance of DPO-COV on these benchmarks compared to vanilla DPO shows exactly that our method is robust against corrupted preferences and reward overoptimization.
[1] Gao, L. et al. (2024a). A framework for few-shot language model evaluation. \url{https://zenodo.org/records/12608602}.
**Experimental Designs Or Analyses (3):** Among the proposed methods, only the performance of offline DPO-COV is reported.
**A:** The performance of online methods is reported in Appendix A due to limited space of the main text.
**Other Comments Or Suggestions:**
The equations have poor readability. For example, in Equation (12), (18), (23), and (24), what is the role of
$\{$ and $\}$? In addition, the line breaks in the equations are very hard to follow.
**A:** Thanks for your suggestions. We will simplify these equations respectively as follows.
$$\min _ {r\in\mathcal{R},\xi\in\mathbb{R}^N} [\max _ {\pi\in\Pi} \mathcal{L} _ {N,\lambda}(r,\xi)+\eta V _ {\beta,\omega}(\pi,r)],$$
$$\min _ {\pi\in\Pi _ {\mathcal{R}}} [\mathcal{L} _ {N,\lambda}(r^{\pi},\xi^{\pi})+\eta V _ {\beta,\omega}(\pi _ {r^{\pi}},r^{\pi})],$$
$$\pi_{t+1}\in{\arg\min} _ {\pi\in\Pi}\min _ {r\in\mathcal{R},\xi^{(t)}\in\mathbb{R}^t}[\mathcal{L} _ {t,\lambda}(r,\xi^{(t)})-\eta V _ {\beta,\omega}(\pi,r)],$$
$$\pi _ {t+1} \in {\arg\min} _ {\pi\in\Pi _ {\mathcal{R}}} [\mathcal{L} _ {t,\lambda}(r^{\pi},\xi^{\pi,(t)})-\eta V _ {\beta,\omega}(\pi_{r^{\pi}},r^{\pi})].$$
The { } in the original equations unnecessarily repeat the expressions of the functions $\mathcal{L} _ {N,\lambda}$ (similarly $\mathcal{L} _ {t,\lambda}$) and $V _ {\beta,\omega}$ above that have already been defined in Eqs. (8) and (10) respectively, so we removed such repetitions.
**Question about Analysis:** I do not see any assumptions about noise; therefore, I may assume very large noise. In this case, preferences become random, making it impossible to guarantee the performance of any RLHF algorithm, including DPO-COV, since the given data contains no information about preferences. However, Theorem 1 guarantees the performance gap, which suggests that DPO-COV can obtain a reasonable solution. Could you clarify this confusion?
**A:** The convergence rate in Theorem 1 is $\mathcal{O}(\sqrt{||\xi^*|| _ 1/N})$ where $||\xi^*|| _ 1=\sum _ {i=1}^N|\xi_i^*|$ is the norm of the true noise. Right after Theorem 1, we implicitly assume the upper bound on the true noise by saying "Hence, as long as $||\xi^*|| _ 1\le\mathcal{O}[\log(N)]$ (much weaker than Assumption 4.2 of (Bukharin et al., 2024) that there exist constants $c_0,c _ {\infty}>0$ such that $\xi^*$ has at most $c_0$ nonzero entries and they range in $[-c_{\infty},c_{\infty}]$), the generalization error rate (22) has the order of $\mathcal{O}[\log(N)/\sqrt{N}]$."
**Question about objective (20):** The authors suggest that objective (20) is a desirable objective. However, I am not sure that (20) is truly a desirable goal, as there are many ways to address the verbosity issue that are not aligned with objective (20) (For example, SimPO or GRPO, etc.)
**A:** I agree that there are many methods to address the verbosity issue, and there can also be many generalization measures (including our Eq. (20)) that account for verbosity. All these methods and measures share the final goal of striking a trade-off between expected reward and response length. Therefore, as long as the length penalty coefficient controlling this trade-off is appropriate for a specific task, Eq. (20) is a desirable goal.
---
Rebuttal Comment 1.1:
Comment: First of all, I have an additional question:
- There are many RLHF algorithms for reasoning tasks, such as GRPO, RLOO, and REINFORCE++. Could you compare the proposed algorithm (possibly RLHF-COV) with these methods?
In addition, only one dataset and base model are used in the experiments - more datasets and base models should be included, as other reviewers have pointed out.
In addition, I'd like to clarify the questions:
1. Argilla-DPO-Mix7K contains test dataset. However, the authors use different datasets to evaluate it. Why?
2. I agree that math and reasoning tasks are very important tasks. However, superior performance on these tasks does not directly indicate robustness against corruption or reward overoptimization. To support this claim, you should demonstrate that the dataset is indeed corrupted, or that the model trained by DPO is overoptimized, whereas the model trained by DPO-COV is not.
3. I mean, why are there no results for RLHF-COV?
---
Reply to Comment 1.1.1:
Comment: **Additional question:** There are many RLHF algorithms for reasoning tasks, such as GRPO, RLOO, and REINFORCE++. Could you compare the proposed algorithm (possibly RLHF-COV) with these methods? In addition, only one dataset and base model are used in the experiments - more datasets and base models should be included, as other reviewers have pointed out.
**A:** We are conducting these new experiments as you suggested.
**Question 1:** Argilla-DPO-Mix7K contains test dataset. However, the authors use different datasets to evaluate it. Why?
**A:** In the updated Table 1: https://docs.google.com/document/d/1c_A6F5_VWR0VGLLUbKCaTko9WrvGsx1g39ovYQJnqUk/edit?tab=t.0, we have compared the negative log likelihood loss of the chosen responses among the offline DPO-type algorithms, over a hold-out test set of the Argilla-DPO-Mix7K dataset, which shows that our algorithm achieves the lowest test loss.
**Question 2:** I agree that math and reasoning tasks are very important tasks. However, superior performance on these tasks does not directly indicate robustness against corruption or reward overoptimization. To support this claim, you should demonstrate that the dataset is indeed corrupted, or that the model trained by DPO is overoptimized, whereas the model trained by DPO-COV is not.
**A:** We have randomly corrupted 25\% labels of the Argilla data. The results on this corrupted Argilla data are shown in Table 2 of https://docs.google.com/document/d/1c_A6F5_VWR0VGLLUbKCaTko9WrvGsx1g39ovYQJnqUk/edit?tab=t.0, which indicates that both our DPO-COV and the robust DPO are more robust to the corruption than the other non-robust DPO variants. We are working on the new experiments that compare the overoptimization level.
**Question 3:** Why are there no results for RLHF-COV?
**A:** RLHF-type algorithms such as RLHF and RLHF-COV are theoretically equivalent to their DPO counterparts (e.g. Our Propositions 2 and 3), but require additional training of large reward models. Therefore, like many other works that propose a new DPO-type algorithm, our experiments focus on DPO-type algorithms, while RLHF-COV is used as an intermediate step to derive our DPO-COV algorithms. | null | null | null | null | null | null |
IT$^3$: Idempotent Test-Time Training | Accept (poster) | Summary: This paper introduces IT3, a test-time training method that leverages idempotence to adapt model weights on-the-fly without requiring domain-specific auxiliary tasks. By enforcing that repeated applications of the model yield the same output, IT3 effectively projects out-of-distribution inputs onto the training data manifold, achieving improved robustness across diverse tasks and domains.
Claims And Evidence: Most claims are supported by extensive experimental evidence, but the theoretical justification of idempotence as a projection mechanism of the test-time training process are less substantiated.
Methods And Evaluation Criteria: While the proposed methods align conceptually with test-time training for handling distribution shifts, the experiments primarily involve smaller datasets and omit standard OOD benchmarks like WILDS, leaving broader applicability and real-world relevance underexamined.
Wilds: A benchmark of in-the-wild distribution shifts. ICML, 2021.
Theoretical Claims: The paper’s theoretical justification relies largely on intuitive arguments relating idempotence to test-time adaptation, rather than providing formal proofs. While the extended discussion references prior work (e.g., IGN), it does not include rigorous derivations confirming that test-time adaptation preserves idempotence on OOD samples, leaving this point insufficiently substantiated.
Experimental Designs Or Analyses: I examined the experimental design, and a key concern is the limited set of baselines: while the paper occasionally compares IT³ to methods like ActMAD, it does not consistently benchmark against TTT++ across all tasks with matching batch sizes and architectures.
TTT++: When Does Self-Supervised Test-Time Training Fail or Thrive?, NeurIPS 2021
Supplementary Material: Yes, I reviewed all of the supplementary.
Relation To Broader Scientific Literature: The paper extends the emerging line of work on test-time adaptation (e.g., TTT, ActMAD) by introducing idempotence as a unifying principle that obviates the need for domain-specific auxiliary tasks.
Essential References Not Discussed: No essential references appear to be missing beyond those already mentioned.
Other Strengths And Weaknesses: Strengths:
1. The approach leverages a general idempotence principle rather than domain-specific tasks, making it broadly applicable.
2. It consistently demonstrates improved performance on diverse tasks (e.g., tabular data, image classification, aerodynamics).
3. The method is relatively straightforward to plug into existing architectures and does not require extensive auxiliary supervision.
Weaknesses:
1. If the model’s initial prediction is incorrect, even with the frozen reference model, the iterative optimization can magnify the error signal.
2. The tasks used in experiments are relatively small-scale or synthetic, leaving questions about real-world robustness.
Other Comments Or Suggestions: 1. Provide more detail on using zero as a neutral signal in regression tasks, where zero may also be a valid label.
2. Briefly discuss tuning hyperparameters (e.g., optimization steps, learning rate) for test-time adaptation to guide practical deployment.
Questions For Authors: 1. Could you clarify how you prevent confusion between the neutral zero and legitimate zero labels in regression tasks, and whether an alternative placeholder value might be more appropriate?
2. How does IT³ scale in terms of memory and runtime when model sizes grow, and have you tested it on larger networks (e.g., ImageNet-scale) to confirm its practicality?
3. When the initial prediction for an OOD sample is substantially off, what mechanisms (if any) prevent the model’s iterative updates from further reinforcing incorrect outputs?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s insightful comments. We address the reviewer's concerns below and will revise the paper accordingly.
### **"The experiments primarily involve smaller datasets and omit standard OOD benchmarks like WILDS, leaving broader applicability and real-world relevance underexamined." / "limited set of baselines" / "The tasks used in experiments are relatively small-scale" / "How does IT³ scale in terms of memory and runtime when model sizes grow, and have you tested it on larger networks (e.g., ImageNet-scale) to confirm its practicality?"**
Following the reviewer’s suggestion, we conducted additional experiments on **ImageNet-C** and also compared to TTA baselines: **TENT, ETA and MEMO** across various batch sizes. The results are **[PRESENTED IN THIS PLOT](https://imgur.com/a/imagenet-corruptions-robustness-bGaru0u)**, which will be included in the final version of the paper. For each method and severity level, we present bars composed of 15 accuracy values, one for each corruption type. Our method consistently outperformed, with a 3–5% improvement. While WILDS is clearly relevant, prior TTT works did not evaluate on it and we missed it. In the limited rebuttal time we had, we prioritized ImageNet-C and additional baseline experiments. We will test on standard OOD benchmarks when revising the paper.
Runtime analysis can be found in Appendix A. When revising, we will add analysis of memory consumption to all cases as well.
One case, ImageNet-C with batch size 128, memory in GB:
TENT: 4.8
ETA: 4.9
MEMO: 13.5 (3 augmentations)
ActMAD: 7.2
IT$^3$: 7.4
Vanilla Model: 4.5
### **"... does not include rigorous derivations confirming that test-time adaptation preserves idempotence on OOD samples."**
Thank you for identifying this important issue. We acknowledge that the paper lacks a clear logical flow connecting idempotence to OOD adaptation. We provide here **[A SKETCHED LOGICAL CHAIN](https://imgur.com/XwHPhbc)** explaining exactly this. While not a formal proof, it is a rigorous description of this relation. This will be combined into the introduction in the paper. A detailed extended version will be added to the paper as appendix C. A very brief summary is below:
1. When using ZigZag approach to training, the discrepancy between recursive calls of the model, $|| y_1 - y_0 ||$, is highly correlated to how far out-of-distribution (OOD) an input is.
2. Thus, we can use $|| y_1 - y_0 ||$ as a loss, and reduce 'OODness'.
3. By doing this, we make the network more idempotent: $L_{\mathrm{IT^3}} = || y_1 - y_0 || = || f(x, f(x,0)) - f(x,0) ||.$
4. But properly minimizing $L_{\mathrm{IT^3}}$ is non trivial, so we employ the approach from IGN (Shocher et al.)
5. This can be understood as a projection onto a distribution. (Appendix B).
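For concreteness, the loss in step 3 and its use at test time can be sketched as follows. The toy model, its dimensions, the finite-difference optimizer, and the choice of which pass goes through the frozen copy are illustrative assumptions rather than our exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_y = 3, 2
W = rng.normal(size=(d_y, d_x + d_y))   # trainable weights
W_frozen = W.copy()                     # frozen reference copy (IGN-style trick)
x = rng.normal(size=d_x)                # one (possibly OOD) test input

def f(weights, x, y):
    # toy model f(x, y): consumes the input and a label estimate jointly
    return np.tanh(weights @ np.concatenate([x, y]))

def it3_loss(weights):
    y0 = f(weights, x, np.zeros(d_y))     # first pass with the neutral "zero" signal
    y1 = f(W_frozen, x, y0)               # second pass through the frozen copy
    return float(np.sum((y1 - y0) ** 2))  # || y1 - y0 ||^2

loss_start = it3_loss(W)

# a few finite-difference gradient steps on this single test instance
lr, eps = 1e-3, 1e-6
for _ in range(20):
    base = it3_loss(W)
    grad = np.zeros_like(W)
    for idx in np.ndindex(*W.shape):
        W_p = W.copy()
        W_p[idx] += eps
        grad[idx] = (it3_loss(W_p) - base) / eps
    W = W - lr * grad

loss_end = it3_loss(W)
```

Descending this loss on a test instance makes the model more idempotent on that instance, which, per steps 1–2, reduces its effective 'OODness'; the weights are then reset before the next instance.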
To empirically support this claim, we also analyzed the relationship between corruption severity and idempotence loss on ImageNet-C. The results are shown **[IN THIS LINK](https://imgur.com/a/idempotence-vs-imagenet-performance-ym0McKq)**. Each point represents a batch of data, with the x-axis showing the computed idempotence loss and the y-axis indicating classification accuracy. As expected, higher severity levels correspond to greater idempotence loss. Notably, we observed a strong negative correlation between idempotence loss and performance (Pearson correlation: –0.94).
### **"Briefly discuss tuning hyperparameters (e.g., optimization steps, learning rate) for test-time adaptation to guide practical deployment.**"
Thank you for this important comment. We will expand appendix A to include implementation details. Code will be released upon acceptance.
### **"Could you clarify how you prevent confusion between the neutral zero and legitimate zero labels?"**
We thank the reviewer for their important comment. The special 0 notation used does not represent an actual zero. Instead, it is a unique signal with the same dimensions as the labels, specifically chosen to differentiate it from actual label values. Our approach builds upon the ZigZag method from [1], where the choice of this signal is extensively discussed and justified. We understand that this notation may appear confusing, so we will revise the paper to include a clearer explanation to avoid any ambiguity.
### "**If the model’s initial prediction is incorrect, even with the frozen reference model, the iterative optimization can magnify the error signal." / "When the initial prediction for an OOD sample is substantially off, what mechanisms prevent... reinforcing incorrect outputs?"**
This is a valid point. Even with the frozen network trick, it is not impossible for such event to occur. Empirically, such cases are rare in comparison to cases where the TTT session improved the prediction. The number of TTT iterations is very limited. The network is reset after each instance so no long term damage can really accumulate. | Summary: This paper proposes a test-time adaptation (TTT) method named Idempotent Test-Time Training (IT3), that enforces idempotence to the model. Here, idempotence indicates that repeating a function to end in a stationary point (or fixed point). Specifically, IT3 predicts the output with a given input and randomly shown label, where at test-time the model is trained to minimize the ‘first prediction with null label as input’ and the successive model’s prediction with the self-predicted label. The method is tested across multiple domains and architectures, showing promising results.
----
**POST REBUTTAL**
For a short summary, I will raise the score to weak acceptance, but would like to highlight a small concern that I have.
Most of my concerns are addressed, leaving only one: "Why does this proposed method improve OOD robustness?"
From a personal perspective as a reviewer (or a reader), the rebuttal did not fully clarify my concern and question. I think idempotence somewhat makes sense as a route to OOD robustness, since OOD inputs make the model struggle more to produce a consistent output. However, it is also somewhat unclear why enforcing idempotence on in-distribution samples improves OOD robustness. I think this question was raised by other reviewers as well (e.g., WBsB).
But also from the author's perspective (I have also done some research in this domain), OOD robustness is hard to prove and is sometimes an empirical science. So if the OOD robustness claim were toned down a little, it would be very helpful.
Claims And Evidence: While most of the claims are reasonable, I believe the key idea and engineering technique should be clarified, namely the justification of generalization ability of IT3 and the use of EMA.
**Lack of justification for IT3’s generalization ability to out-of-distribution (OOD) data.** \
While the paper presents IT3 as an effective method for handling OOD data scenarios, the justification for why enforcing idempotence leads to better OOD generalization is missing. The intuition behind this connection needs to be stated more explicitly, and additional ablation studies could help clarify why this approach is effective beyond empirical results.
**Unclear contribution of exponential moving average (EMA) to the performance gains.** \
The role of EMA in improving performance is not well justified. While it is used to stabilize adaptation in the online setting, the underlying reasons for its significant impact on performance are not well explored. I think this result is quite interesting, and justifying it well would strengthen the paper's claim.
Methods And Evaluation Criteria: The evaluation criteria are correct for the proposed method. But I do believe the paper needs more baseline and experiments to fully support the paper's claim.
**Limited large-scale experiments.** \
The experiments primarily focus on CIFAR-10 (for images) and other relatively small-scale datasets. To convincingly demonstrate scalability, it would be beneficial to include results from larger datasets such as ImageNet. If it is hard to run ImageNet during the rebuttal, it would be great if the authors could run an ImageNet subset. Without this, it remains unclear how well IT3 scales to more complex real-world settings.
**Missing baselines.** \
The authors compare IT3 with some TTT methods but lack comparison with augmentation-based baselines. While it is true that augmentation-based baselines require domain-specific knowledge, I believe most real-world applications actually know which augmentations to use at train/test time. In this regard, I believe it would be great to compare with baselines such as "MEMO: Test Time Robustness via Adaptation and Augmentation" [1]. Additionally, batch normalization-based approaches such as "Tent: Fully Test-time Adaptation by Entropy Minimization" [2] should be included, especially since batch normalization layers can be updated without requiring domain knowledge. BN-based methods can be used without architecture modification, whereas the proposed method requires it. A comparison with the "single-point BN" used in MEMO would also be relevant.
**Computational overhead and fairness in comparisons.** \
IT3 requires maintaining two networks (original and EMA), which increases memory and computational requirements. This could lead to unfair comparisons with single-network baselines that do not require additional storage for EMA parameters. A fairer comparison should analyze compute overhead and consider normalized performance metrics that account for increased memory usage (or parameter count).
Reference\
[1] MEMO: Test Time Robustness via Adaptation and Augmentation, NeurIPS 2022\
[2] Tent: Fully Test-time Adaptation by Entropy Minimization, ICLR 2021
Theoretical Claims: There is no theoretical claim in the paper.
Experimental Designs Or Analyses: I have checked the soundness and validity of experimental design and analysis, which seems correct. I am not claiming that the experiment is sufficient rather claiming that the presented experiment seems correct (see other parts to find the weakness).
Supplementary Material: Yes, I have read the appendix part.
Relation To Broader Scientific Literature: I think the main contribution of this paper is introducing a new idea (i.e., Idempotence) to test-time training which is quite interesting. If the author could well connect the reason why Idempotence helps generalization to OOD dataset, it would be very helpful.
Essential References Not Discussed: The paper presented a good related work section.
Other Strengths And Weaknesses: For strength, I think the overall paper is well written and clearly discusses the difference between test-time adaptation and test-time training. Also, while I still do think the domain agnostic needs comparison with domain specific methods (e.g., augmentation-based), the paper did a great job to consider multiple domains (e.g., Tabular) that lack domain specific knowledge.
Other weaknesses are presented in other sections. I think the main weakness is the justification of idempotence for OOD generalization. I kindly ask the authors to address this issue during the rebuttal.
Other Comments Or Suggestions: **Impact of label dimension (|y|) on performance should be studied.** \
Since the IT3 framework explicitly requires designing a network where y is part of the input, it is important to analyze how performance varies as the number of classes increases. The current experiments mainly focus on small-class problems like CIFAR-10 (in image), but it is unclear whether the method remains effective when dealing with datasets with higher label dimensionality. I think showing more high-dimensional (label) domain in the image domain will be great, as most readers and reviewers are more familiar with image baselines.
Questions For Authors: All questions are asked in other sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their detailed and constructive feedback. We address the reviewer's concerns below and will revise the paper accordingly.
### **"Lack of justification for IT3’s generalization ability to out-of-distribution (OOD) data. While the paper presents IT3 as an effective method for handling OOD data scenarios, the justification for why enforcing idempotence leads to better OOD generalization. The intuition behind this connection needs to be more explicitly stated, and additional ablation studies could help clarify why this approach is effective beyond empirical results."**
Thank you for identifying this important issue. We acknowledge that the paper lacks a clear logical flow connecting idempotence to OOD adaptation. We provide here **[A SKETCHED LOGICAL CHAIN](https://imgur.com/a/imagenet-corruptions-robustness-bGaru0u)** explaining exactly this. This will be combined into the introduction in the paper. A detailed extended version will be added to the paper as appendix C. A very brief summary is below:
1. When using ZigZag approach to training, the discrepancy between recursive calls of the model, $|| y_1 - y_0 ||$, is highly correlated to how far out-of-distribution (OOD) an input is.
2. Thus, we can use $|| y_1 - y_0 ||$ as a loss, and reduce 'OODness'.
3. By doing this, we make the network more idempotent: $L_{\mathrm{IT^3}} = || y_1 - y_0 || = || f(x, f(x,0)) - f(x,0) ||.$
4. But properly minimizing $L_{\mathrm{IT^3}}$ is non trivial, so we employ the approach from IGN (Shocher et al.)
5. This can be understood as a projection onto a distribution. (Appendix B).
To empirically support this claim, we also analyzed the relationship between corruption severity and idempotence loss on ImageNet-C. The results are shown **[IN THIS LINK](https://imgur.com/a/idempotence-vs-imagenet-performance-ym0McKq)**. Each point represents a batch of data, with the x-axis showing the computed idempotence loss and the y-axis indicating classification accuracy. As expected, higher severity levels correspond to greater idempotence loss. Notably, we observed a strong negative correlation between idempotence loss and performance (Pearson correlation: –0.94).
### **"Unclear contribution of exponential moving average (EMA) to the performance gains."**
The online version, as described in Sec. 3.3, differs from the base version because they target different scenarios. In the base version, the assumption is that each input is a separate test and carries no correlation with or information about other inputs. Thus the network weights are reset back to the state they were in at the end of the pre-training phase. In contrast, the online version is intended for a continual-learning data stream with correlation and somewhat smooth transitions between inputs. In this case, instead of resetting after each input, we keep the weights updated from the previous inputs.
Since the data keeps shifting in this scenario, over time, $f_{\theta}$ may diverge far from $F$, making it irrelevant. Instead, we need an anchor that is influenced by a reasonable amount of data, yet evolves over time. Our solution is to replace $F$ with an Exponential Moving Average (EMA) of the model. This means $f_{\text{EMA}}$ is a smoothed version of $f_{\theta}$ over time.
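The EMA anchor described above can be written in a couple of lines. This is a minimal sketch; the smoothing factor `alpha` is a hypothetical choice, not a value taken from the paper:

```python
def ema_update(theta_ema, theta, alpha=0.99):
    """Exponential moving average of model parameters.

    theta_ema plays the role of the slowly evolving anchor replacing F;
    `alpha` is a hypothetical smoothing factor. Illustrative sketch only.
    """
    return [alpha * e + (1.0 - alpha) * t for e, t in zip(theta_ema, theta)]

# The anchor drifts slowly toward the current weights over time.
anchor = [0.0]
for _ in range(3):
    anchor = ema_update(anchor, [1.0], alpha=0.5)
print(anchor)  # [0.875]
```

Because the anchor aggregates many recent inputs, it stays relevant under a shifting stream while still damping abrupt weight changes.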
Note that the online-IT$^3$ results, depicted in Table 1, achieve higher scores than the base model. This is due to the inherent difference between the base and the online scenarios: a correlated data stream allows aggregating information during test time, and the model is therefore expected to exploit it and perform better.
### **"Limited large-scale experiments... it would be beneficial to include results from larger datasets such as ImageNet." / "I think showing more high-dimensional (label) domain in the image domain will be great." / "Missing baselines... lacks comparison with augmentation-based baselines."**
Following the reviewer’s suggestion, we conducted additional experiments on **ImageNet-C** and compared also to TTA baselines: **TENT, ETA and MEMO** across various batch sizes. The results are **[PRESENTED IN THIS PLOT](https://imgur.com/a/imagenet-corruptions-eI7JNfr)**, which will be included in the final version of the paper. For each method and severity level, we present bars composed of 15 accuracy values—one for each corruption type. Our method consistently outperformed with a 3–5% improvement.
### **"Computational overhead and fairness in comparisons... A fairer comparison should analyze compute overhead and consider normalized performance metrics that account for increased memory usage (or parameter count)."**
Runtime analysis can be found in Appendix A. We will add analysis of memory consumption to all cases as well.
One case, for batch size 128 on ImageNet-C, peak memory reserved via `torch.cuda.max_memory_reserved()` in GB:
- TENT: 4.8
- ETA: 4.9
- MEMO: 13.5 (3 augmentations)
- ActMAD: 7.2
- IT$^3$: 7.4
- Vanilla model: 4.5
Claims And Evidence: The claim "enables on-the-fly adaptation to distribution shifts using only the current test instance, without any auxiliary task design." needs to be further clarified. Please refer to my Questions part.
Methods And Evaluation Criteria: Yes, The designs make sense.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: 2. For classification results, more comparisons with advanced fully test-time adaptation (TTA) methods, such as TENT, EATA, and DEYO, as well as evaluations on larger datasets like ImageNet-C, would strengthen the justification for the proposed method’s superiority.
3. For segmentation results, could the authors include additional evaluations on commonly used benchmarks such as KITTI-C and nuScenes? This would enable a fairer comparison, as the dataset used in this paper has not been widely adopted in previous works.
Supplementary Material: Yes
Relation To Broader Scientific Literature: The problem studied is a fundamental challenge in machine learning with great potential for practical applications.
Essential References Not Discussed: NO
Other Strengths And Weaknesses: ++Pros
The idea of leveraging idempotent learning for test-time adaptation (TTA) is interesting. This learning pipeline is not restricted to specific model architectures (e.g., MLP, CNN, Transformer) or specific tasks (e.g., classification, regression, segmentation).
The overall method is technically sound, and simple yet effective.
Extensive experiments on classification, tabular data regression, age prediction, and road segmentation demonstrate the promise of the proposed approach.
The paper is well-written and easy to follow.
Other Comments Or Suggestions: 5. The error bars in Figures 3, 4, and 8 are difficult to recognize and distinguish, particularly for color vision-deficient readers. Additionally, could the authors provide more qualitative results instead of bar charts? The absolute differences in bar heights are not intuitive to interpret.
Questions For Authors: 1 The authors claim that their method does not require designing additional auxiliary tasks or using extra data, making the adaptation on-the-fly. However, from my perspective, enforcing idempotency during training can also be considered an auxiliary task, and it also requires access to training data for supervised learning. Could the authors clarify this further?
4 Could the authors provide a computational complexity analysis comparing their method with existing baselines?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their thoughtful and encouraging feedback. We address the reviewer's concerns below and will revise the paper accordingly.
### **"The authors claim that their method does not require designing additional auxiliary tasks or using extra data, making the adaptation on-the-fly. However, from my perspective, enforcing idempotency during training can also be considered an auxiliary task,...**
Indeed, enforcing idempotence can be seen as an auxiliary task. However, it is not a domain-specific one. This is in contrast to approaches that use auxiliary tasks specific to the data type. Take for instance Sun et al., or Gandelsman et al.: their visual auxiliary tasks cannot be used on other types of data. Some others are architecture-dependent, e.g., ActMAD (Mirza et al., 2023). Contrary to existing work, the IT$^3$ mechanism can be used out of the box for any task (we demonstrate image classification, aerodynamics prediction, aerial segmentation and tabular data) and any architecture (MLPs, CNNs, GNNs).
### **...and it also requires access to training data for supervised learning. Could the authors clarify this further?"**
During the pre-training phase, IT$^3$ uses the training data like any other supervised-learning method. At test time, the challenge is defined so that there is absolutely no access to the training data. At any point, the only thing given is the current test input.
### **"For classification results, more comparisons with advanced fully test-time adaptation (TTA) methods, such as TENT, ETA, and DEYO, as well as evaluations on larger datasets like ImageNet-C, would strengthen the justification for the proposed method’s superiority."**
Following the reviewer’s suggestion, we conducted additional large-scale experiments on **ImageNet-C**, comparing our method against popular TTA baselines including **TENT, ETA, MEMO and ActMAD**. The results are **[PRESENTED IN THIS PLOT](https://imgur.com/a/imagenet-corruptions-robustness-bGaru0u)**, which will be included in the final version of the paper.
As shown in the plot, our method consistently outperforms the baselines across all severity levels and batch sizes, achieving a 3–5% improvement in accuracy. These results further highlight the effectiveness and scalability of our approach.
### **"For segmentation results, could the authors include additional evaluations on commonly used benchmarks such as KITTI-C and nuScenes? This would enable a fairer comparison, as the dataset used in this paper has not been widely adopted in previous works.**"
Thank you for pointing out these benchmarks. They are relevant. We missed them because prior TTT works don't use them. We intend to remedy this when revising but, unfortunately, training on them would take more time than we have for this rebuttal. Thus, to provide additional results during the rebuttal period, we prioritized experiments on ImageNet-C, as discussed in our response to the previous comment.
### **"The error bars in Figures 3, 4, and 8 are difficult to recognize and distinguish, particularly for color vision-deficient readers."**
In the revised version, we will improve the clarity and accessibility of the error bars, following the approach used in **[PLOT](https://imgur.com/a/imagenet-corruptions-robustness-bGaru0u)**. Specifically, we made the error bars bolder and more visible, used brighter, more distinguishable colors, and added explicit labels to differentiate the bars. These improvements will be applied to all relevant figures in the paper, along with better-written captions to clearly explain the contents of each plot and guide interpretation—ensuring improved readability, including for color vision-deficient readers.
### **"Additionally, could the authors provide more qualitative results instead of bar charts? The absolute differences in bar heights are not intuitive to interpret."**
When revising, we will add an Appendix D with more qualitative segmentation results for the aerial photography road segmentation challenge. You can view these qualitative results by **[FOLLOWING THIS ANONYMOUS LINK](https://imgur.com/a/JCOWYgq)**.
In the figure, we display the segmentation predictions of TTT, ActMAD, and IT$^3$ on OOD data. Quality scores are added on top of each prediction because they offer an objective measure of segmentation performance, helping to interpret the results more intuitively.
### **"Could the authors provide a computational complexity analysis comparing their method with existing baselines?"**
Runtime analysis can be found in Appendix A. When revising, we will add analysis of memory consumption to all cases as well.
One case, for batch size 128 on ImageNet-C, peak memory reserved via `torch.cuda.max_memory_reserved()` in GB:
- TENT: 4.8
- ETA: 4.9
- MEMO: 13.5 (3 augmentations)
- ActMAD: 7.2
- IT$^3$: 7.4
- Vanilla model: 4.5
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. My main concerns have been addressed, and I would like to keep my original score. | Summary: The authors:
1. Make the claim that enforcing idempotence is beneficial for test-time training tasks.
2. Design a paradigm that brings an auxiliary signal representing ground truth, and forces the network to learn idempotence through minimizing $\Vert f_{\theta}(x, y) - y \Vert + \Vert f_{\theta}(x, 0) - y \Vert$.
3. Conduct experiments on various datasets and show promising results.
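The pre-training objective summarized above can be sketched as follows; `f` is a hypothetical stand-in for the network $f_\theta(x, y)$, used for illustration only:

```python
import numpy as np

def zigzag_loss(f, x, y):
    """Pre-training objective  ||f(x, y) - y|| + ||f(x, 0) - y||.

    `f` is a hypothetical stand-in for the network f_theta(x, y);
    illustrative sketch only.
    """
    term_cond = np.linalg.norm(f(x, y) - y)                  # y-conditioned pass
    term_zero = np.linalg.norm(f(x, np.zeros_like(y)) - y)   # zero-placeholder pass
    return term_cond + term_zero

# A model that predicts y exactly while ignoring its second argument has zero loss.
f_oracle = lambda x, y_in: x ** 2
print(zigzag_loss(f_oracle, np.ones(3), np.ones(3)))  # 0.0
```

Minimizing both terms jointly is what makes the zero entry act as a learned "don't know" placeholder at test time.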
Claims And Evidence: All of the claims are generally adequate.
Methods And Evaluation Criteria: The evaluation benchmark is comprehensive and provides convincing evidence to show the effectiveness of proposed method.
Theoretical Claims: I checked the derivation and discussion of idempotence in main paper and appendix, and they are adequate.
Experimental Designs Or Analyses: The experiments cover various aspects of tasks, including image recognition and tabular prediction, and are overall convincing and promising.
One issue, however, is line 235 in Experiments section: "we include the original TTT method and a newer more versatile approach". It’s unclear which auxiliary task "the original TTT" refers to, and whether other auxiliary tasks were evaluated. I hope the authors can clarify this point.
Supplementary Material: See section "Theoretical Claims".
Relation To Broader Scientific Literature: This paper discusses and brings a new paradigm for test-time training, which might be instructive for future work and paradigm design in this field.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The idea of bringing idempotence to test-time training is innovative.
2. The experiments are comprehensive.
Weaknesses:
1. The paper's presentation lacks clarity. Specifically, one of its key claims is "if such a network is trained so that f(x, 0) = f(x, y) = y, then at test time the distance ||f(x, f(x, 0))−f(x, 0)|| correlates strongly with the prediction error". However, the authors only skim through it, which disrupts the logical flow. Although effort is made to explain how to maintain idempotence during the pretraining and test-time training phases, the authors should provide further explanation and intuition on why enforcing idempotence is beneficial for enhancing performance.
2. Similarly to point (1), I'm interested to see ablation studies examining the impact of enforcing idempotence within the TTT framework.
Other Comments Or Suggestions: Section 2.1: The argument "TTT operates per-instance, with no assumption that future test data will be similar. So previous work have treated TTA and TTT as distinct paradigms" seems weak, as some works, e.g. [1] and [2], do consider TTT under an online setting.
[1] Gandelsman, Y., Sun, Y., Chen, X., and Efros, A. Test-time training with masked autoencoders.
[2] Renhao Wang, Yu Sun, Yossi Gandelsman, Xinlei Chen, Alexei A Efros, and Xiaolong Wang. Test-time training on video streams.
Questions For Authors: The authors should reorganize the presentation, as detailed in my "Other Strengths And Weaknesses" section. For the remaining points, the strengths and weaknesses are clear and straightforward in my view, so I probably won't change my score.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive feedback. Below we address all the comments, and will revise the camera-ready version accordingly.
### **"The paper's presentation lacks clarity. Specifically, one of its key claim is "if such a network is trained so that f(x, 0) = f(x, y) = y, then at test time the distance ||f(x, f(x, 0))−f(x, 0)|| correlates strongly with the prediction error". However, the authors only skim through it, which disrupts the logical flow."**
Thank you for identifying this important issue. We acknowledge that the paper lacks a clear logical flow connecting idempotence to OOD adaptation. We provide here **[A SKETCHED LOGICAL CHAIN](https://imgur.com/XwHPhbc)** explaining exactly this. The introduction will be rewritten accordingly and a detailed extended version will be added to the paper as appendix C. In short:
1. When using ZigZag approach to training, the discrepancy between recursive calls of the model, $|| y_1 - y_0 ||$, is highly correlated to how far out-of-distribution (OOD) an input is.
2. Thus, we can use $|| y_1 - y_0 ||$ as a loss, and reduce 'OODness'.
3. By doing this, we make the network more idempotent: $L_{\mathrm{IT^3}} = || y_1 - y_0 || = || f(x, f(x,0)) - f(x,0) ||.$
4. But properly minimizing $L_{\mathrm{IT^3}}$ is non-trivial, so we employ the approach from IGN (Shocher et al.).
5. This can be understood as a projection onto a distribution. (Appendix B).
### **"I'm interested to see ablation studies examining the impact of enforcing idempotence within the TTT framework."**
We provide empirical evidence analyzing the impact of enforcing idempotence in Fig. 2 of the main paper. It shows histograms of the distance from idempotence across four scenarios. For the original training and validation data, the model demonstrates very high idempotence (with the validation set slightly lower than the training set). For Out-of-Distribution (OOD) data, the model initially shows near-random idempotence. However, after applying our TTT optimization, the distribution of OOD data is significantly "shifted" from this random state towards that of the training and validation sets. This supports our claim that optimizing for idempotence indeed makes the model behave on OOD data similarly to how it behaves on in-distribution data.
To further investigate this, we conducted additional experiments on ImageNet-C, measuring idempotence loss across various corruption types and severity levels. **[(SEE ANALYSIS FIGURE HERE)](https://imgur.com/a/idempotence-vs-imagenet-performance-ym0McKq)**. Each point in the plot represents a batch of data; the x-axis shows the computed idempotence loss for that batch, while the y-axis indicates its classification accuracy. The data covers 15 corruption types and 5 severity levels, which are visualized using the colorbar. As expected, higher severity levels of corruption correspond to larger idempotence loss, further validating the effectiveness of enforcing idempotence within the TTT framework and highlighting its strong negative correlation with model performance (Pearson correlation: –0.94).
### **"Section 2.1: The argue "TTT operates per-instance, with no assumption that future test data will be similar. So previous work have treated TTA and TTT as distinct paradigms" seems weak. As some works, e.g. [1] and [2] do consider TTT under online setting."**
We conducted a new large-scale experiment on ImageNet-C that also includes comparisons to TTA methods TENT and ETA across various batch sizes. **([SEE RESULTS HERE](https://imgur.com/a/imagenet-corruptions-robustness-bGaru0u))** These will be added to the main paper in the camera-ready version.
As the reviewer points out, there is previous work [1, 2] about the online setting, which we present in sections 3.3 and 4.6. However, unlike in [1, 2], ours is intended for continual learning. Furthermore, we want to make a distinction between TTA, along with its many variants, and online-TTT: while the latter is used in a data-streaming context, the former is used on a given dataset or batch, or the training data at hand.
### **"In line 235 in the Experiments section: It's unclear which auxiliary task 'the original TTT method' refers to or whether other auxiliary tasks were evaluated."**
Our apologies for this lack of clarity. By original TTT, we meant Sun et al. 2020. The auxiliary task used in their work is orientation prediction for images, with validation on image recognition benchmarks. We also compare to ActMAD (Mirza et al., 2023), which we adapted for other types of data and tasks.
### **"The authors should reorganize the presentation, as detailed in my "Other Strengths And Weaknesses" section."**
Thank you for helping us improve the presentation of our paper. We will reorganize accordingly.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for addressing my concerns. As previously mentioned, I would like to maintain my score. | null | null | null | null | null | null |
Signed Laplacians for Constrained Graph Clustering | Accept (spotlight poster) | Summary: This paper addresses the constrained graph clustering problem, where the goal is to partition a graph into clusters while incorporating domain knowledge in the form of MUST-LINK and CANNOT-LINK constraints. The authors establish a Cheeger-type inequality that relates the solution of the constrained clustering problem to the spectral properties of the graphs $G$ and $H$. The proposed algorithm solves a generalized eigenvalue problem and demonstrates performance improvements over traditional spectral clustering methods.
Claims And Evidence: The claims are well-supported.
Methods And Evaluation Criteria: This work addresses the problem of constrained clustering, where two graphs, $G$ and $H$, are provided as input. However, the experiments section solely compares the proposed method with the original spectral clustering algorithm, which is not inherently designed for constrained clustering tasks. Although Appendix B includes a comparison with the FC algorithm, it is important to note that FC represents an outdated baseline and does not reflect state-of-the-art performance.
Theoretical Claims: I checked the proof sketch and it appears to be correct.
Experimental Designs Or Analyses: The experimental design is proper.
Supplementary Material: Appendix B
Relation To Broader Scientific Literature: The established Cheeger-type inequality for constrained clustering is a generalization of the classical one as identified in Remark 3.3
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: * The establishment of a Cheeger-type inequality provides a strong theoretical foundation, linking the constrained clustering problem to the spectral properties of the graphs.
* While the idea of adding self-loops to ensure the invertibility of the Laplacian matrix, thereby accelerating computation, cannot be claimed as entirely original, it demonstrates a notable level of creativity.
Other Comments Or Suggestions: Appendix B writes "This section will be further updated with more comparative methods as we expand our experiment"
Questions For Authors: 1. Does the "sweep-set algorithm" in Algorithm 1 refer to kmeans?
2. Does the proposed method generalize to k-way (k > 2) clustering problems?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive evaluation and valuable suggestions. Here is our response to the raised questions:
**Response to _Methods and Evaluation Criteria_:**
We agree that further comparison between our work with the state-of-the-art constrained clustering algorithms will significantly improve the value of our work. Following the suggestion, we will add more experimental results in the next version of our paper.
**Response to Q1:**
> Does the "sweep-set algorithm" in Algorithm 1 refer to $k$-means?
No. The sweep-set algorithm used in Algorithm 1 is a spectral procedure commonly used in classical Cheeger inequalities. This method involves sorting vertices according to their corresponding entries in the eigenvector solution obtained from the generalized eigenvalue problem. Then, the optimal clustering cut is identified by sequentially evaluating the constrained cut ratio for each prefix set of the sorted vertices.
It is worth mentioning that, in our preliminary experiments, we tried replacing the sweep-set step with $k$-means on the eigenvectors, and obtained almost identical results to the current approach. Hence, we decided to retain the classical sweep-set approach in our final experiments to maintain consistency with the classical Cheeger inequality applications. We will add the necessary discussion in the next version of the paper.
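The sweep-set rounding described above can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' exact implementation; `sweep_cut` is our name for the procedure:

```python
import numpy as np

def sweep_cut(A_G, A_H, x):
    """Sweep-set rounding of an eigenvector x of the generalized problem.

    Vertices are sorted by their entries in x; the non-trivial prefix set S
    minimising the constrained cut ratio cut_G(S) / cut_H(S) is returned.
    A_G and A_H are symmetric adjacency matrices. Illustrative sketch only.
    """
    order = np.argsort(x, kind="stable")
    n = len(x)
    best_ratio, best_S = np.inf, None
    ind = np.zeros(n, dtype=bool)
    for i in range(n - 1):                  # proper, non-empty prefixes only
        ind[order[i]] = True
        cut_G = A_G[ind][:, ~ind].sum()     # weight of edges leaving S in G
        cut_H = A_H[ind][:, ~ind].sum()     # weight of edges leaving S in H
        if cut_H > 0 and cut_G / cut_H < best_ratio:
            best_ratio = cut_G / cut_H
            best_S = set(np.flatnonzero(ind))
    return best_S, best_ratio
```

For example, on a 4-vertex path in $G$ with cross-pair CANNOT-LINK edges in $H$ and $x = (0, 0, 1, 1)$, the sweep returns the balanced split $\{0, 1\}$.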
**Response to Q2:**
> Does the proposed method generalize to $k$-way ($k>2$) clustering problems?
Generalizing our approach to the $k$-way clustering for $k>2$ is an important and natural direction of research, and we are currently working on this direction. Our initial work has indicated that the techniques required for the multi-way extension differ substantially from our currently presented one. Specifically, it requires us to develop a new objective function and substantially different spectral arguments to analyze the algorithm's performance. Given the amount of work needed for this generalization, we decided to make it as a separate and future work.
Thank you once more for the suggestions and questions. We're happy to answer any further questions during the discussion phase.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I have no further questions and have decided to maintain my score. | Summary: The paper provides a spectral method for approximately optimising the cut ratio for a graph $G$ and a constraint graph $H$ using the smallest non-zero eigenvalues of two particular graph Laplacians. This is based on a proof of Cheeger-type inequality similar to the min-cut approach for a single graph. Practical considerations are made to approximate this problem using a signed Laplacian which provides computational speed-up and improved results. Small experiments are run on two synthetic datasets and one temperature/spatial real-world example.
Claims And Evidence: Main claims of paper where theoretical claims (see below).
It wasn't clear to me why the approaches in the related work at the top of page 2 were unsuitable for this problem. Why would sweep-cut versions based on these inequalities not work for the constrained clustering problem? Below this, the authors mention papers with a practical perspective (lines 068-070) that are not compared against in the experimentation; these may still be useful even if currently lacking the same theoretical rigour.
Methods And Evaluation Criteria: It seemed unfair to compare CC and CC++ using graphs $G$ and $H$ only to spectral clustering techniques that only considered $G$. If the metric is to optimise ARI, then spectral methods acting jointly on $G$ and $H$ (or $G$ and $\bar{H}$) could be used, for example, using the unfolded adjacency matrix or its Laplacian. I think a fairer and wider comparison is necessary for the paper to be accepted.
Theoretical Claims: Theorem 3.2 gives an upper bound for $\Phi_G^H$ with a convincing proof and sketch proof. I need more convincing about the statement about the approximation at the bottom of page 5 especially as the paper does not outline how the negative self-loop weight should be chosen.
Experimental Designs Or Analyses: Real-world example could be more impactful given some of the image segmentation experiments explored in referenced CC papers.
The metric used for successful separation in the temperature experiment is unusual. It would be more natural to perform a statistical test to see if the temperature are significantly different to better account for more variance when one cluster has relatively fewer data points.
Supplementary Material: Read Appendix A as part of theoretical claims. Appendix B is not referenced in the paper and it would be useful to mention that it contains experiments for other values of $n$.
Relation To Broader Scientific Literature: I do not know how much work relies on solving the graph constraint clustering problem.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Interesting problem that I have not considered before and I very much enjoyed learning about the theory of spectral CC. More motivation for the real-world application would help explain why this is a relevant problem, as I felt I didn't get that feeling until the very end of the paper after reading some of the referenced work.
Other Comments Or Suggestions: "Cheer-type inequalities" should be "Cheeger-type inequalities" on page 1 by Related work.
Questions For Authors: Is it possible to create a corollary of Theorem 3.2 that expresses the upper bound in terms of the original graphs $G$ and $H$ rather the degree sequence equalised versions?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review and valuable suggestions. Here is our response to the raised questions.
**Response to _Claims And Evidence_:**
> Why would sweep-cut versions based on these inequalities not work for the constraint clustering problem?
In our setting, sweep-cut is adapted from the classical Cheeger setting to work with eigenvectors of the generalized eigenproblem $ \Delta^G x = \lambda \Delta^H x $. Although standard in spectral clustering, here the procedure ensures that cuts reflect the joint structure of both graphs. We will clarify this briefly in the next version of the paper.
**Response to _Methods And Evaluation Criteria_:**
We agree that further comparison with state-of-the-art constrained clustering methods would enrich the paper. In this version, we compared our method with classical spectral clustering (SC), flexible constrained spectral clustering (FC), and our variants using self-loops (CC++). These are meaningful baselines given our focus on spectral guarantees. However, we acknowledge the importance of broader comparisons and plan to include these in the next version of our paper.
**Response to _Theoretical Claims_:**
The self-loop method serves a dual purpose. First, adding self-loops to $G$ balances the graph by equalizing the degree sequences of $G$ and $H$ (as noted in the manuscript, lines 118–119, right column). Second, including a small self-loop (with weight $\epsilon = 0.0001$) in $H$ ensures that the Laplacian $\Delta^H$ is invertible, which is necessary for solving the generalized eigenvalue problem $\Delta^G \mathbf{x} = \lambda\Delta^H \mathbf{x}$. This perturbation $\epsilon$ provides numerical stability, and we observed that the performance remains robust for other comparably small values of $\epsilon$.
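The regularized generalized eigenproblem can be sketched as follows. This is an illustrative numpy sketch (using unnormalized Laplacians and the $\epsilon = 10^{-4}$ self-loop weight mentioned above), not the authors' exact implementation:

```python
import numpy as np

def laplacian(A):
    """Unnormalized graph Laplacian L = D - A."""
    return np.diag(A.sum(axis=1)) - A

def constrained_spectral_embedding(A_G, A_H, eps=1e-4):
    """Solve the generalized eigenproblem  L_G x = lambda (L_H + eps I) x.

    The small self-loop weight eps makes the constraint Laplacian invertible
    and numerically stabilises the problem. Illustrative sketch only.
    """
    L_G, L_H = laplacian(A_G), laplacian(A_H)
    L_H_eps = L_H + eps * np.eye(len(A_H))
    # Eigenpairs of (L_H + eps I)^{-1} L_G; eigenvalues are real and
    # non-negative since both Laplacians are PSD and L_H + eps I is PD.
    vals, vecs = np.linalg.eig(np.linalg.solve(L_H_eps, L_G))
    order = np.argsort(vals.real)
    return vals.real[order], vecs.real[:, order]
```

The eigenvector attached to the smallest non-trivial eigenvalue is the one fed to the sweep-set rounding step.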
**Response to _Experimental Designs Or Analyses_:**
We appreciate the suggestion to improve the statistical evaluation of the temperature dataset. In the current version, we evaluated the separation via differences in mean temperature between clusters. We agree that formal statistical tests (e.g., t-tests) would provide stronger evidence, and we intend to include them in a future revision. Nonetheless, even this simple metric showed that the separation achieved by our method aligned well with the underlying temperature variation.
**Response to _Other Comments Or Suggestions_:**
Thank you for catching the typo (“Cheer-type”). We will correct this in the next version of the paper.
**Response to _Questions For Authors_:**
We thank the reviewer for this insightful question. Indeed, we are currently exploring an extension of Theorem 3.2 that avoids the degree-sequence equalization and instead expresses the bound using the original graphs $G$ and $H$. To do this, we consider the more general setting of weighted graphs with vertex weights.
Specifically, for weighted graphs $(G, w^G)$ and $(H, w^H)$, we assign weights to the vertices such that $w^G(v) = w^H(v)$. While the normalized Laplacian typically assumes $w^G(v) = \deg_G(v)$, our setting leads to a weighted unnormalized Laplacian $\Delta^{(G, w)}$. This operator remains positive and self-adjoint, and satisfies that $\lambda_2(\Delta^{(G, w)}) > 0$.
In this formulation, the degree-sequence equalization step becomes unnecessary, albeit at the cost of working with a more general Laplacian operator. We believe a similar Cheeger-type inequality can be established under this setting, though doing so will require both theoretical and algorithmic adjustments. This is an interesting and nontrivial direction for future research rather than a mere corollary of the current work. | Summary: The paper considers the constrained graph clustering problem.
The input consists of two graphs, G and H, defined on the same set of nodes. The clustering challenge is to group together nodes that are connected with large weights in G and small weights in H.

The paper considers only clustering into two clusters, which requires identifying a single subset S of nodes. It defines a clustering criterion as the ratio between the cut of S in G and the cut of S in H. Clustering is achieved by finding S that minimizes this criterion.

The paper proves an upper bound on this criterion, which can be expressed in terms of eigenvalues. The proof is constructive, and suggests a method of computing S that achieves the bound. Still, the solution requires solving a generalized eigenvalue problem. The paper shows how to do that efficiently by introducing the signed Laplacian, a generalization of the standard Laplacian operator.

The paper evaluates the algorithm on artificial and real data. They show that the algorithm is practical, and compares favorably with other clustering techniques (that operate only on G without knowledge of H).
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Only superficially.
Experimental Designs Or Analyses: Yes.
Supplementary Material: No.
Relation To Broader Scientific Literature: I am not familiar with constrained clustering. But the paper discusses in detail comparison to recent results that are most likely the current state of the art.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: As someone who is not familiar with this area I like this result very much. The presentation is very clear, the proofs are deep, and as much as I can tell they are valid.

My main concern is with identifying applications of this approach. Where would the graph H come from?

Another point I am uncomfortable with is that the criterion does not reduce to normalized cut when H is selected as the complete graph. Instead, it reduces to regular cut, which is known not to be useful in data analysis.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive evaluation and insightful questions. Here is our response to the raised questions:
> My main concern is with identifying applications of this approach. Where would the graph $H$ come from?
We agree that clarifying the role and origin of the constraint graph $H$ is essential for broader applicability. In practice, $H$ encodes side information, domain knowledge, or external constraints — often available in real-world settings but not directly encoded in the similarity graph $G$. Examples include:
- _Image segmentation:_ The graph $G$ connects each pixel to its spatial neighbors (e.g., via a 4- or 8-connected grid), with edge weights reflecting feature similarity such as color or texture. The constraint graph $H$ is constructed from user-provided annotations (e.g., brush strokes indicating foreground/background): MUST-LINK edges are added between pixels marked as belonging to the same object, and CANNOT-LINK edges are introduced between pixels annotated as different regions. This enforces spatial coherence guided by user intent.
- _Social networks:_ In signed or trust networks, $G$ encodes positive relationships (e.g., friendship, following, co-authorship) where the presence of an edge implies mutual affinity. In contrast, $H$ captures negative interactions (e.g., distrust, blocking, or rivalry) by placing edges between users that should not be clustered together. This dual representation supports community detection that respects both cooperation and conflict.
- _Complement graph:_ A practical and widely applicable construction is to set $H = \overline{G}$, i.e., the complement of $G$. Here, edges in $G$ represent strong (positive) similarities — interpreted as MUST-LINK constraints — while the absence of an edge in $G$ is interpreted as a weak or negative relation, thus becoming a CANNOT-LINK constraint in $H$. This approach is especially useful when no explicit constraint data is available, and could be defined as a default setting $H$ in that case.
We will clarify this construction and its practical relevance in the introduction of the revised manuscript.
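As a minimal sketch of the complement-graph construction from the last bullet (the function name, node count, and edge list below are made up for illustration, and weights are left out for simplicity):

```python
def complement_constraints(n, edges_G):
    """Turn every pair NOT linked in G into a CANNOT-LINK edge of H = complement(G).

    Edges are treated as undirected pairs (u, v) with u < v.
    """
    present = {frozenset(e) for e in edges_G}
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if frozenset((u, v)) not in present]

# G links {0, 1} and {2, 3}; H then separates the two groups.
edges_H = complement_constraints(4, [(0, 1), (2, 3)])
print(edges_H)  # [(0, 2), (0, 3), (1, 2), (1, 3)]
```

With this default, clustering that minimizes the criterion keeps G-linked nodes together while splitting non-adjacent pairs.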
> Another point I am uncomfortable with is that the criterion does not reduce to normalized cut when $H$ is selected as the complete graph.
Thank you for raising this thoughtful point. In fact, earlier versions of our manuscript included the following remark after Theorem 3.2, which answered the question above but was dropped due to the page limit. We will add the following remark in the next version of the paper.
_Remark 1_: Theorem 3.2 can be viewed as a generalization of the classical Cheeger inequality. Specifically, if we consider the graph $H$ as the complete graph with $w_{uv} = 1$ for all edges, then it is straightforward to show that
$$
\min_{\emptyset \subset S \subset V} \frac{w_G(S, V \setminus S)}{|S| \cdot |V \setminus S|} \leq 4 \sqrt{ \lambda_2(\Delta^G)},
$$
where $\lambda_2(\Delta^G)$ is the second smallest eigenvalue of the normalized graph Laplacian of $G$. Similarly, if we consider the graph $H = (V, E', w^H)$ as the complete graph with self-loops where
$$
w^H_{uv} = \frac{\deg^G(u)\deg^G(v)}{\mathrm{vol}(G)},
$$
then
$$
\min_{\emptyset \subset S \subset V} \frac{w_G(S, V \setminus S)}{\min(\mathrm{vol}(S), \mathrm{vol}(V \setminus S))}
\leq \min_{\emptyset \subset S \subset V} \frac{\mathrm{vol}(G)\, w_G(S, V \setminus S)}{\mathrm{vol}(S) \cdot \mathrm{vol}(V \setminus S)}
\leq 4 \sqrt{ \lambda_2(\Delta^G)}.
$$
These reductions confirm that our constrained cut formulation includes both the regular (combinatorial) and normalized Cheeger cuts as special cases, depending on how the constraint graph $H$ is chosen.
We thank the reviewer once more for the suggestions and questions. We're happy to answer any further questions during the discussion phase. | Summary: This paper considers the constrained clustering problem over two graphs.
This paper establishes a Cheeger inequality for the proposed constrained clustering algorithm, a counterpart of the Cheeger inequality for standard spectral clustering over a single graph. The proposed algorithm improves on spectral clustering in a scenario that is challenging for it. The paper also experimentally demonstrates the effectiveness of the proposed algorithm.
Claims And Evidence: The claims seem sound. This paper provides a Cheeger-type inequality for the constrained clustering problem. As far as the related work indicates, no previous work provided this type of Cheeger inequality. The proposed inequality is elegant: it is well adapted from the classical Cheeger inequality, and the contrast between the proposed and classical inequalities is clear.
Some may criticize this paper for providing the bound only when the number of clusters is two. However, the higher-order Cheeger inequality has a different flavor even in the classical setting, as in (Lee et al., 2014). Thus, I expect a higher-order version of the proposed inequality would likewise differ substantially from the current discussion, so I do not consider this restriction a shortcoming of the paper; it can be a distinct line of future work.
Methods And Evaluation Criteria: I think the methods make sense; they are streamlined with the existing viewpoint of the spectral clustering community.
Theoretical Claims: Although I have not checked every single detail of the proof, the claims seem to be valid as far as I read.
Experimental Designs Or Analyses: I think the experimental design is fair.
Supplementary Material: I gave a skim to the supplementary material. Although it helped my understanding of this paper, I cannot give a review.
Relation To Broader Scientific Literature: I am curious that in which setting the proposed bound is tighter than the existing bound
$
\Phi^{G}_{H} \leq 16 \lambda_{2}(\Delta^{G}_{H})/\Phi(G)
$
by Koutis et al. (2023), since the both provide the bound for $\Phi^{G}_{H}$.
Essential References Not Discussed: This does not affect my overall evaluation, but it would be nicer if the authors could wrap up the existing work on graph constrained clustering, perhaps in the Appendix. Also, it would be nice to emphasize that the Cheeger inequality is a well-established one by citing the classical literature such as [1] and [2].
[1] N. Alon. Eigenvalues and expanders. Combinatorica, 6(2):83–96, 1986.
[2] N. Alon and V. D. Milman. $\lambda_{1}$, isoperimetric inequalities for graphs, and superconcentrators. J. Comb. Theory B, 38(1):73–88, 1985.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
--
post rebuttal comment: I am satisfied with the answers and therefore I increased my score from three to four.
Questions For Authors: It would be nice if the authors could answer the questions in my responses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive evaluation and constructive feedback. Here is our response to their questions:
**Response to _Relation To Broader Scientific Literature_:**
Our Cheeger-type inequality improves on previous related results (including that of Koutis et al. (2023)). Our main theoretical result states that
$$
\Phi_H^G \leq 4\sqrt{ \frac{\lambda_2(\Delta^G_H)}{\lambda_2(\Delta^H)}},
$$
where $\lambda_2(\Delta^H)$ is the Fiedler eigenvalue of the Laplacian of constraint graph $H$. In contrast to Koutis et al., our bound does not rely on an auxiliary demand graph $D_G$. Instead, it incorporates both graphs $G$ and $H$ directly through the generalized eigenvalue problem. Koutis et al.'s bound is based on a product of isoperimetric constants $\Phi_G$, $\Phi_H^G$, $\Phi^G_{D_G}$, where the influence of $H$ is less direct. As a result, their bound remains tied to the structure of $G$ and a third graph $D_G$, whereas ours explicitly improves with the connectivity of $H$. Even as additional constraints are added to $H$, modelled by a graph $H + e$, our bound becomes tighter due to the fact
$
\lambda_2(\Delta^H) \leq \lambda_2(\Delta^{H+e}),
$
which reflects the improved connectivity of $H$. Hence, our inequality rewards richer and more informative constraint graphs.
As a concrete example, let's consider the case in which $H$ is the complete bipartite graph $K_{n,m}$ with normalized weights. The spectrum of its normalized Laplacian is known to be $ \{ 0, 1 \text{ (with multiplicity } n + m - 2), 2 \}$, so $ \lambda_2(\Delta^H) = 1 $. In this case, our bound simplifies to $ \Phi_H^G \leq 4 \sqrt{\lambda_2(\Delta_H^G)} $, which depends directly on the spectral relationship between $G$ and $H$.
This contrasts with the bound in Koutis et al. (2023), which involves an additional graph $D_G$ and the product $\Phi_H^G \cdot \Phi_{D_G}^G$, making the analysis more involved. Our formulation, in settings like $K_{n,m}$, yields a cleaner and more interpretable bound in terms of the original input graphs.
**Response to _Essential References Not Discussed_:** We thank the reviewer for the suggestion. In the next version of the paper, we will give a more detailed discussion of existing work on graph constrained clustering, and in particular emphasise the work on the classical Cheeger inequality, including [1] and [2].
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the response. I am satisfied with these and hence I will increase my score from three to four. | null | null | null | null | null | null |
Soup-of-Experts: Pretraining Specialist Models via Parameters Averaging | Accept (poster) | Summary: The paper proposes a method for combining a set of pretrained models by learning linear coefficients that are used to merge corresponding layer weights, thereby producing domain-specific architectures. In essence, the approach learns weighting coefficients similar to a router in mixture-of-experts models that dynamically determine how to combine the expert parameters based on the input domain. Experimental results demonstrate that this strategy yields promising performance improvements on specialized tasks.
Claims And Evidence: The paper makes several key claims, and overall, many of these are backed by experimental evidence. However, a few claims could benefit from additional substantiation.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper are well-suited to the problem at hand. The central idea is to train a mixture-of-experts model on multiple datasets by optimizing a router that learns to weight the experts based on the input domain. At inference time, selected expert weights are linearly merged to quickly instantiate a specialist model tailored to a specific domain. The authors evaluate their approach on standard language modeling tasks using the next-token prediction loss.
Theoretical Claims: The proofs are mathematically detailed and build upon well-established properties of Gaussian distributions and clustering. Overall, they appear to be correct under the stated assumptions.
Experimental Designs Or Analyses: The experimental designs are well conceived and align with the goals of rapid model specialization under a size constraint. However, the authors focus solely on next-token prediction as the evaluation metric, which limits the depth of investigation into the proposed method’s broader performance on downstream tasks. Additionally, the method is not directly compared to other merging techniques. For instance, there is no experimental comparison with alternative merging approaches such as hypernetwork-based methods that dynamically select specialized weights even though the concept appears similar. This leaves an open question as to how the proposed method performs relative to other state-of-the-art merging strategies.
Supplementary Material: The paper has no supplementary material.
Relation To Broader Scientific Literature: The paper synthesizes ideas from model merging, MoE architectures, and importance sampling into a unified framework. It extends prior findings by addressing the computational and scalability challenges inherent in training and serving specialized models, thereby contributing a novel method that is both theoretically grounded and practically relevant. However, the paper could benefit from a more detailed investigation into how its approach compares to hypernetwork-based methods which also enable rapid adaptation by dynamically selecting specialized weights and a more thorough discussion of alternative merging strategies.
Essential References Not Discussed: The paper builds on ideas from model merging, MoE architectures, and importance sampling. However, some essential related works could have been discussed more explicitly to contextualize its contributions:
Several key papers that provide additional context to the proposed method were not discussed in the paper. These include:
- **[0] Merging Experts into One: Improving Computational Efficiency of Mixture of Experts:** This work explores methods to merge multiple experts into a single model to enhance computational efficiency.
- **[1] Mixture of LoRA Experts:** This paper investigates leveraging LoRA-based adaptations within a mixture-of-experts framework with learnable gate.
- **[2] Mixture of In-Context Experts Enhance LLMs’ Long Context Awareness:** This approach uses a mixture of experts to improve the long-context processing capabilities of large language models.
- **[3] REMOE: Fully Differentiable Mixture-of-Experts with ReLU Routing:** REMOE proposes a fully differentiable routing mechanism using ReLU, which addresses training stability and efficiency issues.
- **[4] Mixture-of-Experts with Expert Choice Routing:** This work introduces an alternative routing strategy where experts are chosen based on an expert-choice mechanism rather than traditional softmax gating.
- **[5] HyperTuning: Toward Adapting Large Language Models without Back-propagation:** HyperTuning explores rapid adaptation methods for large language models without relying on full back-propagation.
- **[6] HINT: Hypernetwork Instruction Tuning for Efficient Zero- & Few-Shot Generalisation:** HINT leverages hypernetwork-based techniques for instruction tuning, enabling efficient zero- and few-shot generalization.
Discussing these papers would enrich the context of the proposed method by comparing alternative strategies for expert merging, dynamic weight adaptation, and efficient model specialization.
Other Strengths And Weaknesses: **Strengths:**
- The paper presents an effective strategy to reduce the challenge of generating domain expert models. By learning weights for merging experts into a single specialized model, it provides a computationally efficient alternative to training separate models for each domain.
- The proposed method is practically sound—it is scalable and designed for rapid model specialization, which is supported by comprehensive experimental results and ablation studies.
**Weaknesses:**
- The experimental evaluation is narrow, as it is limited to next-token prediction metrics. This focus may not fully capture the performance improvements on a broader range of downstream tasks.
- The authors do not adequately compare their approach to existing expert merging techniques (e.g., methods detailed in [0] and [1]). They present their method as a first without discussing how it outperforms other strategies, including hypernetwork-based approaches.
- The experiments are conducted on relatively small architectures, and given the computational resources available, it would be beneficial to evaluate the approach on larger models (e.g., at least 1B-parameter models) to better assess scalability.
- It is unclear whether the number of tasks used in the expert merging experiments aligns with those used in the Soup-of-Experts experiments, leading to ambiguity about the direct comparability of the approaches.
Other Comments Or Suggestions: See above sections:
- In lines 047–051 (right column), the authors refer to “model soup” as a general term for model averaging in weight space. However, model soup is also a specific algorithm that starts with the best model, then averages with the second best if performance improves, uses that average as a new baseline for averaging with the third model, and so on—skipping any model that does not yield an improvement at a given step. It would be beneficial for the authors to clarify these differences when discussing the model soup concept.
- It would be valuable to compare the proposed method with existing expert merging approaches and evaluate its scalability on larger models.
- The captions are overly long, which makes them hard to read. It would improve clarity if detailed descriptions were moved to the discussion sections of the corresponding experiments instead of being included entirely in the captions.
Questions For Authors: 1. How does your method compare to hypernetwork-based and other mixture-of-experts merging approaches in terms of performance, generalization, and in-distribution accuracy?
2. How does the individual performance of a model instantiated via the Soup-of-Experts framework compare to that of a task-specific fine-tuned model on in-distribution data?
3. Is there a universal guideline for selecting the prior used for the dataset weights sampler, especially when the eventual application domain of the model is unknown?
4. Given that the number of dataset-specific experts in the baseline (e.g., 64) is much lower than the number of domains used in the meta-soup (4096), doesn’t this disparity affect performance? While training 4096 individual models may be computationally prohibitive, the difference in exposure could naturally lead to a higher generalization capacity. Could you provide more explanation on this issue?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer,
We thank you for your work reviewing our paper and for your comments. We are glad to read that "The experimental designs are well conceived", and we propose "a novel method that is both theoretically grounded and practically relevant".
We now try to address your concerns.
- "*the authors focus solely on next-token prediction as the evaluation metric.*"
We have now trained bigger 350M models with 32 experts, and evaluated the corresponding models on MMLU, arc_easy and arc_challenge. The reported accuracy is here (https://anonymous.4open.science/r/rebuttal_soup-E8F9/accs.pdf); we see that the gains obtained in loss also transfer to accuracy.
- "*the method is not directly compared to other merging techniques.*"
See answer to rev.No4u where we compare SoEs to model merging.
- "*The authors do not adequately compare their approach to existing expert merging techniques (e.g., methods detailed in [0] and [1])*"
We will add the following discussion to the paper.
[0] propose a variant of mixture of experts, where instead of mixing activations, they mix expert weights. However, the routing mechanism is still that of an MoE, based on input samples. On the contrary, our routing mechanism is based on input domains. For instance, one cannot use [0] to instantiate a small model, while it is one of the main features of SoEs.
[1] propose a routing mechanism to merge a bank of Lora weights. Similarly to the previous work, this cannot instantiate a frozen model of small size in a flash, while SoEs can.
- "*Several key papers that provide additional context to the proposed method were not discussed in the paper. These include*"
Thanks for these useful references, we will discuss the most relevant in the paper.
- "*The experiments are conducted on relatively small architectures, and given the computational resources available, it would be beneficial to evaluate the approach on larger models (e.g., at least 1B-parameter models) to better assess scalability.*"
As discussed above, we have now trained 350M models with 32 experts.
- "*It is unclear whether the number of tasks used in the expert merging experiments aligns with those used in the Soup-of-Experts experiments, leading to ambiguity about the direct comparability of the approaches.*"
We have compared the two approaches (see above). We want to highlight that the two methods are not used in the same way: SoE is a pretraining technique, while merging is used on fine-tuned models.
- "*model soup is also a specific algorithm [...] It would be beneficial for the authors to clarify these differences when discussing the model soup concept.*"
Indeed, we will clarify in the text that we simply mean model averaging when speaking of model soups (i.e. we only consider *uniform* soups using the Model soups' paper terminology).
- "*The captions are overly long*"
We will simplify the captions.
- "*How does your method compare to hypernetwork-based and other mixture-of-experts*"
As explained above and in table1, SoEs and MoEs serve different purposes: MoEs cannot instantiate a small model, and they are not aware of the input domain distribution. We will clarify.
- "*How does the individual performance of a model instantiated via the Soup-of-Experts framework compare to that of a task-specific fine-tuned model on in-distribution data?*"
We believe this is the experiment in fig.5, where we compare SoEs to fine-tuned models.
- "*Is there a universal guideline for selecting the prior used for the dataset weights sampler?*"
In our experiments, we used ad-hoc priors that are completely blind to the downstream tasks that are eventually going to be addressed. We still obtain very good performances compared to regular pre-training. Incorporating knowledge about the downstream distribution of tasks is an exciting research direction.
- "*Given that the number of dataset-specific experts in the baseline (e.g., 64) is much lower than the number of domains used in the meta-soup (4096), doesn’t this disparity affect performance?*"
We also tried training models with 64 domains, but the performances were worse than with 4096 domains. The routing mechanism learns to combine those domains in a meaningful way that makes sense for the experts.
- *"While training 4096 individual models may be computationally prohibitive, the difference in exposure could naturally lead to a higher generalization capacity. Could you provide more explanation on this issue?"*
We believe that these models would be over-specific, and have no general knowledge at all. We conducted that experiment with 64 individual models in fig.4, and those models are all very poor generalists. Such a method would therefore perform poorly compared to SoE.
We thank you again for your detailed review which help us improve the paper ! We hope that our comments have resolved your concerns. | Summary: This paper introduces Soup-of-Experts, a method that trains a group of expert models to construct a specialized model for a given target domain. The specialized model is obtained by linearly combining the expert models in parameter space. The architecture consists of a set of expert models, a shared model, and an MLP that generates blending weights based on input domain weights. During testing, the domain weights for each target domain are estimated either through an off-the-shelf method or a learning-based approach, leveraging training domain information for comparison. The proposed method demonstrates superior performance across various domains while maintaining a compact specialized model.
Claims And Evidence: • The explanation in Section 2.5 regarding the negligible overhead cost for training the group of experts is unclear. The authors state, “The forward pass through the experts also yields a negligible cost as long as the experts all fit in memory since it only requires adding several parameters.” However, this claim is confusing because the parameter space is duplicated n times, along with the saved feature maps. Additionally, both the forward computation and backpropagation scale linearly with the number of experts. Furthermore, it is unclear which “standard pre-training” is being referenced for comparison. The description in lines 306–309 suggests that the generic pretraining involves only a single model, making the comparison unclear.
• The paper's primary motivation is to incorporate domain information during training to learn specialized experts. This approach suggests that capturing a diverse distribution of domains at the meta-level should be beneficial. However, the experimental results indicate that selecting only two domains (s = 2) and sampling uniformly from them yields the best performance. This seems counterintuitive, as s = 2 appears overly sparse given the total of 4096 training domains. Such a small sample may not adequately capture the complex relationships among domains. Moreover, iterating over all 4096 domains would require at least 2048 iterations. Yet, the authors claim that this approach can reduce both training cost and time, which seems unintuitive and requires further clarification.
• Additionally, the model includes a shared branch, \mathcal{S}, which is common across all training domains and experts. Given the sparse domain sampling strategy, it is unclear whether this setup might lead to conflicts across different training iterations. Since \mathcal{S} is optimized toward the selected two domains at each iteration, substantial domain gaps between different domain pairs across iterations could lead to instability in training. The paper should address whether this issue affects model convergence and performance.
Methods And Evaluation Criteria: Yes
Theoretical Claims: There are no theoretical claims in this paper.
Experimental Designs Or Analyses: The experiment design and analysis are appropriate.
Supplementary Material: I read the supplementary material, which mainly focuses on hyperparameters and additional results.
Relation To Broader Scientific Literature: This paper is closely related to major learning paradigms in language models, including dense architectures and Mixture-of-Experts (MoE) structures. Additionally, the proposed method builds on the concept of model soup from previous literature, which averages models in parameter space to achieve a better tradeoff between in-distribution and out-of-distribution performance (Wortsman et al., 2022). The primary objective of this paper is to mitigate distribution shift and adapt the model more effectively to target domains.
Essential References Not Discussed: The references are thoroughly discussed.
Other Strengths And Weaknesses: This paper delivers good results on reducing the resources and computation needed when a model is expected to be specialized to a particular domain.
The proposed method is also easy to understand and does not contain complex settings/modules. It will benefit future research in the area, especially as large models become the focus, but it is also important to focus more on specialized scenarios.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer,
We thank you for your review and for helping us improve the paper. We are happy to read that "The experiment design and analysis are appropriate.", that "this paper delivers good results", and that "the proposed method is easy to understand".
We now try to address your concerns.
- *" The explanation in Section 2.5 regarding the negligible overhead cost for training the group of experts is unclear. [...] Additionally, both the forward computation and backpropagation scale linearly with the number of experts."*
This is an important remark, thanks ! We will greatly clarify this part with the following analysis: We compare training a model of size d with generic pre-training and training a Soup of Experts with n experts of size d. We let b be the mini-batch size. The training loop of models consists of A) computing gradients and B) updating the parameters with Adam.
For the generic model, the cost of A) scales roughly as $db$ [1]; we let $C$ be the proportionality constant. This step computes $\nabla\_\Theta \ell(\Theta, x)$, at a cost of $C\cdot d\cdot b$.
For the SoE, we need to compute the gradients w.r.t. the experts $E\_i$ and shared parameters $S$. Let $\Theta = S + \sum \alpha\_i E\_i$ the merged model. Then, the chain rule gives the gradients $\nabla_S L = \nabla\_\Theta \ell(\Theta, x)$ and $\nabla\_{E\_i}L = \alpha_i \nabla\_\Theta \ell(\Theta, x)$. In other words, the gradients w.r.t. the experts are simply obtained by rescaling the gradient wrt the merged parameters, which gives $n\cdot b$ floating point operations. Hence, the cost of A) is $C\cdot d\cdot b + n\cdot b$ for the SoE. Crucially, **the cost of backprop through the SoE is only midly affected by n**, as the gradients of the experts are obtained trivially from those of the merged model; the cost is **not** $C\cdot b\cdot n\cdot d$, which would be the same cost as that of training a much larger $nd$ full model.
For B), the generic pretraining needs to update the two Adam EMA buffers and then update the parameters; in total, the FLOP count is $14d$. For the SoE, we need to run Adam on all the parameters, so the cost of B) is $14 n\cdot d$.
Overall, one training iteration of generic pretraining costs $C_{gen} = Cdb + 14d$, while it costs $C_{soe} = Cdb + 15nd$ for the SoE. Discarding the cost of Adam for the generic pretraining, the relative increase is $C_{soe} / C_{gen} = 1 + \frac{15n}{Cb}$. Hence we see that it all depends on whether $n \gg Cb / 15$; the practical value of the constant $C$ depends on the architecture and other factors.
In a large batch size $b$ setting, the added cost is negligible.
We insist that in fig. 4 and in https://anonymous.4open.science/r/rebuttal_soup-E8F9/reb_soe_time.pdf we report time on the x-axis, where the additional (small) overhead is already factored in.
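The chain-rule computation in part A) above (gradients of the experts obtained by rescaling the gradient of the merged model) can be sanity-checked numerically. The following pure-Python sketch uses a made-up squared-error loss and hypothetical sizes, not the paper's actual architecture:

```python
import random

def merged(S, E, alpha):
    """Theta = S + sum_i alpha_i * E_i (flat parameter vectors)."""
    return [s + sum(a * e[j] for a, e in zip(alpha, E)) for j, s in enumerate(S)]

def loss(theta, x):
    # stand-in per-coordinate squared error, for illustration only
    return sum((t * xi - 1.0) ** 2 for t, xi in zip(theta, x))

def grad_theta(theta, x):
    # analytic gradient of the stand-in loss w.r.t. Theta
    return [2.0 * (t * xi - 1.0) * xi for t, xi in zip(theta, x)]

rng = random.Random(0)
d, n = 3, 4  # hypothetical parameter count and number of experts
S = [rng.random() for _ in range(d)]
E = [[rng.random() for _ in range(d)] for _ in range(n)]
alpha = [rng.random() for _ in range(n)]
x = [rng.random() for _ in range(d)]

g = grad_theta(merged(S, E, alpha), x)          # one backward pass on Theta
grad_S = g                                      # dL/dS   = dL/dTheta
grad_E = [[a * gj for gj in g] for a in alpha]  # dL/dE_i = alpha_i * dL/dTheta

# Check one expert coordinate against a forward finite difference.
eps = 1e-6
base = loss(merged(S, E, alpha), x)
E[1][2] += eps
fd = (loss(merged(S, E, alpha), x) - base) / eps
E[1][2] -= eps
```

The finite-difference estimate `fd` agrees with the rescaled gradient `grad_E[1][2]`, illustrating why no extra backward passes per expert are needed.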
- "*Furthermore, it is unclear which “standard pre-training” is being referenced for comparison"*
We will clarify that standard pre-training trains one single model of size $d$, which in Section 2.5 is compared to the cost of training one Soup of Experts with $n$ experts of the same size $d$.
- "*the experimental results indicate that selecting only two domains (s = 2) and sampling uniformly from them yields the best performance. This seems counterintuitive[...]*"

We want to clarify a possible misunderstanding: as explained in 2.4, we train using only $s$ active domains at a time, but these domains are sampled at random at each iteration. For instance, if we had only 3 domains and used $s=2$, sampling from $\pi$ might give these 4 samples: `[0, 0.2, 0.8], [0, 0.6, 0.4], [0.1, 0.9, 0], [0.7, 0, 0.3]`, where there are only 2 non-zero domains each time, but their indices change.
Hence, this strategy allows us to explore the space of all pairs of domains, not just one fixed pair. In practice, we use 256000 iterations, so the model has seen data coming from each domain multiple times. We hope that this clarifies the "counterintuitive" observation that the SoE accelerates training. We will add this explanation in the text to make it clearer.
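A minimal sketch of such a sparse-domain sampler (the function name and the uniform renormalized weights are ours, a stand-in for the actual prior $\pi$):

```python
import random

def sample_domain_weights(num_domains, s, rng):
    """Draw a fresh domain-weight vector with exactly s active domains.

    The active indices are resampled at every call, so over many training
    iterations every domain (and many domain pairs) gets visited.
    """
    active = rng.sample(range(num_domains), s)     # which domains are non-zero
    raw = [rng.random() + 1e-12 for _ in active]   # strictly positive weights
    total = sum(raw)
    pi = [0.0] * num_domains
    for idx, r in zip(active, raw):
        pi[idx] = r / total                        # normalize to a distribution
    return pi

rng = random.Random(0)
pi = sample_domain_weights(num_domains=8, s=2, rng=rng)
print(pi)  # two non-zero entries summing to 1; positions change each draw
```

Drawing many such vectors covers all domain pairs, which is why a small $s$ per iteration does not restrict which domains the model sees overall.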
- "*the model includes a shared branch [...]. Given the sparse domain sampling strategy, it is unclear whether this setup might lead to conflicts across different training iterations.*"

Indeed, as you mention, there is a tension between domains, and the shared branch has to accommodate all domains, just like a standard LLM has to share its capacity across different domains. However, we did not observe any particular instabilities during training; the training hyperparameters (gradient clipping, Adam parameters) are standard, and the curves in fig. 4 are very smooth.
We hope that our answers have addressed your concerns, and thank you again for your helpful review !
[1]: Kaplan, Jared, et al. "Scaling laws for neural language models." arXiv preprint arXiv:2001.08361 (2020). | Summary: This paper studies the problem of training specialized models efficiently for a new domain. This is done by design from the pretraining stage, by training a model in a style akin to mixture-of-experts. The data in a mini-batch is sampled across domains, and the expert corresponding to each domain is updated, together with the general model, which is always updated. This results in one model for each domain plus a general model. For specialization, data from the new domain is categorized into a distribution over training domains using nearest neighbors based on semantic similarity. This distribution is then used to perform a weighted merging of models via averaging to obtain the specialist model.
The paper uses a model of 110M parameters based on the GPT-2 architecture and pretrains 128 different experts on RedPajama2, with data from 4096 domains that are obtained using k-means on semantic similarity. The loss of the model is shown to be worse on the generic data than that of the regular model. However, when using data from 16 domains of the Pile data set, fine-tuning the obtained specialized model shows better loss than the generic model.
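The specialization step described above — a linear combination of expert parameters added to the shared parameters — can be sketched as follows (flat lists of floats stand in for real parameter tensors; the names are illustrative, not the paper's code):

```python
def merge_experts(shared, experts, alphas):
    """Instantiate a specialist as theta = theta_shared + sum_i alpha_i * theta_i,
    where alphas are the combination coefficients produced from the
    domain-weight distribution."""
    assert len(experts) == len(alphas)
    merged = list(shared)
    for alpha, expert in zip(alphas, experts):
        for j, w in enumerate(expert):
            merged[j] += alpha * w
    return merged
```

Because instantiation is a single weighted sum of stored weights, producing a specialist requires no further gradient steps.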
The paper is overall well-written. The task setup that is being studied is very relevant and important.
In my opinion, the major drawback of the paper is that it does not show evaluation on downstream data sets and only shows the loss. Even the paper's abstract mentions that the downstream performance is the main motivation for this setup. Additionally, perplexity would be another interesting metric to see.
Another important drawback of this approach is that in this experiment, we need to keep n=128 experts trained, which will require a lot of space. Further, in order to efficiently do the model updates, all these experts need to be stored in memory, which is prohibitive for large models.
## update after rebuttal
Thank you for your response.
The additional experiments are indeed valuable and show positive results. It would be good to learn more about them and also present multiple downstream evaluations on more than 3 data sets.
Regardless, I am willing to improve my overall assessment in light of these two new results.
Claims And Evidence: Claims are not fully supported by the experiments as the downstream fine-tuning metrics and perplexity are lacking.
Methods And Evaluation Criteria: The major weakness in my view of this paper is that it lacks evaluation on downstream fine-tuned models with tasks in the specialist domain distribution. Even though the loss looks better, further validation on fine-tuning is critical to prove the effectiveness of the approach.
Further, the models trained are very small for generative LLMs (110M parameters). I suggest running an experiment on a larger model. An alternative would be to try this on BERT/RoBERTa models. Another model trained on this setup will add additional strength and show more generality.
Theoretical Claims: The main contribution of this paper is the new architecture for the pre-trained model. This is described well and in good amount of detail.
Experimental Designs Or Analyses: The Pile data sets may not be very different from the data that is present in the RedPajama2 set, as that contains Common Crawl snapshots, and the Pile data sets are most likely present inside these snapshots (e.g. Wikipedia).
Supplementary Material: I have read the supplementary material.
Relation To Broader Scientific Literature: See summary section
Essential References Not Discussed: Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models - Li et al 2022.
- this paper explores a similar setup
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: Page 5: sentence-bert (Devlin, 2018) is the wrong citation for sentence-bert. It would be good to know which specific model version was actually used for similarity.
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer,
We thank you for your review and for your insightful comments, which will help us improve the paper. We are happy to hear that "The paper is overall well-written" and that "The task setup that is being studied is very relevant and important."
We now address your concerns.
- "*the models trained are very small for generative LLMs (110M parameters). I suggest running an experiment on a larger model.*"
We have now trained larger 350M models, using 32 experts for the Soup-of-Experts. The training curves are available here: https://anonymous.4open.science/r/rebuttal_soup-E8F9/reb_soe_time.pdf. The behavior is essentially similar to that in fig.4 in the paper. Due to lack of resources, we could not train larger models.
- *"the major drawback of the paper is that is does not show evaluation on downstream data sets and only shows the loss."*
This is a good point. We have scored the 350M models that we have trained on three datasets: MMLU, ARC-Easy and ARC-Challenge. We report the corresponding average accuracy as a function of time in this figure: https://anonymous.4open.science/r/rebuttal_soup-E8F9/accs.pdf
We observe that the SoE consistently outperforms generic pre-training. We will add these results to the paper. We believe that this result indeed makes the paper more convincing.
- *"Additionaly, perplexity would be another interesting metric to see."*
In all our experiments, we train and report with the next-token prediction loss; the perplexity can therefore be obtained by taking the exp of our curves in fig. 4, 5, 6, 7. It will not change the ordering of methods.
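For reference, the conversion the authors describe is a simple exponential of the average next-token loss (in nats), and since it is monotone it preserves the relative ordering of the curves:

```python
import math

def perplexity(avg_next_token_loss):
    """Perplexity = exp(average next-token prediction loss in nats).
    Monotone transform: method rankings are unchanged."""
    return math.exp(avg_next_token_loss)
```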
- *"Another important drawback of this approach is that in this experiment, we need to keep n=128 experts trained, which will require a lot of space. Further, in order to efficiently do the model updates, all these experts need to be stored in memory, which is prohibitive for large models."*
Indeed, we will clarify in 2.5 that the method requires storing all the experts' weights, which might be problematic for large model sizes. However, in the case where memory is not an issue, Soup-of-Experts achieves significantly better performance than standard pretraining at a fixed computational budget, as reported in Fig. 4 and https://anonymous.4open.science/r/rebuttal_soup-E8F9/reb_soe_time.pdf.
- "*Claims are not fully supported by the experiments as the downstream fine-tuning metrics and perplexity are lacking.*"
See above, this is now fully fixed.
- "*The major weakness in my view of this paper is that it lacks evaluation on downstream fine-tuned models with tasks in the specialist domain distribution. Even though the loss looks better, further validation on fine-tuning is critical to prove the effectiveness of the approach.*"
As seen above, we have validated that these gains in loss also transfer to downstream tasks.
- *"Another model trained on this setup will add additional strength and show more generality."*
As explained above, we have now trained larger 350M models. We think that this indeed improves the generality of our paper. We acknowledge that we focus solely on the decoder-only LLM architecture, which is one of the most widely used architectures nowadays.
- *"The Pile data sets may not be very different to the data that is present in the RedPajama2 set, as that contains common crawl snapshots, and the pile data sets are most likely present inside these snapshots (e.g. wikipedia)."*
Indeed, this is a judicious and important remark. We used the Pile as it has a variety of downstream domains that overlap more (wikipedia) or less (dm_mathematics) with the pretraining domain, in order to cover a variety of practical cases. We will clarify this in the text.
- "*Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models - Li et al 2022.*"
Thanks for the relevant reference that we missed, we will discuss it in the paper.
- "*Page 5: sentence-bert (Devlin, 2018) is the wrong citation for sentence-bert. It would be good to know which specific model version was actually used for similarity.*"
Thanks for spotting this! Indeed, the model we use is a fine-tuned version of MPNET [1], obtained at https://huggingface.co/sentence-transformers/all-mpnet-base-v2. We will properly acknowledge this in the paper.
We thank you again for your work reviewing our paper and for your comments which help us improve the paper. We hope that your concerns have been alleviated.
[1] Song, Kaitao, et al. "Mpnet: Masked and permuted pre-training for language understanding." Advances in neural information processing systems 33 (2020): 16857-16867. | Summary: The authors propose Soup-of-Experts, which can quickly create a specialized model for a given mixture of data domain weights at test time. Soup-of-Experts jointly pretrains and learns a function to compute weights for a linear combination of expert model weights for specialization.
## update after rebuttal
I updated the review by increasing the score from 2 to 3 after the authors addressed the majority of my concerns. I still see scaling as a drawback due to the large number of experts that need to be updated jointly. This also makes the method less flexible, since all of the experts need to be trained at once during pretraining.
Claims And Evidence: The main claim, that soup-of-experts can be quickly used to obtain a pre-trained specialized model, seems somewhat well-supported by experiments comparing to other pretraining methods, as it performs slightly better than CRISP in the specialized loss while having much better performance in the generalized loss. It also has better specialization and similar general loss to pretraining a single similar-sized model.
Methods And Evaluation Criteria: The proposed method makes sense as a way to adapt model souping to pretraining. The benchmark datasets used have a variety of specialization domains and are useful for evaluation.
Theoretical Claims: There were no proofs
Experimental Designs Or Analyses: The experiments seem well-designed as a way to compare the models' general and domain adaptation capabilities. However, there are some additional baseline comparisons that would be useful (see Questions 1-2).
Supplementary Material: The per-domain results seem consistent with the overall results in the main paper.
Relation To Broader Scientific Literature: While model soups have been used previously with fine-tuned models and existing pretrained models, this paper instead focuses on test-time domain adaptation using linear souping by jointly pretraining expert models and a function on the domain weights.
Essential References Not Discussed: There are a number of similar approaches in the domain adaptation realm in the fine-tuning setting which are not discussed, such as rewarded soups [1] and personalized soups [2]. It would be useful for the authors to provide comparisons to these approaches, since they only provide comparisons to pretraining-based approaches and it is not clear if pretraining results in gains over similar works combining fine-tuned models.
It would also be useful to discuss previous methods souping pretrained models without requiring joint pretraining from scratch, such as [3].
[1] Rame et. al. Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards
[2] Jang et. al. Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging
[3] Imfeld et. al. Transformer Fusion with Optimal Transport.
Other Strengths And Weaknesses: Strengths:
1. The idea of jointly pretraining the experts and linear combination weights seems novel and interesting, compared to existing approaches that do one or the other
2. The experiments support the claim that this approach increases domain adaptation performance over existing pretraining-based methods
Weaknesses:
1. This approach seems very computationally expensive, since we first need a large number of domain experts which then need further pre-training to be adapted
2. The authors do not compare to or discuss approaches for souping fine-tuned models for domain adaptation.
3. Experiments are limited to small models (33M - 110M parameters)
Other Comments Or Suggestions: Typo in 2.5 section header
Questions For Authors: 1. When is this method advantageous over using a single pretrained model of the same total number of parameters as is contained in the experts? What is the total training time comparison between this larger model when compared to training the experts and Soup-of-Experts?
2. How does this method compare to existing approaches combining fine-tuned models from multiple domains? Soup-of-Experts seems inherently less flexible since the experts must be pretrained from scratch rather than leveraging existing fine-tuned models.
3. Is this approach feasible for larger models? The large number of experts seems like a computational challenge if the models are scaled.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer,
Thanks a lot for your review and for your questions and remarks; they will help us improve the paper. We are pleased to see that you found that the main claim is well supported, that "the proposed method makes sense", and that the experiments are well designed.
We now answer your questions and the points you have raised:
- "*There are a number of similar approaches in the domain adaptation realm in the fine-tuning setting which are not discussed, such as rewarded soups [1] and personalized soups [2]. It would be useful for the authors to provide comparisons to these approaches,[...]*"
Thanks for raising this point. While we already mentioned [1], we will discuss [2] and [3] in the paper. Thanks to your remark, we have implemented a comparison to standard model merging. To do so, we train both a SoE and a generic model for 64000 iterations, and for the SoE we use k=64 domains. Then, for each of the k domains, we fine-tune the generic model for T steps on that domain, yielding models $\theta_1,\dots, \theta_k$ which are specialists for each domain.
We then take $d$ domains among the $k$ at random, and either a) instantiate the SoE with weights uniform over the $d$ domains, 0 otherwise, or b) merge the specialized models $\theta_i$ where i covers the $d$ domains. We use a linear combination of models to merge. We then report the average loss of the corresponding model a) or b) on the $d$ chosen domains. For the model merging, we always pick the number of fine-tuning steps T in [1000, 2000, ..., 10000] that yields the smallest loss for each individual experiment. The results are here: https://anonymous.4open.science/r/rebuttal_soup-E8F9/soe_vs_merge.pdf
When there is only $1$ domain, fine-tuning is the best method, since it has trained a specialist. As soon as we need to merge >= 4 specialists, Soup-of-Experts becomes advantageous. Importantly, the SoE did not have to be trained on each domain individually: in total, it has done 64000 iterations, while the model merging required 64000 + 64 * 10000 steps, so about 10 times more compute! We will add this insightful experiment to the paper.
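For setting (a), instantiating the SoE amounts to building a weight vector that is uniform over the $d$ chosen domains and zero elsewhere — a small illustrative helper (not the authors' code):

```python
def uniform_domain_weights(k, chosen):
    """Histogram over k domains: uniform mass on the chosen indices,
    zero on the rest, summing to 1."""
    chosen = set(chosen)
    d = len(chosen)
    return [1.0 / d if i in chosen else 0.0 for i in range(k)]
```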
- "*This approach seems very computationally expensive, since we first need a large number of domain experts which then need further pre-training to be adapted*"
We will clarify the text: the computational cost of the method is not high as long as we have enough memory to train the models. For instance, the x-axis in fig. 4 is the number of hours needed to train the models; the fact that the SoE is better means that it reaches better losses in less time. It is therefore not more expensive.
- "*The authors do not compare to or discuss approaches for souping fine-tuned models for domain adaptation.*"
See above.
- "*Experiments are limited to small models (33M - 110M parameters)*"
We have now trained a 350M model with 32 experts for 33B tokens, see https://anonymous.4open.science/r/rebuttal_soup-E8F9/reb_soe_time.pdf . The method scales: it is still about twice as fast as generic pretraining at reaching a given level of specialized loss. We will add this to the paper.
- *"When is this method advantageous over using a single pretrained model of the same total number of parameters as is contained in the experts? What is the total training time comparison between this larger model when compared to training the experts and Soup-of-Experts?"*
This is an interesting question. To give a concrete order of magnitude, in the experiments in the paper we use 110M models with 128 experts, yielding 13B parameters in total. There are many differences between the SoE and a 13B model. First, the SoE will most likely have a much higher perplexity, since it instantiates a model 100 times smaller. As a trade-off, it has a much higher throughput, precisely because the instantiated model is 100 times smaller, and the training costs differ by orders of magnitude: training a 13B model may take weeks with 128 GPUs (see table 2 in [3]), while our training takes ~1 day with 8 GPUs. Overall, these models are very different. We will clarify this in the text.
- *"How does this method compare to existing approaches combining fine-tuned models from multiple domains?"*
See response above. Indeed, Soup-of-Experts is a pretraining method, it should be seen as complementary to usual model merging.
- "*Is this approach feasible for larger models? The large number of experts seems like a computational challenge if the models are scaled.*"
As said above, we could train larger 350M models on one node of 8 GPUs. We could train larger models with more computational resources, but this is beyond our constraints.
We hope that we have convincingly answered your questions, and we thank you again for your review!
[3]: Touvron, Hugo, et al. "Llama 2: Open foundation and fine-tuned chat models." arXiv preprint arXiv:2307.09288 (2023). | Summary: The authors introduce "Soup-of-Experts" a method for pretraining language models so that they can quickly instantiate small, specialized models for different domain distributions. The architecture consists of a shared base model, multiple expert parameter sets, and a small learned routing function (an MLP) that maps domain weights (i.e., mixtures of training domains) to a set of coefficients used to linearly combine the experts at inference. Since each specialist is simply a linear combination of trained parameters (plus the shared parameters), it can be instantiated in a single forward pass of the routing MLP, with no further training or fine-tuning needed. Experiments on several language modeling tasks (drawn from RedPajama for pretraining and from the Pile for specialization) show that the proposed approach can produce strong specialist models that significantly outperform a single generic model while only slightly sacrificing performance on a broad domain. Soup-of-Experts also outperforms other baselines (Domain Experts, CRISP) in terms of specialization performance and computational efficiency.
Claims And Evidence: I consider the claims to be supported by sound evidence. Details are provided below:
Claim (1) - SoE can quickly produce specialized models targeted at specific domains by simply merging expert parameters with learned coefficients (no fine-tuning needed) >> The authors show that for 16 domains from Pile, SoE achieves lower perplexity/loss than a comparable single, generically pre-trained model.
Claim (2) - SoE is scalable >> The authors compare SoE to two baselines that train separate models for each domain. These baselines incur linear overhead (one pretraining done per domain), whilst SoE has a single pretraining phase for all possible domain mixture, then re-combines parameters at specialization.
Claim (3) - SoE almost preserved general performance >> Figure 4 confirms that SoE's overall perplexity on the broad (generic) dataset is only slightly worse than that of a purely generic model, suggesting that SoE retains broad-domain knowledge while also being adaptable.
Methods And Evaluation Criteria: From a reviewer’s standpoint, the methods and evaluation criteria are well aligned with the stated goal of obtaining strong specialized models from a single pretraining run. The choice of next-token prediction loss to measure performance is standard for language modeling tasks.
Theoretical Claims: * The paper does not provide a formal proof of correctness beyond standard machine-learning assumptions about parameter interpolation, so there are no complex proofs to verify.
* The theoretical argument is mostly intuitive, referencing earlier works on “model soups” / model merging and showing that experts with a shared initialization can be combined (especially when they are simultaneously trained).
Experimental Designs Or Analyses: Experiments are methodologically consistent (fair), meaning that each approach is trained with similar computational budgets (in tokens) and is then evaluated under the same metrics (next-token prediction loss). The authors report both the specialized loss and the generic training loss to show the trade-off between specialization and generality.
Supplementary Material: I have reviewed the supplementary material. The sections provide details on scaling from 35M to 110M parameter models, ablation on the meta-distribution’s support size, and a comparison with low-rank experts.
Relation To Broader Scientific Literature: The authors position SoE relative to large language models (generic but expensive) and specialized smaller models (efficient but less general). They frame SoE as a novel extension of “model soup” ideas and parametric interpolation methods but used during pretraining with learned mapping from domain weights to combination coefficients.
Essential References Not Discussed: The references appear sufficient to contextualize the core idea, but I did not carefully check for other similar references.
Other Strengths And Weaknesses: Strengths:
- Clear motivation and problem formulation, well-written submission.
- Novel combination of existing ideas (model merging + MoE-inspired expert routing).
- Empirical validation across datasets.
- Extensive ablation studies providing insights into method robustness (in the supplementary).
- Practical significance due to low computational overhead at specialization time.
Weaknesses:
- Low-rank experts did not yield expected improvements; there is a need for further investigation.
- Potential scalability challenges when significantly increasing the number of experts beyond tested scenarios.
- Limited theoretical analysis or no formal guarantees regarding stability.
Other Comments Or Suggestions: It would have been helpful to explicitly clarify how sensitive the results are to different embedding methods used for domain clustering.
Consider providing additional runtime benchmarks or memory usage comparisons explicitly against baselines.
Questions For Authors: It would be helpful if the authors clarified the effect of the MLP capacity on final performance. Does a larger routing network yield more fine-grained domain combinations?
Also, I am curious to know how sensitive is the method's performance to the choice of embedding method used during the clustering of pretraining domains?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer,
First, we thank you for your work, and for your remarks that will help us improve our work. We are happy to hear that *"the claims to be supported by sound evidence"*, that *"the methods and evaluation criteria are well aligned with the stated goal"*, and that *"Experiments are methodologically consistent"*.
We now address your concerns.
- *"Low-rank experts did not yield expected improvements; there is a need for further investigation."*
Indeed, we could not make this idea work, and honestly report it in the paper. We think making it work would open another avenue to scale Soup-of-Experts, even though we could scale them without LoRA (see https://anonymous.4open.science/r/rebuttal_soup-E8F9/reb_soe_time.pdf where we train 350M models, with 32 experts for the SoE).
- *"Potential scalability challenges when significantly increasing the number of experts beyond tested scenarios."*
We need to select a tradeoff between the number of experts and model size, at a fixed memory size. We have been able to train 350M models with 32 experts, see https://anonymous.4open.science/r/rebuttal_soup-E8F9/reb_soe_time.pdf
- *"Limited theoretical analysis or no formal guarantees regarding stability."*
Indeed, we do not conduct a theoretical analysis, as we are unsure what type of results we would want to prove; note that papers of a similar flavor like [1,2,3] also do not conduct any theoretical analysis.
- *"It would have been helpful to explicitly clarify how sensitive the results are to different embedding methods used for domain clustering."* and *"Also, I am curious to know how sensitive is the method's performance to the choice of embedding method used during the clustering of pretraining domains?"*
Thanks for the suggestion. We will clarify in the text that we use this type of embedding since it is the one that offers the best performance, as reported in [4], fig. 5. Due to lack of time in the rebuttal period, we could not obtain training runs using different embeddings, but we will make sure to add this ablation to the final version.
- *"Consider providing additional runtime benchmarks or memory usage comparisons explicitly against baselines."*
We note that fig. 4 gives a time comparison between methods, just like the newly added larger-scale experiment https://anonymous.4open.science/r/rebuttal_soup-E8F9/reb_soe_time.pdf .
The baseline and Soup-of-Experts training use the same hardware, that is, one node of 8 A100 GPUs, and both have a GPU utilization of over 95%. We will include these data points in the final paper.
- *"It would be helpful if the authors clarified the effect of the MLP capacity on final performance. Does a larger routing network yield more fine-grained domain combinations?"*
We have run an ablation where we change the width of the routing network. The MLP has input dimension 4096 and output dimension 128, and we take a hidden dimension of either 4 * input_dimension or 4 * output_dimension. We could not see any difference between these two choices; the training curves looked similar. We will make sure to add a more thorough ablation in the final version of the paper.
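To give a sense of how different the two widths in this ablation are, here is a parameter count for a two-layer routing MLP under each choice (a back-of-the-envelope sketch with assumed layer shapes, not the actual implementation):

```python
def router_param_count(d_in=4096, d_out=128, hidden="4x_input"):
    """Weights + biases of a two-layer MLP d_in -> h -> d_out,
    with h = 4*d_in or h = 4*d_out as in the ablation."""
    h = 4 * d_in if hidden == "4x_input" else 4 * d_out
    return (d_in * h + h) + (h * d_out + d_out)
```

With the dimensions above, the "4x_input" variant has roughly 32x more parameters than the "4x_output" one, which makes the reported insensitivity of the training curves to this choice notable.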
We hope that our response clarifies the questions you have raised, and we thank you again for your thorough review.
[1]: Ha, David, Andrew Dai, and Quoc V. Le. "Hypernetworks." arXiv preprint arXiv:1609.09106 (2016).
[2]: Krajewski, Jakub, et al. "Scaling laws for fine-grained mixture of experts." arXiv preprint arXiv:2402.07871 (2024).
[3]: Wortsman, Mitchell, et al. "Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time." International conference on machine learning. PMLR, 2022.
[4]:Grangier, David, et al. "Task-adaptive pretrained language models via clustered-importance sampling." arXiv preprint arXiv:2410.03735 (2024). | null | null | null | null |
Closed-Loop Long-Horizon Robotic Planning via Equilibrium Sequence Modeling | Accept (poster) | Summary: This paper introduces a novel approach for long-horizon robotic task planning using LLMs with Equilibrium Sequence Modeling. The core idea is to view planning as a self-refinement process converging to a fixed-point (equilibrium) plan that integrates feedback from the environment or a world model. Rather than relying on reinforcement learning or complex prompting strategies, the paper presents an end-to-end supervised training framework grounded in deep equilibrium models. Additionally, it introduces a nested equilibrium modeling loop to efficiently incorporate closed-loop feedback. The proposed method is evaluated on the VirtualHome-Env benchmark and demonstrates state-of-the-art performance compared to Tree-Planner and SELF-REFINE approaches, particularly in success rate (SR) and goal condition recall (GCR).
Claims And Evidence: The central claims are:
* Self-refinement can be formulated as an equilibrium sequence modeling problem.
* Such formulation enables efficient supervised training without reinforcement learning or verifiers.
* Nested equilibrium modeling improves closed-loop planning using minimal feedback.
* The approach scales better w.r.t inference-time computation and long-horizon task complexity.
The evidence provided includes:
* Theoretical derivation using implicit function theorem.
* Detailed experimental benchmarks with comparative tables (Tables 1-3).
* Clear ablations on feedback types, reuse strategies, and scalability (Figures 5 and 6).
The experiments are clear and the proposed method shows improvement over other approaches.
Methods And Evaluation Criteria: I actively follow the literature related to robotic task planning and LLM-based reasoning, though I am less familiar with the technical details of deep equilibrium models, which this paper builds upon. From this perspective, the authors' core contribution—formulating plan refinement as a fixed-point problem and leveraging DEQ-style training—appears conceptually elegant and well-motivated. However, several aspects of the method raise important questions and concerns when assessed from a robotic planning standpoint.
First, the overall structure of the proposed Equilibrium Sequence Modeling, where planning is treated as a self-refinement process that converges to an equilibrium, fits naturally with how one might think about iterative plan correction in robotics. The idea of conditioning future plan iterations on environmental feedback (closed-loop refinement) is also quite compelling. However, the paper does not offer much intuition about the theoretical behavior of these equilibria—whether they always exist, whether they are unique, and what properties ensure convergence in practical LLM-based settings. These aspects are critical to establishing confidence in the method, especially in robotics, where convergence and stability can directly affect task execution.
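To illustrate the convergence question raised here: the simplest equilibrium solver is plain fixed-point iteration, which converges when the refinement map is a contraction — a generic scalar sketch (not the paper's solver):

```python
import math

def fixed_point_iterate(f, x0, tol=1e-9, max_iter=1000):
    """Iterate x_{t+1} = f(x_t) until |x_{t+1} - x_t| < tol.
    Convergence is guaranteed if f is a contraction (Banach fixed-point
    theorem) -- the kind of property one would want the learned
    refinement operator to satisfy for existence and uniqueness."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos is a contraction near its unique fixed point x* ≈ 0.739085
```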
Second, while I appreciate the effort to avoid complex reinforcement learning pipelines or externally curated verifiers, I find the training procedure somewhat underexplained from a robotics planning perspective. The authors argue that supervised learning suffices by using the final equilibrium plan $x^*$ as the input and the ground truth plan $y$ as the target, but it is not entirely clear how this helps the model learn to 'refine' in the planning sense, rather than simply mapping one plan to another. In practical terms, what guarantees that the system learns generalizable plan improvement heuristics, rather than overfitting to a specific dataset structure (e.g., VirtualHome-Env)? Also, the method seems to lack explicit regularization mechanisms or consistency constraints across refinement iterations.
Finally, from a planning systems perspective, I would have appreciated more clarity on how well this equilibrium-based method performs when feedback is noisy, delayed, or partially incorrect (scenarios that are quite common in real-world robotic environments). While the use of a world model to estimate internal feedback is a reasonable workaround, the model's robustness to imperfect feedback (or feedback that disagrees with reality) remains unclear.
Overall I like this new perspective in the LLM-based planning space but would appreciate more clarification on these questions.
Theoretical Claims: Yes, theoretical claims regarding differentiability through the fixed-point (via the implicit function theorem) are correct and consistent with standard literature (Bai et al., 2019; Krantz & Parks, 2002).
I am not familiar with the Jacobian-free approximation (Fung et al., 2022) though and would urge the Area Chair to rely on the reviews of other reviewers on the correctness of the claim (Line 73).
Experimental Designs Or Analyses: The experiments are well-designed. The authors isolate model contributions via ablation studies (feedback, compute).
However, the evaluation is constrained to a single benchmark (VirtualHome-Env), and further diversity in tasks/environments would have improved generalizability assessment.
Supplementary Material: Yes, Appendix A offers a helpful background on deep equilibrium models and their implementation, while Appendix B discusses benchmark and baselines. Appendix C and D include additional ablations and limitations. These materials were reviewed and are appropriately detailed.
I appreciate the authors providing this in the appendix, as it helped me get a better understanding of the deep equilibrium models literature and also the experimental details. The authors have provided the code through a GitHub repository. I have not verified the implementation, though.
Relation To Broader Scientific Literature: This paper contributes meaningfully at the intersection of LLM-based planning and implicit model training. It extends prior self-refinement work (Welleck et al., 2023; Madaan et al., 2023) by providing a principled optimization framework rather than heuristic prompting. It also offers a scalable alternative to methods like Tree-of-Thoughts (Yao et al., 2023) and SELF-REFINE.
Essential References Not Discussed: Some of the recent papers that use LLMs for long-horizon planning in both centralized and decentralized fashion, that allow re-planning (and self-verification to some extent):
1. Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, and Chuang Gan. Building cooperative embodied agents modularly with large language models, 2023.
2. Nayak, S., Morrison Orozco, A., Have, M., Zhang, J., Thirumalai, V., Chen, D., ... & Balakrishnan, H. (2024). Long-horizon planning for multi-agent robots in partially observable environments. Advances in Neural Information Processing Systems, 37, 67929-67967.
3. Jun Wang, Guocheng He, and Yiannis Kantaros. Safe Task Planning for Language-Instructed Multi-Robot Systems using Conformal Prediction. arXiv preprint arXiv:2402.15368, 2024.
Other Strengths And Weaknesses: Strengths:
1. The paper is conceptually elegant: combining DEQ with planning.
2. Removes reliance on RL while maintaining flexibility and robustness.
3. Strong empirical evidence and reproducibility through open code (did not verify the reproducibility of the code by running it).
Weaknesses:
refer to methods and evaluation section
Other Comments Or Suggestions: * Suggest clarifying the distinction between "plan convergence" and "task success"—sometimes conflated in figures.
* Consider adding visual explanations of failure cases across baselines (Figure 4 could be expanded).
## Update after rebuttal
I wasn't sure if the "official comments" were visible to the authors and hence am including them here:
I appreciate the authors' rebuttal and addressing my concerns in the review. I am assuming that the authors will include some of these changes in the camera ready version (if accepted). I would like to keep my scores the same.
Questions For Authors: 1. How sensitive is performance to the choice of initialization for fixed-point iteration? Would a poor initialization degrade convergence/stability?
2. Have you considered curriculum training—starting from short tasks and scaling up plan length? Would it improve convergence and data efficiency?
3. Can the model generalize to tasks beyond the VirtualHome-Env benchmark without retraining? For instance, can you reuse the same equilibrium modeling process in the ALFRED or RoboTHOR environment (were the prompts tailored heavily for the VirtualHome environment)?
4. Out of curiosity, what are the limitations of Jacobian-free approximation in your experience? Have you quantified the impact of this approximation empirically?
5. Do you anticipate your approach being practical on real-world robot hardware where interaction latencies and imperfect perception are constraints?
Refer to more broader questions in the Methods and Evaluation criteria.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for providing valuable feedback. Below are our responses to the raised concerns.
[W1] Existence and uniqueness of equilibrium points
* We empirically confirm that equilibria always exist across different initializations and tasks (see Table 5 in the appendix). The intuition behind this is that LLMs tend to repeat themselves under greedy sampling, leading to easier convergence.
* For the uniqueness of the equilibria, we run 10 seeds per task over 60 random tasks. As shown in the table below, our planner typically converges to 2-4 different equilibrium solutions for the same task. However, these solutions are highly consistent in that they either all succeed (SR$\rightarrow$100) or all fail (SR$\rightarrow$0), further confirming the stability of convergence.
|Task subset|#Equilibrium solutions|Average SR|
|-|--|-|
|I|2.080|97.33|
|II|3.885|5.91|
[W2] Intuition behind learning to refine
* Training the model to map the equilibrium plan $x^*$ to the ground truth $y$ is essentially mapping a suboptimal solution to a better solution, which involves self-refinement. Meanwhile, $x^*$ provides a good prior so that the training objective does not collapse to direct regression of the ground truth (as in standard supervised learning).
* The generalization of self-refinement is facilitated by our training objective, which avoids directly regressing to the ground truth and therefore overfits less to the training data. As further evidence, in the response to W4 below, we validate the generalizability of our method by evaluating it on ALFRED without retraining.
* The consistency during self-refinement (as shown in Figures 10-14) comes implicitly from the LLM's instruction-following capability and its tendency to repeat itself. We note that it is possible to incorporate explicit rules to generate structured output from LLMs, which may provide further performance improvements.
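To make the training recipe concrete, a minimal toy sketch of the scheme as described above (function names such as `refine` and `solve_equilibrium` are hypothetical stand-ins, and the refinement step here is a simple contraction rather than an LLM):

```python
# Hypothetical sketch: self-refinement as fixed-point iteration, then a
# supervised pair (equilibrium plan x*, ground truth y) for training.

def refine(plan, context):
    # Stand-in for one LLM refinement step; a toy contraction toward a
    # context-dependent draft so the iteration provably converges.
    target = context["draft_target"]
    return [0.5 * p + 0.5 * t for p, t in zip(plan, target)]

def solve_equilibrium(plan, context, tol=1e-6, max_iters=100):
    for _ in range(max_iters):
        new_plan = refine(plan, context)
        if max(abs(a - b) for a, b in zip(new_plan, plan)) < tol:
            return new_plan
        plan = new_plan
    return plan

context = {"draft_target": [1.0, 2.0, 3.0]}
x_star = solve_equilibrium([0.0, 0.0, 0.0], context)

# Training pair: map the equilibrium x* to the ground-truth plan y,
# rather than regressing to y from scratch.
y = [1.0, 2.5, 3.0]
training_pair = (x_star, y)
```

The key point the sketch tries to capture is that supervision targets the *residual* improvement from an already-converged plan, not the raw input-to-output mapping.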
[W3] Robustness to noisy feedback
* Please see the response to Reviewer Rz3G's W2.
[W4] Generalization to more benchmarks
* Following your suggestion, we test pre-trained VirtualHome planners on ALFRED with updated prompts. For quick evaluation, we consider two planning metrics: task classification accuracy (across 7 task types) and recall of ground-truth action-object pairs in the predicted plan. As shown in the table below, our planner generalizes significantly better than the supervised trained planner.
|VirtualHome -> ALFRED|Task classification acc.|Action-object recall|
|-|-|-|
|SFT Llama|11%|0.50%|
|Ours|**54%**|**27.08%**|
[W5] Missing references
* Thank you for pointing out the recent papers on long-horizon planning. We will revise the draft to include their discussion.
[W6] Distinction between "plan convergence" and "task success"
* "Plan convergence" refers to the convergence of planner output, but this does not necessarily indicate "task success", which requires successful execution of the plan. See Figures 13a, 13b and 14a for failures after plan convergence.
[W7] Visual explanations of baseline failure
* Thank you for your suggestion. We provide more failure cases of the baselines in Figures 10 and 11 in the appendix, where they fail to correct a long plan efficiently. We will add more visualization explanations in the revision.
[Q1] Sensitivity to initialization
* To validate the performance stability w.r.t. initialization, we evaluate over 10 initial seeds for each of 60 random tasks. As shown in the results below, our method has a low standard deviation in SR and GCR and outperforms the strongest baseline Tree-Planner, confirming its stability. For convergence sensitivity w.r.t. initialization, see Table 5 in the appendix.
|Method|SR|GCR|
|-|-|-|
|Tree-Planner (N=50)|38.33|56.95|
|Ours|**43.50** $\pm$ 5.21|**68.88** $\pm$ 5.94|
[Q2] Curriculum training
* Following your suggestion, we perform curriculum training on tasks of increasing difficulty. Specifically, the model is trained for six iterations: the first four use subsets with task lengths below 5, 10, 15, and 20, and the last two use the full dataset. As shown in the table below, despite the reduced total data size (by 36%), the model achieves a higher SR and comparable GCR in the Both-novel scenario. This demonstrates the improved data efficiency of curriculum training.
|Curriculum training|Data size|Both novel SR|Both novel GCR|Novel scene SR|Novel scene GCR|Novel task SR|Novel task GCR|
|-|-|-|-|-|-|-|-|
|x|11469|51.61|**75.13**|**75.79**|**85.79**|**56.62**|**75.53**|
|$\checkmark$|7411|**54.83**|73.73|69.47|82.93|52.99|71.31|
[Q3] Generalization without retraining
* Please see the response to W4.
[Q4] Limitation of Jacobian-free approximation
* Please see the reponse to Reviewer Rz3G's W5.
[Q5] Practicality on real-world robots
* Please see the response to Reviewer nTaB's W1. | Summary: In this paper, a closed-loop long-horizon robot planning method based on equilibrium sequence modeling is proposed. By treating the planning as a self-optimizing fixed-point solution process, the implicit gradient of the deep equilibrium model is leveraged for end-to-end supervised training without the need for additional validators or reinforcement learning. From an analytical point of view, theoretical tools such as the implicit function theorem are used to optimize the training process and avoid complex backpropagation. In addition, the nested equilibrium mechanism is designed to dynamically allocate computing resources to optimize planning efficiency by combining environmental feedback and world model.
Claims And Evidence: Yes. The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the problem of closed-loop long-horizon robotic planning. The equilibrium sequence modeling approach, nested equilibrium solving, and world model integration address key challenges in this domain. The use of the VirtualHome-Env benchmark and relevant evaluation metrics provides a robust framework for assessing the method's performance.
Theoretical Claims: Yes, the theoretical claims and proofs in the paper are correct.
Experimental Designs Or Analyses: The experimental designs and analyses in the paper are generally sound and well-structured.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The key contributions of the paper are much related to the field of robotic planning.
Essential References Not Discussed: The paper has provided a comprehensive review of relevant literature.
Other Strengths And Weaknesses: Strengths:
1. The paper presents a novel approach by combining the concepts of deep equilibrium models with iterative refinement for robotic planning.
2. The application of the proposed method to the VirtualHome-Env benchmark demonstrates its potential for real-world robotic planning tasks.
Weaknesses:
1. During training, the method relies on environmental feedback and real data, which may limit its generalization in scenarios that lack sufficient environmental interaction data. For example, in some real-world robotics tasks, obtaining large amounts of high-quality environmental feedback can be very difficult or costly.
2. In practice, environmental feedback may be disturbed by noise or inaccurate information; does this affect the model's self-refinement ability and final performance?
3. The article mentions that the models can hallucinate in some cases; for example, the equilibrium planner and the world model may generate plans or feedback that do not correspond to reality. Does this hallucination problem lead to planning failures or execution errors, especially in complex robotic tasks?
4. The model currently only supports text input and lacks the ability to process visual information, which limits its applicability in real-world robot tasks. In many real-world scenarios, robots need to process both visual and textual information for accurate planning and decision-making.
5. Although the paper theoretically proposes an optimization method based on equilibrium sequence modeling, the actual implementation uses approximation methods (such as ignoring the inverse Jacobian matrix) to simplify the training process. The authors should explain whether this approximation prevents the model from achieving theoretically optimal performance on some complex tasks.
Other Comments Or Suggestions: No.
Questions For Authors: 1. In the process of iterative refinement, how do you ensure that the final equilibrium point is the global optimal solution rather than a local one? In theory, the equilibrium point (fixed point) satisfies the condition that the Lipschitz coefficient is less than 1, but how is this handled in practical applications to ensure that the sequence converges?
2. How are the environmental feedback and the world model feedback in the article coordinated and selected in practical applications? In some cases, the feedback from the world model may be skewed from the real-world feedback, will this affect the model's ability to self-refine?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the thoughtful and constructive feedback. Below are our responses to the raised concerns.
[W1] Generalization to environment without feedback
* There are two workarounds in the absence of environmental feedback: (1) Use zero-shot LLM feedback during training. Since LLMs have been shown effective as zero-shot reward models in reinforcement learning [1], they could similarly provide guidance in our training setup. (2) Pre-train in environments with feedback. We validate this by evaluating pre-trained VirtualHome planners on the new ALFRED environment, where our planner generalizes well to a completely new environment (see reply to Reviewer Bgnq's W4 for details).
|VirtualHome -> ALFRED|Task classification acc.|Action-object recall|
|-|-|-|
|SFT Llama|11%|0.50%|
|Ours|**54%**|**27.08%**|
[W2] Robustness to noisy feedback
* To evaluate our model under noisy feedback, we randomly replace some environmental feedback with incorrect feedback during inference. As shown in the results below, our model exhibits stable performance under small amounts of noise ($\le$10%), demonstrating its robustness.
|Noise ratio|Both novel SR|Both novel GCR|Novel scene SR|Novel scene GCR|Novel task SR|Novel task GCR|
|-|-|-|-|-|-|-|
|0%|51.61|**75.13**|**75.79**|85.79|**56.62**|**75.53**|
|10%|**53.23**|73.43|74.74|**85.84**|53.85|73.09|
|20%|50.00|73.10|73.68|83.22|54.49|71.80|
[W3] Hallucination of planner and world model
* Hallucination is inherent to LLMs and affects both our planner and world model. However, a key strength of our planner is its ability to self-correct based on feedback. As shown in Figures 10-14 in the appendix, the planner may start with an incorrect plan but improves it using feedback from the environment or the world model, leading to significantly better performance than standard LLM planners (see Tables 1 and 2).
* For the hallucination of the world model, we show in the reply to W2 that our method is robust to noisy feedback. Thus, even if the world model occasionally hallucinates, the feedback it provides still helps to improve performance (see Table 3).
[W4] Lack of visual information
* We acknowledge this limitation in Appendix D. While the current method is implemented on LLM planners focused on text input, it could be adapted to video-language planners [2] with visual capabilities. We will continue to investigate this line of planning in future work.
[W5] Impact of Jacobian-free approximation
* Theoretically, due to the complexity of transformers, the Jacobian-free approximation cannot guarantee convergence to the global optimal solution, which may limit performance. Nevertheless, such an approximation is a necessary tradeoff for the computational feasibility of large language models, as demonstrated in [3].
* Empirically, we confirm that the model can sometimes get stuck in local optimal equilibrium solutions that lead to task failure (see the response to Q1 below). While there might be several reasons, we suspect that this is partly due to the limitations of the Jacobian-free approximation. Improving this approximation could lead to better performance.
[Q1] Global optimality of equilibrium solution
* To assess the optimality of equilibrium solutions, we evaluate 10 seeds on each of 60 random tasks. The results show that: (1) our planner can converge to 2-4 different equilibrium solutions for the same task, and the solutions are highly consistent in that they either all succeed (SR→100) or all fail (SR→0), (2) it doesn't always converge to global optimal solutions, as there are still failure cases. This could be due to the limitation of Jacobian-free approximation discussed above.
|Task subset|#Equilibrium solutions|Average SR|
|-|--|-|
|I|2.080|97.33|
|II|3.885|5.91|
* In practice, we find that the equilibrium solving process always converges (see Table 5 in the appendix). This is largely due to the tendency of LLMs to repeat themselves under greedy sampling, which facilitates convergence.
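As a side note on the Lipschitz condition raised in Q1, the contraction-mapping picture can be illustrated numerically (my own toy example, unrelated to the paper's models): a map with Lipschitz constant $L < 1$ converges geometrically from any start, while $L > 1$ diverges.

```python
# Illustration: fixed-point iteration under contraction vs. expansion.

def iterate(f, x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

contraction = lambda x: 0.8 * x + 1.0   # L = 0.8, fixed point x* = 5
expansion = lambda x: 1.2 * x + 1.0     # L = 1.2, no stable fixed point

xs_c = iterate(contraction, 0.0, 200)
assert abs(xs_c[-1] - 5.0) < 1e-9       # geometric convergence to x* = 5

xs_e = iterate(expansion, 0.0, 50)
assert abs(xs_e[-1]) > 1e3              # diverges
```

The authors' observation that LLMs "repeat themselves under greedy sampling" is effectively an empirical claim that the refinement map behaves like the contraction case near its output.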
[Q2] Use of environmental and world model feedback
* Our planner alternates between using environmental feedback and world model feedback at each outer-loop iteration, as illustrated in Figure 12c in the appendix. This allows for better performance within limited environmental interactions. We note that in practical applications, their frequencies could be adjusted according to the accuracy of the world model and the interaction budget.
* Despite the fact that the world model feedback may be skewed, it still provides useful information for self-refinement, resulting in improved performance (see Table 3) and convergence speed (see reply to Reviewer dmhf's W1). This can be explained by the response to W2, where our method is shown to be robust to noisy feedback.
---
[1] Ma, et al. Eureka: Human-level reward design via coding large language models. ICLR 2024.
[2] Du, et al. Video language planning. ICLR 2024.
[3] Choe, et al. Making scalable meta learning practical. NeurIPS 2023. | Summary: This paper introduces "Equilibrium Sequence Modeling," a novel framework for robotic task planning using Large Language Models (LLMs). The authors propose reformulating plan refinement as a fixed-point problem solvable with deep equilibrium models, enabling a supervised training scheme for improving self-refinement. The fine-tuned model is then used in the proposed framework, iteratively refining the plan while incorporating predicted environmental feedback from a fine-tuned LLM world model. The authors demonstrate improved success rate over several baselines on novel tasks from the VirtualHome dataset.
Claims And Evidence: The paper claims that the proposed method has improved planning performance, better scaling w.r.t inference computation, and effectively incorporates closed-loop feedback.
The evidence for these claims is a set of experiments on the VirtualHome-Env benchmark. The claims of improved performance are supported by the presented results, showing improved success rates over baselines. The results show that the method improves how quickly self-refinement converges, and also how much performance improves with increasing test-time computation. This suggests that the proposed method shows promise for effectively using inference-time resources.
To support the claim that the method effectively incorporates closed-loop feedback, the authors have an ablation which shows that including feedback correction improves planning performance.
The claim that the world model significantly reduces computational cost is plausible, but no data or evidence is given for this claim. The results also show a decrease in executability, which needs further explanation.
Methods And Evaluation Criteria: Using deep equilibrium models to improve the self-refinement process seems to be a reasonable approach. In principle the approach could be applied to domains beyond robot task planning.
W.r.t the claim of incorporating closed-loop feedback, one of the primary motivations for closed-loop feedback is to adapt to environmental disturbances. An experiment showing that incorporating closed-loop feedback better adapts to online disturbances would be much more compelling.
Training the model requires generating a dataset of plans by solving the equilibrium model, which is an expensive process. The quality of this original dataset also depends strongly on the quality of the original model. What if the initial self-refinement fails to converge/improve and fails to produce an informative dataset? It would be informative to ablate the performance with base models of different quality.
Theoretical Claims: The paper does not present formal proofs for theoretical claims. The core theoretical concept is the formulation of plan refinement as a fixed-point problem. While this is a reasonable approach, there is no analysis of the convergence of the fixed-point problem.
Experimental Designs Or Analyses: N/A, sufficiently covered above
Supplementary Material: The paper has code included, but I did not check it in detail.
Relation To Broader Scientific Literature: The paper relates to the broader scientific literature on LLM-based robotic planning and deep equilibrium models.. The use of deep equilibrium models for self-refinement is a novel contribution.
Essential References Not Discussed: Guiding long-horizon task and motion planning with vision language models, Z Yang, C Garrett, D Fox, T Lozano-Pérez, LP Kaelbling
Predicate Invention from Pixels via Pretrained Vision-Language Models. Ashay Athalye, Nishanth Kumar, Tom Silver, Yichao Liang, Tomás Lozano-Pérez, Leslie Pack Kaelbling
Generalized Planning in PDDL Domains with Pretrained Large Language Models. Tom Silver, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie Pack Kaelbling, Michael Katz
Other Strengths And Weaknesses: The clarity of the paper could be improved:
- A brief description of the task planning problem in section 3 would be useful for setting the paper up.
- Figure 3 is not very clear to me at all. It contains all proposed components, i.e. the memory, world model, and planner. However it seems there should be a distinction between what is occurring at inference-time and what is occurring at training time. For example, from my understanding the memory is not used during inference-time but is used during training. In addition, what is the interaction between the world model and the environment during training in Figure 3? The meaning of the arrows to and from the world model / environment is not clear.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Why does using a combination of feedback from the world model and environment result in higher performance than the environment only?
2. The results show that the proposed method scores lower on executability than baselines. The reason given is because of illegal overlength inputs. Why is the proposed method more susceptible to this problem than the baselines?
3. Did the authors consider iteratively training and re-collecting a new dataset using the new refinement model? I am curious about the potential performance ceiling.
4. Did the authors come across any situations where the refinement method fails to converge?
5. How does the quality of the base model affect the dataset generation, and what happens if the initial self-refinement fails to converge?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the very constructive comments. Below, we first address the raised questions.
[Q1] Advantage of combined feedback
* The world model's feedback offers an alternative to environmental feedback when the interaction budget is limited. By combining both feedbacks, the planner can perform additional self-correction rounds within the same budget. This leads to faster convergence (see response to W1 below) and thus improved performance (see Table 3).
[Q2] Lower executability
* The lower executability of our method is due to format issues in overlength outputs. This is likely caused by complex prompts (including history plans and feedback) or limited memorization of ground truths. However, these errors are minor and can be fixed with simple post-processing, resulting in nearly 100% executable plans in the virtual environment.
[Q3] Iterative training with new dataset
* This is also how we train the model: after each iteration, we update the training dataset with the equilibrium solutions generated by the newly finetuned planner, and use this updated dataset for the next training iteration. As shown in the table below, the overall performance steadily improves over training iterations.
|#Iterations|Both novel SR|Both novel GCR|Novel scene SR|Novel scene GCR|Novel task SR|Novel task GCR|
|-|-|-|-|-|-|-|
|0|00.00|00.00|00.00|00.00|00.00|00.00|
|2|51.61|74.67|65.26|82.78|**60.68**|**78.40**|
|4|51.61|73.30|68.42|82.93|53.20|72.81|
|6|**51.61**|**75.13**|**75.79**|**85.79**|56.62|75.53|
[Q4] Convergence analysis
* We observe that self-refinement consistently converges across different initializations. The table below summarizes the number of iterations to convergence over 10 runs on 60 random tasks, where all tested LLMs (including two new Qwen models) converge to an equilibrium within a finite number of iterations. The intuition behind this is that LLMs tend to repeat themselves, leading to easier convergence.
|Method|Mean|Std|Min|Max|
|-|-|-|-|-|
|Original Llama-3-8B|6.70|2.12|3.22|14.98|
|Original Qwen-2.5-0.5B|4.80|2.13|2.69|9.43|
|Original Qwen-2.5-7B|7.68|5.03|2.46|16.78|
|SFT Llama-3-8B|2.46|0.88|2.02|3.80|
|Ours|3.02|1.61|2.32|4.07|
[Q5] Dataset generation w.r.t. base model
* Since self-refinement consistently converges in practice, the resulting equilibrium can be reliably paired with ground truth to form meaningful training data. This holds true for both Qwen and Llama base models.
---
We also summarize other weaknesses mentioned in the review and provide our clarifications below.
[W1] Claim of reducing computational cost
* To clarify, we did not claim that the world model reduces computational cost. Instead, its goal is to reduce the number of environmental interactions, as mentioned in Section 3.4. To verify this claim, we compare the evolution of the success rate (SR) with and without the world model in the table below, which shows that the world model allows a similar or higher success rate even with fewer environmental interactions.
|#Interactions|World model|Both novel|Novel scene|Novel task|
|-|-|-|-|-|
|0|x|33.87|49.47|34.62|
||$\checkmark$|**37.09**|**65.26**|**40.17**|
|1|x|50.00|71.57|**53.41**|
||$\checkmark$|**53.22**|**75.78**|50.85|
|2|x|51.61|74.73|**55.12**|
||$\checkmark$|**56.45**|**76.84**|53.63|
|3|x|51.61|75.79|**56.62**|
||$\checkmark$|**56.45**|**77.89**|54.91|
[W2] Robustness to environmental disturbances
* We simulate environmental disturbances by randomly replacing some environmental feedback with wrong feedback during inference. As shown in the results below, our model exhibits stable performance under small amounts of disturbances ($\le$10%), demonstrating its robustness.
|Disturbances|Both novel SR|Both novel GCR|Novel scene SR|Novel scene GCR|Novel task SR|Novel task GCR|
|-|-|-|-|-|-|-|
|0%|51.61|**75.13**|**75.79**|85.79|**56.62**|**75.53**|
|10%|**53.23**|73.43|74.74|**85.84**|53.85|73.09|
|20%|50.00|73.10|73.68|83.22|54.49|71.80|
[W3] Missing references
* Thank you for pointing out the related work on planning with VLMs and LLMs. We will update the draft to include their discussion.
[W4] Description of task planning problem
* Thanks for the valuable suggestion. We will add a brief description of the task planning problem at the beginning of Section 3 to improve clarity.
[W5] Clarity of Figure 3
* Figure 3 illustrates our planning framework during training, and thus includes training-specific components, e.g. the equilibrium memory you noted. The world model does not directly interact with the environment; instead, it is trained on past experiences stored in the equilibrium memory, as shown by the arrow pointing to it. The single arrow from the environment (and none from the world model) indicates that the planner receives feedback solely from the environment during training. We will revise the figure description to make this clearer. | Summary: This paper proposes an equilibrium model-based planner for decomposing high-level tasks into mid-level action sequences in an iterative manner taking environment and world model feedback. Experiments on VirtualHome-Env benchmark demonstrates that the approach can improve over a few existing approaches.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes, but it would be better if the proposed approach could be grounded from high-level tasks and mid-level actions down to real-robot executable primitive actions or even low-level control.
Theoretical Claims: Yes, the proofs in the appendix from the literature.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes, the re-iteration of the relevant literature and experiment details.
Relation To Broader Scientific Literature: Planning with LLMs or multi-modality foundation models is an important topic to push the AI frontier with physical interaction. This work depicts one way to do so in an iterative-refinement manner and improves over several existing planning algorithms/paradigms using LLMs as the backbone.
Essential References Not Discussed: No, as I am aware of.
Other Strengths And Weaknesses: Strength:
1. Introducing equilibrium sequence modeling for planning with LLMs.
2. With the iterative refinement planning paradigm, environment feedback and world model feedback can be incorporated into the planning process.
Weakness:
1. The approach has not yet been connected to real-world robot control, so the validity of the proposal is not fully verified.
2. The success rate is still too low for real-world application.
Other Comments Or Suggestions: 1. For Figure 3, it would be better to clarify whether it depicts the inference framework, the training framework, or both. As it stands, the input-output relationship of the planner is unclear: no output is shown. How the planner's output reaches the environment and world model, and how the planner receives feedback from them, should be depicted clearly. Similarly, for Section 3.4 (practical implementation), it would be better to specify the training and inference pipelines separately for clarity.
2. In the description of Table 3, please clarify or define how the world model feedback and environmental feedback enter the model during training and inference, separately for each combination.
Questions For Authors: As mentioned above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We deeply appreciate your valuable suggestions, and we would like to address your main concerns as follows:
[W1] Connection to real-world robot control
* Our work focuses on high-level planning that decomposes each task into mid-level actions, as a complementary direction to low-level control. Extending this to real-world robotics with low-level actions presents several key challenges: (1) effective tokenization of actions and visual inputs, requiring progress in vision-language action models, (2) handling latency from the equilibrium solving process. To address the latter, our method includes designs such as reusing equilibrium solutions to reduce overhead.
[W2] Low success rate towards application
* The lower success rate can be attributed to three main factors: (1) limited data in the VirtualHome-Env dataset, with only 1360 action sequence annotations, (2) relatively small model size (Llama 3 8B), which may constrain capability, (3) the absence of multimodal inputs that humans naturally rely on. We expect significant performance gains by increasing model size and multimodal data.
[W3] Clarity of Figure 3 and Section 3.4
* Figure 3 illustrates our training framework. At each step, the planner takes context $c_t$ as input and outputs a plan $x_t$. Only the subset of equilibrium plans interact with the environment and are stored in the equilibrium memory. During training, the planner receives feedback solely from the environment based on these equilibrium plans.
* The training and inference pipelines are detailed separately in Appendix B.3. Both the planner and world model are trained using environmental feedback stored in the equilibrium memory. They then work together during inference, where the world model provides additional feedback to the planner.
[W4] Use of world model and environmental feedback in Table 3
* During training, we use only environmental feedback to train both the planner and the world model. At inference time, we evaluate the following four setups:
1. No feedback: Only inner-loop refinements without external feedback.
2. Environmental feedback: Up to 10 rounds, provided after each inner loop converges.
3. World model feedback: also up to 10 rounds, provided after inner loop convergence.
4. Both feedback: Alternating between environmental and world model feedback after each inner loop, within a total budget of 10 environmental interactions. | null | null | null | null | null | null |
Core Context Aware Transformers for Long Context Language Modeling | Accept (poster) | Summary: This paper addresses the challenge of long-context language modeling by proposing a Core Context Aware Attention (CCA-Attention) mechanism. The method improves efficiency by dynamically selecting core tokens within token groups while preserving local information. CCA-Attention consists of two main components: (1) a Globality-aware Pooling Module, which segments the input into groups and selects representative core tokens to approximate global attention, and (2) a Locality-preserving Module, which maintains a local attention window to retain fine-grained token interactions. Experimental results demonstrate improvements in both latency and memory usage.
Claims And Evidence: While efficiency gains are demonstrated, the actual performance improvement in terms of model accuracy/quality is "marginal" compared to baselines. Without seeing the exact numbers, it appears the efficiency-performance tradeoff may not be as compelling as suggested.
Generalizability claims: The experiments are limited to two LLaMA-7B models, making it difficult to conclude that these benefits would extend to other architectures or larger models.
Methodological effectiveness: Several technical aspects lack sufficient justification:
The rationale for fixed-size grouping in the pooling module
The potential information loss when using a single core token to represent groups with multiple high-attention tokens
The ability to perform well across varying parameter settings when trained on fixed values
Benchmark performance: The reviewer highlights significant discrepancies between reported results and official leaderboard scores on LongBench tasks, calling into question the validity of the performance evaluation.
Technical clarity: Several implementation details are insufficiently explained, including training strategies, parameter configurations, and variable definitions.
Methods And Evaluation Criteria: The methods and general evaluation approach make sense for the problem of efficient long-context modeling, but several gaps in the evaluation methodology and presentation limit the strength of conclusions that can be drawn from the results. The paper would benefit from more comprehensive model testing, clearer parameter specifications, and better explanation of observed performance patterns.
Theoretical Claims: The paper takes a primarily experimental approach to validating its method, with theoretical understanding coming from empirical results rather than formal proofs.
Experimental Designs Or Analyses: The experimental design appears to have several validity concerns, particularly regarding:
- Transparency of experimental parameters
- Benchmark result discrepancies
- Limited model testing
- Unclear training methodologies
- Potentially biased baseline comparisons
These issues substantially impact the ability to verify the paper's claims.
Supplementary Material: Yes, the Theoretical Analysis on Reachability for CCA-Attention.
Relation To Broader Scientific Literature: The paper makes a relevant contribution to the efficient long-context modeling literature, though with the methodological and evaluation limitations noted above.
Essential References Not Discussed: The paper only compares against training-free methods, implying that important training-based approaches are missing. For example:
Position Interpolation methods (PI, NTK-aware PI)
LongLLaMA and other specifically fine-tuned long-context models
The absence of these references, particularly other competitive training-based methods, creates a significant gap in contextualizing the paper's contributions. By only comparing against a subset of training-free methods (and not the strongest ones), the paper fails to properly position its approach within the current state of the art for long-context modeling, potentially overstating its relative advantages.
Other Strengths And Weaknesses: Strengths
1. The paper focuses on an important and timely research problem—efficient long-context modeling.
2. Combining the long-range Globality-aware pooling module with the short-range Locality-preserving module intuitively makes sense and is quite innovative.
3. The paper is also well-written, and the illustrations are clear and detailed, making it easy to follow.
Weaknesses
1. The baselines used for comparison are all training-free methods, yet the average performance improvement is relatively small. While the proposed method achieves better latency and memory efficiency, the actual improvement in average score is marginal. A discussion on why this is the case would be helpful.
2. Experiments are only conducted on two LLaMA-7B models. To better demonstrate the generalization of the proposed method, it would be beneficial to evaluate it on larger models or different architectures.
3. What are the benefits of using fixed-size grouping in the Globality-aware pooling module? If multiple tokens within a group have high attention scores with the last token, wouldn't using one core token to represent them introduce significant approximation errors?
4. The scores in Table 1 seem unreasonable for tasks such as single document QA, multi-document QA, and summarization because there is a significant discrepancy compared to the official leaderboard in https://github.com/THUDM/LongBench/tree/main/LongBench. Could the authors clarify the reason for this difference?
5. In Figure 3, CCA-LLM uses different group size g and local window size s. The authors state that the group size g ranges from 16 to 2, and the local window size s ranges from 4096 to 1024. However, the specific parameter values used for each point are not annotated in the figure, which causes confusion given the apparent significant differences in performance. Additionally, I'm curious whether during the fine-tuning only fixed g and s are used; how can it perform well across varying g and s settings?
6. CCA-LLM’s training strategy in Tables 1 and 2 is unclear. Was full fine-tuning or partial fine-tuning used? It would be helpful if the authors could provide results for different fine-tuning strategies to allow readers to better understand the trade-off between training cost and model performance.
Other Comments Or Suggestions: 1. Some variable definitions are unclear. For example, in line 225, what does s represent? Although Algorithm 1 mentions it as the local window size, it should be explicitly defined the first time it appears in the main text. Similarly, how is K_1^G obtained? Does it refer to the first dimension of K^G? A clearer explanation is needed.
Questions For Authors: 1. What is the batch size in the experiment shown in Figure 4? It appears that CCA-LLM can handle context lengths up to 128K without significantly increasing kv cache usage compared to full self-attention. What are the configurations of s and g here? Moreover, MInference barely reduces kv cache usage, which is also confusing. Could the authors clarify it?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > Q1. The paper only compares against training-free methods, important training-based approaches are missing. While the method achieves better latency and memory efficiency, the actual improvement in average score is marginal.
A1. We appreciate your emphasis on contextualizing our work against training-based methods. Below, we address the concerns raised:
**Method Positioning and Performance**
Our work focuses on **long-context modeling via an efficient attention mechanism** for pretrained LLMs. This is distinct from position interpolation methods or long-context adaptation techniques like LongLLaMA, which do not modify the attention mechanism in LLMs. By reducing computational costs (**7.9×** on 128K context) while outperforming training-free baselines, our method offers immediate value to practitioners. Finally, we emphasize that our goal is to reduce redundancy in attention, so our method should be compared with those that also optimize attention (e.g., MInference).
**Further Experiments**
To address your concerns, we further compare our method with a training-based model, LongLoRA[ICLR 2024] with PI techniques. In Table I, **our CCA-LLM outperforms LongLoRA in modeling accuracy, inference speed and memory footprint**. We will include these in the revised manuscript.
Table I: Results on LongBench-E. ''FTL'' denotes the latency to generate the first token.
|Model|Avg.↑|FTL↓ (s)|Mem.↓ (GB)|
|-|-|-|-|
|LLaMA2-7B-32K|22.11|9.15|35.58 |
|• LongLoRA|21.58|9.15 (1.0x)|35.58 (0%↓)|
|• CCA-LLM (Ours)|**21.86**|**2.59** (**3.5**x)|**19.12** (**46**%↓)|
*Note: During inference, LongLoRA exhibits the same computational and storage overhead as full self-attention, because LongLoRA's S2-Attention is only applicable during training and full self-attention is used during inference.*
> Q2. More results on different architectures.
A2. Following your suggestions, we conduct more experiments on LLaMA3.1-8B and Qwen2.5-7B (128K context). The results in Tables II and III show that CCA-Attention outperforms the state-of-the-art counterpart MInference in terms of **performance, computational efficiency**, and **memory reduction**.
We would include these results in the revised paper.
Table II. Comparisons of Llama3.1-8B-Instruct-128K on LongBench-E.
|Model|Avg.↑ (%)|FTL↓ (s)|Mem.↓ (GB)|
|-|-|-|-|
|Llama3.1-128K|37.93|9.55|40.38|
|• MInference|37.74|4.93 (1.9x)|35.95(11%↓)|
|• **CCA-LLM (Ours)**|**37.81**|**3.08** (**3.1**x)|**20.63** (**49**%↓)|
Table III. Comparisons of Qwen2.5-7B-128K on LongBench-E.
|Model|Avg.↑ (%)|FTL↓ (s)|Mem.↓ (GB)|
|-|-|-|-|
|Qwen2.5-128K|38.38|10.58|35.11|
|• MInference|36.72|4.86 (2.2x)|32.40 (8%↓)|
|• **CCA-LLM (Ours)**|**38.08**|**2.74** (**3.9x**)|**19.31** (**45**%↓)|
> Q3. Benefits of using fixed-size grouping.
A3. Fixed-size grouping allows for optimized memory access and parallel computation, whereas dynamic group sizes would introduce significant implementation complexity.
> Q4. If multiple tokens in a group have high scores with the last token, wouldn't using one core token to represent them introduce significant approximation errors?
A4. The mentioned issue can be significantly mitigated by the fine-tuning process, based on the following finding: CCA-Attention without and with fine-tuning yields 6.92 and 22.24 on LongBench-E, respectively. This confirms core token aggregation effectively captures group-wise information.
> Q5. Why are the scores in Table 1 different from the official leaderboard?
A5. The leaderboard tests **chat models** with SFT on LongBench, while we test **base models** without SFT on LongBench-E.
> Q6. Whether during the fine-tuning only fixed g and s are used? How can it perform well across varying g and s?
A6. We use fixed $g$ and $s$ during finetuning. The reasons for the promising performance with varying $g$ and $s$ are twofold:
- Globality-aware pooling module dynamically **pools groups of tokens into core tokens based on their importance, regardless of $g$**. Thus it generalizes to arbitrary $g$ during inference.
- Locality-preserving module employs rotary position embeddings that encode relative distances rather than absolute positions, which **decouples from specific $s$**, allowing adaptation to different $s$.
We would include these in the revised paper.
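The rotary-embedding property invoked above (relative rather than absolute position encoding) can be checked numerically. Below is a generic single-vector RoPE sketch, not the authors' implementation; it verifies that the query-key dot product depends only on the positional offset, which is the basis for adapting the local window size $s$ at inference:

```python
import numpy as np

def rope(x, pos, theta=10000.0):
    """Apply rotary position embedding to a single vector x at position pos.
    Each pair of dimensions is rotated by a position-dependent angle."""
    d = x.shape[-1]
    freqs = theta ** (-np.arange(0, d, 2) / d)  # one frequency per dim pair
    ang = pos * freqs
    cos, sin = np.cos(ang), np.sin(ang)
    out = np.empty_like(x, dtype=float)
    x1, x2 = x[0::2], x[1::2]
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)
# The score depends only on the offset (here 3), not the absolute positions:
s_near = rope(q, 10) @ rope(k, 7)
s_far = rope(q, 110) @ rope(k, 107)
assert np.isclose(s_near, s_far)
```

Because in-window attention scores are invariant to where the window sits in the sequence, shifting the local window does not change the score distribution; this is the generic property underlying the rebuttal's argument, checked here in isolation.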
> Q7. Was CCA-LLM fully or partially fine-tuned in Tables 1 & 2? Could you compare them?
A7. Full fine-tuning. In Section C.3, we compare different fine-tuning strategies. Full fine-tuning achieves better performance than partial fine-tuning (22.24 vs. 21.96) on LongBench-E with more training hours (15.4 vs. 11.2).
> Q8. Implementation details in Figure 4.
A8. We use batch size=1, $g$=16 and $s$=1024.
> Q9. Why does MInference barely reduce KV cache usage?
A9. Minference's sparse attention is only applicable in pre-filling. During decoding, it uses full self-attention, leading to same KV cache usage.
---
We sincerely hope our clarifications above have addressed your questions. | Summary: This work proposes Core-Context-Aware attention to enhance efficient long-context modeling for LMs. Two modules are included: a global pooling module to compresses groups of tokens into core tokens and a local module to preserve local information. Experiments on various tasks and datasets show the effectiveness of the proposed approaches.
Update after rebuttal:
I do appreciate the authors' promise of improvements and extra results. Nevertheless, I think my major concern over the similarities between this work and previous work from the perspective of methodology is still not addressed, which is why I will keep my original score.
Claims And Evidence: Mostly.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental designs seem reasonable to me.
Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: There have been a line of context-compression methods not fully discussed and compared:
Pooling-styled compression:
- Compressive Transformer: https://arxiv.org/pdf/1911.05507
Special-token based compression:
- Gist: https://arxiv.org/pdf/2304.08467
- Nugget: https://arxiv.org/pdf/2310.01732
- Landmark: https://arxiv.org/pdf/2305.16300
- Beacon: https://arxiv.org/pdf/2401.03462
- Dodo: https://arxiv.org/pdf/2310.02409
Other Strengths And Weaknesses: Strengths:
- The paper is overall easy to understand.
- The proposed methods have been shown effective for various tasks and datasets.
Weaknesses:
- The proposed methods share a variety of similarities with previous directions of context compression, some of which are not well discussed and compared with.
- There is a lack of ablation studies and analyses on some design factors of the proposed approach.
Other Comments Or Suggestions: - It might be interesting to explore more long-context tasks in addition to LongBench. Recently there have been a variety of long-context benchmarks such as RULER and HELMET, which cover a wider range of tasks and types of contexts.
- Llama 2 is used as the base model, which has a limited context window. It would be interesting to explore whether the proposed approach can help more recent models that have already extended to long contexts (such as Llama 3 and Qwen models).
Questions For Authors: - In the proposed method, there are some key hyper-parameters that need to be decided, such as the group size and local window size. I'm wondering how they are decided and what are their effects on the model's performance? It would be nice to include more analyses and ablation studies.
- I'm also wondering how much data will be needed to fine-tune the model with the proposed approach?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: > Q1. The proposed methods share similarities to previous context compression methods[r1-r6]. r1: Compressive Transformers, r2: Gist, r3: NUGGET, r4: Landmark, r5: Beacon, r6: DODO.
A1. Thanks for your valuable comments. We clarify the differences below:
- **Problem Importance & Motivation**
The core challenge we tackle is efficient and effective long-context modeling in LLMs. While self-attention handles long sequences, its quadratic complexity and redundancy in ultra-long contexts (e.g., 128K) severely degrade efficiency and modeling performance. Unlike prior works that either compress features via auxiliary networks [r1] or compress context with extra special tokens [r2-r6], our method aims to dynamically identify and strengthen task-relevant core context while suppressing redundancy. This distinction is crucial: compression prioritizes shortening sequences, whereas our core-context awareness reduces context redundancy in the self-attention computation for long-context modeling.
- **Novelty & Differences**
Our CCA Attention introduces two innovative, complementary mechanisms:
- Globality-aware Pooling Module: Unlike context compression methods [r1-r6], it dynamically groups tokens and pools them into core tokens based on their importance. This ensures the model **focuses on task-relevant context while reducing redundancy**.
- Locality-preserving Module: While prior compression methods may have a risk of losing local details, our module selectively preserves neighboring tokens to maintain locally relevant information.
**We would cite the above papers and include these discussions in the revised manuscript.**
---
> Q2. It might be interesting to explore more long-context tasks in addition to LongBench, such as RULER.
A2. Following your suggestions, we conduct more experiments on RULER with LLaMA2-7B-80K and varying context lengths (4K-64K). The results in Table I show that our method achieves **superior performance over MInference, a state-of-the-art counterpart**.
Table I: Comparisons of LLaMA2-7B-80K on RULER.
|Methods|8K|16K|32K|64K|Avg.↑|
|-|-|-|-|-|-|
|LLaMA2-80K|71.90|66.26|61.54|55.15|63.71|
|• MInference|67.78|65.32|**61.43**|52.77|61.83|
|• CCA-LLM|**68.15**|**66.31**|60.89|**54.88**|**62.56**|
> Q3. It would be interesting to explore whether the proposed approach can help more recent models (such as LLama 3 and Qwen models).
A3. We conduct more experiments on LLaMA3.1-8B and Qwen2.5-7B with context length 128K in Tables II and III, respectively. The results show that our method helps LLaMA3 and Qwen models improve computational efficiency and reduce memory usage in long-context modeling.
- **Computational efficiency**: CCA-Attention achieves 3.1× on LLaMA3.1-8B, and 3.9× on Qwen2.5-7B with 32K context.
- **Memory reduction**: CCA-Attention reduces 49% GPU memory footprint on LLaMA3.1-8B, and 45% on Qwen2.5-7B.
- **Performance**: CCA-Attention consistently outperforms MInference across all models, maintaining competitive accuracy with vanilla self-attention.
Table II. Comparisons of Llama3.1-8B-Instruct-128K on LongBench-E. ''FTL'' denotes the latency to generate the first token in the pre-filling stage.
|Model| Avg.↑ (%)|FTL↓ (s)|Mem.↓ (GB)|
|-|-|-|-|
|Llama3.1-128K|37.93| 9.55 |40.38 |
|• MInference|37.74|4.93 (1.9x)|35.95(11%↓)|
|• **CCA-LLM (Ours)**|**37.81**|**3.08** (**3.1**x)|**20.63** (**49**%↓)|
Table III. Comparisons of Qwen2.5-7B-128K on LongBench-E.
|Model| Avg.↑ (%)|FTL↓ (s)|Mem.↓ (GB)|
|-|-|-|-|
|Qwen2.5-128K|38.38|10.58 |35.11 |
|• MInference|36.72|4.86 (2.2x)|32.40 (8%↓)|
|• **CCA-LLM (Ours)**| **38.08**|**2.74** (**3.9x**)|**19.31** (**45**%↓)|
> Q4. There are some key hyper-parameters, such as the group size and local window size. It would be nice to include more analyses and ablation studies.
A4. In **Section C.3 of the supplementary material**, we show ablation studies on group size $g$ and local window size $s$. The results show that as $g$ increases, efficiency improves while performance (in terms of PPL) decreases; as $s$ increases, performance improves while efficiency decreases. To strike a balance, we select $g=16$ and $s=1024$ as the default training setting.
> Q5. How much data will be needed to fine-tune the model?
A5. In **Section B.2 of the supplementary material**, we give the fine-tuning details: we use the SlimPajama dataset to fine-tune the model, with only 2.1 billion tokens for LLaMA2-7B-32K and 5.0 billion tokens for LLaMA2-7B-80K.
---
We sincerely hope our clarifications above have addressed your questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
For "Novelty & Differences":
> Unlike context compression methods [r1-r6], it dynamically groups tokens and pools them into core tokens based on their importance.
I think there are few key differences between much of the previous work and this work; most of it uses a fixed chunking-based sequence segmentation scheme and reduces the original raw tokens to compressed token representations, which is similar to the method adopted in this work (though the detailed calculation methods may be slightly different). In the sense of "dynamically selecting", much of the previous work discussed above can also achieve this through self-attention.
> Locality-preserving Module: While prior compression methods may have a risk of losing local details, our module selectively preserves neighboring tokens to maintain locally relevant information.
I think adopting local sliding windows has been a common technique to keep local information, which is hard to be considered as a main novelty point.
In this way, I think at least some of these previous works should be considered in the comparisons (at a minimum they should be discussed, which has been ignored in the current version).
For the extra experiments on the latest Llama and Qwen models, thanks for the updates; I think these results are important and should be further extended and maybe replace the original results in the current version (also considering that LLaMA2-7B-32K and LLaMA2-7B-80K are non-official extended Llama models).
I do appreciate the authors' extra experiments, and also considering other reviewers' judgments, my current evaluation for this work is somewhat borderline (around 2.5, but unfortunately there is no option for this). Please feel free to let me know if there are any points that I may have missed or misunderstood.
To make this work more solid (towards a positive score from my side), I think:
1) Comparisons and discussions related to previous work should be carefully included,
2) More extended experiments with the latest models should be adopted as the main results.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you very much for your prompt response to our rebuttal. We hope to make the following further responses to your concerns and sincerely hope you would be satisfied.
> Q1. Comparisons and discussions related to previous work should be carefully included.
A1. We further highlight our novelty and clarify the key differences as follows:
**1. Problem Definition**
Our work focuses on improving long-context modeling efficiency by reducing redundancy in self-attention. Long-context modeling (e.g., 128K) is important for real-world applications like document analysis, and multi-turn dialogues. However, the quadratic complexity of full self-attention and the redundancy in long contexts pose two key challenges:
- **Performance Degradation**: The redundant context may hamper LLMs from capturing dependencies among crucial tokens, degrading representation performance.
- **Unnecessary Computation**: The redundancy in attention introduces unnecessary computational and storage overhead, especially for extremely long contexts.
Our proposed **Core Context Aware Attention (CCA-Attention)** addresses redundancy issues in self-attention through:
- **Globality-aware pooling module** dynamically pools input tokens into core tokens based on their significance. This enables our method to automatically focus on the core context.
- **Locality-preserving module** captures the local and fine-grained context by focusing on neighboring tokens. It is not a standalone component but is complementary to the globality-aware pooling module.
The complementary nature of these modules ensures that **both high-level semantics (via core tokens) and low-level details (via local tokens)** are preserved, while improving the computational and storage effciency.
**2. Limitations of Existing Methods**
Existing approaches [r1-r6] on context compression primarily include two strategies:
- Gist[r2], Landmark[r4] and Beacon[r5] compress input/intermediate tokens into smaller sets of special tokens which can be cached and reused for compute efficiency.
- Compressive Transformers[r1], NUGGET[r3] and DODO[r6] introduce auxiliary networks to compress past tokens into compact representations with reduced dimensions.
These methods may encounter two key limitations:
- They may overlook the sparsity patterns of self-attention in long contexts. In contrast, **our CCA-Attention directly targets redundancy reduction in self-attention**. To this end, we propose a parameter-free group-wise pooling strategy that dynamically pools redundant tokens. Critically, the pooling weights are derived based on context-aware importance, eliminating the need for additional parameters or specialized tokens. **Our method retains full compatibility with existing LLM architectures while improving computational and storage efficiency**.
- They accelerate only the prefilling [r2,r3,r6] or the decoding [r1,r4,r5] phase of inference. In contrast, our proposed CCA-Attention enables efficient long-context modeling (e.g., 128K) while **accelerating both training and inference (including the prefilling and decoding phases)**. CCA-Attention achieves significant speedups (7.9$\times$ over vanilla self-attention on 128K context) while maintaining comparable model performance, using a hardware-optimized operator implemented in Triton. Crucially, our method reduces KV cache memory usage by up to 93% at 128K context, improving deployment feasibility without compromising accuracy.
**3. Contributions of Our CCA-Attention**
We would highlight our contributions as follows:
- Our CCA-Attention is an **efficient and plug-and-play module for existing LLMs in long-context modeling**. By using core tokens as proxies, CCA reduces attention complexity to linear while requiring minimal fine-tuning for pretrained LLMs.
- We introduce a dynamic globality-aware pooling module to derive core tokens based on their importance and a local-preserving module to retain local tokens. This combines **both high-level semantics and low-level details, leading to accurate long-term dependency modeling**.
- Experiments show CCA-Attention outperforms existing efficient attention methods, achieving a 7.9× speedup and a 93% KV cache reduction over full self-attention in 128K context with comparable accuracy.
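The linear-complexity claim above can be illustrated with a rough operation count. This is a back-of-the-envelope sketch under the assumption that each query attends to roughly n/g core tokens plus an s-token local window, counting only attention-score operations (actual wall-clock speedups, such as the reported 7.9×, are lower because projections and memory traffic are not free):

```python
def full_attn_ops(n: int) -> int:
    """Pairwise attention-score count for full self-attention."""
    return n * n

def cca_ops(n: int, g: int, s: int) -> int:
    """Approximate score count if each query sees n/g core tokens
    plus a local window of s tokens (assumed model, not exact)."""
    return n * (n // g + s)

n, g, s = 128 * 1024, 16, 1024  # 128K context, default g and s from the rebuttal
speedup = full_attn_ops(n) / cca_ops(n, g, s)
print(f"idealized speedup at 128K: {speedup:.1f}x")  # ~14x in this idealized count
```

Note that `cca_ops` grows linearly in n only when s is fixed and g scales with context; with fixed g the n/g term still dominates at very long contexts, which is consistent with the rebuttal's emphasis on choosing g per deployment.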
We hope this clarification highlights the novelty and contributions of our method. **We will cite these papers and carefully discuss these differences from [r1-r6] in the revised manuscript.**
> Q2. More extended experiments with the latest models should be adopted as the main results.
A2. According to your suggestions, we will **update the main results in Tables 1 and 2 to prioritize experiments on LLaMA3.1-8B-128K and Qwen2.5-7B-128K**, as these are officially supported and widely recognized models.
---
We sincerely hope our clarifications above have addressed your concerns. Thank you again for your constructive suggestions to strengthen our paper.
Best,
Authors | Summary: This paper addresses the computational inefficiency and redundancy in Transformer-based Large Language Models (LLMs) when processing extremely long contexts (e.g., 128K tokens). The authors propose Core Context Aware (CCA) Attention, a plug-and-play mechanism comprising two modules: (1) a globality-aware pooling module that dynamically compresses input tokens into core tokens via weighted grouping, and (2) a locality-preserving module that retains fine-grained local context. The fusion of these modules reduces computational complexity from quadratic to linear while preserving full token reachability. Experiments on LongBench and multi-document QA benchmarks demonstrate superior performance (e.g., 7.9× speedup at 128K context) over methods like MInference and StreamingLLM, with reduced memory footprint.
## update after rebuttal
Thanks to the authors for addressing my concerns and providing additional results.
I didn't find a novelty issue with the idea of core context modeling on my side.
I will keep my score.
Claims And Evidence: The claims in the paper are supported by clear evidence.
Methods And Evaluation Criteria: The methods presented in the paper are well-motivated, aiming to balance computational efficiency with model performance. However, I still have some question:
1. Figure 3 demonstrates stable performance with varying group sizes (g) and window lengths (s), what are the possible reasons why changing the size of g and s can work also well during inference?
2. Could the core token (Eqn.2) be biased toward later tokens in a group due to using the last token’s query?
Theoretical Claims: I have thoroughly reviewed the theoretical claims in the manuscript, which are well-structured and insightful. The authors provide a rigorous reachability guarantee through Proposition 1, demonstrating that the CCA-Attention mechanism preserves full token accessibility by integrating global and local attention scores. This formalization ensures that all tokens remain interconnected, addressing a critical aspect of long-context modeling. Additionally, the computational and storage complexity analysis is comprehensive, highlighting the method's superiority in reducing computation costs and optimizing key-value cache storage. These analyses underscore the method's efficiency and effectiveness for handling long-context tasks.
Experimental Designs Or Analyses: The experimental design is comprehensive, covering a range of benchmarks and metrics to validate the proposed method's effectiveness. The use of LongBench (Bai et al., 2023) and multi-document EM scores (Liu et al., 2024b) provides a robust assessment of long-context understanding. The inclusion of various model sizes (LLaMA2-7B-32K and LLaMA2-7B-80K) ensures scalability analysis. The comparison with state-of-the-art methods (StreamingLLM, LM-Infinite, MInference) highlights the competitive edge of the CCA-Attention mechanism.
I have a few questions regarding the experimental design:
1. The paper explores a fixed group size m for token grouping. While this simplifies experimentation, real-world applications often require balancing accuracy and efficiency across diverse tasks. Without requiring additional experiments, could the authors discuss: is there a principled way to set m for unseen tasks (e.g., smaller groups for high-precision tasks vs. larger groups for efficiency) ?
2. Table 2 shows that model performance does not consistently degrade with increasing text length. A more in-depth analysis or explanation would enhance the understanding of the method.
Supplementary Material: I have reviewed the supplementary material, including detailed proofs, implementation details and ablation studies.
Relation To Broader Scientific Literature: Although CCA builds on sparse attention (e.g., Longformer) and linear approximations (e.g., RetNet), it introduces dynamic token compression, significantly reducing the computational cost and storage demands of attention while outperforming baseline methods like LM-Infinite and MInference.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: Strengths:
1. The method's primary strength lies in its compatibility with pretrained LLMs, such as LLaMA2. This plug-and-play design minimizes the need for extensive retraining, making it easier to integrate into existing models.
2. CCA integrates Triton, whose ability to optimize low-level operations enhances the method's efficiency, enabling faster inference times.
Weaknesses:
1. The method currently uses a fixed group size for inference, which limits its adaptability to varying text lengths and complexities. The paper lacks an analysis of adaptive grouping strategies, which could improve the method's flexibility and performance.
Other Comments Or Suggestions: 1. The claim of “significant improvements” seems overstated. Adding “compared with other baseline methods” would make it more accurate.
2. Would it be clearer if $C_t^1(x_t)$ were revised as $C^1(x_t)$?
3. In Figure 2, the bold font usage of g is inconsistent, and $A^G$ is confusing to me.
4. The meaning of “LLaMA2-7B-32K” and “LLaMA2-7B-80K” should be specified.
5. The notations $K$ and $V$ in line 12 of Algorithm 1 should carry a \tilde.
Questions For Authors: 1. Will the assumption of redundant text be related to the topic of the test cases? Math and general QA may have different degrees of redundancy.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: >Q1. Figure 3 demonstrates stable performance with varying group sizes (g) and window lengths (s) during inference, what are the possible reasons?
A1. We deeply appreciate your valuable feedback. We design our CCA-Attention for flexibility, enabling promising performance across varying $g$ and $s$ settings without requiring additional fine-tuning. The reasons are twofold:
- Globality-aware pooling module dynamically **pools groups of tokens into core tokens based on their importance, regardless of the group size $g$**. The pooling weights are derived from intra-group attention scores, which depend on semantic relevance rather than fixed positional patterns. As a result, the module generalizes to arbitrary group sizes $g$ during inference.
- Locality-preserving module is inherently translation-invariant to window boundaries by employing rotary position embeddings that encode relative distances rather than absolute positions. This relative distance encoding naturally **decouples the module's operation from specific window size $s$**, allowing adaptation to different $s$ in inference while preserving local context relationships.
We would include these discussions in the revised paper.
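As a rough illustration of the pooling argument above (our own sketch, not the paper's implementation; the function name and the simple dot-product scoring are assumptions), the last-token-as-query weighted pooling can be written as:

```python
import numpy as np

def pool_group_to_core(group_embs: np.ndarray) -> np.ndarray:
    """Compress a group of token embeddings of shape (g, d) into a single
    core token of shape (d,).

    The pooling weights are softmax-normalised similarity scores between
    the group's last token (acting as the query) and every token in the
    group, so semantic relevance, not position, decides each token's
    contribution -- for any group size g chosen at inference time.
    """
    query = group_embs[-1]                         # last token is the query
    scores = group_embs @ query                    # (g,) similarity scores
    scores = scores - scores.max()                 # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ group_embs                    # weighted core token

rng = np.random.default_rng(0)
group = rng.normal(size=(4, 8))                    # g = 4 tokens, d = 8 dims
core = pool_group_to_core(group)                   # shape (8,)
```

Because the weights are recomputed per group from similarity rather than fixed positional patterns, the same routine applies unchanged to arbitrary group sizes $g$ at inference time.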
> Q2. Could the core token (Eqn.2) be biased toward later tokens in a group due to using the last token’s query?
A2. In our CCA-Attention, the strategy of deriving core tokens based on the attention scores from the last token is both rational and effective. This is supported by two points:
- The attention map visualization in Section C.2 of supplementary material reveals a distinct pattern: **tokens that are important to the query receive consistently high attention scores from all subsequent tokens**. This indicates that important tokens, regardless of their position within a group, have a notable influence on the attention distribution. This suggests that our pooling weights assessed by the last group token are able to capture these crucial tokens.
- The consistent high performance in long-context modeling tasks confirms that our CCA-Attention effectively captures global and local dependencies within long contexts. This is a direct result of our pooling strategy, which shows that information relevant to the query is not overlooked.
We would include these discussions in the revised paper.
> Q3. Without requiring additional experiments, could the authors discuss: is there a principled way to set m for unseen tasks?
A3. We appreciate the reviewer's insightful comments. In practice, we are able to **set different group sizes according to the nature of the task**. For high-precision tasks (e.g., legal document analysis), smaller group sizes are preferable to ensure finer granularity and capture more detailed contextual dependencies. Conversely, for tasks prioritizing efficiency over precision (e.g., real-time dialogue systems), larger group sizes are able to reduce computational overhead. This trade-off aligns with the inherent flexibility of our proposed CCA-Attention, which allows dynamic adjustment of m during inference.
> Q4. Could the authors explain why the model performance does not consistently degrade with increasing text length in Table 2?
A4. As discussed in the paper, we discovered that full self-attention may face redundant context issues in sequence modeling. **This redundancy may hamper the modeling performance** by 1) weakening the focus on semantically important content, and 2) introducing noise from irrelevant context. More importantly, **this issue becomes more severe with longer sequences**.
To address this, we propose a core context aware attention mechanism, in which **non-core/irrelevant contexts are compressed by weighted pooling**. In this way, CCA-Attention not only alleviates the redundant context issue but also improves the long-context modeling performance. As a result, the advantages of our method become more prominent as the length of the context increases.
> Q5. Will the assumption of redundant text be related to the topic of testing cases? Math and general QA may have different degrees of redundancy.
A5. **Our assumption of redundant context is not related to the topic**. To verify this, we perform an analysis using LLaMA2-7B on 16K length corpora from two domains: math (MathPile dataset[1]) and general QA (WikiText dataset[2]).
By examining the attention weights of the final token relative to preceding tokens, we quantify the redundancy degree as the proportion of tokens receiving attention weights below 0.001. **The results demonstrate comparable levels of redundancy between math** (99.8% tokens' weights below 0.001) **and general QA context** (98.7% tokens' weights below 0.001), supporting our initial assumption.
[1] A Billion-Token-Scale Pretraining Corpus for Math. NeurIPS 2024.
[2] Pointer Sentinel Mixture Models. ICLR 2017.
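A minimal sketch of the redundancy measurement described in A5 (illustrative only; the function name and toy data are ours, not from the actual experiments):

```python
import numpy as np

def redundancy_ratio(attn_last_row: np.ndarray, threshold: float = 1e-3) -> float:
    """Fraction of preceding tokens whose attention weight (from the
    final token) falls below `threshold` -- the redundancy proxy used
    in the analysis above."""
    return float((attn_last_row < threshold).mean())

# Toy example: a uniform attention row over 4096 tokens assigns every
# token a weight of 1/4096 ~= 0.00024 < 0.001, so redundancy is 1.0.
row = np.full(4096, 1.0 / 4096)
print(redundancy_ratio(row))  # 1.0
```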
---
We sincerely hope our clarifications above have addressed your concerns. | null | null | null | null | null | null | null | null |
LLMScan: Causal Scan for LLM Misbehavior Detection | Accept (poster) | Summary: The authors propose to detect misbehavior in LLMs through two major components. 1) They assess the contribution of individual input tokens and neural network layers on the final output. 2) They train a detector to classify prompts based on the properties of the analysis conducted in 1). They evaluate their approach on truthfulness, harmfulness, and toxicity and conduct experiments on 56 benchmarks.
References for the remaining review:
[1] Carlini et. al., "Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods", 2017
[2] Schwinn et. al., "Soft Prompt Threats: Attacking Safety Alignment and Unlearning in Open-Source LLMs through the Embedding Space", 2024
## update after rebuttal
Most of my concerns have been addressed by the reviewer. I specifically liked the ablation study the authors provided based on the question of reviewer **4jWw** regarding individual components of their algorithm.
I would like to highlight that the code should be released upon acceptance of this paper, which is the most significant concern I currently have with the paper (no supplement was provided).
I raised my score to "4"
Claims And Evidence: The authors claim:
- To propose a novel method to detect misbehavior in LLMs
- That their method identifies misbehavior successfully for 4 different types on 56 benchmarks
The authors provide:
- Results on a large number of datasets
Methods And Evaluation Criteria: The authors use common evaluation benchmarks and compare their proposed approach with recently proposed methods, including highly cited works. Moreover, the authors provide code, which makes it easier to understand the exact implementation settings of the experiments. I was not able to spot inconsistencies in the evaluation.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The claims made by the authors are well substantiated by the experiments performed. Moreover, the authors perform additional ablation studies to investigate the working mechanisms and design choices of the proposed algorithm
Supplementary Material: No supplement but code is provided
Relation To Broader Scientific Literature: The authors provide a comprehensive overview concerning related works. I am not well familiar with some of the literature but was able to put the work into context due to the references provided by the authors.
Essential References Not Discussed: I am not aware of essential references that were not discussed
Other Strengths And Weaknesses: Strengths:
- comprehensive benchmark including several baselines, multiple variations of misbehavior, and a large number of different datasets
- their method consistently outperforms baselines with few exceptions despite the variety of settings
Weaknesses:
- Adversarial example detection has been shown to be considerably harder than estimated by several works [1]. The authors should highlight that their approach detects black-box adversaries that are not aware of the defense mechanism. A study with an adaptive attack trying to bypass the defense would be interesting. This could be achieved with reasonable effort through continuous attacks (as a proof of concept) [2] (however, the proof of concept is a minor concern).
- The captions of tables and figures could be more comprehensive (minor).
- The fonts in Figure 3 are very hard to read (minor).
I am currently between weak reject and weak accept and will reassess my decision based on the other reviews and the rebuttal of the authors.
Other Comments Or Suggestions: - Remove the inconsistency in notation regarding definitions (e.g., the square symbol used at the end).
- Section 3 presents very simple concepts in an overly complicated way.
- Overall, the figures and tables take a lot of space; Table 2 and Figure 4 present relatively simple ablations yet occupy considerable space. Some of the results could be moved to the appendix while keeping the general message.
Questions For Authors: - Could the authors provide an estimate of the computational overhead of their algorithm compared to the baseline approaches?
- Could the authors give their opinion on the ability of a white-box adversary to bypass their detection method?
# After rebuttal:
Most of my concerns have been addressed by the reviewer. I specifically liked the ablation study the authors provided based on the question of reviewer **4jWw** regarding individual components of their algorithm.
I would like to highlight that the code should be released upon acceptance of this paper, which is the most significant concern I currently have with the paper (no supplement was provided).
I raised my score to "4" for now. I will continue to follow the discussion and change my score accordingly if it should be required.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for your insightful comments. Please find our responses to your questions below.
**Q1.** Computational overhead compared to the baseline approaches.
**A1.** We thank the reviewer for highlighting the importance of evaluating the computational overhead of our method. To this end, we conducted experiments comparing the average detection time per input with baseline methods. We randomly sampled 100 prompts (average length: 30 tokens) and measured detection time across the benchmark models. The results show that our method introduces moderate overhead, remaining within the same order of magnitude as the model’s inference time. The table below summarizes the detection time (in seconds) and the corresponding inference time for reference.
|Model|Llama-2-7b|Llama-2-13b|Llama-3.1|Mistral|
|-|-:|-:|-:|-:|
|LLMScan|3.82|7.73|3.31|3.40|
|Lie Detector|6.79|13.24|7.53|8.24|
|RepE|0.05|0.08|0.05|0.05|
|ONION|0.30|0.30|0.30|0.30|
|Complete inference time|3.07|6.68|3.37|3.54|
On average, for 7B–8B models (Llama-2-7B, Llama-3.1, Mistral), our method requires around 3.5 seconds per input, compared to 7.5 seconds for the baseline lie detector and 0.05 seconds for RepE. For Llama-2-13B, it takes 7.7 seconds, while the lie detector baseline requires 13.2 seconds and RepE 0.08 seconds. For backdoor detection, the ONION baseline works as a pre-processing step and takes approximately 0.3 seconds per input across all models.
While RepE is the fastest due to its single-pass extraction, it sacrifices detection reliability, as discussed in the paper. Similarly, ONION performs poorly in backdoor detection. In contrast, our method is more efficient than the baseline lie detector while achieving stronger detection performance.
In summary, our method introduces moderate overhead and strikes a balance between efficiency and effectiveness. We have included a detailed breakdown of runtime performance across model sizes in the appendix.
**Q2.** Could the authors give their opinion on the ability of a white-box adversary to bypass their detection method?
**A2.** We thank the reviewer for raising this important point regarding the robustness of our method against white-box adversaries. We acknowledge that adversarial detection is particularly challenging under adaptive attacks and appreciate the opportunity to clarify the threat model in our work. While a white-box adversary could theoretically attempt to bypass our detection by minimizing causal signals, such attacks are highly non-trivial in practice for two reasons:
Layer-level causal effects in our method are discrete values based on separate interventions. This process produces non-differentiable outputs and discrete shifts in behavior. As a result, the causal signals they generate are not amenable to standard gradient-based optimization techniques. This makes it challenging even for a white-box adversary to perform targeted manipulation.
Token-level causal effects are based on statistical aggregation across calculated distances between intervened attention heads and the original ones. This complex calculation process makes them inherently noisy and non-smooth. As a result, an adversary would need to account for a wide range of token-level variations, as well as attention-head changes. This would significantly complicate the optimization process potentially adopted by an adaptive attacker.
We have added a discussion to clarify that our experiments were conducted to detect adversaries who are not aware of the defense mechanism, and a discussion on the difficulty of conducting adaptive attack.
**Comments:**
**C1.**: remove inconsistency in notation regarding definitions.
**A1.** Thank you for pointing out the inconsistency in our submission; we have revised the manuscript accordingly.
**C2.** section three presents very simple concepts in an overly complicated way.
**A2.** We thank the reviewer for the comment. Our intention was to provide a formal foundation for our causal analysis, while acknowledging the practical compromises needed for token and layer level analysis. We acknowledge the reviewer’s concern that parts of Section 3 may have introduced unnecessary complexity and redundancy. In response, we have streamlined the definitions and moved some of them to the appendix.
**C3.** Overall the figures and tables take a lot of space.
**A3.** We will revisit the figures/tables and move them to appendix in case space is required (to clarify all issues during the rebuttal).
**Other response:**
We acknowledge the readability issue in Figure 3 and have revised it with larger fonts for better visibility. Additionally, we have expanded the captions of tables and figures to make them more informative. For example, the caption of Figure 2 now clarifies that it illustrates combined causal effects at the prompt and layer levels for a truth and lie response. | Summary: The authors present LLMScan, a technique to determine when the LLM is misbehaving via causal inference. They analyze the causal effects of input tokens by performing interventions on each token and measuring changes in attention scores, and in transformer layer by skipping the layer and comparing the output logits. From these changes, they create a casual effect input which is fed to a classifier which predicting if the model is misbehaving or not.
Claims And Evidence: Weaknesses
- The authors mention generating a causal map but from Figure 2 they show a single list of values for both the input tokens and layers.
- While their approach shows encouraging results, the paper lacks a rigorous mathematical framework to call this a “causal effect”: when we skip a layer or change an input token, it will have other downstream effects on the output. Additionally, adding instructions like "Answer the following question with a lie." can drastically change the output.
Methods And Evaluation Criteria: Yes
Theoretical Claims: NA
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: Analyzing changes in attention maps with changes per token for detecting misbehaviors is useful.
Essential References Not Discussed: Not sure
Other Strengths And Weaknesses: Strengths:
- This approach can detect different types of LLM misbehavior
- The performance is pretty good specially for Backdoor detection
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for your insightful comments. Please find our responses to your questions below.
**Q1.** Causal map visualization showing both input tokens and layers.
**A1.** We thank the reviewer for their careful observation regarding the presentation of the causal map in Figure 2 and apologize for the confusion. In our work, the term “causal map” refers to the combined causal effects computed across two dimensions: 1) token-level causal effects, which correspond to the contribution of the input (embedding); 2) layer-level causal effects, which correspond to the contribution of the transformer layers. We chose the term as it resembles a “heat map” over the input and the internal layers.
**Q2.** The “causal effect” definition and concerns about downstream effects after intervention.
**A2.** We appreciate the reviewer for raising this thoughtful concern regarding the formal definition of causal effect in our work. As discussed in Section 3.1, a proper causality analysis would require a formal model such as a structural causal model, and systematic interventions (as shown in Equation 3). Such analysis is, however, often impractical as it is computationally intractable. What we adopted in this work is a lightweight and practical approach that is referred to in prior works [1, 2] as causal mediation analysis (CMA). In CMA, causal effects are approximated by comparing outcomes between the normal execution and an abnormal execution. In our context, the normal execution refers to the LLM’s standard output and the abnormal execution corresponds to the output when we apply targeted interventions, i.e., modifying a specific token or skipping a transformer layer. The causal effect is then approximated by measuring the difference between the two executions. We acknowledge the reviewer’s concern that interventions like changing a token or skipping a layer may introduce cascading effects throughout the model. Nonetheless, we show that even such a coarse approximation of the actual causal effect is already effective for detecting LLM misbehavior. We have added more clarification on our causality analysis method and its limitations in the revised submission.
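To make the CMA-style approximation concrete, here is an illustrative toy sketch (the function names and the stand-in model are our own assumptions, not the paper's implementation): token-level effects compare attention maps before and after masking a token, and layer-level effects compare logits with and without skipping a layer; the two vectors concatenate into a causal map.

```python
import numpy as np

def token_causal_effects(tokens, attn_of, mask_id=0):
    """Replace each token with `mask_id` (the intervention) and measure
    the mean absolute change in the attention map vs. the normal run."""
    base = attn_of(tokens)
    effects = []
    for i in range(len(tokens)):
        perturbed = list(tokens)
        perturbed[i] = mask_id
        effects.append(float(np.abs(attn_of(perturbed) - base).mean()))
    return effects

def layer_causal_effects(tokens, logits_of, n_layers):
    """Skip each layer in turn and compare output logits to the full run."""
    base = logits_of(tokens, skip=None)
    return [float(np.abs(logits_of(tokens, skip=l) - base).mean())
            for l in range(n_layers)]

# Toy stand-ins for a real model:
def toy_attn(toks):
    v = np.asarray(toks, dtype=float) + 1.0
    m = np.outer(v, v)
    return m / m.sum()                            # normalised "attention map"

def toy_logits(toks, skip=None):
    out = np.asarray(toks, dtype=float)
    for l, w in enumerate([1.0, 2.0, 3.0]):       # three additive "layers"
        if l != skip:
            out = out + w
    return out

tok_ce = token_causal_effects([3, 1, 2], toy_attn)         # one value per token
layer_ce = layer_causal_effects([3, 1, 2], toy_logits, 3)  # [1.0, 2.0, 3.0]
causal_map = tok_ce + layer_ce    # concatenated features fed to the detector
```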
We also appreciate the insightful comments on the downstream impact of adding instructions. This is in fact a question about the definition of misbehavior. Indeed, our definition of whether a model misbehaves (e.g., whether it is generating toxic responses) does not depend on the prompt. In other words, our method will detect the misbehavior itself (such as generating toxic responses) even if the model is instructed to do so (such as with the instruction “answer the following question with a lie”). Such a choice could be considered reasonable if we aim to detect jailbreak attacks such as DAN that aim to induce toxic responses with “legit” excuses. Having said that, we agree that further research on detecting misbehavior while considering the context is an extremely interesting direction.
Reference:
[1] Locating and Editing Factual Associations in GPT (NeurIPS 2022)
[2] Investigating Gender Bias in Language Models Using Causal Mediation Analysis (NeurIPS 2020)
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing the issues I have. I have raised my score to 4. | Summary: In this paper, the authors introduce a novel framework for detecting misbehavior in LLMs using causal analysis. In particular, the proposed framework consists of two main modules: i) a scanner that can conduct causality analysis at both token and model layer levels, and ii) a detector which is trained on causal maps to identify LLM misbehaviors based on the generated causality distribution maps.
Claims And Evidence: The authors conduct intensive experiments to evaluate the proposed approach. However, more evidence is needed as “empirical results” supported the configuration of attention head distances and layer selection. It would be better to demonstrate and analyze the empirical results.
Methods And Evaluation Criteria: The proposed method is evaluated regarding accuracy and AUC score via comparative analysis on 14 datasets, and ablation study on the contribution of token-level and layer-level causal effects.
- In Section 3.1, the authors mention that for lie detection, the LLM “is induced to lie with the instruction”. In this case, does the LLM still misbehave? Same question for the toxic instruction prompts.
Theoretical Claims: The authors propose three definitions for generative LLM, token-level and layer-level causal effect respectively.
Experimental Designs Or Analyses: The authors employ Llama family and Mistral to evaluate the proposed method regarding AUC and accuracy, and also perform ablation study to assess token-level and layer-level causal effects.
- Table 2, toxicity detection, layer level: why is 0.98 for Mistral not in bold? The result is better than those of the other three LLMs at the layer level.
Supplementary Material: The authors can consider including the “empirical results” of attention head distances and layer selection in the supplementary material.
Relation To Broader Scientific Literature: The authors provide impact statement at the end of the manuscript.
Essential References Not Discussed: The authors introduce related work on the four different misbehavior detection scenarios. I suggest that the authors consider further clarifying the research gap in Section 2.
Other Strengths And Weaknesses: The paper is well-organized, and the language is technical yet understandable for readers with domain knowledge. The figures are clear and the overall readability is good.
Other Comments Or Suggestions: - I wonder what the application scenario is: real-time detection or retrospective analysis?
- Section 1: “To evaluate the effectiveness of LLMSCAN, we conduct experiments using four popular LLMs across 13 diverse datasets.” I assume it should be 14 datasets based on the evaluation section.
- Section 4.2: “This suggests that truthful responses concentrate relevant knowledge in select layers …” -> “selected”.
Questions For Authors: - In Section 3.1, the authors mention that for lie detection, the LLM “is induced to lie with the instruction”. In this case, does the LLM still misbehave? Same question for the toxic instruction prompts.
- How do you obtain the “empirical results” for attention head distances and layer selection?
- Table 2, toxicity detection, layer level: why is 0.98 for Mistral not in bold? The result is better than those of the other three LLMs at the layer level.
- I wonder what the application scenario is: real-time detection or retrospective analysis?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for your insightful comments. Please find our responses to your questions below.
**Q1.** Concerns on LLM is induced to misbehavior (e.g., lie) with instruction.
**A1.** We thank the reviewer for the careful evaluation and for raising an important question about the definition of misbehavior. In our work, we consider the model to misbehave regardless of whether the behavior is prompted. That is, our detection focuses on the output behavior itself (e.g., generating toxic content), not the intent of the prompt. This design aligns with real-world scenarios like jailbreak attacks (e.g., DAN) that induce harmful outputs through seemingly legitimate instructions. Having said that, we agree that further research on detecting misbehavior while considering the context is an extremely interesting direction.
**Q2.** “Empirical results” to support the configuration.
**A2.** We thank the reviewer for highlighting the need for more empirical evidence to support our design choices and appreciate the opportunity to clarify how these configurations were determined. In response, we have conducted additional experiments on each task and included the results under the appendix in the revised submission.
Our design choices were motivated by both prior research and the need to balance detection accuracy and efficiency. Specifically, we chose to use attention head outputs instead of hidden states for two main reasons. First, prior studies [1, 2] show that attention layers effectively encapsulate the model’s stored knowledge, including harmful or inappropriate content, making attention heads a more direct and sensitive signal for detecting misbehavior. Second, attention head outputs are considerably smaller than hidden states, reducing computational overhead. To further balance detection accuracy and efficiency, we adopt a sparse sampling strategy. For layer selection, we sample three representative layers from the early, middle, and late stages of the model, reflecting the progressive evolution of internal representations [3]. This also loosely mirrors how humans process information, from perception to reasoning. For each selected layer, we sample three attention heads to ensure functional diversity without incurring unnecessary overhead. We have clarified these design choices in the revised submission.
In our empirical evaluation, we compared our method against two baselines to assess the effectiveness of our sparse sampling strategy. The first baseline applies LLMScan with all layers, sampling three attention heads per layer, following our standard head selection. The second baseline uses all attention heads but only from the three selected layers. We conducted experiments on four detection tasks, each with one representative dataset. The results, shown below (token-level detection accuracy), demonstrate that our sampling strategy achieves comparable or better performance while avoiding unnecessary noise:
|Task|Dataset|LLMScan (Ours)|LLMScan w/ all layers|LLMScan w/ all attention heads|
|-|-:|-:|-:|-:|
|Lie Detection|Question1000|0.94|0.88|0.93|
|Jailbreak Detection|AutoDAN|0.97|0.95|0.96|
|Toxicity Detection|SocialChem|0.61|0.59|0.62|
|Backdoor Detection|CTBA|0.96|0.94|0.92|
This confirms that selecting representative layers and heads is sufficient to capture the key causal signals for effective detection.
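One simple way to realise the early/middle/late layer sampling described above is an evenly spaced index pick (a hypothetical helper of our own, not the paper's code):

```python
def sample_layers(n_layers: int, k: int = 3) -> list[int]:
    """Pick k layer indices spread evenly across the early, middle and
    late stages of an n_layers-deep model."""
    return [round(i * (n_layers - 1) / (k - 1)) for i in range(k)]

print(sample_layers(32))  # [0, 16, 31] for a 32-layer model
```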
Reference:
[1] Locating and Editing Factual Associations in GPT (NeurIPS 2022)
[2] On the Role of Attention Heads in Large Language Model Safety (ICLR 2025)
[3] Transformer Feed-Forward Layers Are Key-Value Memories (EMNLP 2021)
**Q3.** About the understanding of Table 2: toxicity detection.
**A3.** We thank the reviewer for their careful reading and for pointing out the potential confusion regarding the bold values in Table 2. At table 2, we compare the performance of our method based on token-level causal effect only and layer-level causal effect only, on the same task and the same models. We highlighted the better results. For example, for toxicity detection tasks, the token-level performance for the Mistral model is 1.0 accuracy. Since the accuracy of the layer-level analysis is only 0.98, we highlighted the token-level performance for this case. We have revised the table caption to clarify this.
**Q4.** Application scenario: real-time detection or retrospective analysis?
**A4.** We thank the reviewer for raising this question regarding the application scenario. Our method is primarily designed for real-time detection, as it analyzes the LLM’s internal behavior during inference to enable on-the-fly detection of potential misbehavior. We have added a detailed discussion on the intended usage and the computational overhead of runtime detection in the appendix (see our replies to Reviewer 8BQP-Question 2, and Reviewer RtXP-Question 1 for details).
**Others**
Thanks for pointing out the typos. We fixed all of them in our revised submission.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for clarifying the questions and addressing my previous comments. | Summary: This paper proposes a detection mechanism named "LLMSCAN" for identifying potential "undesirable" generation behaviors during large language model (LLM) inference. The core approach is to utilize causal analysis to "intervene" in the input tokens and the transformer layers of the model, and to measure the causal impact values of each part based on the changes in the output distribution and attention. This forms a causal map. Then, based on these causal maps, the paper trains a classifier (detector) to judge whether the model may produce untruthful, harmful, toxic, or backdoor attack-induced abnormal outputs. The paper conducts experiments on multiple common LLMs and a wide range of benchmark tests (including lie detection, jailbreak attack detection, toxicity detection, backdoor attack detection, etc.), and the results show that this method has a high detection rate and accuracy, and can issue early warnings to avoid the generation of truly harmful content.
Claims And Evidence: The paper claims that "LLMSCAN" can effectively distinguish between normal and malicious behavior by monitoring the intervention-induced changes in the internal attention distribution and the outputs when skipping some transformer layers. Overall, in the experiments in the paper, compared with the baseline methods (such as black-box methods or traditional filtering methods based on output), the proposed method shows excellent performance. The detection of various types of malicious behavior in this paper achieves AUCs over 0.95 or equivalently high accuracy rates.
From the results given in the paper, most of these claims are supported by relatively strong data, each type of malicious behavior is compared with existing or classical detection methods, and it is confirmed that stable detection performance can be maintained under different model scales and different attack/error information scenarios. Since the experiments were conducted on multiple public benchmarks and provided ablation experiments (such as comparing detection performance using token-level causal features or layer-level causal features alone), this supports the authors' claims about the generalizability and effectiveness of the method. These pieces of evidence are consistent with the main theoretical concepts proposed in the paper.
Methods And Evaluation Criteria: The author's method first performs two-level interventions on the LLM during reasoning: 1) replace each input token with a special character one by one and observe the changes in attention scores; 2) skip or "bypass" different transformer layers and compare the differences in output logits. Then, this difference information is combined to form a "causal map." A classifier is then used to distinguish these causal maps, with the goal of distinguishing normal outputs from different categories of abnormal outputs.
This design reasonably observes the internal attention mechanism and hierarchical outputs of the LLM for causal observations and uses them as feature inputs to a relatively lightweight detector. The evaluation criteria mainly rely on AUC, accuracy, and comparison with the baseline. These evaluation indicators are suitable for measuring detection tasks and cover the performance of methods under unbalanced or various scenarios.
Theoretical Claims: The paper does not contain large sections of pure theoretical derivation, but instead borrows the concept of causal inference to define "intervention" and "causal effect." The main formulas, such as the causal effects $CE_x$ and $CE_{x,\ell}$, all belong to the relatively standard "do-operation" framework. Such definitions do not require very complex proofs, and therefore rigorous higher-order mathematical proof is unnecessary. As described, these theoretical parts are logically consistent and do not have obvious gaps. I have not found any serious mathematical errors that require refutation or correction.
Experimental Designs Or Analyses: I reviewed the paper's detection experiments on different LLMs (including Llama-2-7b, 13b, Mistral, etc.), comparison baselines, and tests on different types of malicious behaviors. The sampling, training-test split, and presentation of AUC and Accuracy were all clear, and the scale of the experiments was sufficient to demonstrate the effectiveness of the method. From the given experimental data, the experimental design is reasonable:
1. When comparing with similar methods (such as output-based detection or ONION defense, etc.), it showed a significant improvement in accuracy and AUC.
2. Consistent conclusions were presented for models of different scales and for different datasets.
These designs and analyses can well validate the proposed "LLMSCAN" method, and no major defects have been found so far.
Supplementary Material: The authors list additional visualization results, sample datasets, and more detailed layer selection methods in the appendix. It mainly supplements details from the main text without providing new substantial conclusions. I briefly reviewed the data examples and visualization charts and found them consistent with the statements in the main text, with no contradictions.
Relation To Broader Scientific Literature: The detection of adversarial examples, fake information detection, and the analysis of internal attention mechanisms are all important issues in the current natural language processing security field. Compared with methods that are only based on output text, LLMSCAN utilizes causal intervention mechanisms to evaluate the presence of potential harmful outputs from changes in internal activations and attention, which is consistent with the early exploration of hierarchical activations and attention in explainable AI or neural network visualization analysis. At the same time, the paper complements some previous works that mainly conduct security detection at the output or prompt level.
Essential References Not Discussed: I did not find the essential but omitted key references.
Other Strengths And Weaknesses: Advantages include the method's good versatility, as it can detect undesirable behaviors (lies, toxicity, backdoors, etc.) in different scenarios with a unified approach, and it is relatively efficient, requiring no significant modifications to the LLM itself. The theoretical part also closely adheres to the standard definition of causal analysis.
Disadvantages lie in the fact that the method requires internal access to the LLM (for interventions such as layer skipping), which may not be feasible for commercial large models or fully black-box API environments. Additionally, although the detection rate in the backdoor attack-and-defense experiments is high, extreme prompts or attack methods may vary greatly in real-world scenarios. Relying on access to all model layers is a strong assumption.
Other Comments Or Suggestions: The paper only briefly mentions the cost of skipping a given layer during scanning, such as whether doing so puts pressure on memory or speed. More technical details could be provided in the appendix, explaining the resource costs of executing this method on LLMs of different sizes, as well as whether it has a noticeable impact on the model's response speed.
Questions For Authors: Have you considered the API access scenarios in actual deployment? Can this method simplify or approximate the implementation for commercial models that cannot directly access the output of the transformer layer?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for your insightful comments. Please find our responses to your questions below.
**Q1.** API access scenarios.
**A1.** We thank the reviewer for raising this important and practical question regarding the applicability of our method. Our method is primarily designed for users who train/finetune their own models and would like to monitor and diagnose their model when serving their customers. It assumes white-box access to the transformer layer outputs to compute token-level or layer-level causal effects. We do agree that this limits applicability to commercial black-box LLMs. This limitation could potentially be addressed in two ways:
1. Approximate the token-level causal effect from the output distribution: in our method, we rely on the logit differences before and after intervention to compute the causal effect. If the logits are not available, we can instead use token embedding differences or semantic differences before and after the intervention. Note that this would not work for approximating the layer-wise causal effect.
2. Shadow model approximation: we would train a shadow model based on existing open-source LLMs and use the layer-wise causal effect of the shadow model to approximate that of the commercial one.
We will explore both directions in future work and look forward to extending our method to broader settings when transformer internals are not accessible.
**Q2.** Resource costs of executing this method in LLMs of different sizes.
**A2.** We thank the reviewer for highlighting the importance of discussing the resource overhead of our method. To answer the question, we have run experiments to evaluate the overhead of our method.
The efficiency of our method depends primarily on the input length and the number of model layers. Our experiments on an A100 server with the models studied in the submission show that layer-level causal effect computation takes about 0.08 seconds per layer, and token-level computation averages 0.04 seconds per token. Note that analyzing the causal effect of a layer or token takes much less time than generating the response, since we only need to generate the first token to conduct the analysis.
To evaluate the overall time overhead, we ran the models with and without our method and compared the runtimes. For models with 31 layers, such as Llama-2-7b, with our method the models take around 6.89 seconds per input to generate the outputs (3.82 seconds of which is spent running our method), whereas without our method the models take about 3.07 seconds to complete inference. For models such as Llama-2-13b with 40 layers, it takes around 14.41 seconds per input with our method (7.73 seconds spent on our method) and 6.68 seconds without. We remark that this overhead can be significantly reduced by running our analysis and the original model in parallel, i.e., the overall time becomes 3.82 and 7.73 seconds, respectively. The details of detection time are shown in the table below. Specifically, we randomly sampled 100 prompts with an average length of 30 tokens and measured the average detection time and inference time per input across all benchmark models.
|Model|Llama-2-7b|Llama-2-13b|Llama-3.1|Mistral|
|-|-:|-:|-:|-:|
|LLMScan analysis time (s)|3.82|7.73|3.31|3.40|
|Inference time without LLMScan (s)|3.07|6.68|3.37|3.54|
In terms of memory overhead, if we run our method and the generation sequentially, there is little overhead; if we run them concurrently, because we need to create an additional model when analyzing the causal effect for each layer and each token (whilst the original model is executing to generate the response), the memory consumption doubles.
We have added a new subsection to discuss the resource costs of our method under the appendix of our revised submission. | null | null | null | null | null | null |
Federated Generalised Variational Inference: A Robust Probabilistic Federated Learning Framework | Accept (spotlight poster) | Summary: The paper introduces FEDGVI, a novel probabilistic framework for federated learning designed to be robust to both prior and likelihood misspecification. **Update after rebuttal:** I will retain my original score.
Claims And Evidence: Good.
Methods And Evaluation Criteria: Fair.
Theoretical Claims: Good.
Experimental Designs Or Analyses: Fair.
Supplementary Material: Yes, experimental details.
Relation To Broader Scientific Literature: Yes.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: Strengths:
1. The writing of the paper is clear.
2. The authors propose solid theoretical analysis and empirical validation.
Weaknesses:
1. There is no ablation study regarding the hyperparameter selection.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their appreciation of the clarity of our paper and the theoretical results, as well as their important suggestion for an ablation study that we have now conducted and which will significantly improve the empirical study of this work. We will include these in the revised version of the main paper.
**Hyperparameters of FedGVI**
In particular, we have carried out an ablation study on the selection of the hyperparameters of FedGVI in the training of a Bayesian Neural Network on 10% contaminated MNIST data, split across 5 clients, extending the results discussed in Section 5.5. We vary the $\delta$ parameter of the Generalised Cross Entropy, and the $\alpha$ parameter of the Alpha-Rényi divergence in the client optimisation step (Eq. 4), and record the predictive accuracies on the uncontaminated test data for the FedGVI posteriors found with the respective hyperparameters:
| | $\delta=0.0$ | $\delta=0.2$ | $\delta=0.4$ | $\delta=0.6$ | $\delta=0.8$ | $\delta=1.0$ |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| $\alpha=0.0$ | 93.84 | 96.86 | 98.08 | 97.74 | 97.45 | 97.17 |
| $\alpha=0.5$ | 96.32 | 96.91 | 97.97 | 97.99 | 97.76 | 97.52 |
| $\alpha=1.0$ | 96.54 | 96.98 | 97.92 | 98.05 | 97.92 | 97.72 |
| $\alpha=1.5$ | 96.68 | 96.07 | 97.67 | 98.04 | 97.96 | 97.88 |
| $\alpha=2.5$ | 96.84 | 94.79 | 97.19 | 98.17 | 98.09 | 97.87 |
| $\alpha=5.0$ | 97.37 | 92.96 | 95.81 | 97.95 | 98.11 | 98.05 |
where we note that $\delta=0$ implies the negative log likelihood since the loss converges to it as $\delta$ tends down to zero (Zhang & Sabuncu, 2018). For the Alpha-Rényi divergence, $\alpha=1$ implies the Kullback-Leibler divergence, and $\alpha=0$ implies the reverse Kullback-Leibler divergence, i.e. $D_{AR}^{(0)}(q:q^{\backslash m})=D_{RKL}(q:q^{\backslash m}):=D_{KL}(q^{\backslash m}:q)$ (Amari, 2016). This means that for $\alpha=1.0$ and $\delta=0.0$, we recover PVI.
We propose to add these results on hyperparameter selection in FedGVI as an annotated heat map to Section 5.5 in the revised version of the paper; the figure can be viewed here: https://anonymous.4open.science/r/Resources-3CEF/ablation_study.png.
In the figure, we present the maximum result achieved across all server iterations and plot the percentage errors on uncontaminated test data for the different hyperparameter combinations. As evident from the table most of the combinations of $\alpha$ and $\delta$ explored show stability of the posterior regarding classification accuracy with different hyperparameters, however some care should be placed in selecting these since the wrong choice could significantly degrade model performance, e.g. $\alpha=5.0$ and $\delta=0.2$, where we do not sufficiently filter out outliers but place increased weight on the cavity distribution through the high alpha. Nevertheless, the majority of settings outperform PVI, especially for $\delta=0.6$.
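As a sketch of the robust loss being varied in this ablation, the Generalised Cross Entropy of Zhang & Sabuncu (2018) with parameter $\delta$ can be written as $\mathcal{L}_{GCE}^{(\delta)}(p) = (1 - p^\delta)/\delta$, where $p$ is the predicted probability of the true class; the $\delta \to 0$ limit recovers the negative log likelihood, as noted above. A minimal illustration:

```python
import math

def gce_loss(p_true, delta):
    """Generalised Cross Entropy: (1 - p_true**delta) / delta.

    p_true: predicted probability of the (possibly mislabelled) true class.
    delta in (0, 1]; delta -> 0 recovers the NLL -log(p_true),
    while delta = 1 gives the bounded, MAE-like loss 1 - p_true.
    """
    if delta == 0.0:
        return -math.log(p_true)
    return (1.0 - p_true ** delta) / delta

# For delta > 0 the loss is bounded by 1/delta even when the model assigns
# tiny probability to a contaminated label, which is the source of the
# outlier robustness; the NLL (delta = 0) instead blows up.
print(gce_loss(1e-6, 0.6))  # bounded below 1/0.6
print(gce_loss(1e-6, 0.0))  # NLL, unbounded as p_true -> 0
```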
**Learning Rate Selection**
Furthermore, we also present additional results on learning rate selection for the ADAM optimiser (Kingma & Ba, 2015) in the 3 Client FedGVI and PVI regimes presented in Tab. 1 of the paper. So far, we have fixed the learning rate to be 5e-4 in the BNN experiments of the paper, which we now vary while keeping the divergence and loss parameters fixed for the respective method. We highlight the best result for each learning rate in bold.
| Learning Rate $\eta$ | 1e-2 | 5e-3 | 1e-3 | 5e-4 | 1e-4 | 5e-5 |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| PVI | 96.34$\pm$0.16 | 96.50$\pm$0.18 | 96.72$\pm$0.06 | 96.76$\pm$0.07 | 96.01$\pm$0.05 | 95.39$\pm$0.06 |
| FedGVI $D_{AR}^{(2.5)}$ | 96.84$\pm$0.12 | 96.91$\pm$0.02 | 97.16$\pm$0.04 | 97.18$\pm$0.03 | 96.51$\pm$0.19 | 95.65$\pm$0.03 |
| FedGVI $\mathcal{L}_{GCE}^{(0.8)}$ | 98.22$\pm$0.07 | **98.30$\pm$0.03** | 98.15$\pm$0.01 | **98.08$\pm$0.08** | 97.07$\pm$0.06 | 95.84$\pm$0.04 |
| FedGVI $D_{AR}^{(2.5)}$ + $\mathcal{L}_{GCE}^{(0.8)}$ | **98.31$\pm$0.10** | 98.24$\pm$0.07 | **98.23$\pm$0.06** | 98.06$\pm$0.09 | **97.50$\pm$0.01** | **96.35$\pm$0.08** |
Here, FedGVI with a robust loss outperforms PVI in every scenario.
We also want to point out that by not carefully tuning the hyperparameters of FedGVI or the learning rate, and by keeping these constant across the BNN experiments, we have shown that extensive expertise is not required to adapt existing PVI approaches to FedGVI and outperform them. For instance, FedGVI performs even better with the robust losses at a higher learning rate, but Tab. 1 in the paper shows that it outperforms even without a carefully selected learning rate.
Furthermore, choosing $\delta=0.6$ and $\alpha=2.5$ would have performed better when varying only the robustness parameters of FedGVI.
**References not in paper**
Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, 2015. | Summary: The paper introduces *FEDGVI* (Federated Generalized Variational Inference), a probabilistic federated learning framework designed to be robust against both prior and likelihood misspecification. FEDGVI generalizes Partitioned Variational Inference by integrating robust and conjugate updates, thus reducing computational complexity at client devices. It provides theoretical results demonstrating fixed-point convergence, optimality of the cavity distribution, and robustness to model specification. Empirical evaluations on synthetic and real-world datasets highlight improved robustness and predictive accuracy compared to existing FL methods.
Claims And Evidence: The claims made in the paper regarding robustness and improved predictive performance are generally supported by the theoretical analysis and experimental results. However, the examples provided are standard in statistical and ML literature and are specifically chosen to highlight FEDGVI's strengths over PVI, which clearly struggles in some of these settings. The claims related to applicability in more general settings may require further validation.
Indeed, federated hierarchical Bayesian models (which I provide references to below) would likely outperform FEDGVI in many of these settings. While hierarchical models may be out of scope for this work, methods designed for "personalised" federated settings are addressing similar issues. Such methods would be more convincing baselines, and they should at least be mentioned and referenced in the related work. Future work combining generalized Bayesian methods and hierarchical models in federated learning would be very interesting.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria, including theoretical robustness under various divergence-based losses and experimental evaluation on common benchmark datasets, are reasonable.
Theoretical Claims: The theoretical results, such as the proofs of robustness and convergence of FEDGVI (e.g., Theorems 4.8 and 4.11, Lemma 4.5, Proposition 4.9), are clearly stated, and are **the biggest strength of the paper**. I reviewed proofs provided in the supplementary material, and they appear sound and rigorous. No issues were identified.
Experimental Designs Or Analyses: Experimental designs involving the clutter problem, logistic regression, MNIST, and FASHIONMNIST datasets are sound overall, though somewhat limited in terms of diversity and complexity. Experiments were explicitly crafted to demonstrate FEDGVI’s advantages, which may slightly exaggerate the practical benefits.
Supplementary Material: I did not review any supplementary material outside of the appendices.
Relation To Broader Scientific Literature: FEDGVI effectively integrates robust Bayesian inference methods (GBI and GVI) with federated learning. It builds upon prior work in Bayesian FL and robust inference, clearly identifying its position and contributions within the broader context. However, the common claim in generalized Bayesian methods literature that each new approach inherently "generalizes" existing Bayesian approaches is somewhat overstated. Generalization in this context is an expected consequence of employing generalized Bayesian inference, rather than a novel contribution specific to each individual application of Generalized Bayes.
Essential References Not Discussed: As mentioned previously I think the following references should at least be mentioned as hierarchical/personalized approaches to Bayesian federated inference and offer an alternative or additional approach to dealing with some of the issues that might arise in the numerical examples provided in the paper:
- Kotelevskii, Nikita, et al. "Fedpop: A bayesian approach for personalised federated learning." Advances in Neural Information Processing Systems 35 (2022): 8687-8701 (was cited, but just in a citation dump of various FL methods)
- Kim, Minyoung, and Timothy Hospedales. "Fedhb: Hierarchical bayesian federated learning." arXiv preprint arXiv:2305.04979 (2023).
- Hassan, Conor, Robert Salomone, and Kerrie Mengersen. "Federated variational inference methods for structured latent variable models." arXiv preprint arXiv:2302.03314 (2023).
- Zhang, Xu, et al. "Personalized federated learning via variational bayesian inference." International Conference on Machine Learning. PMLR, 2022.
Other Strengths And Weaknesses: **Strengths** of the paper include the theoretical analysis, clear positioning within existing literature, and the presentation of explicit theoretical gurantees of robustness and convergence. Additionally, the paper offers a systematic integration of GBI with federated learning, which can be valuable for further theoretical explorations. **Weaknesses** of the paper are that the experimental setups are simplistic and somewhat contrived, limiting the empirical evidence of generalization to more challenging, realistic scenarios. The paper's methodology is straightforward (applying generalized Bayesian inference in FL context), potentially limiting originality and novelty in methodological contribution.
Other Comments Or Suggestions: The paper would benefit from additional complex real-world experiments beyond simple label contamination scenarios, such as realistic non-IID client data distributions, large-scale deployments, or federated learning tasks with genuine privacy or computational constraints.
Questions For Authors: 1. Do the authors see any unique methodological contributions that the use of generalized Bayesian inference could specifically enable in federated learning settings?
2. What future directions do the authors envision for further integrating generalized Bayesian methods with hierarchical or personalized federated learning approaches?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for appreciating our theoretical results and especially for taking the time to review our proofs.
**Claims & Evidence**
> the examples provided are [...] specifically chosen to highlight FEDGVI's strengths over PVI
FedGVI with the negative log likelihood and KL divergence is equivalent to PVI, i.e. FedGVI(NLL,KLD)=PVI. In the idealised setting where the DGP $p_0=p_\theta$ is known, PVI with a correctly specified likelihood is preferable to FedGVI(RobustLoss, KLD), i.e. using a robust loss will degrade performance; see Knoblauch et al. (2022), Jewson et al. (2018), Bissiri et al. (2016). Arguably, we rarely have accurate likelihood functions, since these are either too complex to work with or practitioners opt for standard likelihoods without integrating domain expertise.
We will amend Sec. 3.2 and add:
*It is important to highlight that GVI and FedGVI may underperform when using robust losses in the case of correct likelihood specification.*
**Relation to Literature**
When viewing GVI/GBI as an optimisation problem on the space of probability distributions $\mathcal{P}(\Theta)$, Bayesian inference, VI, hierarchical Bayes/VI, all target a single element of this space. These methods either target the standard Bayesian posterior explicitly, or the posterior within some variational family with closest Kullback-Leibler distance to the Bayesian one.
Through GBI and GVI we are able to target different elements of a subspace of $\mathcal{P}(\Theta)$, rather than simply a single point; in that regard, these approaches do 'generalise' Bayes. In the paper, 'generalised' is inherited from GVI and GBI. We should note that in the FedGVI setting, GBI and GVI allow us to generalise PVI or FedAvg to a broader subspace of possible posteriors. We will clarify this naming distinction in the main paper.
We have made a figure to highlight this, see https://anonymous.4open.science/r/Resources-3CEF/FedGVI.png, which we will include in the paper.
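For reference, the generalised variational objective of Knoblauch et al. (2022) underlying this view can be written, schematically, as an optimisation over a variational family $\Pi \subseteq \mathcal{P}(\Theta)$:

```latex
q^{*}(\theta) \;=\; \operatorname*{arg\,min}_{q \in \Pi}\;
  \Big\{\, \mathbb{E}_{q(\theta)}\Big[\sum_{i=1}^{n} \ell(\theta, x_i)\Big]
  \;+\; D\big(q \,\Vert\, \pi\big) \Big\},
```

where choosing the negative log likelihood for $\ell$, the Kullback-Leibler divergence for $D$, and $\Pi = \mathcal{P}(\Theta)$ recovers the standard Bayesian posterior, while other choices of $(\ell, D, \Pi)$ target different elements of the space.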
**Question 2, Missed References, Hierarchical Bayes**
Following your suggestion we will add and discuss the proposed references in relation to FedGVI and GVI, giving an additional discussion on personalised and hierarchical Bayesian FL in Sec 1.1. Furthermore, we propose to add the following to Sec. 6:
*An interesting future direction is to extend FedGVI within personalised FL settings (Kotelevskii et al., 2020) and hierarchical Bayesian FL through latent variables (Kim & Hospedales, 2023), as well as the use of a structured posterior approximation (Hassan et al., 2023; 2024), in order to incorporate client level variations. Incorporating the hierarchical model structures and additional inductive biases from such settings, while maintaining conjugacy and favourable computational complexity, remain as open challenges.*
**Weakness 1, Comments, Experimental Design**
We have now carried out further empirical and ablation studies, https://anonymous.4open.science/r/Resources-3CEF (see also our responses to reviewers *4k22* and *gKSi*), which we will include in the revised paper. Nevertheless, as we have shown, even the 'simplistic' MNIST setting already poses challenges to traditional Bayesian FL, making it worth studying.
We also agree with the reviewer that extending FedGVI to more complex data sets and scenarios, such as investigating different aspects of FL for instance privacy constraints, the cross-device setting, and massively distributed scenarios, present intriguing future directions for FedGVI. We thank the reviewer for pointing this out and will be included in Sec. 6.
For 'computational constraints' see our next answer.
**Weakness 2, Question 1**
> unique methodological contributions that the use of [GBI] could specifically enable in [FL]
To the best of our knowledge, we have proposed the first theoretically grounded probabilistic FL framework that deals with model misspecification. Our framework brings the following methodological advances:
FedGVI can be more computationally efficient than PVI through the use of GBI, as we have shown in Prop. 4.9. The conjugacy with exponential family likelihoods through the robust score matching loss enables faster computation than PVI since it does not require sampling at the local clients. See also our response on computational complexity to Reviewer *gKSi*.
Through the posterior robustness guaranteed by Theorem 4.11, our predictive accuracy is less affected by outliers, which would be detrimental for instance in medical settings; see e.g. Jonker et al. (2024) for FL in medicine.
The server divergence, and the potential for robust aggregation at the server (Sec. 6) without fundamentally changing our approach, allow for optimisation at the global level that is not provided by existing Bayesian FL approaches.
We thank the reviewer for encouraging us to flesh these points out and we will include these in the revised paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for all the work done on the rebuttals including the added experiments.
> FedGVI with the negative log likelihood and KL divergence is equivalent to PVI, i.e. FedGVI(NLL,KLD)=PVI. In the idealised setting where the DGP $p_0=p_\theta$ is known, PVI with a correctly specified likelihood is preferable to FedGVI(RobustLoss, KLD), i.e. using a robust loss will degrade performance; see Knoblauch et al. (2022), Jewson et al. (2018), Bissiri et al. (2016). Arguably, we rarely have accurate likelihood functions, since these are either too complex to work with or practitioners opt for standard likelihoods without integrating domain expertise.
Appreciate the suggested amendment. I think the point made here (FEDGVI(NLL, KLD) == PVI), though obvious, would be useful to make clearer (e.g., it would help if the reader were drawn to this connection whenever looking at the results of PVI in the plots).
> We have made a figure to highlight this, see https://anonymous.4open.science/r/Resources-3CEF/FedGVI.png, which we will include in the paper.
Yea sure I understand all this. This comment was probably a bit too harsh initially; I just think that the *main contribution* lies elsewhere rather than in the framing of GBI itself, because you get the generalization properties for free really (i.e., any new application of GVI will generalize the VI methods that exist for that purpose). Personally, I don't really like the suggested figure. I think that the bottom two thirds of the figure are nice but that the top row is a bit overkill for the purposes of the paper (just a personal opinion, don't feel strongly).
> An interesting future direction is to extend FedGVI within personalised FL settings (Kotelevskii et al., 2020) and hierarchical Bayesian FL through latent variables (Kim & Hospedales, 2023), as well as the use of a structured posterior approximation (Hassan et al., 2023; 2024), in order to incorporate client level variations. Incorporating the hierarchical model structures and additional inductive biases from such settings, while maintaining conjugacy and favourable computational complexity, remain as open challenges.
Thanks. I think these are useful comments to include. I think if you were ever in the business of chasing SOTA performance, a lot of these methods would likely outperform the baselines, and regardless, interesting future work!
> experimental results
The added experiments look good, and in combo with the experiments in the submission, clearly a useful method.
> FedGVI can be more computationally efficient than PVI through the use of GBI, as we have shown in Prop. 4.9. The conjugacy with exponential family likelihoods through the robust score matching loss enables faster computation than PVI since it does not require sampling at the local clients. See also our response on computational complexity to Reviewer gKSi.
Yea this was an oversight in my initial review; this is a very nice property.
Overall, my feeling is that this submission clearly warrants acceptance at this conference.
---
Reply to Comment 1.1.1:
Comment: Thank you for your fast reply, the constructive feedback, and your score increase.
We appreciate the remarks on the figure we provided, and will amend it for the paper to the bottom two thirds in order to keep the focus on Federated Learning.
Thanks also for the point on highlighting PVI=FedGVI(NLL, KLD), we will amend the figures/tables accordingly, and expand on this in Sec. 4.1. | Summary: The paper presents a new framework, Federated Generalised Variational Inference (FEDGVI), for robust probabilistic federated learning. The authors argue that standard Bayesian and frequentist federated learning methods are vulnerable to model misspecification (e.g., contaminated data, incorrect priors, or mismatched likelihood models). Building on the theory of Generalised Variational Inference (GVI), they propose a method that can (1) systematically handle likelihood and prior misspecifications, (2) generalize existing frameworks like Partitioned Variational Inference (PVI) and FedAvg, and (3) provide stronger robustness guarantees with calibrated uncertainty estimates. Key contributions include:
- A unifying algorithmic framework for GVI in the federated learning (FL) setting, with theoretical convergence guarantees.
- Proofs that FEDGVI is robust to outliers and model misspecification under suitable choices of loss functions and divergences.
- Empirical evaluations on both synthetic tasks (e.g., the 1D “clutter” problem and logistic regression) and real data (CoverType, MNIST, FashionMNIST), demonstrating improved robustness and accuracy compared to baselines such as PVI, DSGLD, and FedAvg.
Claims And Evidence: 1. FEDGVI is robust to both likelihood and prior misspecification. The authors develop theoretical results (Section 4) showing that if each client employs a robust (bounded) loss function, the global posterior remains stable even when some portion of data is contaminated (Theorem 4.11).
2. FEDGVI recovers existing federated learning methods (e.g., PVI, FedAvg) as special cases: In Section 4.1, the authors show that by choosing particular divergences and losses, the method reduces to either standard partitioned variational inference (for Bayesian FL) or FedAvg (for frequentist FL).
3. The approach is scalable to real-world tasks and yields superior predictive performance compared to baselines: Empirical studies on real datasets (Cover Type, MNIST, FashionMNIST) demonstrate that FEDGVI obtains higher classification accuracy under mismatched or noisy training.
Overall, the manuscript provides both proofs of conceptual claims and corresponding empirical corroboration.
Methods And Evaluation Criteria: The proposed method relies on Generalised Variational Inference to handle potential model misspecification. Clients use robust local objectives (like Density–Power or Generalised Cross-Entropy losses) while the server employs a chosen divergence-based penalty (often KL or an $\alpha$-Rényi divergence) on the global posterior. Convergence is examined in terms of fixed points—where the global posterior stops changing under repeated local-global updates.
The proposed evaluation criteria seem solid.
Theoretical Claims: The authors extend existing PVI theory by showing that if a robust generalised Bayesian update is done at each client and a KL-based GVI update at the server, the global solution is itself robust. Key points:
1. Fixed point analysis (Proposition 4.4): Demonstrates that, if the algorithm converges, the final server distribution is a stationary minimizer of a global GVI functional.
2. Equivalence to GBI (Lemma 4.5): With certain parameter choices (e.g., negative log-likelihood as the local “loss,” or a robust alternative + KL at the server), the final solution coincides with the standard or robust GBI posterior.
3. Cavity distribution necessity (Theorem 4.8): Shows that removing the cavity from the client update would lead to systematically biased or overconfident global updates. This result clarifies why each client must “subtract out” the impact of its own data from the global prior, rather than simply using the existing posterior.
I did not uncover any obvious mistakes in the proofs; they seem consistent with prior results on partitioned variational methods. That said, verifying every detail would require deeper familiarity with some of the advanced robust GVI and measure-theoretic derivations, which I do not fully possess.
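As an illustrative sketch of the cavity idea from Theorem 4.8 (not the paper's implementation), for Gaussian approximations in PVI-style schemes the cavity $q^{\backslash m}$ is formed by dividing the client's own approximate factor $t_m$ out of the global posterior, which for exponential families amounts to subtracting natural parameters. The factor values below are made up purely for illustration:

```python
import numpy as np

# Natural parameters (precision, precision * mean) of a 1-D Gaussian.
# In natural-parameter space, products of Gaussian factors become sums,
# so the global posterior is the prior plus one factor t_m per client.
prior = np.array([1.0, 0.0])                 # N(0, 1) as natural parameters
client_factors = [np.array([2.0, 1.0]),      # hypothetical t_1
                  np.array([0.5, -0.2])]     # hypothetical t_2

q_global = prior + sum(client_factors)

def cavity(q, t_m):
    # q^{\m} = q / t_m  ->  subtraction of natural parameters.
    return q - t_m

# Client 1's cavity removes its own contribution before the local update,
# avoiding double-counting its data.
q_cav = cavity(q_global, client_factors[0])
print(q_cav)  # equals prior + t_2
```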
Experimental Designs Or Analyses: The authors systematically vary contamination rates (e.g., flipping labels randomly on MNIST) and track how well each method recovers clean performance. They compare with classical federated baselines (FedAvg) and Bayesian baselines (PVI, DSGLD, DSVGD, etc.) across varied contamination levels. The data partition is typically homogeneous or split randomly, though the paper briefly remarks it is straightforward to allow non-i.i.d. partitions.
The authors provide clarity that the number of clients (M) or the chosen “damping” parameters can be adapted.
Some additional exploration of the method’s sensitivity to the choice of robust hyperparameters (e.g., $\beta$, $\delta$, $\gamma$) might help readers see how stable the procedure is under default vs. tuned settings.
Supplementary Material: I skimmed the supplementary material.
Relation To Broader Scientific Literature: This work builds on the recognized problem of model misspecification in Bayesian methods, referencing established works like Bissiri et al. (2016) for generalised Bayes, and the robust M-estimation approaches from Ghosh & Basu (2016) or Knoblauch et al. (2022). Overall, the framework is well-situated within existing lines of research but has a distinct novelty in bridging the gap between robust GVI and federated Bayesian methods.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths**
1. The framework is conceptually simple to implement: each client modifies its update with a robust local objective and the server does a GVI-based merge.
2. The theoretical story on why robust GVI helps in the federated setting is coherent and well-linked with standard references.
3. Strong empirical demonstration that the method truly mitigates outlier influence.
**Weaknesses**
1. Some of the hyperparameter tuning for the robust losses (like $\beta$ or $\gamma$ for L$\beta$/GCE) is not deeply explored in the main text; it might require extra user expertise.
2. Communication cost or runtime overhead is not directly benchmarked: presumably the overhead is comparable to standard PVI or FedAvg, but actual run-time comparisons might be valuable to demonstrate practical feasibility.
Other Comments Or Suggestions: The authors might add small clarifications on the recommended heuristics for selecting the “robustness parameters” (β, γ, etc.) in real-world use, or mention a rule of thumb.
It might be beneficial to discuss whether robust aggregator strategies at the server (like coordinate-wise trimming or medians) could be combined with robust GVI for even stronger resilience against malicious clients.
Questions For Authors: 1. How sensitive is FEDGVI to the choice of the robust loss hyperparameters (e.g., $\delta$ in the generalised cross-entropy or $\beta$ in the density–power loss)? Would small changes degrade performance significantly?
2. In practical setups with many clients ($M \gg 1$), do the introduced robust loss functions or divergences incur any significant computational overhead relative to standard negative log-likelihood updates?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their in-depth, constructive, and positive review.
**Question 1, Weakness 1, and Experiments**
> ...method’s sensitivity to the choice of robust hyperparameters [...] how stable...
> How sensitive is FEDGVI to [...] hyperparameters [...]? Would small changes degrade performance significantly?
We presented results on FashionMNIST while varying $\delta$ with different contaminations in Fig. 2. We now extend these to further contamination levels (0, 0.1, 0.2, 0.4) and $\delta$ (0, 0.4, 0.5, 0.8, 1),
as well as FedAvg, FedPA, and $\beta$-PredBayes, to demonstrate hyperparameter sensitivity:
https://anonymous.4open.science/r/Resources-3CEF/FashionMNIST.png
We also now offer an ablation study on MNIST for hyperparameter selection: https://anonymous.4open.science/r/Resources-3CEF/ablation_study.png (see also response to reviewer *4k22*).
To show the stability of FedGVI under small perturbations of $\delta$, we fix $\alpha=2.5$, vary $\delta$ around $0.8$, and report accuracies on uncontaminated test data after training on 10% contaminated MNIST data split across 5 clients:
https://anonymous.4open.science/r/Resources-3CEF/Perturbations.png
**Question 2, Weakness 2** Computational Complexity/Overhead
**W2**
> Communication cost or runtime overhead is not directly benchmarked...
**Q2**
> do [...] loss functions [...] incur any significant computational overhead [...] updates?
As the number of clients increases, the computation time at each client is reduced, since there is less data per client. By Proposition 4.9 we can choose a robust loss at each client that enables conjugate client updates, whereas the negative log-likelihood might not have a closed form. This includes intractable likelihood models (Matsubara et al., 2022), where the normalising constant is unavailable but the robust update can still be solved tractably.
The computational complexity of the divergence depends solely on the choice of variational family, prior, and divergence, with no dependence on the number of clients or the amount of data per client. The divergence may be more expensive to compute when no closed-form solution exists, e.g. for the Total Variation distance. However, for exponential-family distributions, many robust divergences with closed-form solutions exist (Pardo Llorente, 2006; Knoblauch et al., 2022).
The KL and A-R divergences between Gaussians (Sec. 5.5) are available in closed form, scaling only in the number of parameters. The A-R can be computed in $\mathcal{O}(1)$ times the computational complexity of the KL, driven by the determinant of the covariances; assuming $\Theta\subset\mathbb{R}^d$ and using a Cholesky decomposition, this naively takes $\mathcal{O}(d^3)$ for both, and assuming the covariances are (block-)diagonal makes this cheaper.
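To make the closed-form computation above concrete, a minimal sketch of the KL divergence between multivariate Gaussians via Cholesky factors is given below. This is an illustrative implementation under our own naming (`kl_mvn`), not code from the paper:

```python
import numpy as np

def kl_mvn(mu0, S0, mu1, S1):
    """Closed-form KL( N(mu0, S0) || N(mu1, S1) ).

    Uses Cholesky factors for the log-determinants and triangular
    solves; for dense covariances this costs O(d^3), as noted above.
    """
    d = mu0.shape[0]
    L0 = np.linalg.cholesky(S0)
    L1 = np.linalg.cholesky(S1)
    # trace term: tr(S1^{-1} S0) = ||L1^{-1} L0||_F^2
    A = np.linalg.solve(L1, L0)
    trace = (A ** 2).sum()
    # Mahalanobis term: (mu1 - mu0)^T S1^{-1} (mu1 - mu0)
    y = np.linalg.solve(L1, mu1 - mu0)
    maha = y @ y
    # log-determinant term: log|S1| - log|S0| from Cholesky diagonals
    logdet = 2.0 * (np.log(np.diag(L1)).sum() - np.log(np.diag(L0)).sum())
    return 0.5 * (trace + maha - d + logdet)
```

For (block-)diagonal covariances the Cholesky factor is itself (block-)diagonal, which is what makes the computation cheaper in that case.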
The discrete losses in the classification setting are non-differentiable, so we cannot apply the conjugate score-matching loss and instead require sampling from the approximation. We can transform the NLL into the GCE in $\mathcal{O}(1)$; since the GCE uses the likelihood and the NLL its logarithm, the two computations are of the same order of magnitude.
We will add in Section 5.5:
*FedGVI incurs no additional computational complexity when compared to PVI. This is due to the KL and Alpha-Rényi divergences having closed form solutions between Multivariate Gaussians with complexity of $\mathcal{O}(1)$ in each other, and because we require $\mathcal{O}(1)$ additional, constant operations to get the GCE from the NLL.*
To empirically validate this, we now report average wall-clock times at each client for FedGVI and PVI, see https://anonymous.4open.science/r/Resources-3CEF/runtime.png.
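As a small illustration of the $\mathcal{O}(1)$ NLL-to-GCE relationship discussed above, the standard generalised cross-entropy form $(1 - p^q)/q$ can be compared to the NLL on the true-class probability. This is a hedged sketch (the paper's exact parameterisation, e.g. its $\delta$, may differ):

```python
import math

def nll(p):
    # standard negative log-likelihood of the true-class probability p
    return -math.log(p)

def gce(p, q=0.7):
    # generalised cross-entropy (1 - p^q)/q: recovers the NLL as q -> 0,
    # and stays bounded by 1/q as p -> 0, which is what dampens the
    # influence of mislabelled (contaminated) points
    return (1.0 - p ** q) / q
```

Both losses are evaluated from the same likelihood value, so switching from one to the other costs a constant number of extra operations.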
**Comments**
> Heuristics for selecting robustness parameters
These include Knoblauch et al. (2018) and Yonekura & Sugasawa (2023) for the Density Power loss, with the latter providing a theoretically principled Sequential Monte Carlo sampler for selecting $\beta$, or Altamirano et al. (2024) for weighted score matching via cross-validation.
We propose to add this in Sections 3.2 and 3.3:
3.2: *We can use a Sequential Monte Carlo sampler to estimate the $\beta$ or $\gamma$ hyperparameters in $\mathcal{L}_B$ and $\mathcal{L}_G$ (Yonekura & Sugasawa, 2023) or use cross validation to select optimal parameters (Altamirano et al., 2024).*
3.3: *Similarly to the losses, we can perform cross validation to select the $\alpha$ parameter, however as demonstrated in the ablation study (Figure TBD) FedGVI performs favourably under a range of $\alpha$ values.*
> Robust Aggregator strategies at the server
This is an area we are actively working on. We are exploring ways in which the summation in Eq. 6 can be replaced by a robust aggregator, such as Nearest Neighbour Mixing (Allouah et al., 2024) or indeed coordinate-wise median and trimmed mean, to achieve Byzantine robustness. We will discuss this in Sec. 6.
**References**
Yonekura, S., Sugasawa, S. Adaptation of the tuning parameter in general Bayesian inference with robust divergence. Stat Comput 33, 39, 2023. | Summary: This work presents FedGVI, an extension of partitioned variational inference (PVI) to generalised variational inference (GVI). The core benefit of GVI is that it permits robustness to model misspecification. The authors demonstrate a number of advantageous properties of FedGVI, most notably theoretical results that show it convergeses to GVI approximate posteriors, and is thus robust to model misspecification.
Claims And Evidence: Yes, both theoretical and empirical.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No.
Experimental Designs Or Analyses: Yes.
Supplementary Material: No.
Relation To Broader Scientific Literature: Related to the field of federated probabilistic learning, which has widespread applications.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
* The paper is very well written, not only in presenting FedGVI but also in the presentation of PVI (and extending the current understanding of how it works).
* FedGVI itself is easy to implement (a relatively straightforward extension of PVI).
* The theoretical results are supported by a number of experimental results demonstrating the superiority of FedGVI relative to baseline methods (including PVI) when data contains outliers.
Weaknesses
* In equation 2 $\beta$ is undefined. Also---and I could be mistaken---should the normalisation constant include a $\beta$ in the integral?
* Whilst I understand the intuition behind point 1 in Definition 4.10, I struggle more with the intuition behind points 2 and 3. If there exists relatively compact intuition, perhaps the paper would benefit from its inclusion here.
Other Comments Or Suggestions: N/A.
Questions For Authors: N/A.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your constructive and positive review that highlights the clarity of our writing, the theoretical results on FedGVI, and its provable robustness. By addressing below the weaknesses you mentioned, we notably improve the clarity of the paper and its ease of understanding for the reader.
**Weakness 1** $\beta$ parameter in Eq. 2 undefined:
Thank you for spotting this typo, it has now been corrected, and we will add the following clarification:
*Here, $\beta\in\mathbb{R}_{>0}$ is a learning rate parameter that determines how much weight we place on the observed data, similar to power posteriors in VI (Grünwald, 2012; Kallioinen et al., 2024).*
The $\beta$ parameter comes from the power/cold/tempered posteriors of e.g. Grünwald (2012), where the likelihood in the Bayesian posterior is raised to some power $\beta>0$. This was originally done to add some robustness to the posterior, down-weighting observations if $\beta<1$ and up-weighting them if $\beta>1$. Through a known result (Knoblauch et al., 2022), which we highlight in Lemma B.1 in the Appendix, this is equivalent to having a weighted Kullback-Leibler divergence, $\frac1\beta\mathrm{KL}$. This also lets us decide whether to trust the prior more ($\beta<1$) or less ($\beta>1$), since up-weighting the data means down-weighting the prior and vice versa.
We will add this discussion to Appendix B.1 to give further intuition behind GBI.
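As a minimal illustration of the tempered-posterior intuition above, consider the conjugate Gaussian case with known precision, where raising the likelihood to the power $\beta$ simply rescales the data's contribution to the update. This is our own sketch, with hypothetical names, not the paper's code:

```python
import numpy as np

def tempered_gaussian_posterior(x, mu0, tau0, tau, beta):
    """Normal(mu0, 1/tau0) prior, Normal(theta, 1/tau) likelihood raised
    to the power beta (a 'power posterior' in the sense of Gruenwald).

    beta < 1 down-weights the data (trusts the prior more), beta > 1
    up-weights it; beta = 1 recovers the standard Bayes update.
    Returns the posterior mean and precision.
    """
    n = len(x)
    post_tau = tau0 + beta * n * tau
    post_mu = (tau0 * mu0 + beta * tau * np.sum(x)) / post_tau
    return post_mu, post_tau
```

Setting `beta=0` returns the prior untouched, making the interpolation between prior and data explicit.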
**Weakness 2** Intuition behind Definition 4.10 and Conditions 2 and 3:
The three conditions combined allow us to say whether the client posterior (or simply the posterior in a global, 1 Client, GBI setting) derived from such a robust loss is provably robust to Huber contamination.
From *Condition 1* we can bound an infinitesimal change in the loss due to the contaminating data point $z$ by some auxiliary function $\gamma$, possibly infinite for some values of $\theta$.
*Condition 2* states that the product function $\gamma(\theta)\pi(\theta)$ has finite uniform norm.
This ensures that this product, under the worst-case contamination and the worst parameter $\theta$, is finite and hence cannot be made arbitrarily bad, which does not hold for the negative log-likelihood in general. Equivalently, the prior decays to zero faster than the auxiliary function can diverge to infinity in $\theta$.
*Condition 3* further says that $\gamma(\theta)\pi(\theta)$ is finitely integrable, i.e. that this is in $L^1(\Theta)$. This, in effect, bounds the normalising constant of the contaminated posterior and will ensure that this is finite.
Taking all these conditions together tells us that the product function $\pi(\theta)\gamma(\theta)$ is in $L^1(\Theta)$ and that it is finite everywhere; two conditions that are mutually independent, i.e. one does not follow from the other.
These conditions characterise the notion of robustness we use for Theorem 4.11, a derivation of this notion is shown in Appendix B.7, by considering the worst choice for the contamination $z$ and the parameter $\theta$ with respect to small perturbations of the resulting posterior through $\epsilon$.
The influence of the contamination $z$ and parameter $\theta$ on the posterior is defined as $\frac d{d\epsilon}q_m^{(t)}(\theta;\mathbb{P}_{n_m,\epsilon, z})|_0$ (evaluated at $\epsilon=0$), which is bounded through the conditions. This then implies that the local posterior is *globally bias robust*, i.e. robust to Huber contamination of all $z$ for all parameters $\theta\in\Theta$.
Thank you for pointing this out, we will provide the following clarification between Definition 4.10 and Theorem 4.11.
*These conditions ensure that the influence of arbitrary contamination on the local posterior is not arbitrarily bad.
In particular, the auxiliary function $\gamma_m^{(t)}$ ensures that the influence of an adversarial data point $z$ on the posterior over infinitesimal contaminations,
$$\frac d{d\epsilon} q_m^{(t)}(\theta;\mathbb{P}_{n_m,\epsilon, z}) {\Big|}_0$$
evaluated at $\epsilon=0$, is finite over all $\theta$ and $z$. Condition 2 ensures the loss increases slowly enough for the local posterior to concentrate around the data, and Condition 3 ensures the resulting posterior will be normalisable.*
**References not in Paper**
Kallioinen, N., Paananen, T., Bürkner, P.-C., and Vehtari, A. Detecting and diagnosing prior and likelihood sensitivity with power-scaling. Statistics and Computing, 34(1):57, 2024. | null | null | null | null | null | null |
Navigating Semantic Drift in Task-Agnostic Class-Incremental Learning | Accept (oral) | Summary: This paper proposes a method to improve model stability in class-incremental learning by alleviating the semantic drift phenomenon. The authors leverage the transferability of pretrained models and train parameter-efficient fine-tuning LoRA modules with a frozen ViT backbone. They define semantic drift in two dimensions: the mean and covariance of the features. To address this, they compensate for shifts in the mean and introduce a novel approach that uses a Mahalanobis distance loss function to constrain the covariance. The updated first- and second-order statistics are then used to align the classifier in the post-training stage. Additionally, the authors employ a patch-token-based distillation loss to further improve the model’s performance. The method is extensively validated on four major CIL datasets, with each module demonstrating its effectiveness and achieving SOTA performance.
Claims And Evidence: The authors claim that stability in class-incremental learning (CIL), which is a primary goal of CIL, can be undermined by the phenomenon of semantic drift. This phenomenon induces shifts in feature mean and covariance. The claim is supported by visualization and empirical validation across standard benchmarks.
Methods And Evaluation Criteria: The authors propose using low-rank adaptation modules that continually adapt to incremental tasks, and introduce a mean shift compensation and covariance calibration method to alleviate the semantic drift phenomenon, thereby improving stability. The mean shift compensation tracks the mean shift in the feature distribution, while the covariance calibration module adds constraints to the shape of the distribution. The compensated mean and calibrated covariance matrix are then used to update the classifier head at the end of each incremental session. They also incorporate a distillation module to further improve the stability aspect of the CIL task. Overall, the method seems reasonable for addressing the CIL challenge. The proposed method is evaluated on common datasets: ImageNet-R, ImageNet-A, CUB-200, and CIFAR-100, reporting both last and average performances.
Theoretical Claims: The mathematical symbols, variables, and equations are generally well defined and mathematically correct.
Experimental Designs Or Analyses: 1. The experimental design is valid, including ablations of the effectiveness of each component, different LoRA design choices, and a comparison with a large number of very recent SOTA methods across four mainstream benchmark datasets.
2. Although the authors mention that hyperparameter choices are based on sensitivity analysis, experimental results still need to be presented.
Supplementary Material: Yes, all parts.
Relation To Broader Scientific Literature: This work extends the definition of semantic drift, as described in the literature (e.g., Semantic Drift Compensation for Class-Incremental Learning), to both class mean and covariance perspectives, and proposes a novel covariance calibration method.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strength:
Creatively imposing the Mahalanobis distance as a constraint in the loss function, which implicitly constrains the covariance matrix.
Weakness:
The gain from patch distillation is relatively small.
Other Comments Or Suggestions: No
Questions For Authors: See experimental designs or analyses, and other strengths and weaknesses.
Ethics Expertise Needed: ['Other expertise']
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank Reviewer o87J for the valuable comments. Reviewer o87J gives a **positive rating (3-Weak Accept)**, finds our method is "reasonably designed" with "a novel, creative covariance calibration approach" and mathematical formulations are "well-defined and correct", etc. We address the main concerns below:
> **Reviewer o87J asks if the sensitivity analysis of the hyperparameters could be presented.**
Thanks. Here we include the sensitivity study for some important hyperparameters. The results are shown in the table below.
| s | 10 | 20 | 30 | 40 |
| ---- | ---------- | ---------- | ---------- | ---------- |
| $\mathcal A_{Last}$ | $79.18 \pm 0.17$ | $81.88 \pm 0.07$ | $81.27 \pm 0.07$ | $80.53 \pm 0.30$ |

| r | 8 | 16 | 32 | 64 |
| ---- | ---------- | ---------- | ---------- | ---------- |
| $\mathcal A_{Last}$ | $80.80 \pm 0.27$ | $81.31 \pm 0.04$ | $81.88 \pm 0.07$ | $81.88 \pm 0.25$ |

| $\lambda$ | 0.2 | 0.4 | 0.6 | 0.8 |
| ---- | ---------- | ---------- | ---------- | ---------- |
| $\mathcal A_{Last}$ | $81.83 \pm 0.28$ | $81.88 \pm 0.07$ | $81.72 \pm 0.18$ | $81.44 \pm 0.19$ |
>**Reviewer o87J questions the effectiveness of the patch distillation module.**
Thanks. Kindly refer to our response to the first concern raised by Reviewer XH2d.
---
Rebuttal Comment 1.1:
Comment: Thank you, authors, for the valuable feedback. My concerns have been addressed.
Claims And Evidence: The claims in the submission are well-supported by compelling empirical evidence. The proposed method exhibits superior performance, particularly on challenging benchmarks like ImageNet-R and ImageNet-A, where it outperforms the second-best method, SSIAT, by substantial margins in both 𝐴_{last} and 𝐴_{avg}. Ablation studies validate the contributions of key components, Mean Shift Compensation (MSC) and Covariance Calibration (CC), which collectively improve performance remarkably. Furthermore, the method demonstrates robustness across both long and short task sequences, reinforcing its reliability in diverse class-incremental learning (CIL) scenarios.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria appear well-aligned with the problem of class-incremental learning (CIL). The use of challenging benchmarks such as ImageNet-R and ImageNet-A is appropriate, as these datasets introduce real-world complexities, including distribution shifts and adversarial robustness, which test the model’s adaptability. The evaluation metrics, 𝐴_{last} and 𝐴_{avg}, effectively capture both final and average performance across task sequences, providing a comprehensive assessment of catastrophic forgetting and knowledge retention. Additionally, the inclusion of ablation studies on Mean Shift Compensation (MSC) and Covariance Calibration (CC) further validates the method’s contribution. The experiments on both long and short task sequences enhance the evaluation’s credibility by ensuring the method’s robustness across varying levels of incremental complexity. Overall, the methods and evaluation criteria are well-designed for the intended application, effectively assessing both adaptability and stability in CIL scenarios.
Theoretical Claims: The submission does not include formal theoretical proofs but presents technically grounded mathematical formulations. The formulations for Mean Shift Compensation (MSC) and Covariance Calibration (CC) are well-justified, particularly in their use of Mahalanobis distance constraints for covariance alignment. Additionally, the loss functions and optimization strategy align with established techniques for mitigating catastrophic forgetting. While no theoretical analysis is provided, the effectiveness of these formulations is validated through empirical experiments and ablation studies, demonstrating their practical impact. Overall, the mathematical foundations of the proposed method are sound and effectively supported by experimental results.
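To make the Mahalanobis-distance constraint discussed above concrete, one plausible form of such a covariance-alignment penalty is sketched below. This is our own reading and naming (`mahalanobis_align_loss`), not the paper's exact formulation:

```python
import numpy as np

def mahalanobis_align_loss(feats, mu, cov_old, eps=1e-4):
    """Penalise current features for deviating from their class mean
    under the *old* class covariance, implicitly discouraging
    covariance drift across incremental tasks.

    feats: (batch, dim) current embeddings for one class
    mu: stored class mean; cov_old: stored class covariance
    eps regularises the inverse for near-singular covariances.
    """
    prec = np.linalg.inv(cov_old + eps * np.eye(cov_old.shape[0]))
    diffs = feats - mu
    # mean squared Mahalanobis distance over the batch
    return float(np.mean(np.einsum('bi,ij,bj->b', diffs, prec, diffs)))
```

Minimising this term keeps the new embedding distribution shaped like the stored one, which is the stabilising effect the review attributes to the covariance calibration module.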
Experimental Designs Or Analyses: The experimental design is generally well-structured and appropriate for evaluating class-incremental learning. The use of challenging datasets, ablation studies, and comparisons with a strong baseline strengthen the validity of the findings.
Supplementary Material: There is no supplementary material provided for this submission.
Relation To Broader Scientific Literature: This paper contributes to class-incremental learning (CIL) by integrating LoRA-based fine-tuning with semantic drift compensation, covariance calibration, classifier alignment, and feature self-distillation, addressing key challenges in parameter-efficient adaptation. It builds upon prior research in parameter-efficient fine-tuning (PEFT) for CIL\cite{hu2021lora, valipour2022dylora, hao2024flora}, which focuses on reducing computational overhead while preserving model adaptability. Unlike standard PEFT methods, this work explicitly tackles feature drift, aligning with studies on feature shift correction\cite{liu2021swin, zhu2024vision} by introducing covariance alignment and Mahalanobis distance constraints to stabilize class representations over incremental tasks. Additionally, the paper extends classifier bias mitigation\cite{wu2019large, kang2019decoupling} by ensuring that learned features remain well-calibrated across tasks without reliance on memory buffers. By integrating feature-level self-distillation, it also aligns with research in self-supervised learning and knowledge retention\cite{zhang2022self, touvron2021training}. These innovations improve both efficiency and stability in incremental learning, making the proposed approach highly relevant to the broader literature on continual learning, parameter-efficient adaptation, and large-scale vision model optimization.
Essential References Not Discussed: No
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: 1. The results in Table 2 show that performance on ImageNet-R is strong but appears less so on CIFAR-100. Could you provide a detailed explanation for this discrepancy?
2. Better to discuss the limitations and future studies.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank Reviewer JCMA for the valuable comments. Reviewer JCMA gives a **positive rating (4-Accept)**, acknowledges "a novel semantic drift calibration method" which is "well-designed and effectively assessed" and notes that its "key components improve performance remarkably" offering "improved efficiency and stability", etc. We address the main concerns below:
> **Reviewer JCMA asks if we could provide a detailed explanation for the performance discrepancy in CIFAR-100 and Imagenet-R.**
Thanks. Please refer to our response to the second concern raised by Reviewer XH2d.
> **Reviewer JCMA asks if we could discuss the limitations and future studies.**
Thanks. We discuss the Limitations and Future Work below. In this study, we have focused on addressing semantic drift by aligning first-order (mean) and second-order (covariance) statistics. While this approach has shown promising results, it is inherently limited in its ability to capture more complex aspects of feature distribution shifts. Specifically, higher-order moments, such as skewness (third-order statistic) and kurtosis (fourth-order statistic), are not considered in this framework. These higher-order statistics could provide additional insights into the shape and tails of the data distribution, which may help in mitigating semantic drift more effectively, especially in tasks with significant feature distribution shifts. Future work will explore this approach by incorporating higher-order statistical moments like skewness and kurtosis into the alignment process. | Summary: Balancing flexibility and stability remain a key challenge in class-incremental learning (CIL). To address this, this paper introduces mean shift compensation and covariance calibration to regulate feature moments, preserving both model stability and adaptability. Additionally, a feature self-distillation mechanism for patch tokens is implemented to further enhance feature consistency. The proposed efficient, task-agnostic continual learning framework surpasses existing methods across multiple public datasets, demonstrating its effectiveness and superiority.
Claims And Evidence: The claims made in the submission are well-supported by clear and convincing evidence. The proposed mean shift compensation, covariance calibration, and feature self-distillation methods are validated through extensive experiments across multiple public datasets. The results consistently demonstrate superior performance over existing class-incremental learning (CIL) methods.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the class-incremental learning (CIL) problem. address semantic drift through mean shift compensation and covariance calibration, ensuring both stability and adaptability. The mean shift compensation tracks distribution shifts, while covariance calibration constrains feature variance, collectively enhancing classifier alignment across incremental tasks. Additionally, a feature self-distillation module further stabilizes feature representations over time.
Theoretical Claims: The formulations are mathematically technically correct.
Experimental Designs Or Analyses: The experimental design is well-structured and comprehensive, covering component-wise ablations, model design choices, and competitive benchmarking.
Supplementary Material: No supplementary material provided.
Relation To Broader Scientific Literature: This work mitigates the semantic drift issue in CIL by integrating both mean shift compensation and covariance calibration. It builds upon prior research in parameter-efficient fine-tuning (PEFT) for CIL~\cite{hu2021lora, valipour2022dylora, hao2024flora}, which focuses on reducing computational overhead while preserving model adaptability. Existing works study semantic drift only through class-prototype shifts; this work, however, extends the notion to both the mean and the covariance, calibrating them to mitigate catastrophic forgetting.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strength:
1. Good writing with well-organized structure.
2. The illustration looks good and intuitive.
3. The performance looks good compared with SOTA approaches.
4. The motivation is clear and straightforward and well demonstrated with carefully designed experiments .
Weakness:
The patch distillation module appears to be somewhat incremental and lacks a strong integration with the overall motivation and methodology. While it aims to enhance feature stability, its connection to the core semantic drift calibration and covariance alignment framework is not entirely clear. Specifically, its role in mitigating catastrophic forgetting or improving class-incremental learning (CIL) performance should be more explicitly justified. If it primarily contributes to feature refinement rather than directly addressing semantic drift or classifier calibration, further clarification on its necessity and integration with the main approach is needed.
Other Comments Or Suggestions: None.
Questions For Authors: The gains on CIFAR-100 seem marginal. Could you please provide a detailed discussion on this?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank Reviewer XH2d for the valuable comments. Reviewer XH2d gives a **positive rating (4-Accept)**, finds that "the paper is well-written and well-organized" with "a clear, straightforward motivation" coupled with "intuitive illustrations" and "demonstrate superior performance" with "careful experimental designs", etc. We address the main concerns below:
> **Reviewer XH2d asks for further clarification on the necessity of integrating the patch distillation module with the main approach that alleviates semantic drift.**
Thanks. Patch distillation is not an independent add-on separate from semantic drift calibration; rather, it enhances feature space stability to provide a more reliable basis for mean/covariance calibration. Its mechanism is as follows: semantic drift calibration (MSC/CC) targets high-level semantic features (class tokens), while patch tokens capture fine-grained visual details. Stabilizing local feature representations can reduce the distributional shift at the class token level. In ViT, shallow features encompass general patterns (such as edges and textures) shared across tasks. Enforcing consistency among patch tokens helps prevent the degradation of these foundational features. In summary, Patch Distillation is not an isolated or unrelated component; rather, it complements MSC/CC and plays a crucial role in improving model robustness.
> **Reviewer XH2d asks for a detailed discussion on the gains of our method on CIFAR-100.**
Thanks. We highlight that our approach is designed to be general. Notably, it demonstrates significant advantages on more challenging tasks, such as ImageNet-R and ImageNet-A. In our experiments, we applied a unified hyper-parameter setting without dataset-specific tuning, which underscores the versatility of our approach. We are confident that by fine-tuning the hyperparameters of our method for CIFAR-100, further improvements can be achieved. Within a limited timeframe, we find an optimized hyperparameter configuration using LoRA with a rank of $r = 20$ and a patch distillation loss weight of $\lambda = 0.1$. This setting achieve a CIFAR-100 performance of $\mathcal A_{\text{Last}} = 92.20$ and $\mathcal A_{\text{Avg}} = 95.15$. | Summary: This paper tackles the challenge of class-incremental learning in continual Learning, which enables models to sequentially learn multiple tasks without retraining or accessing data from previous tasks.
While recent advancements in deep learning, such as larger model capacities and large-scale pretraining, have improved plasticity, traditional methods (e.g., regularization, memory replay, and knowledge distillation) still come with significant computational and storage overheads, hindering practical deployment.
The authors focus on addressing catastrophic forgetting and the issue of semantic drift, which occurs when feature means and covariances shift as new tasks are added.
They propose two solutions:
(1) Mean Shift Compensation, which estimates and corrects the drift in feature means by computing weighted averages of embedding shifts, and (2) Covariance Calibration, which uses Mahalanobis distance to align the covariance matrices of embeddings from old and new tasks. These methods are incorporated into a task-agnostic continual learning framework that outperforms existing techniques across several public datasets, enhancing both model stability and adaptability.
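To make the drift-correction idea concrete, here is a minimal sketch of a mean-shift estimate of this kind (an SDC-style weighted average of embedding shifts; the function name and the Gaussian weighting are illustrative assumptions, not necessarily the paper's exact formulation):

```python
import numpy as np

def estimate_mean_shift(feats_old, feats_new, class_mean_old, sigma=1.0):
    # Per-sample embedding shift between the old and new feature extractors.
    deltas = feats_new - feats_old
    # Weight each shift by the sample's proximity to the old class mean
    # (Gaussian kernel; the exact weighting here is an assumption).
    d2 = np.sum((feats_old - class_mean_old) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w[:, None] * deltas).sum(axis=0) / (w.sum() + 1e-8)

# Toy check: if every sample's embedding shifts by the same vector,
# the weighted estimate recovers that vector.
rng = np.random.default_rng(0)
old = rng.normal(size=(50, 8))
shift = 0.3 * np.ones(8)
drift = estimate_mean_shift(old, old + shift, old.mean(axis=0))
```

The corrected class mean would then be `class_mean_old + drift`, standing in for the stale statistic when calibrating the classifier.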
Claims And Evidence: The claims made in the submission are generally well-supported by clear and convincing evidence. The proposed method demonstrates superior performance, especially on challenging datasets like ImageNet-R and ImageNet-A, where it surpasses the second-best method, SSIAT, by significant margins in both A_{last} and A_{avg}. Additionally, the ablation study confirms the effectiveness of key components like Mean Shift Compensation (MSC) and Covariance Calibration (CC), which together improve performance by over 2%. The method’s robustness across task sequences (both long and short) further solidifies its reliability in diverse class-incremental learning (CIL) scenarios.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem of class-incremental learning (CIL) and align well with the challenges inherent in this application. The approach leverages a frozen ViT backbone with task-specific LoRA modules, which is an effective way to retain previous knowledge while allowing task adaptation without excessive memory overhead. LoRA’s ability to integrate low-rank weight updates ensures that task-specific adaptation is computationally efficient, making it well-suited for CIL. Semantic drift (feature mean and covariance shift) is a known challenge in CIL, and the method addresses it explicitly through: Mean Shift Compensation (MSC): Correcting shifts in class mean embeddings. Covariance Calibration (CC): Aligning covariance matrices using Mahalanobis distance, ensuring feature distributions remain consistent across tasks. These techniques align with prior findings on distributional shifts in continual learning and provide a principled way to improve knowledge retention. Results highlight the method’s strengths in handling domain shifts (ImageNet-R, ImageNet-A) while performing competitively on natural datasets (CIFAR-100, CUB-200).
Theoretical Claims: Overall, the theoretical claims appear reasonable and grounded.
Experimental Designs Or Analyses: The experimental design is well-structured and rigorous, employing diverse and challenging benchmark datasets, consistent task sequences, and fair comparisons with state-of-the-art methods, ensuring a comprehensive evaluation of model performance. The study’s use of multiple independent runs with identical random seeds enhances result reliability, while the ablation studies effectively isolate the contributions of key components, particularly Mean Shift Compensation (MSC) and Covariance Calibration (CC), demonstrating their effectiveness in mitigating semantic drift. The inclusion of both short (5 tasks) and long (20 tasks) sequences further validates the method’s robustness across different continual learning settings. Additionally, the framework’s reliance on pretrained models with LoRA-based fine-tuning presents a computationally efficient alternative to full fine-tuning approaches. While statistical significance testing and computational cost analysis could further reinforce the claims, the experimental design provides strong empirical evidence supporting the proposed method’s superiority in handling catastrophic forgetting and domain shifts, making it a promising solution for class-incremental learning.
Supplementary Material: No supplementary material
Relation To Broader Scientific Literature: The paper advances the field by integrating LoRA-based fine-tuning with semantic drift compensation, covariance calibration, classifier alignment, and feature self-distillation, all of which address major challenges in CIL. It builds on prior work in PEFT-based CIL, feature shift correction, classifier bias mitigation, and ViT self-distillation, but improves upon them by removing reliance on memory buffers, reducing computational overhead, and enhancing feature stability. These contributions align with and extend existing literature, making them highly relevant to the broader research community in continual learning and parameter-efficient adaptation of large models.
Essential References Not Discussed: No
Other Strengths And Weaknesses: No
Other Comments Or Suggestions: No
Questions For Authors: 1. How does the proposed LoRA-based fine-tuning scale across different model sizes with varying complexities? In particular, how does it handle imbalanced class distributions?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank Reviewer 31F2 for the valuable comments. Reviewer 31F2 gives a **positive rating (3-Weak Accept)**, finds "the proposed method demonstrates superior performance", and our experimental designs are "rigorous, diverse and comprehensive", etc. We address the main concerns below:
> **Reviewer 31F2 suggests that a computational cost analysis would be useful to demonstrate that LoRA-based tuning is more efficient in terms of computation than full fine-tuning.**
Thanks. We evaluate the computational cost in terms of the number of trainable parameters and Multiply-Accumulates (MACs). We analyze the computational cost of multiple SOTA methods on ImageNet-R. The results are shown in the table below.
|Method| Trainable Params (M) | MACs(G) |ImageNet-R ($\mathcal A_{Last}$)|
|---|---|---|---|
|L2P|0.19|37.48|70.56|
|DualPrompt|0.41|35.17|66.89|
|RanPAC|2.00|17.58|77.94|
|SLCA|85.8|17.58|78.95|
|CPrompt|0.26|25.01|76.38|
|EASE|1.19|17.81|75.91|
|MOS|3.2|17.64|77.68|
|InfLoRA|0.32|20.51|78.78|
|SSIAT|1.19|17.81|79.55|
|Ours|1.20|20.82|81.88|
The results show that our method keeps trainable parameters and MACs low. Even at this low computational cost, our approach still delivers the best accuracy among the compared methods.
> **Reviewer 31F2 asks how our method scales across different model sizes with varying complexities.**
Thanks. We evaluate our method with four different scales of models pretrained on ImageNet-21K at a resolution of $224 \times 224$.
|Model|depth|embed_dim|heads|Params(M)|$\mathcal A_{Last}$|$\mathcal A_{Avg}$|
|---|---|---|---|---|---|---|
| ViT-Ti/16 |12|192|3|5.7|$60.85$|$70.90$|
| ViT-S/16 |12|384|6|22|$75.82$|$81.87$|
| ViT-B/16 |12|768|12|86| $81.83$|$86.27$ |
| ViT-L/16 |24|1024|16|307| $85.28$ | $89.10$ |
The results show that our method scales well with pre-trained model size; the ViT-L/16 backbone achieves the best performance.
> **Reviewer 31F2 asks how our method handles imbalanced class distributions.**
Thanks. We identify a notable class imbalance in our training datasets, such as ImageNet-R and ImageNet-A. For ImageNet-R, the training set comprises 200 classes with a total of 24,000 samples. The most frequent class contains 349 samples, whereas the least frequent class contains only 38 samples, resulting in a maximum-to-minimum sample ratio of approximately 9.18:1. In comparison, the ImageNet-A training set also consists of 200 classes but includes only 5,981 samples. Here, the most common class has 86 samples, while the least common class has just 2 samples, leading to a ratio of 43:1. Our proposed method is capable of addressing this class imbalance issue during training.
Specifically, in the classifier alignment phase, we employ the following strategy to alleviate the class imbalance:
For each class $ c $, we generate $s_c$ synthetic feature samples from a normal distribution $ \mathcal N(\mu_c, \Sigma_c) $, where $s_c$ is the same for all classes, and thus independent of class frequency. This design ensures that minority classes generate more synthetic samples relative to their original size, effectively mitigating the classifier’s bias toward high-frequency classes.
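As a minimal sketch of this balancing step (assuming the per-class statistics $(\mu_c, \Sigma_c)$ are stored; all names below are illustrative):

```python
import numpy as np

def sample_balanced_features(class_stats, s_c=256, seed=0):
    # Draw the same number s_c of synthetic features per class from
    # N(mu_c, Sigma_c), regardless of each class's real sample count.
    rng = np.random.default_rng(seed)
    feats, labels = [], []
    for c, (mu, cov) in class_stats.items():
        feats.append(rng.multivariate_normal(mu, cov, size=s_c))
        labels.append(np.full(s_c, c))
    return np.concatenate(feats), np.concatenate(labels)

# Toy check: a rare class and a frequent class receive identical
# synthetic counts, so the classifier sees a balanced feature set.
stats = {0: (np.zeros(4), np.eye(4)), 1: (np.ones(4), np.eye(4))}
X, y = sample_balanced_features(stats, s_c=128)
```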
Class imbalance itself is an important research direction, and we consider extending our method to better address this challenge as a promising avenue for future work.
---
Rebuttal Comment 1.1:
Comment: Thank authors for the additional experiments and explanations. The impressive performance on the ImageNet dataset makes the work appear solid. I have also carefully revisited the remaining aspects and will accordingly raise my score further. | null | null | null | null | null | null |
Does Data Scaling Lead to Visual Compositional Generalization? | Accept (poster) | Summary: The article investigates how the compositional generalization capacity of vision models is related to the scale and quality of the training dataset. The authors' main contribution is the proposal of a systematic way to manipulate training and testing dataset splits in order to evaluate how data diversity promotes generalization. Additionally, they introduce a novel dataset composed of abstract shapes called Funny-Sprites, which provides a more comprehensive benchmark than the more standard dSprites. The authors perform a thorough evaluation using both pre-trained models and models trained from scratch. Their findings seem to indicate that data diversity is crucial for compositional generalization, yet models cannot achieve perfect compositionality at the scales they investigated.
Claims And Evidence: In general, the authors make a really good effort at substantiating their claims with solid empirical evidence. They evaluate several variations of their test protocol, varying the number of values each concept can take, and the number of combinations each value can appear in. This allows them to manipulate both the diversity of the data and, potentially, the qualitative nature of the missing combinations (though they don't really explore this angle too much).
They do not test a large variety of models when training from scratch (only ResNet-50 is reported), but they claim to have used other baselines in the text. It would be good to include some of these results, even if they are not used in all the conditions. One way to justify not including all test conditions could be to start with simple ones and, should they fare worse than ResNet-50, exclude them from subsequent evaluations.
Methods And Evaluation Criteria: For the most part, yes. The datasets used are standard in the compositional generalization literature. Additionally, they introduced a new dataset called Funny-Sprites, which contains abstract, non-regular shapes. They also analyze the learned representations using several metrics to understand how they are structured (i.e. are they more linear, can concept values be decoded, do they lead to better generalization).
Theoretical Claims: I only skimmed through them since they are in the appendix, but looked fine. The claims at least make intuitive sense.
Experimental Designs Or Analyses: Yes, I did. They appear correct to me.
Supplementary Material: Only parts of it to check the precise procedure for some of the analyses.
Relation To Broader Scientific Literature: The paper is well situated within the literature. The authors make a substantial effort to cover the related literature and identify where their contribution lies, and in my opinion they succeed at this.
Essential References Not Discussed: No that I can remember. The authors are very thorough in their discussion of related work.
Other Strengths And Weaknesses: The authors explore an important question: how does data diversity influence compositionality, which is important since one popular claim is that scaling data is enough to achieve strong generalization in AI systems. These findings suggest that this is in fact not the case.
Other Comments Or Suggestions: My main suggestion is regarding the use of the term compositionality. Throughout the paper the authors use it as equivalent to orthogonal/linear representations, yet this is not exactly correct. Rather, having a representation like the one in equation 25 is one way, but not the only way, to implement a compositional representation. Indeed, it is a very popular view of compositionality in DL. However, it is perfectly valid to imagine a representation that does not conform to such notions of linearity/orthogonality, but is still compositional. The only requirement is that whatever mechanism is doing the composition can handle such a representation. Thus, I would suggest that the authors make it clear that this is the implementation of compositionality they are exploring. Moreover, the main issue is whether such a representation is stable in the models being studied.
In fact, the authors' own findings contradict the claim that these are compositional representations, since they fail to generalize systematically to unseen combinations, a fact the authors point to briefly when they state that they are not "ideal". Note that in the Philosophy of Mind literature compositionality implies systematicity; thus if a model is not systematic it cannot be compositional (or "ideally compositional", see Fodor and Pylyshyn). Thus the representations must either be unstable (they deviate from the expected position in latent space for that combination) or such a representation is not enough to ensure compositionality.
Questions For Authors: What happens if in the analysis in Section 4.2, instead of training the decoder and determining PCA directions with both training and testing combinations, the authors only use the training ones and predict/project the test combinations with the resulting model/PCA? Do the test combinations clearly separate from the training ones? Does the nice lattice structure observed there break? I would like to see this analysis.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your thorough review. We will incorporate the feedback into the updated manuscript.
**They do not test a large variety of models when training from scratch (only ResNet-50 is reported), but they claim to have used other baselines in the text. It would be good to include some of these results, even if they are not used in all the conditions. One way to justify not including all test conditions could be to start with simple ones and, should they fare worse than ResNet-50, exclude them from subsequent evaluations.**
Thank you for this suggestion. We agree that including results from additional models would strengthen the completeness of our empirical analysis. We include some of the results of vision transformers and compare them with ResNet50 here by varying the % of combinations for CMNIST (since the model achieves comparable results with 80% of combinations); additionally, we highlight that even under increased number of training points, vision transformer still underperforms (all experiments assumed 60,000 datapoints, except for the last row):
| Dataset | Model | % Combinations ($k/n$) | OOD Accuracy |
|---|---|---|---|
| CMNIST | ResNet50 | 80% (8/10) | 95.1 |
| | ViT | 80% (8/10) | 94.5 |
| | ResNet50 | 40% (4/10) | 66.0 |
| | ViT | 40% (4/10) | 71.0 |
| FunnySprites | ResNet50 | 92% (13/14) | 80.1 |
| | ViT | 92% (13/14) | 66.0 |
| | ViT (120K data) | 92% (13/14) | 57.3 |
We attempted to be as charitable to ViT architectures as possible: we varied the patch size (8 and 16), ViT depth (4, 6, 8), width of each layer (384, 512), number of heads (8, 12), MLP width (384, 512), and learning rate ($1e{-}5$, $1e{-}4$, $1e{-}3$). In the table above, we report the best-performing configuration for each setting. All the models were able to achieve 99.7% accuracy on the ID data, but ViT typically underperformed on the testing data.
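For transparency, the search space described above can be enumerated as a simple grid (illustrative variable names; the training loop over each configuration is not shown):

```python
from itertools import product

patch_sizes = [8, 16]
depths = [4, 6, 8]
layer_widths = [384, 512]
num_heads = [8, 12]
mlp_widths = [384, 512]
learning_rates = [1e-5, 1e-4, 1e-3]

# Every combination of the swept hyperparameters; each entry would be
# trained and evaluated, and the best configuration reported.
grid = list(product(patch_sizes, depths, layer_widths,
                    num_heads, mlp_widths, learning_rates))
```

This yields 2 × 3 × 2 × 2 × 2 × 3 = 144 ViT configurations per dataset setting.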
**My main suggestion is regarding the use of the term compositionality.**
Thank you for raising this point. We agree that the term “compositionality” may not have been the most precise description for the notion we are exploring. Our motivation for using it stemmed from the natural emergence of linear factorization in the from-scratch setting with increased data diversity. However, we acknowledge that compositional structure does not have to be linear. Instead, we propose to use the term “linear factorization” as a specific instantiation of compositionality. We will clarify this in the text.
**In fact, the authors’ own findings contradict the claim that these are compositional representations, since they fail to generalize systematically to unseen combinations. [...] Thus the representations must either be unstable or such a representation is not enough to ensure compositionality.**
Our goal is not to claim that the representations are fully compositional, but rather to study to which extent they are compositional, instantiated in our case as exhibiting a linear factorization. The failure to generalize to unseen combinations highlights the limitations of this structure in the models we study. If our writing gave the impression that we claim full compositionality, we’d be happy to clarify--please do let us know if a particular phrasing seemed misleading.
**What happens if in the analysis in Section 4.2, instead of training the decoder and determining PCA directions with both training and testing combinations, the authors only use the training ones and project the test combinations with the resulting model/PCA? Do the test combinations clearly separate from the training ones? Is the nice lattice structure observed in the projections disrupted?**
The displayed points in Section 4.2 are from the OOD data, i.e., unseen combinations. Specifically, we sample a 2×2 grid of combinations that share concept values (i.e., (i, j), (i, k), (l, j), (l, k)), all from the test set. We will clarify this procedure in the text.
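For reference, the projection variant the reviewer asks about (fit PCA on training-combination features only, then project the test combinations) can be sketched as follows; the random features below are placeholders for actual model representations:

```python
import numpy as np

def pca_fit(X, k=2):
    # Plain PCA via SVD: returns the data mean and top-k principal directions.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(200, 16))  # features of seen combinations
test_feats = rng.normal(size=(40, 16))    # features of unseen combinations

mu, W = pca_fit(train_feats, k=2)
train_2d = (train_feats - mu) @ W.T
test_2d = (test_feats - mu) @ W.T  # lattice structure is not guaranteed here
```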
Regarding the suggestion to perform PCA only on the training data and project the test points using the resulting model: in this case, the lattice structure is not preserved. Intuitively, this is because combinations of concept values can span up to a $2n - 1$-dimensional space. Even if the training data admits a linear factorization, projecting unseen combinations using a PCA fit on the training data alone does not guarantee that the structure will be preserved since the model is trained to classify concepts. This is the reason why we used a smaller set of combinations for visualization purposes. | Summary: This paper studies whether compositional generalization emerges in vision models trained from scratch and large-scale datasets. The authors design simple experiments where two factors, i.e., shape and color, control the dataset. The authors explore different levels of compositions in the training set and offer some discussions based on the results.
Claims And Evidence: - “large pretrained models” can be very misleading. I would suggest the authors always emphasize “vision models pretrained with large-scale datasets”.
- L90 “settings where no restrictions on the scale of compositions are imposed” is not very clear what it means exactly.
- It is challenging to see the relation between this work and others mentioned in related work, especially in the last four paragraphs. This further makes it difficult to judge the contribution of this work.
Methods And Evaluation Criteria: - “ID data” is not explained at all in the paper and is used many times.
Theoretical Claims: yes
Experimental Designs Or Analyses: - L186, “contains additional concept dimensions (like position, orientation, or background) with |Ci| possible values each. For instance, in a dataset with two additional dimensions having |C1| = 8 and |C2| = 12 possible values respectively”, it is unclear why the authors introduce more variation into the compositions while claiming that the paper limits compositionality to 2 concepts for simplicity.
Supplementary Material: I reviewed the dataset sections in the supplementary material.
Relation To Broader Scientific Literature: The key contributions of this paper could facilitate compositionality learning, a core component of human-like AI.
Essential References Not Discussed: - Zerroug et al., A Benchmark for Compositional Visual Reasoning, NeurIPS 2022, shared a highly similar topic.
- Zhou et al., Data Factors for Better Compositional Generalization, EMNLP 2023. Although the paper focuses on NLP domain, it also discusses compositionality from a data perspective.
- Stone et al., Teaching Compositionality to CNNs, CVPR 2017 has demonstrated that training from scratch offers better compositionality generalization.
Other Strengths And Weaknesses: **Strengths**
- The introduction is well-written and easy to follow.
- The studied topic is very important to human-like AI development.
**Weakness:**
- “in this work, we focus specifically on pairwise compositions”: is there any reference that justifies this simplification as reasonable and effective for extension beyond pairs?
My major concerns are:
1. It is unclear what the difference is in the empirical findings from existing work.
2. It is unclear whether the simple experimental setting in this paper can scale up to compositions with various compositionality dimensions in the real world.
Other Comments Or Suggestions: L95, “Vision-language models face specific compositional challenge”, it would be great to explain this a bit more in the text.
L129, “face their own limitations in handling complex visual compositions.”, it would be great to explain this a bit more in the text.
L215, "We study this through two main sets of experiments: pre-training, where we train models from scratch (Section 4), and evaluating pre-trained foundation models’ (FM) compositional abilities…” is very confusing and is probably just a typo? I suggest just rephrasing “pre-training, where we train models from scratch” as “trained from scratch”.
Questions For Authors: Please see comments and suggestions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thorough review. We will incorporate these suggestions in the final version of the manuscript.
**"large pretrained models" can be very misleading**
We agree about emphasizing the vision aspect. However, our models are large within the vision domain—for example, DINO ViT-L contains 300M parameters. Would it be clearer if we added a footnote explaining our criterion for "large-scale models" in the vision context?
**L90 is not very clear what it means**
What we meant is that prior work usually considers settings where the scale of compositions is fixed, rather than systematically varied.
**It is challenging to see the relation between this work and others mentioned in related work, especially in the last four paragraphs.**
We aimed to cover prior work relevant to (compositional) generalization and vision, and highlight challenges. In brief:
- *Compositionality and vision models:* We discuss existing limitations in vision models, e.g. under spurious correlations and how it affects generalization.
- *Scaling and emergent abilities:* Prior work shows scaling can improve performance in (compositional) generalization tasks. We build on this by systematically varying data and model scale across multiple axes.
- *Improving compositionality:* While others introduce architectural or loss-based modifications, we evaluate whether modern scaling practices alone suffice for compositional generalization.
- *Representation learning:* Our approach focuses on the structure of learned representations--specifically, how they support generalization to unseen compositions.
We will revise the text to more clearly explain how our work relates to prior work.
**"ID data" is not explained.**
We agree that this terminology is confusing without an explicit mention of it. By ID data we are referring to the data sampled from the training region of combinations defined by the $(n, k)$ framework we introduced.
**L186: it is unclear why authors introduce more variants to the compositions while claiming that the paper limits its compositionality as 2 for simplicity.** & **Is there any reference to follow and justify whether the simplification is reasonable and effective for its extension?**
We wanted to emphasize that the total dataset size is not restricted by the two concepts we study. While the specific pair of concepts we focus on is limited in size, other concepts in the data can vary freely. This allows us to study a more realistic setting for compositionality. For example, in the real world, when we restrict the pair (color, animal), we know that blue and green pandas aren't common in datasets. However, the pairs that are present (e.g., white pandas) can still vary along many other dimensions, such as size, pose, location, and other attributes.
Additionally, we focus on the two-concept case because we believe it is the simplest compositional setup that still presents challenges for vision models. Given that the total number of concepts is greater than 2, applying further restrictions would only make the task harder. Therefore, we view the two-concept focus as a reasonable simplification.
**Zerroug et al. (2022), Zhou et al. (2023), and Stone et al. (2017) are essential related works not discussed.** & **It is unclear what the difference is in the empirical findings from existing work**
Thank you for these references. We briefly summarize how these works relate to ours:
- Zerroug et al. focus on visual reasoning and data-efficiency, combining representation learning and reasoning over them with repeatedly sampled (i.e. the train and test set has overlap), or procedurally-generated tasks (visual primitives aren't overlapping between training and testing). In contrast, our work centers on classification tasks that rely on strong vision representations and examines generalization when seen concept values at test time are combined in unseen ways.
- Zhou et al. study data compositionality in NLP using primitive-level data augmentation to enhance generalization. While we find a similar conclusion that diverse training data is beneficial, our focus on the vision domain introduces challenges not encountered in language tasks (e.g. making primitive-level augmentations is simple in language, but not in vision; and scaling ID data quantity isn't sufficient to improve generalization).
- Stone et al. modify architectures and losses with object masks to promote compositionality, which relies on injecting priors and using dataset-specific annotations. Our approach, however, investigates compositional generalization from a data-scaling perspective.
We believe our work isolates a specific and underexplored aspect of compositional generalization in vision. Our work, to our knowledge, is the first to consider the scale of compositions across multiple dimensions: types of concepts, number of concept values, number of datapoints used in training, types of datasets, and densities of compositions in a controlled manner.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors addressing my comments. However, my concern about extension to real-world or complex scenarios has not been well addressed. Although the authors emphasized the controllability of the simple environment and highlighted that the current setting could be challenging enough for the evaluated models, I believe it is intuitive to investigate more complex and realistic data since the paper studies compositionality with a focus on data scaling. Moreover, it is still unclear whether the presented results could reflect or have implications for more challenging data and scenarios (the second weakness in my review). Based on this, I decided to keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your follow-up.
We have additionally conducted experiments on the real-world dataset MPI3D [1], containing real photographs with controlled variation factors (examples in Figure 1). Full results are available at: https://anonymous.4open.science/r/7475686574/rebuttal_response.pdf. In short, all of our original claims hold: compositional generalisation remains challenging in from-scratch models (Figure 2), scaling ID data quantity alone does not help (Figure 3, top left), and increasing class diversity or combination coverage improves generalisation (Figure 3, top right and bottom). Pre-trained models show linear factorisation and near-perfect accuracy on colour (Figure 4, left), yet probing shows struggles with harder concepts like shape (Figure 4, right).
While our environments are controlled, they allow us to systematically scale key data factors, and the consistent results across synthetic and real-world data suggest that the difficulties we observe reflect fundamental model limitations. We are not aware of any real-world datasets with known factor structures at the scale and level of control considered in this work. Without full control over the concepts we train on, or when using mislabeled or partially labeled real-world images, it would not be possible to correctly attribute which data factors drive successful generalisation.
Finally, we note that Section 5 evaluates large-scale pre-trained models, trained on diverse real-world data, yet compositional generalisation remains hard--further supporting that this is a core challenge, not an artifact of the experimental setup and the type of data.
In sum, we agree that real-world datasets are important for future work, but we believe this study provides an important first step toward understanding the key factors underlying compositional generalisation, understanding what feature space structure enables it, and to which extent current vision backbones support it.
[1] Gondal, Muhammad Waleed, et al. "On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset." Advances in Neural Information Processing Systems 32 (2019). | Summary: This paper examines whether data scaling improves compositional generalization in vision models, emphasizing the role of data diversity. Through controlled experiments with synthetic datasets, the authors show that models can achieve compositional generalization, but this ability depends on training data diversity rather than quantity.
Claims And Evidence: Basically yes.
Methods And Evaluation Criteria: The experimental setup is somewhat unconvincing:
1) The authors report the average accuracy across all concepts at each epoch (Line 223). What is the significance of reporting average accuracy per epoch? Does it effectively reflect the model's final compositional generalization capability?
2) The use of two independent classification heads to predict each concept in a composition for evaluating compositional generalization is debatable (Line 230). Other setups, such as a similarity-based approach, should be considered to improve the comprehensiveness of the evaluation.
3) The authors claim to have evaluated three probe architectures (Line 255), but these are not mentioned in the experimental results. Additionally, the relevance of evaluating architectures with only minor differences (e.g., one hidden layer or activation function) remains debatable.
4) Do the authors isolate a single variable in their experiments to ensure a controlled comparison? For instance, is the number of training samples kept constant while increasing training data diversity? (Sec. 4.1)
5) Did the authors consider conducting experiments on real-world datasets?
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes, please see "Methods And Evaluation Criteria"
Supplementary Material: Yes, I've reviewed all the Appendix.
Relation To Broader Scientific Literature: This paper explores the impact of data on the compositional generalization ability of vision models, a topic of broad relevance. As deep learning increasingly depends on large-scale data pretraining, optimizing data collection and filtering with compositional generalization in mind can significantly enhance training efficiency and model performance.
Essential References Not Discussed: No
Other Strengths And Weaknesses: As mentioned above, this paper's topic has both theoretical and practical significance, but the experimental setup needs improvement.
Additionally, some parts of the writing require clarification. For example, on line 318, what does performance specifically refer to?
Other Comments Or Suggestions: N/A
Questions For Authors: See `Methods And Evaluation Criteria` and `Other Comments Or Suggestions`.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thorough review. We will clarify the raised points in the revised manuscript.
**L223: What is the significance of reporting average accuracy per epoch? Does it effectively reflect the model's final compositional generalization capability?**
Thank you for pointing this out. We should have stated this more clearly. The metric reported in Line 223 corresponds to the best-performing epoch (i.e., oracle model selection), not an average over epochs. Specifically, we track the average accuracy across both concepts at each epoch and report the highest value observed during training. This provides an estimate of the *upper bound* on achievable performance under ideal model selection. We chose this evaluation to emphasize that even with perfect model selection, the performance gap between ID and OOD is present.
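As a minimal illustration of this selection rule (the per-epoch accuracies below are hypothetical placeholders, not numbers from the paper):

```python
# Hypothetical per-epoch test accuracies for the two concept heads.
acc_concept_a = [0.60, 0.72, 0.78, 0.75]
acc_concept_b = [0.55, 0.70, 0.74, 0.73]

# Average across concepts at each epoch, then report the best epoch:
# an upper bound on performance under ideal (oracle) model selection.
per_epoch = [(a + b) / 2 for a, b in zip(acc_concept_a, acc_concept_b)]
oracle_score = max(per_epoch)
```

Even under this charitable selection, the reported ID/OOD gap persists.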
**The use of two independent classification heads to predict each concept in a composition is debatable (Line 230)**
If we understood your question correctly, by similarity-based, do you mean a CLIP-like objective? Our current design mirrors the supervision used in CLIP and standard classification settings, up to norm-rescaling of the features and weight vectors. The main difference is that our setup assumes a closed-vocabulary setting, and we believe that using two independent classifiers is appropriate since the concepts are independent.
We would be happy to hear more about alternative formulations or suggestions on how to frame this classification problem!
**The authors claim to have evaluated three probe architectures (Line 255), but these are not mentioned in the experimental results.**
Our main goal was to evaluate the pre-trained models' representations as charitably as possible. We did evaluate multiple probe architectures, but our main text only reported the best-performing configuration.
In response to this comment, we now include the average testing accuracies on both concepts across all datasets under 4 settings of data diversity (% of combinations in training), comparing linear probes and MLPs. MLPs most often perform better, but not universally across the datasets. The first number is the linear probe and the second is the MLP ([512, 512]):
| Model | 25% | 50% | 75% | 93% |
|----|---|---|----|----|
| ResNet50 IN1K | 0.59 / 0.55 | 0.67 / 0.65 | 0.75 / 0.75 | 0.79 / 0.82 |
| DINOv1 ResNet50 | 0.60 / 0.67 | 0.71 / 0.80 | 0.76 / 0.88 | 0.80 / 0.92 |
| DINO ViT-L | 0.68 / 0.70 | 0.78 / 0.83 | 0.84 / 0.91 | 0.86 / 0.95 |
| CLIP ViT-L | 0.61 / 0.64 | 0.70 / 0.74 | 0.75 / 0.79 | 0.76 / 0.84 |
**Additionally, the relevance of evaluating architectures with only minor differences (e.g., one hidden layer or activation function) remains debatable.**
While the architectural variations may appear small, we found that going beyond two hidden layers did not result in a substantial increase in performance. Below, we provide comparisons using the FSprites dataset (again for 4 diversity settings); we used these results as justification for focusing on simpler configurations in this work.
| Model / Probe | 25% | 50% | 75% | 93% |
|--------------------|------|------|------|------|
| ResNet50 IN1K (Lin)| 0.38 | 0.52 | 0.57 | 0.66 |
| ResNet50 IN1K (MLP)| 0.35 | 0.47 | 0.55 | 0.64 |
| ResNet50 IN1K (Deep)| 0.34 | 0.46 | 0.55 | 0.63 |
| DINO R50 (Lin) | 0.40 | 0.56 | 0.64 | 0.74 |
| DINO R50 (MLP) | 0.41 | 0.62 | 0.72 | 0.79 |
| DINO R50 (Deep) | 0.41 | 0.60 | 0.71 | 0.79 |
| DINO ViT-L (Lin) | 0.44 | 0.63 | 0.78 | 0.81 |
| DINO ViT-L (MLP) | 0.46 | 0.65 | 0.81 | 0.90 |
| DINO ViT-L (Deep) | 0.45 | 0.64 | 0.81 | 0.90 |
| CLIP ViT-L (Lin) | 0.41 | 0.56 | 0.68 | 0.74 |
| CLIP ViT-L (MLP) | 0.43 | 0.60 | 0.73 | 0.85 |
| CLIP ViT-L (Deep) | 0.42 | 0.59 | 0.72 | 0.82 |
(Lin = linear probe, MLP = [512, 512], Deep = [512, 512, 512])
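For reference, the probing setup described above can be sketched as follows. This is a hedged illustration: synthetic random features stand in for frozen backbone activations, a single binary concept replaces the paper's concept pairs, and the probe sizes are shrunk from [512, 512] for speed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in for frozen backbone features with one binary concept label;
# the real setup extracts features from a pre-trained model and trains
# one independent head per concept.
X = rng.normal(size=(1200, 64))
w = rng.normal(size=(64,))
y = (X @ w > 0).astype(int)  # linearly decodable concept by construction

X_tr, X_te, y_tr, y_te = X[:1000], X[1000:], y[:1000], y[1000:]

# Linear probe vs. a small MLP probe on the same frozen features.
linear = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

print(f"linear probe acc: {linear.score(X_te, y_te):.2f}")
print(f"MLP probe acc:    {mlp.score(X_te, y_te):.2f}")
```

Since the synthetic concept here is linear in the features by construction, both probes succeed; the interesting regime in the paper is when only the non-linear probe recovers a concept.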
**Does the number of training samples remain constant while increasing training data diversity? (Sec. 4.1)**
Yes, we do control for this factor, although we acknowledge that it was not made sufficiently clear in the text. All experiments in Section 4.1 (except for Figure 4) use a fixed dataset size of 60,000 samples where available; for Shapes3D and CMNIST we used 30,000 samples.
**Do the authors consider conducting experiments on real-world datasets?**
We considered real-world datasets with known generative factors, but they are currently limited in compositional complexity. Moreover, real-world datasets offer sparse coverage of concept values, making controlled compositional generalization experiments intractable. We are open to suggestions for real-world datasets that offer controllability.
Finally, although our datasets are synthetic, Section 5 evaluates pre-trained models used in real-world practice and tests how pre-trained vision models cope in compositional settings.
**Line 318, what does performance specifically refer to?**
By performance, we refer to the mean classification accuracy over the considered concept values.
---
Rebuttal Comment 1.1:
Comment: The authors have partly resolved my concerns. I think the experiments should be more extensive.
---
Reply to Comment 1.1.1:
Comment: Thank you for your follow-up. Could you please clarify which concerns you feel we have not addressed in our clarifications and additional results?
**EDIT**:
We have conducted additional experiments on a real-world dataset, MPI3D [1]. This dataset contains real photographs of various objects with controlled factors of variation, such as shape, colour, orientation, size, and position (examples shown in Figure 1). Please find the full results at the anonymised link:
[https://anonymous.4open.science/r/7475686574/rebuttal_response.pdf](https://anonymous.4open.science/r/7475686574/rebuttal_response.pdf)
In short, all of our original claims hold on this dataset as well. Specifically:
- **From-scratch training**:
Compositional generalisation remains problematic. In Figure 2, we show that the model undergoes a large drop in performance for one of the concepts when evaluated on unseen combinations.
- **Scaling ID data quantity**:
As shown in Figure 3 (top left), scaling ID data quantity alone does not help.
Instead, increasing either the number of target classes (top right) or the number of combinations (bottom) improves compositional generalisation performance.
- **Pre-trained models**:
We again observe that linear factorisation is often present. In Figure 4 (left), we show that under this assumption, the models can classify the concepts well above random chance. In this case, all models show near-perfect accuracy on the colour concept.
- **Probing with linear and non-linear classifiers**:
When probing the models (Figure 4, right), they continue to struggle, mostly on the shape concept.
We would like to emphasise that these experiments represent the best we could achieve given the current constraints in compositional generalisation research and evaluation (particularly the availability of concept labels).
We hope this additional evidence further clarifies the robustness of our findings across both synthetic and real-world data.
[1] Gondal, Muhammad Waleed, et al. "On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset." *Advances in Neural Information Processing Systems* 32 (2019). | Summary: Compositional generalization is a fundamental aspect of human intelligence, yet its emergence in machine learning, particularly in vision models, remains unclear. This paper investigates whether increasing data scale contributes to compositional generalization in vision models by systematically isolating the effects of dataset size, diversity, and compositional structure. Through behavioral evaluations and probing latent representations of both pretrained and from-scratch models, the authors find that data diversity—specifically, the ratio of exhaustive compositional samples—plays a crucial role in generalization, rather than sheer data quantity. Their findings suggest that compositional representations emerge only under conditions of sufficient data diversity, where representations become more structured and linearly organized in latent space, facilitating efficient generalization.
Claims And Evidence: The claims presented in the paper are well-supported by experimental results and clearly articulated. The authors systematically control for dataset characteristics and demonstrate that compositional generalization is not merely a function of dataset size but rather of diversity in compositional coverage. Their analysis of pretrained models further substantiates their claims, revealing that large-scale vision models do not inherently develop compositional generalization unless trained on sufficiently diverse data.
Methods And Evaluation Criteria: The methods employed in the study are appropriate for assessing compositional generalization in vision models. The authors introduce a systematic protocol that varies dataset size, concept values, and training combinations while keeping other factors controlled. Their use of synthetic datasets allows precise manipulation of compositional factors, ensuring that their results are not confounded by spurious correlations. The evaluation criteria, including out-of-distribution (OOD) accuracy and probing latent representations, align well with the problem at hand and provide clear insights into model generalization.
Theoretical Claims: The paper does not focus on proving new theoretical results but instead builds on existing theoretical insights into compositional learning. The experimental findings align with prior work on neural network biases and structure in latent space, supporting claims about the importance of data diversity for compositional generalization. The authors also introduce a theoretical analysis showing that only a small number of compositional samples are required for perfect generalization if the feature space is structured compositionally, which is empirically validated in their experiments.
Experimental Designs Or Analyses: The experimental design is rigorous and well-structured, effectively isolating key variables affecting compositional generalization. The study employs controlled datasets where specific combinations of concepts are systematically included or excluded from training, allowing precise analysis of generalization behavior. The comparison between from-scratch models and large-scale pretrained models provides valuable insights into the role of pretraining. However, it would be beneficial to extend the analysis to other modalities, such as language models or vision-language models, to examine whether similar principles apply across domains.
Supplementary Material: No I did not review the supplementary material.
Relation To Broader Scientific Literature: The paper is highly relevant to researchers studying generalization in machine learning, particularly in vision and compositionality. It also holds significance for neuroscience and cognitive science communities, as it provides insights into how structured learning environments influence compositional learning. The findings can inform experimental design in cognitive science by emphasizing the importance of exhaustive combinatorial sampling. Additionally, the study is valuable for researchers working on dataset construction and training strategies, as it suggests guidelines for optimizing training data diversity to improve generalization.
Essential References Not Discussed: A related study by Dorrell et al. (2024) suggests that modularity in latent representations arises from certain input distribution characteristics. This work may provide additional theoretical grounding for the claim that data diversity enhances compositional generalization by promoting structured latent spaces. Including a discussion on how their findings align or contrast with Dorrell et al.’s results would strengthen the paper’s positioning within the literature.
Reference: Dorrell, Will, et al. "Don't Cut Corners: Exact Conditions for Modularity in Biologically Inspired Representations." arXiv preprint arXiv:2410.06232 (2024).
Other Strengths And Weaknesses: The paper is well-written and clearly structured, making it an enjoyable read. The authors conduct an extensive set of experiments that provide strong empirical support for their claims. A notable strength is their systematic approach to controlling dataset characteristics, which allows them to make precise claims about the role of data diversity. However, one potential limitation is the exclusive focus on vision models—extending the study to multimodal or language models could provide a more comprehensive understanding of compositional generalization. Additionally, it would be interesting to explore whether the findings hold for different architectural choices, such as transformer-based vision models.
Other Comments Or Suggestions: 1. Clarify Definitions: The paper would benefit from a clearer distinction between "features" and "concepts." Are features referring to learned latent representations, while concepts are explicit or implicit properties of the stimuli?
2. Further Exploration of Takeaway 4.1: Consider investigating non-uniform distributions of n−k concept combinations to determine whether concepts that exhibit lower degradation in uniform settings maintain robustness under more naturalistic data distributions.
3. Additional Experimentation: If space allows, evaluating whether the results extend to language models or vision-language models would strengthen the paper's impact.
Questions For Authors: 1. **Compositionality and Simplicity Bias**
- In Section 2, why does the simplicity bias lead to model preference for spurious correlations? And why is this issue more pronounced for underrepresented concepts?
2. **Dataset Combinatorial Calculations**
- In Section 3.1, when calculating the number of possible unique images, should the formula be \( n \times k + k \times (n-k) \) instead?
- Since we consider \( n \times k \) combinations for the first dimension, the remaining \( n-k \) concept values of the second dimension should also be combined with the \( k \) possible values of the first dimension.
3. **Figure 3 Analysis**
- What is the boundary value of \( k \) for achieving the observed compositional generalization improvements?
- Additionally, for fixed maximum target classes, does the improvement hold for lower \( n \) values?
4. **Figure 4 Clarification**
- What is the y-axis measuring—OOD accuracy or drop in performance?
- Also, why does FSprites exhibit lower values in general? What factors make it more challenging compared to other datasets?
5. **Figure 7 Interpretation**
- Does the gap indicate that the retrained model demonstrates better compositional generalization performance than a from-scratch model, or does it reflect the limitations of the pretrained model?
6. **Linear Probing in Section 5**
- What is the \( k \) value used in the linear probing analysis?
- How does it compare to the threshold required for compositional representations to emerge?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your thorough review. We will add clarifications to the updated manuscript.
**Clarify of definitions**
In our terminology, “concepts” refer to interpretable properties of the data (e.g., colour, shape, size), while “features” refer to the latent representations extracted by the model.
**Investigating non-uniform distributions**
The goal in our setup was to ensure uniform coverage over observed combinations to avoid introducing imbalances across concept values. While certainly interesting, we are cautious about complexifying the data setting in ways that make the problem harder to control.
**Evaluating language models and VLM.**
Extending to language or vision-language models is challenging. Autoregressive models lack comparable representations. For non-autoregressive ones like CLIP, we probed the vision encoder; probing the text encoder is harder, as it requires generating many prompts to describe concept values. Nonetheless, we consider this an important direction for future work.
**Transformer-based vision models.**
Due to space constraints in this response, we couldn’t include the results for the vision transformer here. Another reviewer also expressed interest in the ViT’s performance--please refer to the table in the response to reviewer `TQRD`.
**Related study by Dorrell et al. (2024).**
Thank you for pointing us to this excellent reference--we will be sure to discuss it in more detail. In short, this work, as well as the work it builds on (Whittington et al., 2023), characterises when the minimal-energy non-negative solution is modular, i.e when each neuron responds to a single task factor. We believe both this framework and ours can be viewed in a similar light. The key distinction lies in the goal: while their focus is on the structure of solutions over the observed data, our work studies whether representations can be predicted from their constituent concept values under novel combinations in the test set, and how data factors influence the structure of representations.
**Simplicity bias and spurious correlations, underrepresented concepts**
Our reasoning is based on prior observations in the debiasing literature. When training data exhibits underspecification--i.e., when multiple hypotheses are consistent with the data--a model may favour simpler hypotheses. In such cases, it may learn to generalize to majority groups and memorize the underrepresented groups [a]. In compositional generalization settings, this happens when some combinations are missing entirely (as in OOD splits), and models default to entangled solutions instead of learning to disentangle concepts in their representations. Concepts, especially in the real world, co-vary strongly in some combinations and not in others, even if the observed concept pairs are not entangled. Because of this sampling bias, models may exploit such spurious correlations.
[a] Sagawa, Shiori, et al. "An investigation of why overparameterization exacerbates spurious correlations." ICLR 2020.
**In Section 3.1, formula for unique images**
In Section 3.1, we were referring specifically to the number of training combinations observed in the $(n,k)$ framework, but we will clarify this distinction in the revised text.
**Figure 3: boundary value of $k$**
If we understood your question correctly: The boundary value of $k$ corresponds to $n-1$, that is, training on all but one combination per concept value.
**Figure 3: For maximum target classes, does the improvement hold for lower $n$?**
Yes, and we believe Figure 10 in the appendix demonstrates this: for fixed $k$, increasing $n$ lowers generalization performance; conversely, for fixed $n$, increasing $k$ improves it.
**Figure 4: What is the $y$-axis measuring? Also, why does FSprites exhibit lower values in general?**
The $y$-axis in Figure 4 shows OOD accuracy. FSprites performs worse than other datasets, likely due to its more complex and visually similar shapes, which are harder to disentangle. Also, orientation may not align well with shape features, encouraging memorisation of joint shape-orientation co-variations instead of separate concept learning.
**Figure 7: Does the gap indicate that the retrained model demonstrates better compositional generalization performance than a from-scratch model, or does it reflect the limitations of the pretrained model?**
It indicates both. The retrained model benefits from better compositional generalization due to exposure to more diverse or better-structured training data. However, the presence of a gap also reflects the limitations of the pretrained models to generalize compositionally.
**Linear Probing in Section 5: What is the $k$ value used in the linear probing analysis? How does it compare to the threshold required for compositional representations to emerge?**
We varied $k$ from 2 to $n-1$, showing only aggregated results across datasets. While the exact threshold varies, $k=2$ is theoretically the minimum needed per dataset. | null | null | null | null | null | null |
Local Pan-privacy for Federated Analytics | Accept (poster) | Summary: This paper explores three fundamental problems within the framework of local pan differential privacy: estimating the number of non-zero entries, calculating the mean, and constructing histograms. The authors begin by establishing a lower bound for counting non-zero entries, which includes a $\sqrt{T}$ factor, indicating that local pan privacy may be significantly more challenging than local differential privacy. They then showed how to remove this dependence by relaxing to computational local pan privacy: they proposed an algorithm leveraging rerandomizable public-key encryption, and extended the result to other two problems. Finally, they presented a theorem suggesting that the rerandomizable public-key encryption scheme is necessary for computational 0-local pan-privacy.
Claims And Evidence: The proofs presented in the paper are clearly articulated and well-structured, effectively supporting the authors' claims.
Methods And Evaluation Criteria: Yes, estimation error serves as the standard metric for assessing the performance of a differentially private algorithm.
Theoretical Claims: I reviewed all the proofs and found no significant flaws.
Experimental Designs Or Analyses: N/A
Supplementary Material: N/A
Relation To Broader Scientific Literature: This paper introduces algorithms for several fundamental problems, which could serve as foundational components for addressing other tasks within the realm of local pan differential privacy.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The paper is well-written, making it accessible to non-experts.
2. The results are intriguing. Although the techniques used to prove them are not particularly sophisticated, they effectively illustrate the fundamental limits of pan privacy within the local model.
Weakness:
The paper lacks experimental results. While this is acceptable for a theoretical study, including simulations or even a basic complexity analysis (e.g., time and space considerations) could enhance the demonstration of the proposed methods' advantages over fully homomorphic encryption (FHE).
Other Comments Or Suggestions: 1. In Theorem 7, it appears that one $\delta$ is missing -- the aggregator algorithm should be $(\varepsilon,\delta)$-DP.
2. In the "if r = 0" part of Algorithm 6, the superscripts are incorrect -- you wrote $t$ instead of $T$. Additionally, should it be $T+1$ as in Algorithm 3?
3. In Theorem 6.1, should the number of allowed rerandomizations be $T - 1$? Because after creating the ciphertext using $x_1$, there are only $T-1$ transitions left.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback, and for pointing out the typos in Thm 7, Thm 6.1, and in Algorithm 6. We will fix those in the updated version.
**Public-key vs. FHE**: there are very large gaps between the existing implementations of these primitives. Indeed, public-key encryption based aggregation schemes are already used, both in the single-server and the two-server setting. In practice, these schemes incur a fairly small overhead in run-time and communication, and our algorithms, which require re-randomization at each step, would add only a small additional overhead. FHE implementations currently incur very large overheads, not just for computation but also for communication. E.g. in Google’s FHE library, the encryption of a single bit is nearly 3kB, compared to the 256 bytes used to encrypt a field element in HPKE. From a computational point of view, our algorithms just add re-randomization operations at each step, which are relatively inexpensive. We will add a more detailed discussion of these costs in an updated version. We also remark that our algorithms effectively implement the standard algorithms for counting and histogram estimation in the local DP model and thus incur no overhead in terms of utility. Indeed, for these applications, these algorithms are known to be optimal in terms of privacy-utility trade-offs. | Summary: This paper introduces a privacy protection model termed local pan-privacy, aiming to address privacy leakage issues in federated analytics when local devices are subject to continuous intrusions. The authors first theoretically prove that achieving pan-privacy in an information-theoretic sense leads to estimation errors at least $\sqrt{T}$ times larger than traditional local differential privacy methods. To overcome this limitation, they propose a computationally secure Local Pan-Privacy model based on rerandomizable public-key encryption schemes. Specifically, this method protects privacy by storing encrypted states locally, updating the encrypted state via rerandomization, and using the randomized response mechanism to securely transmit ciphertexts to the server.
This design enables accurate computation of the number of devices experiencing a particular event (COUNTNONZERO) and the histogram of event occurrences, without significant loss in practical performance. Moreover, the proposed approach is effective in both single-server and two-server aggregation settings.
Claims And Evidence: (1) Overall, both the theoretical claims and methods put forward in this paper are supported by clear and rigorous mathematical proofs, and thus enjoy a relatively high level of credibility at the theoretical level. Nevertheless, the “no additional cost” claim, despite being proven at the theoretical level, lacks sufficient support from real-life scenarios or experiments and awaits further verification.
(2) The paper clearly expounds on the deficiencies of the existing local differential privacy and traditional pan-privacy methods in the scenario of “privacy protection when client devices in federated analytics are subject to continuous intrusions”, and puts forward an improved scheme based on cryptography. However, the authors did not present experimental verification.
(3) The theoretical derivation and proof process in the paper are logically clear, and no obvious errors or derivation problems have been found. For example, it is proved in the paper that the cost of pan-privacy under information-theoretic constraints is relatively high, while the cryptography-based local pan-privacy incurs no significant overhead.
(4) The paper does not include any experimental section.
The supplementary materials provide proofs for Theorem 12, Theorem 13 and Theorem 6.1. Meanwhile, the pseudocodes of Algorithm 4, Algorithm 5 and Algorithm 6 are also supplemented. These are mainly used to support Theorem 8, Theorem 9, Lemma 3.1 and Lemma 3.2.
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: yes
Relation To Broader Scientific Literature: (1)The local pan-privacy proposed in this paper is an innovative extension based on the classic differential privacy and the existing pan-privacy theories. Meanwhile, this research also ingeniously applies the existing rerandomizable public-key encryption schemes in the field of cryptography. The application of such schemes is a further innovative integration based on the existing cryptographic theories and practices like PRIO.
(2)The scope and relevance of the citations in the paper are currently good. No work that is highly relevant to the paper has been found to be uncited yet.
Essential References Not Discussed: no
Other Strengths And Weaknesses: (1) This paper demonstrates strong theoretical originality and plays an important role in promoting the development of the privacy field.
Other Comments Or Suggestions: no
Questions For Authors: (1) The paper mainly focuses on simple statistical tasks, such as the COUNTNONZERO problem mentioned above. Does the approach also apply, or face limitations, in more complex statistical or machine learning tasks?
(2) Theorem 3 states that exact local pan-privacy algorithms require a public-key encryption scheme. Does this rule out the possibility of using lightweight cryptography, such as symmetric encryption or hash functions? Or does it imply that public-key encryption is the only viable option or the best choice in the context of the paper?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback.
**Choice of tasks**: In this work, we have focused on the simplest and most natural tasks in this new model. These histogram-type tasks have also been used in federated analytics applications to enable a larger class of statistical tasks. To our knowledge, nearly all implemented federated analytics systems build on histograms as the basic building block. We leave to future work more complex tasks, and believe that the techniques in this work would be useful for other tasks as well.
**The need for PKC**: We will discuss the implications of Theorem 3 in more detail. Constructing public-key crypto (PKC) from symmetric-key encryption or hashing is a major and long-standing open problem in cryptography, and it is widely believed that PKC is "harder" to build. Theorem 3 implies that constructing accurate algorithms for countnonzero under local pan-privacy from symmetric-key crypto or hashing is at least as hard as constructing PKC from these primitives (and thus at the very least it requires resolving a central and long-standing open question in crypto).
---
Rebuttal Comment 1.1:
Comment: The response is not clear.
---
Reply to Comment 1.1.1:
Comment: We assume the reviewer has found our explanation of the PKC result unclear. We elaborate below.
Our Theorem 3 says that if we have a locally pan-private algorithm for CountNonZero, then we can use it to build a public-key cryptosystem (PKC). I.e. Local-pan-privacy => PKC.
We take that to mean that lightweight crypto such as symmetric-key-crypto (SKC, or hashing) is not going to be sufficient for building Locally pan-private algorithms.
The justification for this is the following: suppose that we could build locally pan-private algorithms from symmetric-key crypto (SKC). Then, using Thm 3, we would get a construction of PKC from SKC. It is widely believed that PKC needs stronger assumptions than SKC, and thus we don’t expect to be able to construct PKC from SKC (this is a long-standing major open problem in cryptography). Thus Thm 3 leads us to conclude that we should not expect to be able to construct locally pan-private algorithms from SKC.
We also remark that this is the standard approach in the literature to proving that some primitive cannot be built from SKC, and this distinction is indeed important since in practice SKC can be more efficient than PKC. | Summary: The paper introduces and examines the concept of *local pan-privacy*, which assumes that an adversary can eavesdrop on a local device’s internal memory. As a result, clients must ensure that their *internal states* remain secure. Under this constraint, the authors investigate (federated) analytics problems such as *count-nonzero* and *count-histogram* in a streaming setting, where each client observes a bit stream. The goal is to privately release either the total number of nonzero bits or a histogram of bit occurrences while maintaining local pan-privacy.
The paper makes two key contributions. First, it establishes an information-theoretic lower bound, demonstrating that for the *count-nonzero* problem, local pan-privacy introduces an error proportional to $\sqrt{T}$, where $T$ is the length of the bit stream. In contrast, standard local differential privacy (DP) does not impose this additional error. Second, the authors propose a *computational* pan-DP approach that circumvents this fundamental limitation. Their method is conceptually simple: it encrypts the internal memory, decrypts it when needed, and then applies a privatization mechanism before transmitting data to the server.
Claims And Evidence: The proofs of the theorems are correct to my best knowledge.
Methods And Evaluation Criteria: N/A since this is a theory paper.
Theoretical Claims: The proofs of the theorems are correct to my best knowledge.
Experimental Designs Or Analyses: N/A since this is a theory paper.
Supplementary Material: I've skimmed through the additional proofs.
Relation To Broader Scientific Literature: The references seem to be adequate.
Essential References Not Discussed: The references seem to be adequate.
Other Strengths And Weaknesses: **Strengths:**
While the two key contributions of the paper—the information-theoretic lower bound and the computational upper bound—are relatively simple, the central message is intriguing. The paper highlights that by relaxing the definition of differential privacy from an information-theoretic to a computational setting, one can reduce the estimation error by a factor of $\sqrt{T}$ in the count-nonzero problem. This result establishes an interesting distinction between information-theoretic and computational differential privacy.
**Weaknesses:**
Although the paper presents an interesting theoretical result, I have concerns about its practical applicability, particularly given the strong assumptions of the threat model. If an adversary has the capability to eavesdrop on internal states and memory, wouldn’t the public-key encryption system also be vulnerable to compromise? Related to that, I would appreciate if the authors can elaborate on the threat model in the framework.
Other Comments Or Suggestions: See the above comments.
Questions For Authors: Is the information-theoretic lower bound established in Theorem 9 tight in an information-theoretic sense? In other words, does there exist an information-theoretic pan-privacy scheme that consistently achieves this error for any input stream? Note that the mechanism constructed in Lemma 3.1 is *not* $\varepsilon$-local DP, as in the worst-case scenario, every bit in $x_1, \dots, x_T$ could change. Consequently, while Theorem 9 provides a valid lower bound, it remains unclear whether it is a tight one.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback.
**Threat model**: We will clarify, motivate and illustrate the threat model. Our main motivating example is a shared device: a computer in a public library, a shared device in a home or other setting. The attacker is another user of the device, who can observe its state regularly. This is the same kind of attack model that private browsing modes in browsers are designed to address. A stolen device may also lead to an attack of this kind. In our schemes, it is important that only the public encryption (and re-randomization) key is held on the device: even if an attacker sees the entire internal state, it still cannot decrypt ciphertexts.
**Tightness of lower bound**: Our lower bound instances have the property that each stream has at most a single ‘1’, and the lower bound can be shown to be tight for this case. We conjecture that in the general case where the stream may have multiple ‘1’s, the lower bound would be larger by a $\sqrt{T}$ factor. We discuss this briefly in the Conclusions section, and will elaborate on this.
---
# OrcaLoca: An LLM Agent Framework for Software Issue Localization
Paper Decision: Accept (poster)
---
Summary: This paper proposes a novel framework for automated software issue localization tasks. Issue localization is a critical component of autonomous software engineering, making the problem addressed in this paper well-motivated. The proposed technique combines several strategies to improve localization accuracy and efficiency:
- Use of a priority queue to identify higher-relevance or more urgent actions first.
- Decomposition of search actions for finer-grained and more relevant operations.
- Preservation of the most relevant code snippets as the agent iteratively retrieves them using a shortest-path heuristic on the code graph (a graph-based representation of the codebase that facilitates indexing and efficient search while capturing both code containment and references).
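A minimal sketch of how these pieces might fit together (illustrative only; the class names, scoring, and graph layout are our assumptions, not OrcaLoca's actual API): actions are kept in a max-priority queue keyed by an LLM-assigned relevance score, and retrieved snippets are pruned by shortest-path distance on the code graph.

```python
import heapq
from collections import deque

class ActionQueue:
    """Max-priority queue of (relevance, action); heapq is a min-heap, so negate."""
    def __init__(self):
        self._heap, self._n = [], 0
    def push(self, relevance, action):
        heapq.heappush(self._heap, (-relevance, self._n, action))  # _n breaks ties
        self._n += 1
    def pop(self):
        return heapq.heappop(self._heap)[2]

def shortest_distance(graph, src, dst):
    """BFS hop distance on an undirected code graph given as an adjacency dict."""
    seen, frontier = {src: 0}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            return seen[node]
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                frontier.append(nbr)
    return float("inf")

def prune(snippets, graph, anchor, max_dist=2):
    """Keep only snippets within max_dist hops of the suspected anchor node."""
    return [s for s in snippets if shortest_distance(graph, anchor, s) <= max_dist]

q = ActionQueue()
q.push(0.9, "search_class('QuerySet')")   # hypothetical actions
q.push(0.4, "search_file('utils.py')")
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(q.pop())                            # highest-relevance action first
print(prune(["b", "d"], graph, "a"))      # 'd' is 3 hops from 'a', so it is pruned
```

The negated score plus a monotonic counter is the standard way to get stable max-heap behavior out of Python's min-heap `heapq`.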
Experiments are conducted on the widely used SWE-bench Lite benchmark for Python projects. The authors report:
- A new open-source state-of-the-art Function Match Rate of 65.33% (an approximately 2% improvement over the previous open-source best).
- A File Match Rate of 83.33%.
- A final Resolved Rate (percentage of issues successfully fixed by the generated patches) of 41.00%.
Additionally, an ablation study is conducted on a smaller subset of SWE-bench, demonstrating that each component of the proposed approach positively contributes to the overall improvements.
## update after rebuttal
Thank you for your response to my questions. I don’t have any further questions at this time. After considering all of the discussions here, I’ve decided to keep my original score.
Claims And Evidence: Overall, the paper’s main claims are well-supported by experimental results. Specifically, it provides detailed evidence for the following:
- OrcaLoca achieves a new state-of-the-art function match rate among open-source systems.
- OrcaLoca improves the final resolved rate by 6.33 percentage points over a baseline system.
- Priority-based scheduling and action decomposition enhance relevance and reduce noise in context.
Methods And Evaluation Criteria: The authors use standard SWE-bench metrics (Resolved Rate, Function Match Rate, and File Match Rate) along with a new metric, Function Match Precision, to measure localization accuracy. This approach is reasonable and aligns with established practices in the emerging field of LLM-based bug fixing.
Theoretical Claims: The paper does not present any formal theoretical claims or proofs.
Experimental Designs Or Analyses: OrcaLoca is tested on the SWE-bench Lite, a widely used curated set of 300 real-world GitHub issues. The authors also conduct an ablation study on a smaller subset, SWE-bench Common. They compare with 17 solutions (both closed-source and open-source). I think this is fairly comprehensive evaluation of their technique. The paper also systematically removes each of the major components to gauge its importance on localization performance.
Supplementary Material: I skimmed through the appendices of the paper, which provide extended details on the approach and additional information about the experiments.
Relation To Broader Scientific Literature: The related works section of the paper is fairly comprehensive and includes sufficient details on prior LLM-based fault localization and debugging frameworks (including Agentless, AutoCodeRover, Repograph, as well as spectrum-based and mutation-based approaches for debugging). I agree with the authors that the main novelty of this paper is the application of multi-step agent designs to codebase-level tasks. The approach successfully bridges the gap between large LLM text-generation capabilities and more “surgical” code exploration techniques.
Essential References Not Discussed: In general, the references from the emerging field of LLM-based bug fixing are fairly comprehensive.
Other Strengths And Weaknesses: Strengths:
- The three major components (priority scheduling, decomposition, distance-based pruning) are distinct yet cohesive ideas. I think the paper is giving future researchers a clear blueprint for adaptation or replication.
- Strong empirical results: SWE-bench Lite is a practical, recognized dataset, making the results fairly credible. The jump in function match rate and final resolved rate, plus ablation tests, provide strong evidence that each piece is adding value.
Weaknesses:
- While the overall approach is innovative, parts of the paper are difficult to parse. Section 3.1, in particular, introduces multiple details in quick succession. More introductory context would help guide readers through the agent workflow. Additionally, the paper references external APIs from prior systems (e.g., “Consider the previous agent API used by systems like (Zhang et al., 2024b; Ma et al., 2024a)”) without summarizing them, which may confuse less-informed readers. I encourage the authors to use available space more effectively to provide sufficient grounding before diving into technical details. Ensuring the paper is self-contained and introducing the agent framework more gradually would significantly improve readability.
- The approach is tested entirely on Python issues. It’s unclear how well the code-graph approach generalizes to, say, Java projects or cross-language scenarios.
Other Comments Or Suggestions: It would be interesting to see a break-down of performance across different types of issues (e.g., small vs. large repositories, etc.).
Questions For Authors: - Q1: How do you envision OrcaLoca’s performance scaling for extremely large repositories (e.g., tens of thousands of files, multi-million-line codebases)?
- Q2: Do you anticipate any major challenges when applying OrcaLoca to other programming languages or multi-language repositories?
- Q3: Your approach relies on shortest paths in a code graph. For large repositories, do you cache or approximate these distances, or are they computed dynamically each time?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful feedback and for highlighting several key strengths of our work. We listed our responses for weaknesses and questions below:
**Weakness 1:** While the overall approach is innovative, parts of the paper are difficult to parse... Ensuring the paper is self-contained and introducing the agent framework more gradually would significantly improve readability.
**Response:**
Thank you for your useful advice. We agree that Section 3.1 contains a lot of technical information in a small space, which may hinder readability. In the revision, we will summarize the APIs utilized in previous systems (Zhang et al., 2024b; Ma et al., 2024a) to make the paper more self-contained and accessible to a wider audience.
**Weakness 2:** The approach is tested entirely on Python issues. It’s unclear how well the code-graph approach generalizes to Java or cross-language settings.
**Response:**
This is an important point. Currently, we rely on SWE-Bench, which is the state-of-the-art benchmark so far for evaluating software agents and is Python-based. We recently found more new benchmarks like SWE-Gym and LocBench, but these are also Python-based. We recognize the need for broader generalization and are actively working on developing a cross-language benchmark to evaluate agent performance in more diverse environments. Our graph-based framework could be improved further to support other languages with appropriate parsing and indexing backends.
**Q1:** How do you envision OrcaLoca’s performance scaling for extremely large repositories (e.g., tens of thousands of files, multi-million-line codebases)?
**Response:**
This is an excellent question. Scaling to extremely large repositories such as Linux or Android codebases is an exciting but challenging direction. Key bottlenecks include efficient index management and the context management of CoT (Chain-of-Thought) reasoning during retrieval. In future work, we aim to incorporate an efficient Process Reward Model (PRM) to compress reasoning chains while reducing overall token consumption. We are also exploring hierarchical indexing and chunking strategies to handle large codebases more efficiently.
**Q2:** Do you anticipate any major challenges when applying OrcaLoca to other programming languages or multi-language repositories?
**Response:**
Applying OrcaLoca to other single-language repositories generally presents no fundamental issues, as our system operates on the code structure and relationships. However, languages with strong type systems (e.g., C++, TypeScript) offer opportunities for type inference integration, which could enhance LLM reasoning by leveraging compiler-level information. This could reduce reasoning effort and improve localization accuracy.
For multi-language repositories—especially common in ML systems (e.g., Python + C++ via PyBind)—the main challenge lies in cross-language linking. We are currently extending our index to infer connections across language boundaries and enhance reasoning with cross-language semantic context. This will require both richer static analysis and more advanced reasoning strategies.
**Q3:** Your approach relies on the shortest paths in a code graph. Are these distances cached or computed dynamically?
**Response:**
Currently, shortest-path distances are computed dynamically, as our current system serves primarily as a proof of concept. That said, we recognize the performance bottleneck this can introduce. In future iterations, we plan to add a caching layer and incremental computation mechanisms to improve efficiency, especially for large or frequently accessed repositories.
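One lightweight way to realize the caching layer mentioned above (a sketch under our own assumptions, not the actual OrcaLoca implementation) is to memoize the BFS frontier per source node, so repeated distance queries against the same anchor are amortized to a single traversal:

```python
from collections import deque
from functools import lru_cache

# Hypothetical code graph: modules as nodes, references as edges.
GRAPH = {"main": ["utils", "models"], "utils": ["main"],
         "models": ["main", "db"], "db": ["models"]}

@lru_cache(maxsize=None)
def distances_from(src):
    """Run BFS once per source; results are cached across repeated queries."""
    dist, frontier = {src: 0}, deque([src])
    while frontier:
        node = frontier.popleft()
        for nbr in GRAPH.get(node, ()):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                frontier.append(nbr)
    return dist  # note: cached dicts should be treated as read-only

def distance(src, dst):
    return distances_from(src).get(dst, float("inf"))

print(distance("main", "db"))   # 2
print(distance("main", "db"))   # served from cache, no second BFS
```

In a real deployment the cache would need invalidation whenever the repository (and hence the graph) changes, which is the incremental-computation concern the rebuttal alludes to.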
---
Rebuttal Comment 1.1:
Comment: Thank you for your response to my questions. I don’t have any further questions at this time. After considering all of the discussions here, I’ve decided to keep my original score and recommend acceptance of the paper.
---
Summary: This paper introduces OrcaLoca, an LLM agent framework that improves software issue localization by integrating three key components: priority-based scheduling for LLM-guided actions, action decomposition with relevance scoring, and distance-aware context pruning. Experimental results demonstrate that OrcaLoca achieves good performance on the SWE-bench Lite benchmark with a function match rate of 65.33% and a file match rate of 83.33%. Additionally, by integrating patch generation capabilities, OrcaLoca improves the resolved rate by 6.33 percentage points compared to existing frameworks.
Claims And Evidence: The claims in the paper are well-supported by comprehensive experimental evidence. The authors provide detailed performance metrics on the SWE-bench Lite benchmark, comparing OrcaLoca against 17 different approaches.
Methods And Evaluation Criteria: The evaluation methodology is sound and appropriate for the problem. The authors use established benchmarks (SWE-bench Lite and SWE-bench Verified) and metrics (Resolved Rate, Function Match Rate and File Match Rate) that are standard in the field.
Theoretical Claims: The paper does not make formal theoretical claims requiring proof verification.
Experimental Designs Or Analyses: I carefully examined the experimental designs and analyses presented in the OrcaLoca paper. Overall, the experimental methodology appears sound, with appropriate benchmarks, metrics, and comparative analyses.
However, **there are also some limitations**: more in-depth analysis of ablation experiments, evaluation on multiple models, and evaluation of costs compared to other methods.
Supplementary Material: No other supplementary materials.
Relation To Broader Scientific Literature: The paper thoroughly discusses related work in LLM-based fault localization, positioning OrcaLoca's contributions in the context of existing approaches. The authors clearly identify limitations in previous systems such as Zhang et al. (2024a), Wang et al. (2024b), and Agentless (Xia et al., 2024), explaining how OrcaLoca addresses these limitations.
Essential References Not Discussed: The paper covers most essential references in the field.
Other Strengths And Weaknesses: **Strengths**
- The paper addresses a significant challenge in Autonomous Software Engineering (ASE) with a novel approach that demonstrably outperforms existing methods.
- The framework design is comprehensive, tackling multiple identified challenges in LLM-based localization.
- The analysis of unique localizations (Figure 4) demonstrates OrcaLoca's ability to solve issues that other systems cannot.
**Weaknesses**
- **Complexity of the approach**: While the ablation studies show that each component contributes to performance, the overall design is complex with multiple interacting parts. The results show that removing individual components reduces performance by 3-5 percentage points, which raises questions about whether a simpler design might achieve comparable results with less complexity.
- **Computational overhead**: The paper doesn't thoroughly discuss the computational costs associated with OrcaLoca's approach. While Section 4.1.3 mentions that "the cost of searching is about 0.87, the cost of editing is about 0.90 per instance," a more detailed analysis of time and resource requirements compared to other approaches would strengthen the paper.
- **Model dependency**: The experiments primarily use Claude-3.5-Sonnet-20241022, and it's unclear how the approach would perform with other widely used models like GPT-4o or open-source models. Testing with a broader range of LLMs would demonstrate the generalizability of the approach.
Other Comments Or Suggestions: A more in-depth analysis of **complex components** would be helpful to improve this paper.
Questions For Authors: See about "Other Strengths And Weaknesses".
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful and constructive feedback, as well as for highlighting several key strengths of our work. We list our responses to each weakness below:
**Weakness 1:** The complexity of the approach
**Response:**
Thank you for your suggestions. We are actively exploring strategies to simplify the workflow while maintaining localization performance. One promising direction involves using a reward model to efficiently pre-score relevant code indices in a single pass, followed by a more powerful LLM that processes only the high-value candidates. However, these methods are still under experimentation. This area is relatively new, so there is a large design space still to explore. We are also very interested in whether a more elegant and efficient localization method exists.
**Weakness 2:** Computational overhead
**Response:**
Thank you for highlighting the importance of a detailed computational cost analysis. We agree that comparing resource usage across approaches is valuable for readers.
We chose token cost as our metric because LLM inference dominates our system's overall time and monetary expenses. For time analysis, since we primarily rely on API services from model providers, runtime can be seen as roughly proportional to the token count.
In our revised manuscript, we will include a table summarizing the token cost for each agent:
| Agent | Cost |
| ---|---|
| OpenHands | 1.14 |
| SWE-Agent | 1.62 |
| AutoCodeRover | 1.3 |
| Agentless | 1.05 |
| OrcaLoca | 1.77 |
| OrcaLoca-batch | 1.48 |
Notably, over half of OrcaLoca's cost is attributed to the editing phase (0.90 out of 1.77), which is influenced by the specific edit tool used.
Although we primarily target performance and accuracy in this paper, localization has large potential for efficiency optimization, as shown by OrcaLoca-batch.
In OrcaLoca-batch, we implemented a batched-actions optimization for the localization process, where we extract the top batch of steps from the priority queue at once.
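The batched extraction can be sketched as follows (our illustration, not the actual implementation): instead of popping one action per LLM round-trip, the top-k entries of the relevance-ordered heap are popped and executed together, cutting the number of reasoning iterations.

```python
import heapq

def pop_batch(heap, k):
    """Pop up to k highest-relevance actions from a heap of (-score, action)."""
    batch = []
    while heap and len(batch) < k:
        batch.append(heapq.heappop(heap)[1])
    return batch

# Hypothetical pending actions, keyed by negated relevance score.
heap = [(-0.9, "inspect QuerySet.__or__"), (-0.7, "read tests/test_q.py"),
        (-0.3, "grep 'combine'")]
heapq.heapify(heap)
print(pop_batch(heap, 2))   # two most relevant actions handled in one batch
```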
The table below summarizes the old and new token costs per instance, with the ratio (New Cost / Old Cost) indicating the improvement:
| Inst ID | Old Cost | New Cost | Ratio |
| ---|---|---|---|
| django-13551 | 0.3 | 0.26 | 0.87 |
| django-15814 | 1.44 | 0.97 | 0.67 |
| django-16255 | 0.17 | 0.18 | 1.06 |
| pylint-7228 | 0.71 | 0.66 | 0.93 |
| pytest-8906 | 1.93 | 0.87 | 0.45 |
| scikit-learn-13439 | 0.31 | 0.21 | 0.68 |
| sympy-14774 | 0.53 | 0.15 | 0.28 |
| sympy-15011 | 1.14 | 0.64 | 0.56 |
| sympy-16792 | 1.05 | 0.64 | 0.61 |
| sympy-24213 | 0.55 | 0.2 | 0.36 |
Due to the experiment cost budget, we sampled 10 issues from the original SWE-bench Lite with different levels of token cost. Using weighted averages based on sampled instances in each cost bin, we estimate that the per-instance cost for localization was reduced by an average of 34% (from 0.87 to 0.58), with no adverse effect on localization correctness.
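As a sanity check on the table's numbers (our own simple aggregation; the rebuttal's 34% figure instead uses weighted cost bins, which we do not reproduce here), recomputing the per-instance ratios and an unweighted aggregate over the ten sampled instances gives:

```python
# Old/new per-instance token costs, copied from the table above (same row order).
old = [0.30, 1.44, 0.17, 0.71, 1.93, 0.31, 0.53, 1.14, 1.05, 0.55]
new = [0.26, 0.97, 0.18, 0.66, 0.87, 0.21, 0.15, 0.64, 0.64, 0.20]

# Per-instance ratios (New Cost / Old Cost), matching the table's last column.
ratios = [round(n / o, 2) for n, o in zip(new, old)]
print(ratios)

# Aggregate ratio over the raw sample (unweighted, unlike the rebuttal's binning).
agg = sum(new) / sum(old)
print(round(agg, 2))   # ~0.59, i.e. roughly a 41% reduction on this raw sample
```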
We are also committed to further refining OrcaLoca and will explore additional optimizations (e.g., implementing kv-cache techniques) in future work.
**Weakness 3:** Model dependency
**Response:**
We agree that model generalization is an important concern. In our experiments, we have in fact tested models such as GPT-4o and Gemini 2.0, and they worked well on our test samples, suggesting our agent framework is model-agnostic. However, due to budget constraints, we could not exhaustively evaluate a wider range of LLMs. We primarily tested Claude because it had the strongest code-reasoning capability at the time. In future work, we plan to include open-source models such as Qwen, LLaMA, and others, particularly once we fine-tune and serve our models locally, which will greatly reduce the cost of repo-level benchmark evaluation.
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's reply, I will maintain my positive score.
---
Summary: The authors presented a new agentic framework called OrcaLoca to identify relevant code and resolve issues in software engineering problems. They introduce 3 new approaches: giving each action a relevancy score, maintaining a priority queue of actions where they are dynamically reordered based on contextual relevance, and a context manager which prunes the search context.
These methods ensure that actions closely aligned with the issue are executed first, resulting in a more focused search. The context manager further helps by actively filtering out irrelevant context, reducing noise. Overall, this framework offers a precise approach to software issue localization, making it valuable for finding and fixing issues in large-scale codebases.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: NA
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes
Relation To Broader Scientific Literature: Most agentic workflows involve adding reasoning, few-shot examples, and improved retrieval to boost performance. However, their actions are usually static, so exploration can suffer from noise accumulation. OrcaLoca improves on this idea by introducing relevance scores for actions, a priority scheduling mechanism, and context pruning techniques, improving the overall search.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths
- The paper presents novel approaches to address the problems caused by static action prediction from LLMs, especially when search space is large. This could be translated to other agentic framework such as searching from large unstructured databases.
Weaknesses
- Although their approach leads to SOTA performance on Function and File Match, i.e., the software issue localization problem, these improvements do not translate into the highest Resolved rate. The authors should comment on this discrepancy. It would be valuable to understand why a method like Kodu-v1 achieved a higher Resolved rate despite having a significantly lower Function Match rate.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful and constructive feedback, as well as for highlighting several key strengths of our work. We also thank you for raising this important point regarding the discrepancy between Function/File Match performance and the Resolved rate in baselines.
Generally speaking, higher localization performance usually provides a better foundation for the downstream editing stage (Fig. 1). However, since the editors differ across implementations, a higher localization rate does not guarantee a high final resolved rate. For instance, although the HyperAgent and RepoGraph agents score exactly the same on Function Match in Table 1, HyperAgent's weaker editing capability results in a final resolved rate lower than that of RepoGraph.
Another reason lies in differences in agent workflow and stage planning. In modular systems where localization is a separate step, it provides an exact starting point that can greatly improve the quality of the final changes, as shown in Table 2. However, not all agents follow a strictly modular localization → edit pipeline. Some approaches, such as SWE-Search [1], interleave editing and localization stages, where editing feedback can influence and refine localization in an iterative loop. In such designs, the final Resolved rate may align more directly with localization performance because the two components are tightly coupled. Kodu-v1 may also use particular algorithms that set up a tighter link between its localization and editing stages.
[1] SWE-Search: Enhancing Software Agents with Monte Carlo Tree Search and Iterative Refinement
https://arxiv.org/abs/2410.20285
Note: We discovered that we had omitted this study because it was built under the Moatless tool and not reported on SWE-Bench. We will include it in our revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. I’ll keep my rating at 4.
---
Summary: OrcaLoca is a novel framework that leverages LLM agents to improve software issue localization by precisely identifying the problematic sections within large codebases. The paper introduces three key innovations: priority-based scheduling for dynamically managing LLM-guided actions, action decomposition with relevance scoring to break down high-level queries into more granular sub-actions, and distance-aware context pruning that filters out irrelevant code to maintain focus. These techniques address longstanding challenges in automated bug localization, including imprecise navigation and excessive context noise, by integrating a structured code graph representation with targeted search strategies. Empirical evaluations on the SWE-bench Lite dataset demonstrate that OrcaLoca achieves SOTA performance with a 65.33% function match rate and a 41.00% resolution rate.
Claims And Evidence: 1. The claim regarding a 6.33% improvement in the final resolved rate due to integration with a patch generation component is quantitatively supported. However, this improvement is context-dependent, relying on the successful integration of another system, which might not generalize across different systems.
Methods And Evaluation Criteria: Both the methods and the evaluation criteria are thoughtfully chosen and make strong sense for the application.
Theoretical Claims: The paper primarily focuses on presenting a novel framework with empirical supports.
Experimental Designs Or Analyses: 1. Although the experiments use a low-temperature setting (0.1) to promote deterministic behavior, the inherent variability of LLM outputs might still introduce some fluctuations in performance, which are not deeply discussed.
2. The performance improvements, particularly the increase in the final resolved rate, partly depend on the integration with an external patch generation component. This dependency may affect the reproducibility of the results if similar integration conditions are not met.
Supplementary Material: All the supplementary materials are reviewed.
Relation To Broader Scientific Literature: The paper extends existing fault localization and automated debugging techniques by integrating LLM-based methods with graph representations and hierarchical planning, drawing on ideas like CoT reasoning and repository-level code graphs. By benchmarking on SWE-bench datasets and comparing with systems such as AutoCodeRover and Agentless, the work demonstrates how its contributions address limitations in prior research while paving the way for more precise LLM-driven code search and context management.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The introduction of a priority-based scheduling mechanism for LLM-guided actions enhances the strategic exploration of large codebases.
2. Decomposing high-level actions with relevance scoring allows for more precise localization by breaking down complex tasks into manageable sub-actions.
3. The distance-aware context pruning method minimizes noise by filtering out irrelevant code, thereby maintaining focus on pertinent sections.
Weaknesses:
1. Heavy reliance on LLM outputs can lead to unpredictability and occasional hallucinations, affecting consistency in localization results. For example, different LLMs or even different LLM versions might yield different results.
2. The evaluation primarily focuses on SWE-bench datasets.
3. The paper provides limited discussion on computational overhead and scalability, leaving questions about resource requirements for practical deployment.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful feedback and for highlighting several key strengths of our work. For the weak points, we list our responses part by part, as follows:
**Weakness 1:** Heavy reliance on LLM outputs can lead to unpredictability and occasional hallucinations, affecting consistency in localization results. For example, different LLMs or even different LLM versions might yield different results.
**Response:**
Thank you for this valuable point. We acknowledge that LLM-based agents may suffer from unpredictability or hallucinations. However, in our system, we have incorporated several design choices to mitigate these issues and improve reliability.
As illustrated in Figure 3, we introduce redundant action elimination to prevent the LLM from repeatedly generating previously executed (and potentially hallucinatory) actions. Furthermore, our agent adopts a context management framework, which enables pruning out noise and irrelevant data. Our agent has a self-consistency mechanism, where the LLM evaluates and refines its own intermediate search result to guide convergence, improving both stability and correctness. To evaluate consistency, we conducted a controlled experiment by selecting five representative instances from SWE-bench ([django__django-13551], [django__django-16255], [scikit-learn__scikit-learn-13439], [sympy__sympy-14774], [sympy__sympy-15011]). For each, we ran the agent five times and consistently observed the same function-level localization results under the same LLM configuration, demonstrating strong intra-model stability.
For inter-model consistency, we agree that results can vary substantially across models due to differences in reasoning capability and model weights. To address this, we will focus on improving LLM consistency by exploring consistency-aware prompting, robust fine-tuning, and more powerful code indexing and embeddings in the future.
**Weakness 2:**
The evaluation primarily focuses on SWE-bench datasets.
**Response:**
Currently, we rely on SWE-Bench, which is the state-of-the-art benchmark for evaluating software agents. We recently discovered additional benchmarks such as SWE-Gym and LocBench; however, they were released too recently for us to incorporate before submission. In the future, we plan to expand our evaluation to more benchmarks and experiment with other approaches.
Another reason is that the benchmark uses repository-scale software, which increases the cost significantly. Experiments on the 300 instances of SWE-bench Lite cost between `$500` and `$600`, not counting debugging and reproducing the different baselines. To address this issue, we will continue to work on efficient models (such as small-model distillation and fine-tuning) to reduce token usage.
**Weakness 3:** The paper provides limited discussion on computational overhead and scalability, leaving questions about resource requirements for practical deployment.
**Response:**
Thank you for bringing this up. Our initial experiments primarily focused on accuracy and localization performance, so we did not perform extensive optimization for computational overhead. Currently, our system runs using API-based access to LLMs, which means that the local infrastructure requirement is minimal—a standard CPU server is sufficient to orchestrate the agent pipeline.
However, we acknowledge that scalability and cost-efficiency are crucial for practical deployment. In future work, we plan to serve our own models (e.g., fine-tuned variants) using lightweight LLM-serving frameworks such as Sglang, which would likely require GPU resources (e.g., A100 or H100) depending on the model size.
For additional details regarding cost analysis, including token usage and runtime cost on SWE-bench Lite, please refer to our response to Reviewer 3 (due to character limitations here). Thank you for your understanding! | null | null | null | null | null | null |
Function Encoders: A Principled Approach to Transfer Learning in Hilbert Spaces | Accept (poster) | Summary: This work provides a geometric characterization of transfer learning in the form of 3 types of inductive transfers presented in a Hilbert space. A type 1 transfer is when the predictor of the target task lies in the convex hull of the source predictors, type 2 is when the target lies in the linear span of the source predictors, and type 3 is when the target sits outside the linear span.
The paper uses the theory of function encoders to propose a method that achieves transfer learning across all 3 types. This is done by providing a closed-form least-squares solution to fit the target predictor (i.e., the L2 projection onto the linear span of the source predictors).
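The closed-form least-squares projection described above can be illustrated with a minimal NumPy sketch; the fixed polynomial basis here is a hypothetical stand-in for the learned basis functions, and the small ridge term is an assumption added for numerical stability (echoing the Gram-matrix regularization discussed elsewhere in the review thread).

```python
import numpy as np

rng = np.random.default_rng(0)

# Evaluate k (here: hypothetical fixed polynomial) basis functions at n points.
n, k = 200, 3
x = rng.uniform(-1.0, 1.0, size=n)
F = np.stack([np.ones_like(x), x, x**2], axis=1)  # n x k basis evaluations

# Target predictor sampled at the same points (lies in the span of the basis).
y = 2.0 - 0.5 * x + 3.0 * x**2

# Closed-form least-squares projection onto span{basis}:
# c = (F^T F + lam*I)^{-1} F^T y, with a tiny ridge term for stability.
lam = 1e-8
G = F.T @ F + lam * np.eye(k)        # (regularized) Gram matrix
c = np.linalg.solve(G, F.T @ y)      # coefficients of the projection

residual = np.linalg.norm(F @ c - y) / np.sqrt(n)
```

Because the target lies exactly in the span, the recovered coefficients match the generating ones and the residual is essentially zero; for a target outside the span (type 3 transfer), the same formula returns the best approximation in the span.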
A detailed study of the proposed method against other transfer learning benchmarks is presented, showcasing how the method can match and outperform SOTA in various settings.
Claims And Evidence: The claims in the paper are mostly supported by theoretical and empirical analysis.
Methods And Evaluation Criteria: IMO, L2 loss and accuracy metrics are reasonably chosen based on the setting.
Theoretical Claims: The proof for the theoretical claims (Theorem 1) is provided in the appendix. Insights into the design of the inner product for other spaces (e.g. probability spaces) are also provided in the appendix.
Experimental Designs Or Analyses: The experimental domains include toy models, small-ish image tasks (CIFAR 100, 7-Scenes) and the MuJoCo Ant (dynamic control). I think the paper can benefit from additional experiments in other domains, e.g. text, structured prediction.
Supplementary Material: I briefly studied Appendix C (choosing the inner product).
Relation To Broader Scientific Literature: The paper compares and contrasts with prior work in great detail in section 1.1. Specifically there are comparisons with common approaches in transfer learning, meta-learning and kernel methods, in terms of sample and computational complexity and how they differ in transfer efficiency.
Essential References Not Discussed: Most relevant and related work is referenced in the paper.
Other Strengths And Weaknesses: The main premises of this work are interesting and insightful. The use of L2 projection onto the span of source predictors is novel to this work (to the best of my knowledge), and experiments suggest it outperforms the inner product method (Ingebrand et al., 2024b).
The main theorem in the paper is not very strong IMO, as it mainly applies in the limit of infinite basis functions.
Other Comments Or Suggestions: Use of H for both the Hilbert space and the convex hull (even with different fonts) can be confusing, especially since predictors live in both of them.
Questions For Authors: When comparing ad-hoc methods (Siamese and prototypical networks) we see comparable performance to the proposed method in multiple settings. Do you have any insights into how these ad-hoc methods achieve the type 2 or 3 transfer in the cases they can?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: 1. I think the paper can benefit from additional experiments in other domains, e.g. text, structured prediction.
Function encoders are applicable in many domains. However, there are additional theoretical questions on how to define an appropriate inner product and geometric characterization of task relatedness for text prediction and structured representations that would need to be addressed first, and are thus beyond the scope of the current work.
2. The main theorem in the paper is not very strong IMO, as it mainly applies in the limit of infinite basis functions.
Please see our response to reviewer qZy4, question 2. Universal approximation theorems are existence proofs, meaning they provide justification that the proposed approach could achieve arbitrary accuracy given enough resources. Practically, Theorem 1 guarantees the existence of a function encoder which spans the source tasks, which is necessary for the types of transfer we characterize.
3. Use of H for both the Hilbert space and Convex Haul (even with different fonts) can be confusing, especially since predictors live in both of them.
Thank you for this comment, we will change the notation for convex hull in the final version.
4. When comparing ad-hoc methods (Siamese and prototypical networks) we see comparable performance to the proposed method in multiple settings. Do you have any insights into how these ad-hoc methods achieve the type 2 or 3 transfer in the cases they can?
Siamese networks use a contrastive loss to minimize the distance between inputs belonging to the same category and to maximize the distance otherwise. We believe there are deep theoretical connections between this approach and maximum mean discrepancy (MMD) from kernel methods, where you could view the mean output across a category as a mean embedding, and the distance between embeddings as the MMD. Therefore, maximizing the distance between individual embeddings is somehow similar to maximizing the difference between mean embeddings, and thus Siamese networks are effectively learning basis functions which maximally distinguish between the distributions corresponding to each category. Prototypical networks do something similar, but instead work with the mean embedding directly, using the closest mean embedding to a given input as the correct category. Therefore we believe these approaches have unrecognized theoretical connections to Hilbert space theory, which potentially explains their performance.
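The prototypical-network mechanism described in the response can be sketched minimally; the 2-D embeddings below are hypothetical placeholders for the outputs of a learned embedding network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-D embeddings for two categories (in practice these would be
# produced by a learned embedding network).
class_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))
class_b = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(50, 2))

# Prototypical-network-style classification: each category is summarized by
# its mean embedding, and a query is assigned to the nearest prototype.
prototypes = np.stack([class_a.mean(axis=0), class_b.mean(axis=0)])

def classify(query):
    dists = np.linalg.norm(prototypes - query, axis=1)
    return int(np.argmin(dists))

pred_a = classify(np.array([0.2, -0.1]))  # near class A's mean
pred_b = classify(np.array([2.8, 3.1]))   # near class B's mean
```

Viewing each prototype as a mean embedding is what suggests the MMD connection in the response: distances between prototypes play the role of distances between mean embeddings of the category distributions.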
---
Rebuttal Comment 1.1:
Comment: Thanks for the comments and explanations. I suggest adding the discussion around ad-hoc methods to the appendix or even the main body of the paper. It would indeed be interesting to analyse (in some future work) if these methods are implicitly building basis functions.
The existence proof is useful and needed for the discussion around typing, but it does not provide a finite construction path or convergence rate. This IMO limits the significance of the theoretical analysis provided in the work.
## Update after Rebuttal
I thank the authors for the detailed response. I still think that the work [1] diminishes the novelty of the methodology of this work. For the geometric view of transfer, the authors explained with the example of classifying horses and classifying lung cancer, but I believe that one can tell the two tasks are highly dissimilar without the Hilbert space framework. The geometric categorization, while intuitive, does not seem to provide any fundamentally deeper understanding of the problem, and does not really give insights into the development of the methodology. Therefore, given the limited novelty of the theoretical framework and the methodology, I choose to maintain my score.
Claims And Evidence: The claims made in the paper are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed method is conceptually simple and intuitive. It is tested on four different datasets covering different applications.
Theoretical Claims: I have checked that Theorem 1 and its proof are correct.
Experimental Designs Or Analyses: The experiment procedures and results in the paper are clearly described, and the results look convincing.
Supplementary Material: I did not review the supplementary material which contains the code for the paper.
Relation To Broader Scientific Literature: This paper proposes a general approach for transfer learning and demonstrates its effectiveness for a variety of tasks. Transfer learning is still a widely open problem in machine learning, and is important for various applications including image classification and robotics.
Essential References Not Discussed: A key methodological contribution of this paper is to replace the inner product method in the function encoder training procedure by the least squares method. However, it seems that the least squares method is not new, as it already exists in the paper [1] which proposes the same method for the same purpose of overcoming the orthogonality assumption. (See, e.g., Figure 2 in that paper.)
**Reference**
[1] Ingebrand et al. (2024). Basis-to-basis operator learning using function encoders. https://doi.org/10.1016/j.cma.2024.117646
Other Strengths And Weaknesses: **Strengths:**
1. The paper is well written, and the ideas are clearly explained.
2. The proposed method is conceptually simple and intuitive.
**Weaknesses:**
1. Originality of the proposed approach. The least squares method for function encoder training seems to already exist in the literature. See "Essential References Not Discussed" for more details.
2. Significance of Theorem 1. While the theorem is correct, I feel that it is mathematically trivial given the universal approximation theorem of neural networks. In particular, the result is qualitative; it only states that any predictor in the Hilbert space can be approximated using sufficiently many basis functions. More importantly, it offers minimal insights into the transfer learning problem. Indeed, the basis functions are learned using source tasks, and whether the basis functions can well approximate the target task should depend on the similarity and the diversity of the tasks. This important problem is not covered by Theorem 1.
3. Implications of the geometric view of transfer. The paper divides the transfer learning problem into three types, according to whether the target predictor lies in the convex hull or the linear span of the source predictors. The geometric characterization is intuitive, and the numerical experiments confirm the three types of problems have different difficulty levels. However, beyond these points, I do not see how the geometric view can give us more insights into the difficulty of the transfer learning problem, or how it can used to facilitate transfer learning in specific scenarios.
Other Comments Or Suggestions: The notations for domain ($\mathcal{D}$) and dataset ($D$) use the same letter "D", which can be a bit confusing.
Questions For Authors: 1. Ablation studies in the paper show that the choice of the number $k$ of basis functions can have a significant impact on the performance of the algorithm. However, in practice, the appropriate $k$ is unknown beforehand. While the experiments suggest that $k$ can be chosen as the dimensionality of the space, it can be too large and make the algorithm computationally expensive. Do the authors have a principled way of choosing an appropriate $k$?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: 1. Originality of the proposed approach. The least squares method for function encoder training seems to already exist in the literature. See "Essential References Not Discussed" for more details.
We thank the reviewer for pointing out the referenced work in [1], and we have added the reference to our final manuscript. To clarify the originality of our contribution in relation to [1], the development of the referenced work occurred concurrently with our manuscript. Indeed, the referenced work [1] uses the least-squares formulation and implementation from this work, despite appearing first. Leaving out [1] as a reference was an oversight and has been corrected.
In any case, while [1] addresses operator learning and PDE modeling, this paper is concerned with function space approximation. We introduce a geometric characterization of transfer, we define the necessary regularization for the LS approach to converge, we generalize the inner product definition, we provide a universal function space approximation theorem, and we compare against SOTA baselines such as meta learning and transformers on various benchmarks, including a classification task that function encoders have not previously been able to handle. Thus while both approaches use LS, these two papers are distinct.
2. Significance of Theorem 1. While the theorem is correct, I feel that it is mathematically trivial given the universal approximation theorem of neural networks...
We agree that, in general, universal approximation theorems are fundamentally existence proofs. However, they play a foundational role in justifying neural networks as universal approximators of continuous functions, and thus have significant practical value. Nevertheless, we agree that universal approximation theorems may have limited use in determining the quantitative computational requirements or error bounds, which vary from problem to problem and depend on the training data.
3. More importantly, it offers minimal insights into the transfer learning problem. Indeed, the basis functions are learned using source tasks, and whether the basis functions can well approximate the target task should depend on the similarity and the diversity of the tasks. This important problem is not covered by Theorem 1.
From our geometric categorization of transfer, we know that the difficulty of approximating a new task depends on its relation to the source tasks, e.g. whether it lies in the span of the source tasks and if not, the distance from the span of the source task as measured by the Hilbert space norm. On the other hand, since the function encoder trains basis functions, its approximation error is proportional to the distance between the span of the basis and the target task. Therefore, the approximation error of a function encoder on a new task aligns with the expected difficulty of that target task according to the geometric categorization of transfer. UAT is useful because it guarantees the existence of a function encoder which spans the source tasks. So assuming that the training procedure converges, we expect the function encoder’s approximation error to be conceptually equivalent to the difficulty of transfer suggested by the geometric view. We have added a paragraph highlighting this connection at the end of section 3.
4. I do not see how the geometric view can give us more insights into the difficulty of the transfer learning problem, or how it can used to facilitate transfer learning in specific scenarios.
This is a great question. The geometric view of transfer provides a framework for categorizing transfer problems. Consider classifying images by attributes, such as identifying horses or stripes. If you want to detect zebras (striped horses), this likely involves type 1 transfer, where existing methods typically perform well. However, then classifying lung cancer from X-rays isn’t a linear combination of previous attributes, indicating type 3 transfer, making it significantly harder despite the apparent similarity between tasks. Thus, the geometric view helps assess the difficulty of transfer problems and suggests suitable approaches; for instance, our results indicate transformers aren’t effective for type 3 transfer.
5. Do the authors have a principled way of choosing an appropriate k?
Please see our response to reviewer wRbf for questions 1 and 2 for additional details. The overhead for choosing a larger k is relatively small.
Furthermore, the ablations show that k=100 is enough for all of the problems in this paper; this aligns with prior work. If k is chosen to be less than the dimensionality of the space, then the algorithm will likely learn the k principal basis functions, since doing so would minimize the loss. For example, see the ablation on k for the CIFAR dataset, where most of the prediction accuracy is due to 20 basis functions, and using additional basis functions provides only diminishing returns.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. I still think that the work [1] diminishes the novelty of the methodology of this work. For the geometric view of transfer, the authors explained with the example of classifying horses and classifying lung cancer, but I believe that one can tell the two tasks are highly dissimilar without the Hilbert space framework. The geometric categorization, while intuitive, does not seem to provide any fundamentally deeper understanding of the problem, and does not really give insights into the development of the methodology. Therefore, given the limited novelty of the theoretical framework and the methodology, I choose to maintain my score. | Summary: The authors define three types of inductive transfer in Hilbert spaces: interpolation within the convex hull, extrapolation to the linear span, and extrapolation outside the span. They propose to learn neural network basis functions, named as function encoders, that can represent any function in this space. Specifically, they employ a least-squares formulation to compute the coefficients of the basis functions. The paper support their method with a universal function space approximation theorem, showing that any function in a separable Hilbert space can be approximated arbitrarily well if enough basis functions are used. They validate their approach through experiments on several benchmarks, including polynomial regression, CIFAR image classification, camera pose estimation, and dynamics estimation on robotic data.
## update after rebuttal
I appreciate the authors' detailed response and the additional experiments addressing the computational cost of increasing $k$, as well as the discussion on basis function regularization and domain shifts. Based on these clarifications, I will maintain my overall score.
Claims And Evidence: The paper’s claims are well supported by both theoretical analysis and experimental results. I did not encounter any obviously problematic or unclear statements in the claims.
Methods And Evaluation Criteria: Yes. The authors evaluate their approach on a wide range of datasets, from simulated polynomial regression to real-world tasks like CIFAR image classification, camera pose estimation on the 7-Scenes dataset, and dynamics estimation with MuJoCo. This diversity in experiments demonstrates the versatility of the method across different transfer learning scenarios, especially in cases requiring extrapolation. Some issues are noted in the question section.
Theoretical Claims: I checked the detailed proof provided for Theorem 1, which establishes a universal approximation guarantee for the function encoder framework. Overall, the proof is correct in its high-level idea and is built upon classical universal approximation results.
Experimental Designs Or Analyses: I checked the Section 4 in the paper. Overall, the experimental designs are sound and cover a diverse set of tasks. However, addressing the points above could further solidify the claims and help clarify the practical limitations of the proposed approach.
Supplementary Material: Yes, I've reviewed Section A-F in Supplementary Material.
Relation To Broader Scientific Literature: The paper’s contributions are closely connected to kernel methods. The proposed approach shares similar spirit to that in kernel learning, where functions are represented in a RKHS, but here the authors replace fixed kernels with learned neural network basis functions. This provides a more flexible representation that can adapt to complex data structures.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: The paper presents a geometric perspective for transfer learning, distinguishing between different types of transfer. Overall, the manuscript is well-structured, making it easy to follow the progression from theoretical motivation to algorithmic details and experimental validation. The experimental comparisons are comprehensive, covering a broad spectrum of tasks, which enhances the method's credibility.
See Questions part for weakness.
Other Comments Or Suggestions: 1. I would suggest some additional experiments on the computational cost of increasing $k$. It would be better if there are some experiments regarding a reasonable method for choosing $k$ from the dataset adaptively.
2. It would be beneficial to explicitly write out the details of Section A Basis Function Regularization, e.g., the form of additional regularizer on the diagonal of Gram matrix.
Questions For Authors: 1. The method assumes that the multiple source and target domains are identical, i.e., $D_{S_1}=\cdots=D_{S_n}=D_T$, which is a strong assumption in practice. Given that many real-world scenarios involve some degree of domain shift or weak task relatedness, what specific aspects of the FE framework could be adapted to handle cases when there exists some $j$ such that $D_{S_j}\ne D_T$? It would be beneficial to discuss potential extensions to their framework under domain shifts.
2. Given that Section G suggests that the optimal $k$ varies across datasets, what is the impact of larger $k$ on computational cost across all tasks? Is there a reasonable method to balance empirical performance with computational efficiency?
3. How does learning neural network–based basis functions provide advantages over fixed basis methods (e.g., kernel method or dictionary atoms) in capturing complex nonlinearities and enabling robust extrapolation, particularly for Type 2 and Type 3 transfer? It would be helpful to discuss the limitations in fixed basis methods and explain how a learned representation can adapt to more intricate data structures in challenging transfer scenarios.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: 1. I would suggest some additional experiments on the computational cost of increasing k.
Using the worst-case strategy, the compute time for a forward pass is linear in k if you compute the basis functions sequentially on a single thread. Beyond the cost for the forward pass, the compute time for the Gram matrix inverse is cubic in k, but is typically insignificant relative to the forward pass.
However, in practice, the compute time for a forward pass is approximately constant with respect to the number of basis functions since each can be run simultaneously via GPU parallelization. Therefore, there is little overhead for increasing k. Empirically, the percentage increase in training time of k=100 relative to k=1 is the following:
Polynomial - 14.4 %
CIFAR - 8 %
7 Scenes - 11.5 %
Ant - 17.8 %
Indeed, the overhead is minor even though we increase k by two orders of magnitude. We will include a graph comparing training time for different numbers of basis functions in the appendix.
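The scaling argument in the response above can be made concrete with a small sketch; the random matrix `F` is a stand-in for batched basis-function evaluations (an assumption for illustration). The only cost cubic in k is the solve on the k x k Gram matrix, which does not grow with the number of samples n once `F.T @ F` and `F.T @ y` are formed.

```python
import numpy as np

rng = np.random.default_rng(2)

n, k = 10_000, 100  # many samples, a moderate number of basis functions

# In practice F would come from one batched (GPU-parallel) forward pass
# through the k basis networks; here it is random for illustration.
F = rng.normal(size=(n, k))
y = rng.normal(size=n)

# Forming G costs O(n * k^2); the solve costs O(k^3), independent of n.
G = F.T @ F
c = np.linalg.solve(G + 1e-6 * np.eye(k), F.T @ y)
```

This contrasts with kernel methods, where the analogous solve involves an n x n Gram matrix and therefore scales cubically in the amount of data.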
2. It would be better if there are some experiments regarding a reasonable method for choosing k from the dataset adaptively.
We agree that exploring adaptive methods to reduce the number of basis functions (either during training or after training) is an interesting area for future research. We anticipate that L1 regularization can be used to impose sparsity during training, or that we could "constructively" train a set of basis functions--adding new basis functions periodically until we encounter diminishing returns. However, in most cases the overhead of choosing a large k is sufficiently small such that often the best solution is to simply choose a large value of k at the start.
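One illustrative proxy for the "diminishing returns" criterion mentioned above (an assumption for illustration, not the authors' procedure): sample evaluations of the source tasks, look at how their singular-value energy saturates, and read off how many basis dimensions are actually needed.

```python
import numpy as np

rng = np.random.default_rng(4)

# Source tasks drawn from a 3-dimensional function space (span of 1, x, x^2).
n_tasks, n_points = 50, 200
x = np.linspace(-1.0, 1.0, n_points)
coeffs = rng.normal(size=(n_tasks, 3))
tasks = coeffs @ np.stack([np.ones_like(x), x, x**2])  # n_tasks x n_points

# Cumulative singular-value energy of the sampled task evaluations.
_, s, _ = np.linalg.svd(tasks, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)

# Smallest k whose span captures (numerically) all of the energy.
k_chosen = int(np.searchsorted(energy, 1.0 - 1e-9)) + 1
```

Here the energy saturates after exactly three components, matching the true dimensionality of the sampled function space, which mirrors the CIFAR ablation where extra basis functions yield diminishing returns.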
3. It would be beneficial to explicitly write out the details of Section A Basis Function Regularization, e.g., the form of additional regularizer on the diagonal of Gram matrix.
We will improve the discussion of regularization in the final version by incorporating the regularization loss shown in Algorithm 1 as part of the main text.
4. ... It would be beneficial to discuss potential extensions to their framework under domain shifts.
This is an excellent point. In practice, we only require that the input space X is the same for all datasets. However, the distribution between tasks can be different. If the distributions are entirely disjoint, this is a special case of the weighted inner product described in Appendix D. So the algorithm is still applicable, but implicitly the inner product’s definition changes between domains. Note that both image-based experiments in this paper include domain transfer in addition to function space transfer, since the distribution of images is different for different classes/scenes. Implicitly, our use of least squares calibrates the function estimate to new domains. We will improve the discussion in Appendix D to highlight these details.
Furthermore, analyzing the interplay between domain shifts, inner products, and least squares is an interesting direction for future work.
5. Given that Section G suggests that the optimal k varies across datasets, what is the impact of larger k on computational cost across all tasks? Is there a reasonable method to balance empirical performance with computational efficiency?
Please see our response to question 1 above for the computational cost of choosing k and the scaling across tasks. However, increasing k does not reduce prediction accuracy. Therefore, a good naive strategy is to simply choose large k, which maximizes performance while only mildly increasing compute time. As mentioned in our response to question 2 above, we anticipate that we could "constructively" train a set of basis functions if computational efficiency is critical.
6. How does learning neural network–based basis functions provide advantages over fixed basis methods... It would be helpful to discuss the limitations in fixed basis methods and explain how a learned representation can adapt to more intricate data structures in challenging transfer scenarios.
We anticipate that kernel methods and dictionary learning are amenable to type 2 & 3 transfer due to the use of least squares. However, the key advantage of function encoders over kernel methods is that the Gram matrix inverse in kernel methods scales cubically with the amount of data, whereas the scaling is cubic with the hyper-parameter k for function encoders. Additionally, we note that function encoders avoid the issue of pre-specifying a kernel, and therefore may learn basis functions which are specialized to the given problem. On the other hand, dictionary learning is effectively learning a discretized version of basis functions evaluated at a fixed mesh of input locations, and so cannot be queried at new points, which makes it intractable for domains like robotics where we can’t measure the same state twice. In contrast, the function encoder is queryable at any new point due to the use of neural networks. | null | null | null | null | null | null | null | null |
Avoiding Catastrophe in Online Learning by Asking for Help | Accept (poster) | Summary: The paper addresses a novel problem of avoiding catastrophe in online learning with the help of a mentor policy. Specifically, for each input and action, there is a probability of catastrophe. The objective is to minimize the probability of catastrophe over T rounds of play while keeping the number of queries to the mentor relatively low. The mentor policy is assumed to either belong to a class with finite VC dimension and the input distribution is "smooth"; or to have finite Littlestone dimension. Additionally, the authors assume some kind of Lipschitz continuity in the mentor's policy, which they refer to as "local generalization."
The authors propose an algorithm with **sub-constant** regret and sub-linear queries to the mentor. The algorithm discretizes the policy class and, at a high level, queries the mentor with a fixed probability while running the Hedge algorithm on the discretized policy set over queried rounds. Furthermore, if an input is too far from a previously queried input, the algorithm also queries the mentor. The local generalization essentially ensures that if the input is close to an already queried input, the mentor's action will be approximately the same.
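The Hedge-with-queries loop sketched in the summary can be illustrated minimally; the per-policy losses, learning rate, and query probability below are hypothetical placeholders, and (as described) the multiplicative-weights update fires only on rounds where the mentor is queried.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hedge (multiplicative weights) over a hypothetical discretized policy set.
n_policies, T, eta, query_prob = 5, 1000, 0.1, 0.2
weights = np.ones(n_policies)
# Hypothetical per-policy catastrophe probabilities used as losses;
# policy 0 is the safest.
losses = np.linspace(0.05, 0.5, n_policies)

n_queries = 0
for t in range(T):
    if rng.random() < query_prob:      # query the mentor with fixed probability
        n_queries += 1
        weights *= np.exp(-eta * losses)  # update only on queried rounds
        weights /= weights.sum()

best_policy = int(np.argmax(weights))  # weight concentrates on the safest policy
```

In the paper's algorithm the query decision also depends on distance to previously queried inputs (via local generalization); this sketch keeps only the fixed-probability component.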
## Post Rebuttal
Overall, my assessment remains unchanged: I still find it odd that the regret compared to the mentor becomes somewhat vacuous unless the mentor avoids catastrophe with a probability close to 1. Additionally, I suggest the authors include a more detailed discussion regarding their packing argument, specifically addressing why current tools fail and how their approach overcomes these challenges. Currently, this aspect is quite vague, both in the main text and in their rebuttal.
I will keep my score unchanged.
Claims And Evidence: I did not find problematic claims; the analysis combines a few known techniques, and the analysis overview in the main text made sense to me.
Methods And Evaluation Criteria: Overall, the evaluation criteria is not standard, but this is part of the paper's novelty. The requirement that the number of queries (or that the querying rate vanishes with $T$) makes perfect sense, as this essentially means that the algorithm becomes self-sufficient over time.
The multiplicative objective that quantifies the overall probability of avoiding catastrophe also makes sense to me.
The regret compared to the mentor makes sense on one hand; however, the fact that it becomes somewhat vacuous unless the mentor avoids catastrophe with a probability close to 1 is a bit odd. I understand the authors' discussion regarding that in principle, but more technically, it implies that $\mu_t(x_t,\pi^m) \to 1$, but $\pi^m$ is fixed, so it kind of assumes that the $x$'s are such that the mentor's policy avoids catastrophe with a probability that becomes arbitrarily close to 1. If the mentor's policy were not fixed, it would make more sense.
Theoretical Claims: I did not verify the appendix's proofs, but as mentioned, the analysis overview in the main text made sense to me.
Experimental Designs Or Analyses: There are no experiments in the paper.
Supplementary Material: I have not read the supplementary material.
Relation To Broader Scientific Literature: This is somewhat related in motivation to safety in online learning (say, in constrained MDPs), and also somewhat related to imitation learning, as here the algorithm learns by querying the mentor. In a sense, it combines motivation from these two lines of work, as in safety there is no component of requiring a mentor, and in imitation, there is no explicit safety requirement.
Essential References Not Discussed: Not that I'm aware of.
Other Strengths And Weaknesses: Overall, the paper is well-written and easy to follow.
The local generalization assumption is not a standard one and is not adequately justified in the paper.
The analysis mainly combines known techniques, which I find okay in this case since the setting is novel and well-motivated. Properly combining known techniques to solve a new problem is definitely not a trivial task. The main claim of technical novelty by the authors lies in the packing argument, which they only briefly discuss in the last paragraph of Section 5. However, I think that this paragraph does not adequately convey the technical challenge and how it is addressed.
Other Comments Or Suggestions: N/A
Questions For Authors: In line 229, what do you mean by feature embedding? Isn't $x$ already the "state" embedding?
It seems that whenever hedgeQuery is true, then X is not updated, and that whenever the algorithm "asks for help," the policy is not updated. This is not very intuitive; why is it that you don't need that?
Is there a specific policy class for which you can make the algorithm efficient (not exponential in $d$)?
Is there a difference between $s_t$ (as mentioned in Algorithm 1) and $x_t$ in the rest of the paper, or is that just a typo?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review. We respond to each concern and question:
**Concern 1**
> The regret compared to the mentor makes sense on one hand; however, the fact that it becomes somewhat vacuous unless the mentor avoids catastrophe with a probability close to 1 is a bit odd…If the mentor's policy were not fixed, it would make more sense.
To clarify, the mentor’s policy $\pi^m$ is *not* fixed across values of $T$. When we stated in the paper that $\pi^m$ is fixed, we meant that $\pi^m$ does not vary across time steps in a given problem instance, i.e., we do not deal with non-stationarity. It is crucial that the mentor policy *can* vary across values of $T$ (and across problem instances) for the exact reason stated by the reviewer. Our original writing was unclear and we will fix this.
We also mention that our subconstant bound on standard additive regret (Theorem 5.3) is unaffected by this issue entirely.
**Concern 2**
> The local generalization assumption is not a standard one and is not adequately justified in the paper.
We agree on the importance of justifying assumptions and will expand this justification when revising.
To recap, local generalization states that
$|\mu_t^m(x) - \mu_t(x, \pi^m(x'))| \le L ||x-x'||$ for any $x, x'$. We make the following claims:
A. One can transfer knowledge between similar situations, with the reliability of the transfer depending on the degree of similarity. This is well-known in the education and psychology literature (e.g., Esser et al. 2023, Hajian 2019).
B. Local generalization captures the idea of Claim A. Specifically,
B1. $\mu_t^m(x)$ is the effect of taking the right action for the current input $x$
B2. $\mu_t(x, \pi^m(x'))$ is the effect of using what you learned in $x'$ for the current input $x$
B3. $||x-x'||$ captures the similarity between $x$ and $x'$
To effectively focus our revision, it would help us to understand which of these claims felt weakest to the reviewer, or if their skepticism is based on something else.
**Concern 3**
> The main claim of technical novelty by the authors lies in the packing argument, which they only briefly discuss in the last paragraph of Section 5. However, I think that this paragraph does not adequately convey the technical challenge and how it is addressed.
To recap, we perform a packing argument with respect to an arbitrary categorization of the data instead of considering all data in aggregate as is typical in packing arguments. In our case, the categorization is based on the algorithm’s action. However, we should have emphasized in the paper that our technique can be applied to any categorization of the data.
In fact, we think our technique has promise as a general-purpose way to bound how well categorized data covers an input space. It’s often insufficient to simply have lots of data for each category: to ensure good generalization, each category should be represented well *across the input space*. For example, suppose one has plenty of data on healthy and sick patients, but all the sick patient data is from hospital visits: models trained on this data may make inaccurate predictions about sick patients outside of the hospital, despite plenty of sick patient data. We will expand this discussion.
**Question 1**
> In line 229, what do you mean by feature embedding? Isn't $x$ already the "state" embedding?
Yes, we just mean state embedding. The idea is that the algorithm is agnostic to how the states are represented, so local generalization just has to be satisfied for *some* way of representing states. We will clarify this.
**Question 2**
> It seems that whenever hedgeQuery is true, then X is not updated, and that whenever the algorithm "asks for help," the policy is not updated. This is not very intuitive; why is it that you don't need that?
Those updates are not necessary because the algorithm already learns enough from the existing updates. Omitting those updates from the algorithm allows us to fully separate the Hedge updates and the ask-for-help updates, which slightly simplifies the analysis. However, we see why this is unintuitive, and we will add an explanation of this. We are also open to including those updates in the algorithm.
**Question 3**
> Is there a specific policy class for which you can make the algorithm efficient (not exponential in $d$)?
In general, the size of a cover of the policy class must be exponential in $d$. Intuitively, this is because a VC dimension of $d$ means that the policy class can realize all $2^d$ labelings of $d$ points.
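For concreteness, the standard packing argument behind this claim can be sketched as follows (the notation here is illustrative, not taken from the paper):

```latex
\text{Let } x_1,\dots,x_d \text{ be shattered by the policy class } \Pi
\text{ and define } \rho(\pi,\pi') = \tfrac{1}{d}\bigl|\{\, i : \pi(x_i) \neq \pi'(x_i) \,\}\bigr|. \\
\text{Shattering gives } 2^d \text{ policies realizing all labelings of } x_1,\dots,x_d,
\text{ and any two of them satisfy } \rho \ge 1/d. \\
\text{A single } \tfrac{1}{3d}\text{-cover center can serve at most one of them
(otherwise } \rho \le \tfrac{2}{3d} < \tfrac{1}{d} \text{ by the triangle inequality),} \\
\text{so any } \tfrac{1}{3d}\text{-cover of } \Pi \text{ contains at least } 2^d \text{ elements.}
```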
However, we are optimistic about reducing the dependence on $d$ in future work via techniques from Block et al. (2022), who show how to bypass explicitly constructing covers. We will mention this in the conclusion.
**Question 4**
> Is there a difference between $s_t$ (as mentioned in Algorithm 1) and $x_t$ in the rest of the paper, or is that just a typo?
This is simply a typo. We apologize for the confusion. | Summary: This paper formulates the event of a catastrophe as a product of payoff functions, where each payoff can be seen as the probability that a catastrophe occurs at that time step. In addition to the multiplicative objective, which differs from standard regret in online learning, the agent observes feedback only when asking a mentor for help. The goal is to maximize the product of payoffs relative to the mentor's performance while maintaining a non-trivial number of queries.
This paper first establishes a worst-case impossibility result, showing that an agent cannot avoid catastrophe with sublinear mentor queries, even when the mentor follows a safe policy. For the case where catastrophe avoidance is possible, the paper identifies a connection to Littlestone and VC dimensions and leverages a variant of Hedge [Russo et al. (2024)], which enables achieving sublinear regret with sublinear queries.
Claims And Evidence: Yes
Methods And Evaluation Criteria: The metric is analogous to regret in online learning, with the mentor's performance as the comparator, which makes sense.
Theoretical Claims: Checked Theorem 4.1
Experimental Designs Or Analyses: not applicable
Supplementary Material: checked appendix A
Relation To Broader Scientific Literature: This is the first provable algorithm for learning with irreversible costs without Bayesian inference.
Essential References Not Discussed: well referenced.
Other Strengths And Weaknesses: Strength:
- The paper does well in motivating the considered problem, its sensible formulation and iterative feedback setup, and in drawing out how the novel formulation differs from standard online learning; it is thus clear that the formulation itself is not a trivial problem.
- The results of this paper can be considered complete, as they cover both the negative case and the solvable cases.
Weakness: for the problem that is solvable, the algorithm generally relies on the VC/Littlestone dimension as a parameter to the Hedge algorithm, which can be impractical to compute.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review. We believe the reviewer articulates only a single weakness:
> Weakness: for the problem that is solvable, it generally relying on the VC / Littlestone Dimension as a parameter to Hedge Algorithm, which can be impractical to compute
We agree that this is a weakness of our current algorithm. We offer two responses:
1. We are the first to show that avoiding irreversible mistakes without Bayesian inference is possible at all. Eventually we would indeed like an efficient algorithm, but we believe that proving that the problem is solvable at all is an important step towards finding an efficient algorithm for the problem.
2. We are optimistic about the prospect of a more efficient algorithm based on techniques from Block et al. (2022). In the context of standard online learning (i.e., without irreversible mistakes), they show how to explore a hypothesis class without explicitly computing the VC dimension or constructing an $\varepsilon$-cover. We will add a mention of this to the conclusion, as we think this is an important avenue for future work.
We would also like to clarify the regret bound. The reviewer writes:
> ...which enables achieving sublinear regret with sublinear queries.
However, our algorithm achieves **subconstant** regret with sublinear queries. | Summary: The paper studies how to design learning algorithms that avoid catastrophe. In particular, the goal of the learner is to minimize the chance of a catastrophe. This is modeled as an online learning problem in which the learner aims at minimizing the product of the payoffs. The learner is equipped with a mentor that can be queried to obtain the best action for the current round. The problem is in general unlearnable since any algorithm queries the mentor a linear number of times or guarantee to cause catastrophe with high probability. The authors show that when the policy class is learnable, it is possible to design a learning algorithm that queries the mentor a sublinear number of times and whose regret approaches $0$.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Not Applicable.
Theoretical Claims: I only checked the main paper.
Experimental Designs Or Analyses: Not applicable.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper introduces a new model. However, it borrows some techniques used for other online learning problems.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: The paper introduces a new problem, and the problem of avoiding catastrophe is well-motivated. However, the way the problem is modeled as an online learning problem with a mentor is a bit arbitrary. I am not sure whether the right way to present the paper is the authors' framing, i.e., modeling the problem of avoiding catastrophe as an online learning problem, or instead to focus on a general online learning setting in which the goal is to maximize the product of the payoffs, with avoiding catastrophe as an application.
From a technical perspective, it seems that the paper is mostly putting together results from previous works. Moreover, it is unclear whether the paper introduces general techniques or models that can be applied to broader and more fundamental problems.
Other Comments Or Suggestions: None.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review. We respond to each of the reviewer’s concerns:
**Concern 1**
> However, how the problem is modeled as an online learning problem with mentor is a bit arbitrary. I'm not sure if the right way to sell the paper is yours, i.e., you model the problem of avoiding catastrophe as an online learning problem, or to focus more on a general online learning setting in which the goal is to maximize the product of the payoff which has as application avoiding catastrophe.
We appreciate the suggestion and are happy to make the paper’s framing more general if the reviewer thinks that would strengthen the paper. In fact, this seems more like a strength of our work than a weakness: if we understand correctly, the reviewer suggests that our work is actually more general than our current framing.
**Concern 2**
> From a technical perspective, it seems that the paper is mostly putting together results from previous works. Moreover, it is unclear whether the paper introduces general techniques or models that can be applied to broader and more fundamental problems.
We do agree that technical novelty is not our primary contribution. However, we offer some mild pushback:
1. The existing results we utilize come from a variety of sources and topics. We think that combining previously disparate results in a new way can also be a form of technical novelty.
2. We believe our packing argument is novel, as discussed at the top of the right column of page 8. To recap, we perform a packing argument with respect to an arbitrary categorization of the data instead of considering all data in aggregate as is typical in packing arguments. In our case, the categorization is based on the algorithm’s action. However, we should have emphasized in the paper that our technique can be applied to any categorization of the data.
In fact, we think our technique has promise as a general-purpose way to bound how well categorized data covers an input space. It’s often insufficient to simply have lots of data for each category: to ensure good generalization, each category should be represented well *across the input space*. For example, suppose one has plenty of data on healthy and sick patients, but all the sick patient data is from hospital visits: models trained on this data may make inaccurate predictions about sick patients outside of the hospital, despite plenty of sick patient data. We will expand this discussion when revising. | Summary: This work introduces an online learning framework that avoids catastrophic mistakes by allowing an agent to query a mentor when uncertain, rather than relying solely on trial-and-error. The approach maximizes the product of payoffs, where each payoff represents the chance of avoiding catastrophe, and leverages local generalization to apply knowledge from similar inputs. The authors prove that without mentor assistance, any algorithm with sublinear queries incurs high regret, but with help, one can achieve subconstant regret and a sublinear query rate under standard learnability assumptions (finite VC or Littlestone dimensions). Their results offer theoretical guarantees for safe online learning in high-stakes scenarios where irreversible errors must be avoided.
Claims And Evidence: Yes, the claims made in this work are supported by clear and convincing evidence, as detailed in Section 1.4.
Methods And Evaluation Criteria: Yes
Theoretical Claims: I briefly checked the correctness of Theorem 4.1.
Experimental Designs Or Analyses: N/A
Supplementary Material: I briefly reviewed the proof of Theorem 4.1
Relation To Broader Scientific Literature: The paper extends classical online learning frameworks, which typically assume that all errors are recoverable by leveraging concepts like finite VC/Littlestone dimensions and algorithms such as Hedge, by incorporating a mentor query mechanism and local generalization to safely handle irreversible, catastrophic mistakes.
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: Page 4: "Also, that work" ---> Also, their work
Questions For Authors: On the bottom of page 3 the authors state "sublinear regret is possible iff the hypothesis ...". Is this supposed to be an if and only if, or is this a typo?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review.
> Page 4: "Also, that work" ---> Also, their work
We appreciate the correction and will make this change.
> On the bottom of page 3 the authors state "sublinear regret is possible iff the hypothesis ...". Is this supposed to by an if and only if or is this a typo?
This is indeed “if and only if”. See, e.g., Corollary 21.8 in Shalev-Shwartz and Ben-David, *Understanding Machine Learning: From Theory to Algorithms* (2014). We will expand “iff” to “if and only if” to avoid reader confusion. | null | null | null | null | null | null |
An Asymptotically Optimal Approximation Algorithm for Multiobjective Submodular Maximization at Scale | Accept (poster) | Summary: The paper develops a discrete, scalable algorithm for multiobjective submodular maximization under a cardinality constraint that nearly achieves the optimal (1−1/e) approximation ratio. Instead of relying on continuous relaxations such as the multilinear extension, the proposed approach builds the solution iteratively by solving a linear program at each step to derive a probability distribution over candidate elements, sampling from this distribution, and adding the chosen element to the solution. The algorithm incorporates multiplicative weights updates (MWU) and lazy evaluations to reduce the number of function evaluations, thereby enhancing its efficiency. As an additional contribution, the paper introduces an application to fair centrality maximization, which generalizes standard centrality measures by ensuring fairness across different groups.
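As a generic illustration of how multiplicative weights can drive such a max-min construction, the following is a simplified sketch, not the paper's actual LP-based procedure; all names are hypothetical. After each greedy step, the objectives are re-weighted so that lagging objectives receive more attention.

```python
import math

def mwu_maxmin_greedy(ground, functions, budget, eta=0.5):
    """Hypothetical sketch: multiplicative weights over the k objectives turn
    the max-min problem into weighted single-objective greedy steps, with
    lagging objectives receiving larger weight in later steps."""
    k = len(functions)
    w = [1.0 / k] * k                 # weights over the objectives
    S = set()
    for _ in range(budget):
        candidates = [e for e in ground if e not in S]
        if not candidates:
            break
        # Greedy step on the weighted combination of marginal gains.
        e_star = max(candidates,
                     key=lambda e: sum(wi * (f(S | {e}) - f(S))
                                       for wi, f in zip(w, functions)))
        gains = [f(S | {e_star}) - f(S) for f in functions]
        S.add(e_star)
        # MWU update: objectives that gained a lot are down-weighted.
        w = [wi * math.exp(-eta * g) for wi, g in zip(w, gains)]
        total = sum(w)
        w = [wi / total for wi in w]
    return S
```

Note that this is only loosely related to the algorithm under review, where the per-step distribution comes from an LP and MWU is used to speed up its solution; the sketch merely shows why MWU-style re-weighting fits the max-min structure.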
Claims And Evidence: The query complexity of O(nBk) appears suboptimal, since B can be as large as n, potentially resulting in quadratic complexity. This does not seem sufficiently efficient for practical applications. The authors should consider developing an approximation with near-linear complexity, which would be more meaningful and applicable in practice.
Methods And Evaluation Criteria: While the proposed algorithm is faster than existing methods, its time complexity remains suboptimal for handling the massive data sets encountered in real-world applications. Nevertheless, the paper offers a promising conceptual framework that could inspire further advancements in this area.
Theoretical Claims: I have conducted a preliminary review of the theoretical proofs in the paper, and they appear to be correct. However, I should note that my examination was not exhaustive, and there remains a possibility that some details may have been overlooked.
Experimental Designs Or Analyses: The algorithm proposed in the paper demonstrates promising performance in terms of objective function value. However, its runtime is inferior compared to the other two greedy-based algorithms, namely Greedy Minimum and Greedy Round Robin. Additionally, the datasets used in the paper are relatively small, making it difficult to evaluate the efficiency of the algorithms in real-world large-scale environments.
Supplementary Material: I have reviewed all of the Supplementary Material provided with the paper to enable a more comprehensive evaluation of the overall work.
Relation To Broader Scientific Literature: The problem addressed in this paper is related to fair submodular optimization, which involves optimizing a submodular function under fairness constraints. However, some important works in this area have been overlooked, such as [1][2].
[1] Fazzone, A., Wang, Y., & Bonchi, F. (2024). Fair Representation in Submodular Subset Selection: A Pareto Optimization Approach. Transactions on Machine Learning Research.
[2] Cui, Shuang, et al. "Fairness in streaming submodular maximization subject to a knapsack constraint." Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024.
Essential References Not Discussed: Please refer to the above.
Other Strengths And Weaknesses: N.A.
Other Comments Or Suggestions: N.A.
Questions For Authors: Please refer to the above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your comments and pointing out that our algorithm outperforms existing methods in terms of running time.
> The query complexity of O(nBk) appears suboptimal since B approaches n, potentially resulting in quadratic complexity. This does not seem sufficiently efficient for practical applications. The authors should consider developing an approximation with near-linear complexity, which would be more meaningful and applicable in practice.
> While the proposed algorithm is faster than existing methods, its time complexity remains suboptimal for handling the massive data sets encountered in real-world applications.
A running time of $O(nB)$ is natural for the greedy algorithm which is a standard algorithm in practice and is widely applied on huge datasets. A running time of $O(nBk)$ is therefore quite natural on $k$ objectives. Even in the case where $B=\Omega(n)$, our algorithm still has significantly better running time than all prior works that offer the same approximation guarantee (Chekuri et al. ‘10, the $1-1/e$ algorithm of Udwani ‘18, Tsang et al. ‘19).
As shown in our experiments, our algorithm is vastly more performant than prior work with theoretical guarantees. We are the first to provide an algorithm that scales to ten thousands of nodes (and potentially more; the running time of our algorithm on the largest instances we tried is not yet prohibitive) while providing the best possible approximation ratio.
> However, its runtime is inferior compared to the other two greedy-based algorithms, namely Greedy Minimum and Greedy Round Robin. Additionally, the datasets used in the paper are relatively small, making it difficult to evaluate the efficiency of the algorithms in real-world large-scale environments.
As compared to greedy, our algorithm has theoretical guarantees. We outperform all algorithms with comparable theoretical guarantees in terms of running time, which form a long line of work that improves upon the objective value of the greedy algorithm.
The biggest instance in our experiments has $n=10378$ elements which is substantially larger than any of the instances used in the literature on multiobjective submodular maximization with theoretical guarantees (before us, the largest instance appears in Udwani ‘2018 with $n=1024$ elements).
> However, some important works in this area have been overlooked, such as [1][2].
> [1] Fazzone, A., Wang, Y., & Bonchi, F. (2024). Fair Representation in Submodular Subset Selection: A Pareto Optimization Approach. Transactions on Machine Learning Research.
> [2] Cui, Shuang, et al. "Fairness in streaming submodular maximization subject to a knapsack constraint." Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024.
We will cite these papers in our related work along the following discussion:
- [1] solves a multi-objective problem by extending the saturate algorithm to obtain a sequence of approximately pareto-optimal solutions, while our algorithm gives a single solution for the (max-min) multiobjective problem.
- [2] considers a fairness notion that enforces constraints on the solution set. (We will add [2] alongside similar works that we cite in our related work section (e.g., Celis et al. ‘18)).
---
Rebuttal Comment 1.1:
Comment: After reading the rebuttal, I have decided to increase my score. | Summary: This paper addresses the problem of maximizing the minimum over several submodular functions, known as multiobjective submodular maximization. The authors present the first scalable algorithm that achieves the best-known approximation guarantee. Additionally, they introduce a novel application—fair centrality maximization—and demonstrate how it can be effectively addressed through multiobjective submodular maximization.
Claims And Evidence: +The paper tackles an important problem in the field of submodular optimization, with results that can be applied to solving fairness-aware subset selection problems.
+The paper is well-written, with the algorithm presented in a clear and easy-to-follow manner. The analysis is also clearly outlined. Notably, the algorithm is both scalable and effective, achieving the best approximation ratio without relying on continuous relaxation of the submodular set functions.
However, there are some points of concern:
-The technical novelty of the work seems marginal. While the proposed solution outperforms existing studies in terms of lower running time and improved approximation ratio, the fundamental design and analysis of the algorithm appear to be inspired by previous work, such as Chekuri et al. (2010). Moreover, the results of Chekuri et al. (2010) can be applied to a broader range of constraints, such as matroid constraints. Additionally, I believe their results do not rely on the budget constraint B.
-The paper lacks a direct comparison between their method and the one proposed in Chekuri et al. (2010) in the experimental section. While I understand that Chekuri et al. (2010)’s algorithm may not be practical due to its reliance on multilinear extensions, I would have appreciated seeing some comparative results in a small- or medium-scale setting to better understand the trade-offs.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper tackles an important problem in the field of submodular optimization, with results that can be applied to solving fairness-aware subset selection problems.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: NA.
Other Comments Or Suggestions: NA.
Questions For Authors: Please refer to my comments in Claims And Evidence.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for evaluation and the positive comments on our writing and the scalability of our approach.
> The technical novelty of the work seems marginal. While the proposed solution outperforms existing studies in terms of lower running time and improved approximation ratio, the fundamental design and analysis of the algorithm appear to be inspired by previous work, such as Chekuri et al. (2010). Moreover, the results of Chekuri et al. (2010) can be applied to a broader range of constraints, such as matroid constraints. Additionally, I believe their results do not rely on the budget constraint $B$.
Our method and swap rounding (due to Chekuri et al. (2010) and also used in later works, e.g. in Tsang et al. (2019)) are **different approaches** towards the multiobjective problem. The idea of swap rounding is to build a fractional solution (which they find using gradient ascent over the multilinear extension) and in the last step round it to a discrete solution. Our method adds one (discrete) element in each iteration. This leads to important differences in both the running time and the analysis of both approaches. Since we do not optimize over the multilinear extension, our approach is decisively faster: We only need $O(nBk)$ function evaluations compared to exponentially many queries (and exponential runtime) of Chekuri et al. (2010). We also carry out a different martingale analysis that is tailored to the greedy algorithm. Furthermore, their algorithm also depends on $B$ to establish their approximation guarantee, which is natural for the multiobjective problem to avoid the inapproximability regime ($B<k$) shown by Krause et al. (2008). Another natural reason for a dependence on the budget in swap rounding is that the Chernoff bound obtains better concentration for larger budgets, even for linear functions.
> The paper lacks a direct comparison between their method and the one proposed in Chekuri et al. (2010) in the experimental section. While I understand that Chekuri et al. (2010)’s algorithm may not be practical due to its reliance on multilinear extensions, I would have appreciated seeing some comparative results in a small- or medium-scale setting to better understand the trade-offs.
We compare our paper with the approach of Tsang et al. (2019) which provides a practical implementation of the approach of Chekuri et al. (2010) by performing gradient ascent efficiently via Frank-Wolfe. Their parameters are chosen to make the algorithm practical on special medium-sized instances (their algorithm does not apply to general applications such as fair centrality maximization, which is why we exclude it there). However, their algorithm is still slower and has worse objective value than our algorithm, suggesting that the trade-offs of the unrefined approach of Chekuri et al. (2010) are also poor. | Summary: They study the multiobjective monotone submodular maximization under cardinality constraint, where the goal is to select a subset of elements of limited size that maximizes the minimum submodular value among the given functions. Previously, a $(1-\frac{1}{e})$-approximation algorithm was known for this problem using multilinear extension which is impractical. Moreover, there was a practical $(1-\frac{1}{e})^2$-approximation algorithm for this problem. Here they develop a $(1-\frac{1}{e}-\epsilon)$-approximation algorithm which relies on selecting elements step by step. At each step, they try to solve an LP to find a probability distribution which identifies the weight of selecting elements to increase the submodular value enough. This is similar to selecting an element with a marginal gain above a threshold, but in this case, it is more complex as it requires finding a probability distribution via an LP. Finally, they select an element using this probability distribution and add it to their solution.
Since their approach is randomized, they run it multiple times to find a desired solution with higher probability. Note that their algorithm has $B$ steps, where $B$ is the cardinality constraint. At each step, they solve an LP, which is time-consuming and requires the marginal values of all elements with respect to the current solution. However, they introduce a lazy update method using multiplicative weights update (MWU) to speed up their algorithm.
Finally, they show the efficiency of their approach with an experimental evaluation. In their evaluation, they beat the previous practical algorithms that have approximation guarantees. However, it seems a greedy heuristic finds almost the same submodular value while having better running time and query complexity. This greedy algorithm selects, at each step, an element that maximizes the increase in the submodular function that currently has the minimum value.
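A minimal sketch of that Greedy Minimum heuristic (function and variable names are my own; the objectives are assumed to be monotone set functions):

```python
def greedy_minimum(ground, functions, budget):
    """Sketch of the Greedy Minimum heuristic: at each step, greedily improve
    whichever objective currently has the smallest value."""
    S = set()
    for _ in range(budget):
        worst = min(functions, key=lambda f: f(S))   # currently-lagging objective
        candidates = [e for e in ground if e not in S]
        if not candidates:
            break
        # Element with the largest marginal gain for the lagging objective.
        best = max(candidates, key=lambda e: worst(S | {e}) - worst(S))
        S.add(best)
    return S
```

Each step evaluates one objective across the remaining ground set, which is why this heuristic is fast in practice despite lacking an approximation guarantee.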
Claims And Evidence: They proved their lemmas and claims and also ran experiments to show their dominance compared to previous practical works.
Methods And Evaluation Criteria: The methods make sense. They could run experiments on larger instances, especially since the multilinear extension approach, despite having the same approximation guarantee, is impractical on large instances.
Theoretical Claims: I only checked the proofs in the main part of the paper and not the appendix and they seem fine.
Experimental Designs Or Analyses: No
Supplementary Material: I only read the algorithms in the appendix.
Relation To Broader Scientific Literature: While there was an algorithm with the same approximation guarantee, it was not practical, and the previous practical algorithms had worse approximation factors. The authors manage to develop a practical algorithm that matches the best known approximation factor for this problem.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: I like their algorithm; it seems natural and achieves the best known approximation factor for submodular maximization. The paper is very well written and easy to follow and understand. The only weakness is that the greedy minimum algorithm seems to outperform their algorithm in experiments, but since that one does not have an approximation guarantee, their algorithm and its analysis remain interesting.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your comments and the positive feedback on our presentation.
> However, it seems a greedy heuristics algorithm finds almost the same submodular value, while having better running time and query complexity.
> The only weakness is that the greedy minimum algorithm seems to outperform their algorithm in experiments but since that one does not have an approximation guarantee, still their algorithm and its analysis remain interesting.
While the greedy heuristic does obtain high objective value, there is a long line of work on the multiobjective problem which aims to improve upon greedy heuristics (Leskovec et al. ‘10; Chekuri et al. ‘10; Orlin et al. ‘18; Udwani ‘18; Tsang et al. ‘19). We contribute to this line of work by providing the first practical algorithm with the best possible approximation guarantee. Although greedy might perform similarly in practice it does not have any theoretical guarantees, as the reviewer points out. Such guarantees are valuable as they ensure that we consistently perform well on any given instance. In particular, we outperform or match greedy in objective value. | Summary: A combinatorial algorithm for multi-objective submodular optimization is developed, that achieves ratio 1-1/e with constant problem under assumptions on the budget and number of colors. This improves over the best ratio achieved by a combinatorial algorithm in prior work. The algorithm requires several novel ideas. A new application of fair centrality is introduced, and an empirical evaluation of the algorithm is included.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I checked the results in the main text through Section 4.1. I even tried to come up with a simpler version of the algorithm that avoided an LP, which didn't end up working.
Experimental Designs Or Analyses: No.
Supplementary Material: No.
Relation To Broader Scientific Literature: Improves the state-of-the-art ratio for a combinatorial algorithm for this problem.
Essential References Not Discussed: This work isn't essential, but I would appreciate a discussion of [1], as elaborated in strengths and weaknesses.
[1]. Buchbinder, Niv, et al. "Submodular maximization with cardinality constraints." Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms. Society for Industrial and Applied Mathematics, 2014.
Other Strengths And Weaknesses: Strengths:
+ Paper is well written and easy to follow. Contributions are clearly explained and the relationship to prior work is well documented.
+ The main algorithm reminds me a lot of the RandomGreedy algorithm from [1], with some novel differences. RandomGreedy samples a random element from the top k marginal gains to get one with good enough average contribution. This algorithm requires solving an LP to obtain a probability distribution to sample from. Additional ideas are needed to get a bound that holds with constant probability. I do think it would be a good idea to cite [1] and explain the relationship between the algorithms.
+ I found the fair centrality application interesting, and the experimental results clearly show the advantages of the proposed algorithm.
+ I also appreciated an interpretation of fairness in terms of multi-objective optimization, rather than enforcing it through various contraints.
[1]. Buchbinder, Niv, et al. "Submodular maximization with cardinality constraints." Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms. Society for Industrial and Applied Mathematics, 2014.
Weaknesses:
+ Would like to see [1] cited and a discussion of the relationship between the combinatorial algorithm (RandomGreedy) in that paper and this one. Could ideas from RandomGreedy (which works for non-monotone objectives) extend to a non-monotone variant of the algorithm in this paper?
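For a side-by-side comparison, the RandomGreedy step from [1] can be sketched as follows (our own simplified illustration with a toy modular objective, not code from either paper): in each of k rounds it samples uniformly from the k elements with the largest marginal gains.

```python
import random

def random_greedy(universe, f, k, rng=random.Random(0)):
    """Sketch of RandomGreedy (Buchbinder et al., 2014): in each of k
    rounds, sample uniformly among the k largest marginal gains."""
    S = set()
    for _ in range(k):
        ranked = sorted(universe - S,
                        key=lambda e: f(S | {e}) - f(S),
                        reverse=True)
        S.add(rng.choice(ranked[:k]))
    return S

# Toy modular (hence submodular) objective -- illustrative only.
w = {e: (e % 4) + 1 for e in range(8)}
f = lambda S: sum(w[e] for e in S)
S = random_greedy(set(w), f, k=3)
```

Both algorithms randomize the element choice, but RandomGreedy's distribution (uniform over top gains) is fixed in advance to guard against bad elements for a single non-monotone objective, whereas the paper's LP chooses the distribution to guarantee progress across all colors simultaneously.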
Other Comments Or Suggestions: line 107: seting -> setting
It might be a good idea if the assumption B > k, or the approximability of the problem, were discussed earlier. When reading Problem 3.1, I was wondering about this. If B < k, the problem is clearly inapproximable.
Questions For Authors: - See weaknesses.
- Is there a direct transformation from the notion of fairness as a constraint (as in some of the references discussed in the introduction) to the multi-objective notion of fairness? If so, how do the results in those papers compare with this one?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your evaluation and the encouraging comments on the strengths of our paper.
> This work isn't essential, but I would appreciate a discussion of [1], as elaborated in strengths and weaknesses.
> [1]. Buchbinder, Niv, et al. "Submodular maximization with cardinality constraints." Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms. Society for Industrial and Applied Mathematics, 2014.
> I do think it would be a good idea to cite [1] and explain the relationship between the algorithms.
> Would like to see [1] cited and a discussion of the relationship between the combinatorial algorithm (RandomGreedy) in that paper and this one. Could ideas from RandomGreedy (which works for non-monotone objective) extend to a non-monotone variant of the algorithm in this paper?
A similarity is that we sample randomly, but our sampling probabilities are carefully chosen to solve the multiobjective problem. One way to see the difference between the algorithms (in terms of the goal and design of either algorithm) is if we run our algorithm for only a single color. Then our algorithm reduces to greedy, showing that our sampling is designed only to enforce good progress across all colors. Their sampling is (intuitively) intended to avoid adding “bad elements” to the solution. However, we believe it could be a promising idea to combine both sampling strategies to obtain an algorithm for non-monotone multiobjective maximization. We will add such a discussion to our paper.
> line 107: seting -> setting
> It might be a good idea if B > k, or the approximability of the problem were discussed earlier. When reading Problem 3.1, I was wondering about this. If B < k, clearly inapproximable.
In a revised version, we will fix the typo and state after Problem 3.1 that we are interested in the case where the budget is large compared to k ($B>k$).
> Is there a direction transformation from the notion of fairness as a constraint (as in some of the references discussed in the introduction) to the multi-objective notion of fairness? If so, how do the results in those papers compare with this one?
Fairness as a constraint is enforced on the solution set itself. For instance, one can require that the intersection of the solution set with each group (groups are subsets of the ground set) has a certain size. The multiobjective problem is different in that it asks for fairness in the objectives, which corresponds to the notion of max-min fairness on the outcomes. As such, we do not think that one problem can be cast as the other. | null | null | null | null | null | null |
Fundamental Bias in Inverting Random Sampling Matrices with Application to Sub-sampled Newton | Accept (oral) | Summary: This paper looks at how inversion bias (where the inverses of random sketches of a matrix are not unbiased) can be corrected for several random sampling methods. The paper gives an outline of how this can be done, and gives bounds for an $(\epsilon,\delta)$ estimator in Theorem 3.1 The paper makes a commentary on Derezinski's bounds (if I understand this correctly, the paper's derivation uses an alternative proof which beats the bounds given by Derezinski et al (2021a;b)). The paper then gives practical applications of these bounds to de-biased sub-sampled Newton with a better convergence rate (or convergence-complexity) trade-off.
Claims And Evidence: Each claim is supported by proofs in the Appendix. While I did not have the opportunity to check each line thoroughly, each part of the (proofs) in the appendix gives the outline of the proof, and clear explanation for each subsequent step. This makes it straightforward for readers with more detailed background knowledge to follow and check.
Methods And Evaluation Criteria: The datasets are standard benchmark datasets (MNIST and CIFAR-10).
I do have a question about lines 400-402: "Figures 2 and 3 do not include the time for input data pre-processing......"
What is the relative magnitude of the time for input data pre-processing? That is to say, if the relative magnitude of pre-processing is greater than the reduction in wall-clock time for Newton-LESS / SSN-ARLev, then I don't think wall-clock time would be a meaningful metric.
Theoretical Claims: The proofs look reasonable, however I did not carefully check the correctness of them.
Experimental Designs Or Analyses: I would be interested to see the boxplots of the relative errors over the 30 independent runs in Fig 2, and 10 independent runs in Fig 3 for both Newton-LESS and SSN-ARLev, as the relative error for Newton-LESS and SSN-ARLev seem close to each other as the sketch size $m$ increases. This is not to say that the experimental design / analyses is flawed, but I'm curious if the relative errors are skewed in one direction or equally symmetric about the solid lines in both Figures.
Supplementary Material: I took a brief look at the supplementary material (re proofs), but did not have the time to carefully go through it.
Relation To Broader Scientific Literature: The contributions extend the work of randomized numerical algebra, where the paper looks at reducing the inversion bias of random sketches for random sampling methods. Based on the theorems and discussion in the main paper, the paper builds upon and extends (and critiques?) the results in the papers Derezinski 2021a;b using leverage based sampling.
Essential References Not Discussed: No
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: I will admit I am not very familiar with this paper, so I am basing my review solely on the clarity of the writing, some papers I referred to from the references in this paper, and questions from the experimental side. But what I like about this paper is that the proofs are "reproducible", in the sense that the very detailed appendix explains what is happening in each line of the proof, as well as a general outline, hence anyone with sufficient background should be able to check what is happening at each step - unfortunately I am not the best person to do this.
Hence, please take this review with a grain of salt.
Questions For Authors: Please see above for my questions
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her constructive and thoughtful comments. Please find our point-by-point responses below.
**Methods And Evaluation Criteria**:
The datasets are standard benchmark datasets (MNIST and CIFAR-10).
I do have a question about lines 400-402: "Figures 2 and 3 do not include the time for input data pre-processing......" What is the relative magnitude of the time for input data pre-processing? That is to say, if the relative magnitudes of pre-processing is greater than the reduction in wall clock time for Newton-LESS / SSN-ARLev, then I don't think wall-clock time would be a meaningful metric.
**Reply**:
We thank the reviewer for the constructive question (which is in fact shared by Reviewer GJpF).
It is worth noting that various second-order baselines in Figures 2 and 3, including Newton-LESS, ARLev, ALev, SLev, and SRHT, **all** require preprocessing of the data.
Specifically, Newton-LESS requires a Hadamard transform preprocessing with a complexity of $O(nd \log n)$, see [Derezinski2021a].
To ensure a fair comparison, we treat SRHT as a uniform sampling method that undergoes the same random Hadamard transform preprocessing as Newton-LESS.
The leverage-based sampling methods, ARLev, ALev, and SLev, require computing approximate leverage scores before constructing the sketches $\check{\mathbf{S}}\mathbf{A}$ via random sampling, with complexity $O(\text{nnz}(\mathbf{A}) \log n + d^3 \log d)$.
As such, the preprocessing times are in fact **comparable** across these methods, and excluding the time for the Hadamard transform and leverage score computation ensures a fair comparison.
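For concreteness, the leverage-score sampling step being timed here can be sketched as follows; this is a generic illustration of leverage-score row sampling with arbitrary dimensions, not the authors' de-biased construction $\check{\mathbf{S}}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 500, 10, 200
A = rng.standard_normal((n, d))

# Leverage scores are the squared row norms of Q from a thin QR of A;
# they sum to d when A has full column rank.
Q, _ = np.linalg.qr(A)
lev = np.sum(Q**2, axis=1)
pi = lev / lev.sum()                  # sampling probabilities

# Sample m rows i.i.d. with probabilities pi and rescale each chosen
# row by 1/sqrt(m * pi_i), so that E[(SA)^T (SA)] = A^T A.
idx = rng.choice(n, size=m, p=pi)
SA = A[idx] / np.sqrt(m * pi[idx])[:, None]
```

The rescaling makes the sketched Gram matrix unbiased for $\mathbf{A}^\top\mathbf{A}$; the inversion bias discussed in the paper concerns its *inverse*, which this plain construction does not correct.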
Considering the computational overhead associated with the sketching process itself (rather than the preprocessing steps) allows for a clearer illustration of the "convergence-complexity" trade-off across different stochastic optimization methods.
We have reproduced Figures 1 and 2 by adding (for all methods) the preprocessing time, and observed **similar trends**.
In particular, SSN-ARLev achieves a significantly lower overall runtime compared to Newton-LESS and establishes better complexity-convergence trade-off than other baselines.
These additional numerical results and discussions will be included in the revision.
Additionally, for certain random Newton-type solvers, preprocessing is done **only once** at the beginning of the iteration and then reused in subsequent iterations.
For instance, in [Li2024], distributed least squares regression based on federated Newton iterations only performs data preprocessing (such as the Hadamard transform and computing leverage scores) once, and then reuse the obtained results alongside the optimization procedure.
In such cases, the preprocessing time becomes negligible compared to the iteration computation and communication costs.
Thus, we believe wall-clock time remains a meaningful metric.
[Derezinski2021a] Derezinski M, Lacotte J, Pilanci M, et al. Newton-LESS: Sparsification without trade-offs for the sketched Newton update. Advances in Neural Information Processing Systems, 2021, 34: 2835-2847.
[Li2024] Li J, Liu Y, Wang W. Fedns: A fast sketching newton-type algorithm for federated learning. Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(12): 13509-13517.
**Experimental Designs Or Analyses**:
I would be interested to see the boxplots of the relative errors over the 30 independent runs in Fig 2, and 10 independent runs in Fig 3 for both Newton-LESS and SSN-ARLev, as the relative error for Newton-LESS and SSN-ARLev seem close to each other as the sketch size m increases. This is not to say that the experimental design / analyses is flawed, but I'm curious if the relative errors are skewed in one direction or equally symmetric about the solid lines in both Figures.
**Reply**:
We thank the reviewer for the insightful comment.
Our theoretical results say that the convergence rate of SSN-ARLev can approximate, but **not** exceed, the convergence rate of Newton-LESS, see Theorem 4.3.
Moreover, as $m$ increases, the convergence rate of the de-biased sub-sampled Newton method approaches that of Newton-LESS.
Therefore, it is reasonable that in the experimental results the relative errors of Newton-LESS and SSN-ARLev become increasingly similar as $m$ increases.
Our contribution in fact lies in showing that SSN-type methods can indeed achieve (local) **problem-independent convergence rates** (which, to the best of our knowledge, had **never** been established for SSN-type methods) that
(1) can be significantly faster than *standard* SSN; and
(2) match that of Newton-LESS, but with significantly reduced computational complexity, and thus a better complexity-convergence trade-off, as confirmed by Figures 1 and 2.
As suggested by the reviewer, we will include boxplots of the relative errors for both methods, showing how their distributions behave as $m$ increases.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. I will maintain my score. | Summary: The paper studies the row-sampling of matrices in the context of approximating the inverse. Specifically, given an $n \times d$ matrix $A$ and a row-sampling matrix $S$ with $m$ rows, sketching is commonly used to approximate $A^T A$ by $A^T S^T S A$. Prior work by Derezinski et al. showed that the inverse of the (rescaled) sketched matrix, $(\frac{m}{m-d} A^T S^T S A)^{-1}$ is a ($1+\epsilon$)-approximation to $(A^T A)^{-1}$ with only $m = O(d + \sqrt{d}/\epsilon)$. In contrast, a direct approximation of $A^T A$ typically requires $m = d \log d / \epsilon^2$ rows.
This paper generalizes the result of Derezinski et al. by studying how well $(A^T S^T SA + C)^{-1}$ approximates $(A^T A + C)^{-1}$, where $C$ is a PSD matrix. When $C$ is the identity matrix, this encompasses ridge leverage scores. This paper shows that $(A^T S^T SA + C)^{-1}$ is a $(1+\epsilon)$-approximation to $(A^T D A + C)^{-1}$ for some diagonal matrix $D$ depending on ridge leverage scores and sampling probabilities, when the number of sampled rows $m$ satisfies $m \gtrsim d \log(m d) + d/\epsilon^{2/3}$. When $C = 0$ and the sampling probabilities are exactly leverage scores, $D = (m-d)/m \cdot I$, recovering the result of Derezinski et al. The paper then obtains results for some specific constructions of $S$, and applies the new results to the update step of Newton’s method, where it is used to approximate the inverse of the Hessian, and evaluates its performance through numerical experiments.
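The inversion bias is easy to observe numerically. The toy experiment below is our own illustration using a Gaussian sketch, for which the exact de-biasing factor $(m-d-1)/m$ is known from inverse-Wishart theory and is close to the $(m-d)/m$-type rescaling quoted above: averaging the sketched inverses over many trials overshoots $(A^\top A)^{-1}$ until the scalar correction is applied.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, trials = 200, 5, 100, 2000
A = rng.standard_normal((n, d))
true_inv = np.linalg.inv(A.T @ A)

# Average the inverse of the sketched Gram matrix over many sketches.
avg = np.zeros((d, d))
for _ in range(trials):
    SA = (rng.standard_normal((m, n)) / np.sqrt(m)) @ A
    avg += np.linalg.inv(SA.T @ SA)
avg /= trials

err_biased = np.linalg.norm(avg - true_inv) / np.linalg.norm(true_inv)
# Exact inverse-Wishart correction for a Gaussian sketch.
err_debiased = (np.linalg.norm((m - d - 1) / m * avg - true_inv)
                / np.linalg.norm(true_inv))
```

For sampling sketches no single scalar works in general, which is what motivates the diagonal matrix $D$ in the approximation target above.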
Overall, this submission makes a solid contribution to the study of the inverse of row-sampled matrices, which also benefits the acceleration of optimization methods. I recommend acceptance, provided that my questions are adequately answered.
Claims And Evidence: The claims are theoretical and proofs are provided.
Methods And Evaluation Criteria: The numerical experiments look reasonable.
Theoretical Claims: I did not have time to check all the proofs. The following are a few questions:
- Page 4, left column, Line 199: what does it mean to say “subspace embeddings … ensure unbiasedness up to the same level of epsilon but are not guaranteed to remain unbiased for a smaller epsilon”?
- Page 21, inequality (a) (in the middle of the page), can you explain how you get the constant factor of 38?
Experimental Designs Or Analyses: I do not see issues in general, but I do not think one can claim “improved convergence” because the convergence is not improved. The main contribution seems to be improving the runtime at the same convergence. The numerical experiments also show this: the convergence rate of SSN-ARLev looks about the same (at least the rate looks similar) as Newton-LESS in Figure 1 on both datasets.
Supplementary Material: NIL
Relation To Broader Scientific Literature: Research on the inverse of row-sampling matrices appears to be less developed than that on row-sampling matrices themselves. This submission is a valuable contribution to the literature on the inverse of row-sampling matrices and also makes a nice addition to the literature on using sketching to accelerate numerical methods.
Essential References Not Discussed: NIL
Other Strengths And Weaknesses: NIL
Other Comments Or Suggestions: - Page 4, left column, line 204: add a comma after $m$
- Page 4, left column, line 207: as follow -> as follows
- Page 5, left column, line 264: “n, eff” -> “n and d_eff”
- Page 7, right column, line 338: It is better to say “Suppose that F, f: …”.
- Page 16, Eq.(20): A right square bracket is missing for \tilde{Z}_0.
- Page 16, Lines 862-871: to be consistent with (22), it is better to write the event in the subscript, that is, $\mathbf{1}_{\neg\zeta_3}$.
- Page 21, the long list of derivation: Once u and v are separated, there is no need to carry both u and v for supremum; it suffices to have one supremum over u and the other one over v.
Questions For Authors: Please see the sections above.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her constructive and thoughtful comments. Please find our point-by-point responses below.
**Theoretical Claims:**
I did not have time to check all the proofs. The following are a few questions:
1. Page 4, left column, Line 199: what does it mean to say “subspace embeddings … ensure unbiasedness up to the same level of epsilon but are not guaranteed to remain unbiased for a smaller epsilon”?
**Reply:** We apologize for this potentially misleading discussion.
Here, we meant that sketching matrices satisfying the subspace embedding property, with an error $\epsilon$ (say of order $\epsilon = O\left(\sqrt{\rho_{\max} d_{\mathrm{eff}} \log(d_{\mathrm{eff}}/\delta)/m}\right)$ ), can guarantee the **same** level $\epsilon$ of the inversion bias, see Lemma B.1 in the supplementary material for a proof.
It is, however, unclear whether this (upper bound of) inversion bias is tight and/or can be made smaller with refined analysis.
This is in fact the very motivation of the proposed de-biased sampling matrix $\check{S}$, which, under the same sampling size, achieves a **smaller** inversion bias of order $\epsilon = O\left(\sqrt{\rho_{\max}^3 d_{\mathrm{eff}}^3 / m^3}\right)$, see Proposition 3.2, offering an improvement over the standard subspace embedding-type analysis.
2. Page 21, inequality (a) (in the middle of the page), can you explain how you get the constant factor of 38?
**Reply:** We thank the reviewer for the constructive comments.
Regarding the constant factor of 38 in inequality (a) on Page 21, we will provide a detailed explanation.
By combining $H^{1/2} Q_{-s} H^{1/2} \preceq 6 I_d$ and the bound $\| H^{1/2} \widetilde{H}^{-1} H^{1/2} \| \leq 2$, along with $\|v\| = 1$ and $\|u\| = 1$, we can derive the following two inequalities:
$$
\sum^n_{j=1} u^\top \widetilde{H}^{\frac12} Q_{-s} a_{j} a_{j}^\top Q_{-s} \widetilde{H}^{\frac12} u=u^\top \widetilde{H}^{\frac12} Q_{-s} A^\top A Q_{-s} \widetilde{H}^{\frac12} u \leq 36
$$
and
$$
\sum^n_{j=1} v^\top \widetilde{H}^{-\frac12} a_{j} a_{j}^\top \widetilde{H}^{-\frac{1}{2}} v = v^\top \widetilde{H}^{-\frac12} A^\top A \widetilde{H}^{-\frac12} v \leq 2.
$$
Adding these two results gives the constant factor of 38. We will provide these proof details in Section D.2 of the supplementary material in the revised version for clarification.
**Experimental Designs Or Analyses:**
I do not see issues in general but I do not think one can claim “improved convergence” because the convergence is not improved. The main thing seems to be improving the runtime at the same convergence. The numerical experiments also show this: the convergence rate of SSN-ARLev looks about the same (at least the rate looks similar) as Newton-LESS in Figure 1 on both datasets.
**Reply:**
We greatly appreciate the reviewer's helpful feedback and agree that the claim of "improved convergence" is indeed a bit misleading.
As discussed in Section 4 of the submission, the proposed de-biased SSN improves the local convergence rate over the **standard** sub-sampled Newton method, achieving problem-independent convergence rates (which, to the best of our knowledge, had **never** been established for SSN).
It is worth noting that the theoretical convergence rate of de-biased SSN can approximately match, but **not** exceed, that of Newton-LESS (a closely related but different approach relying on random projection), while offering **significantly reduced** computational complexity.
In particular, the proposed SSN-ARLev (which relies on random sampling) offers significant computational efficiency advantages over Newton-LESS, and thus outperforms Newton-LESS in terms of the complexity-convergence trade-off, as confirmed by Figures 1 and 2.
While the numerical experiments in Figure 1 suggest that SSN-ARLev may offer **even better** convergence in certain practical scenarios than the Newton-LESS approach, we are not yet able to provide a theoretical proof of such superiority in convergence rate.
Additionally, we believe that combining the proposed de-biased approach with more efficient sub-sampled newton algorithms, such as the adaptive SSN-type algorithm proposed by [Lacotte2021], could further accelerate convergence.
However, exploring this is beyond the scope of the present work.
[Lacotte2021] Lacotte J, Wang Y, Pilanci M. Adaptive newton sketch: Linear-time optimization with quadratic convergence and effective hessian dimensionality. International Conference on Machine Learning. PMLR, 2021: 5926-5936.
**Other Comments Or Suggestions:**
**Reply:** We thank the reviewer again for the thorough and thoughtful review. All minor issues raised will be addressed in the revised version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications.
1. Please rephrase the sentence to mention explicitly that the subspace embedding has an error $\epsilon$ and the unbiasedness refers to the inversion bias. The current phrasing is mostly unclear.
2. It might be good to mention at some early point in the proof that $A^\top A\preceq H$ and $\tilde{H}\preceq H$. The inequality is then clear.
3. Please make clear in the revised version what the "improved convergence" actually means. Also, I do not see from Figure 1 that SSN-ARLev has clearly better convergence than Newton-LESS; they are almost the same.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer’s timely feedback and constructive comments. Below are our responses to the points raised:
- In the revised paper, we will explicitly mention that the subspace embedding introduces an error $\epsilon$, and clarify that the term *“unbiasedness”* specifically refers to the *inversion bias*. The corresponding explanation will be added after Definition 2.6.
- Furthermore, we will include the technical details that $ \mathbf{A}^\top \mathbf{A}\preceq \mathbf{H}$ and $ \widetilde{\mathbf{H}}\preceq \mathbf{H}$ in Section D.2 of the supplementary material to make the derivation easier to follow.
- We appreciate the reviewer’s comment and apologize again for the lack of clarity in the original claim of *“improved convergence”*. In the revised version, we will clarify this claim at the end of Section 4. Specifically, we will state that compared to the **standard** sub-sampled Newton method, our de-biased SSN method *improves* the local convergence rate; while its theoretical rate is comparable to, but **not better** than, that of Newton-LESS, it **significantly lowers** the computational cost, resulting in a more **favorable complexity–convergence** trade-off. | Summary: The paper studies debiasing scheme for sub-sampled random matrix sketching. It improves prior work by providing a novel random sampling method which has better convergence result. Experiments are presented to corroborate theoretic results.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Not in full details
Experimental Designs Or Analyses: Not in full details
Supplementary Material: No
Relation To Broader Scientific Literature: The key contribution is mainly a new random sampling scheme achieving better debiasing result, and connection between the new scheme and prior method (the scalar debiasing scheme, and phase transition, etc). It's a more refined version of prior debiasing literature.
Essential References Not Discussed: Newton Meets Marchenko-Pastur: Massively Parallel Second-Order Optimization with Hessian Sketching and Debiasing (ICLR 2025, https://arxiv.org/abs/2410.01374)
Optimal Shrinkage for Distributed Second-Order Optimization (ICML 2023, https://arxiv.org/abs/2402.01956)
Other Strengths And Weaknesses: The paper is well-written, theoretically sound, and the experiments are clearly presented. I personally appreciate that the authors provide a lot of comments and discussions / remarks linking to prior findings, pointing out what's the similarity and what's the difference, which offers a comprehensive perspective for readers.
Other Comments Or Suggestions: n/a
Questions For Authors: 1. In Definition 2.3, $\rho_\min=\min_{1\leq i\leq n} \ell_i^C/(\pi_id_{\text{eff}})$, isn't $\pi_id_{\text{eff}}=\ell_i^C$ and thus cancels the numerator? Also, how does this sampling factor related to $S$, I think it's just how much we weigh each row of $A$ when we sample, but the Definition sounds like $\rho_\min$ is also related to $S$?
2. In Figure 2, the authors present numerical results showing the proposed method is much more efficient compare to Newton-LESS, I feel it would be more helpful if the authors can discuss more about the baseline and what's the main computation overhead there.
3. Also in the experiments, the authors mention they exclude the time for computation of exact or approximate leverage scores; does it mean the main workload of computing $\check S$ is removed? Is there any justification for such exclusion, i.e., is it a fair thing to do?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her constructive and thoughtful comments. Please find our point-by-point responses below.
**Essential References Not Discussed**:
Newton Meets Marchenko-Pastur: Massively Parallel Second-Order Optimization with Hessian Sketching and Debiasing (ICLR 2025, https://arxiv.org/abs/2410.01374)
Optimal Shrinkage for Distributed Second-Order Optimization (ICML 2023, https://arxiv.org/abs/2402.01956)
**Reply**: We thank the reviewer for pointing to the two helpful references.
They will be included and discussed in the revised version of the paper.
**Questions For Authors**:
1. In Definition 2.3,$ \rho_{\min} \equiv \min_{1\leq i \leq n}\ell_i^{C}/(\pi_i d_{\mathrm{eff}})$, isn't $\pi_i d_{\mathrm{eff}}=\ell_i^{C}$ and thus cancels the numerator? Also, how does this sampling factor related to $S$, I think it's just how much we weigh each row of $A$ when we sample, but the Definition sounds like $\rho_{\min} $ is also related to $S$?
**Reply**: We thank the reviewer for this question.
In Definition 2.3, we define $\rho_{\min} \equiv \min_{1 \leq i \leq n} \frac{\ell_i^{\mathbf{C}}}{\pi_i d_{\mathrm{eff}}}$.
While for **exact (ridge) leverage score sampling** (when the random sampling probability $\pi_i$ is chosen as $\ell_i^{\mathbf{C}}/d_{\mathrm{eff}}$), one has $\pi_i d_{\mathrm{eff}} = \ell_i^{\mathbf{C}}$, which cancels the numerator, this is **not** true for generic random sampling methods.
The parameter $\rho_{\min}$ is introduced to capture the interaction between the choice of sampling probabilities $\pi_i$ and the data structure of $\mathbf{A}$ (e.g., its leverage scores) and how it affects the associated random sampling inversion bias.
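To illustrate this numerically (our own toy example, not from the paper): with $\mathbf{C}=0$ the leverage scores sum to $d_{\mathrm{eff}}=d$, so exact leverage-score sampling gives $\rho_{\min}=1$, while a mismatched choice such as uniform sampling on a matrix with one high-leverage row drives $\rho_{\min}$ well below 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 5
A = rng.standard_normal((n, d))
A[0] *= 20.0                          # create one high-leverage row

Q, _ = np.linalg.qr(A)
lev = np.sum(Q**2, axis=1)            # leverage scores (C = 0)
d_eff = lev.sum()                     # equals d for full column rank

pi_lev = lev / d_eff                  # exact leverage-score sampling
pi_uni = np.full(n, 1.0 / n)          # uniform sampling

rho_min_lev = np.min(lev / (pi_lev * d_eff))   # = 1: numerator cancels
rho_min_uni = np.min(lev / (pi_uni * d_eff))   # < 1: low-leverage rows undersampled
```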
2. In Figure 2, the authors present numerical results showing the proposed method is much more efficient compare to Newton-LESS, I feel it would be more helpful if the authors can discuss more about the baseline and what's the main computation overhead there.
3. Also in the experiments, the authors mention they exclude the time for computation of exact or approximate leverage score, does is mean the main workload of computing $\check S$ is removed? Is there any justification for such exclusion, i.e., is it a fair thing to do?
**Reply**:
We thank the reviewer for the comments.
It is worth noting that various second-order baselines in Figures 2 and 3, including Newton-LESS, ARLev, ALev, SLev, and SRHT, **all** require preprocessing of the data.
Specifically, as performed in [Derezinski2021a], Newton-LESS first performs the Hadamard transform preprocessing on the data, and then generates a LESS-uniform sketching matrix $\mathbf{S}$ to obtain the sketch $\tilde{\mathbf{A}} = \mathbf{S} \mathbf{A}$.
The computational costs of these two steps are $O(nd \log n)$ and $O(msd)$, where $s$ represents the sparsity level of (each row in) $\mathbf{S}$. To ensure a fair comparison, we treat SRHT as a uniform sampling method that undergoes the same random Hadamard transform preprocessing as Newton-LESS.
The three leverage-based sampling methods, ARLev, ALev, and SLev, all involve the computation of approximate leverage scores and the sampling process, with complexities of $O(\text{nnz}(\mathbf{A})\log n + d^3 \log d)$ and $O(m + md)$, respectively.
Thus, the preprocessing time is in fact **comparable** across all baseline methods, and by
excluding the time for the Hadamard transform and leverage score computation, we ensure a **fair comparison** when evaluating these methods.
Considering the computational overhead associated with the sketching process itself (rather than the preprocessing steps) allows for a clearer illustration of the "convergence-complexity" trade-off across different stochastic optimization methods.
We have reproduced Figures 1 and 2 by adding (for all methods) the preprocessing time, and observed similar trends.
In particular, SSN-ARLev achieves a significantly lower overall runtime compared to Newton-LESS and establishes better complexity-convergence trade-off than other baselines.
These additional numerical results and discussions will be included in the revision.
[Derezinski2021a] Derezinski M, Lacotte J, Pilanci M, et al. Newton-LESS: Sparsification without trade-offs for the sketched Newton update. Advances in Neural Information Processing Systems, 2021, 34: 2835-2847.
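For concreteness, the randomized-Hadamard preprocessing plus uniform row subsampling that defines SRHT can be sketched in NumPy as follows (our own illustration with arbitrary dimensions and seed; a practical implementation would apply a fast $O(nd\log n)$ Walsh–Hadamard transform rather than the explicit matrix product used here):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 256, 8, 64   # n must be a power of two for the Hadamard transform

A = rng.standard_normal((n, d))

def hadamard(k):
    """Sylvester construction of the k x k Hadamard matrix (k a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < k:
        H = np.block([[H, H], [H, -H]])
    return H

# Randomized Hadamard preprocessing H D A; with a fast transform this step
# costs O(n d log n), as discussed above.
D = rng.choice([-1.0, 1.0], size=n)
HDA = hadamard(n) @ (D[:, None] * A) / np.sqrt(n)

# SRHT sketch: uniformly subsample m rows and rescale so that E[S^T S] = I.
idx = rng.choice(n, size=m, replace=False)
sketch = np.sqrt(n / m) * HDA[idx]

# The sketched Gram matrix approximates A^T A.
err = np.linalg.norm(sketch.T @ sketch - A.T @ A) / np.linalg.norm(A.T @ A)
```

Since $\mathbf{H}\mathbf{D}/\sqrt{n}$ is orthogonal, the Gram matrix is preserved exactly by the preprocessing; all the approximation error comes from the subsampling step, which is the part whose overhead the "convergence-complexity" comparison isolates.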
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I'll keep my evaluation. | Summary: This paper analyzes and corrects the inversion bias arising from common random sketching methods, including uniform/weighted sampling (based on leverage scores) and structured fast Johnson-Lindenstrauss transform like the sub-sampled randomized Hadamard transform (SRHT). Via a fine-grained characterization of inversion bias, it is demonstrated that for certain structured methods like (approximate) leverage-based sampling and SRHT, the bias can be effectively corrected through careful diagonal-wise rescaling. Leveraging these theoretical insights, the paper further establishes the first problem-independent local convergence rates for sub-sampled Newton methods, closely matching those obtained with dense Gaussian sketches. Empirical evaluations support these theoretical findings.
Claims And Evidence: Yes, the main theoretical claims regarding the characterization and correction of inversion bias, as well as the convergence rates for sub-sampled Newton, are clearly presented. While I couldn't verify all the proofs in the appendix, with a quick scan, the proofs seem reasonable, and the intuitions provided at the beginning are helpful. The empirical results are also convincing and support the theoretical claims.
Methods And Evaluation Criteria: Yes, the application of the proposed de-biasing method to sub-sampled Newton is well-aligned, providing convincing evidence for the broader applicability of the theoretical results.
Theoretical Claims: While I couldn't verify all the proofs in the appendix, the main theoretical results in the paper seem reasonable. The proofs are well-structured, and the intuitions provided at the beginning are helpful.
Experimental Designs Or Analyses: This is a theoretical paper with a few synthetic numerical experiments. The experiments are simple but sufficient to support the theoretical claims.
Supplementary Material: Not applicable.
Relation To Broader Scientific Literature: Yes, this work provides a fine-grained characterization of inversion bias in common random sketching methods leveraging RMT techniques. It extends the existing results of Derezinski et al. (2021a;b) on approximately unbiased sparse embeddings to more common random sketching methods. Applying the resulting de-biasing method to sub-sampled Newton leads to considerable improvement compared to Lacotte & Pilanci (2019), Dereziński et al. (2021a), etc., both theoretically, in terms of the convergence guarantee, and empirically.
Essential References Not Discussed: To my knowledge, the paper discussed the essential references in the field.
Other Strengths And Weaknesses: Overall, I think this is a valuable theoretical study on inversion bias in random sketching methods. The paper is well-structured and easy to follow. The proposed de-biasing method is theoretically sound and practically useful. The application to sub-sampled Newton provides a good example of the broader applicability of the proposed de-biasing method. Some minor weaknesses are discussed below in the "Other Comments" and "Questions For Authors" sections.
Other Comments Or Suggestions: Here are some minor points/questions:
1. Are the constants $C$ in the theorems, propositions, and lemmas in Section 2 and 3 the same, or at least on the same order? It would be helpful to clarify this in the paper.
2. The preliminary in Section 2 seems to be a bit long, especially given the 8-page limit of the main text. Compared to the preliminaries, some proof sketches and intuitions in the appendix seem to be more insightful and may deserve a place in the main text.
Questions For Authors: Are the log factors in $m$ in most of the theorems, propositions, and lemmas in Sections 2 and 3 a consequence of the independent sampling of the $m$ rows (uniformly or according to leverage scores)? If so, would it be possible to remove the log factors via dependent sampling, e.g., volume sampling, or some faster alternatives like adaptive leverage score sampling (see, e.g., [1])? It would be insightful to provide some discussion on this, even just as future directions.
[1] Cortinovis, Alice, and Daniel Kressner. "Adaptive randomized pivoting for column subset selection, DEIM, and low-rank approximation." arXiv preprint arXiv:2412.13992 (2024).
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her constructive and thoughtful comments. Please find our point-by-point responses below.
**Other Comments Or Suggestions**:
1. Are the constants C in the theorems, propositions, and lemmas in Section 2 and 3 the same, or at least on the same order? It would be helpful to clarify this in the paper.
**Reply**:
While the constants $C$ in the theorems, propositions, and lemmas in Sections 2 and 3 are not identical, they are all *absolute constants* independent of the dimensions $n$, $d_{\mathrm{eff}}$, and $m$.
2. The preliminary in Section 2 seems to be a bit long, especially given the 8-page limit of the main text. Compared to the preliminaries, some proof sketches and intuitions in the appendix seem to be more insightful and may deserve a place in the main text.
**Reply**: We thank the reviewer for this constructive comment.
We agree that the current preliminaries section is a bit too long and that the proof sketches and intuitions (currently in the appendix) should be moved to the main text to better highlight the connection between RMT and RandNLA.
Due to the 8-page limit for the submission, we were unable to include them in the current version.
If the paper gets accepted, we will revise the content accordingly (with the one additional page) in the camera-ready version by incorporating the proof sketches and intuitions from Appendices D.1 and E.1 into the main text, placing them directly after Theorem 3.1 and Proposition 3.2, respectively.
**Questions For Authors**:
Are the log factors in $m$ in most of the theorems, propositions, and lemmas in Sections 2 and 3 a consequence of the independent sampling of the $m$ rows (uniformly or according to leverage scores)? If so, would it be possible to remove the log factors via dependent sampling, e.g., volume sampling, or some faster alternatives like adaptive leverage score sampling (see, e.g., [1])? It would be insightful to provide some discussion on this, even just as future directions.
[1] Cortinovis, Alice, and Daniel Kressner. "Adaptive randomized pivoting for column subset selection, DEIM, and low-rank approximation." arXiv preprint arXiv:2412.13992 (2024).
**Reply**:
We thank the reviewer for the constructive comment and the reference.
We agree with the reviewer's perspective.
As shown in Section B of the supplementary materials, the subspace embedding property in Lemma 2.7 relies on the independence of the random row sampling procedure.
This implies that the logarithmic factors in $m$ present in most of the theorems, propositions, and lemmas in Sections 2 and 3 arise from the independent sampling of the $m$ rows (including, but not limited to, uniform sampling or leverage score-based sampling).
As such, it is very possible that dependent sampling methods, such as volume sampling or more efficient techniques like adaptive leverage score sampling (see, e.g., [1]), may provide tighter bounds and potentially eliminate the logarithmic factors.
For now, we are not able to include these efficient methods in the proposed analysis framework, since our current proof approach strongly relies on the independence of random sampling.
We also agree with the reviewer that these dependent sampling methods could yield better computational efficiency and/or convergence rate for SSN.
We leave them for future work.
We will include a discussion of these methods in the conclusion section (Section 6) of the revised version, addressing the potential benefits and challenges of using volume sampling, adaptive leverage score sampling, or other dependent sampling approaches.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for addressing my questions and concerns. I will maintain my evaluation. | null | null | null | null | null | null |
Learning curves theory for hierarchically compositional data with power-law distributed features | Accept (poster) | Summary: This paper uses a theoretical framework to explain the emergence of neural scaling laws. It builds on the synthetic model and results derived in previous works on classification (Cagnetta et al., 2024) and next-token prediction (NTP) (Cagnetta & Wyart, 2024) that studied how hierarchical tasks are learned. The authors use the Random Hierarchy Model (RHM), a type of probabilistic context-free grammar, to generate synthetic data with a controllable hierarchical structure. This allows them to analyze how the loss changes as the training dataset grows in size.
The previous two works focused on the case where the hierarchical rules generating the data were uniformly distributed. This paper extends those analyses to the case where the rule probabilities follow a power-law distribution (at a single level of the hierarchy). In the case of classification tasks, the authors show that the scaling law exponent depends on the rule distribution. However, for next-token prediction, they argue that the exponent remains unchanged across different power-law distributions. While the decay rate of the loss remains the same regardless of the exponent, the rule distribution affects the final loss value, which corresponds to the entropy lower bound. They support their claims with both theoretical analysis and experiments on RHM models.
Claims And Evidence: The derivation of their theoretical results relies on an assumption (Assumption 3.1), which is not rigorously proven but is intuitively used. This assumption states that as long as the correlation between features is distinguishable from noise, the model can correctly predict the class or next token. Based on this assumption, the authors, without proof, use the fact that if the correlations are statistically resolvable, the model can always correctly classify the sequences in the case of classification, or that the cross-entropy loss matches the loss of an $\ell$-gram model in the case of next-token prediction (a similar assumption is made in Cagnetta & Wyart (2024)).
Apart from this assumption, the rest of the theory is solid and extends the asymptotic analysis from the previous works by Cagnetta & Wyart (2024) and Cagnetta et al. (2024) to the case where the rules have non-uniform distribution.
The empirical results validate the accuracy of this asymptotic analysis for CNNs/Transformers trained on the RHM data.
Methods And Evaluation Criteria: The theory and empirical results support the scope and claim of the paper for the RHM setup.
Theoretical Claims: (I followed the proof sketch from the main body but skipped the details in the supplementary material. See "Claims And Evidence".)
Experimental Designs Or Analyses: The experimental analysis fits within the scope of the paper and supports its claims.
(See "Weaknesses" for further comments on the experiments.)
Supplementary Material: I reviewed the additional experimental details in the supplementary material.
I did not go through the full proof, as it follows the approach in Cagnetta & Wyart (2024) and Cagnetta et al. (2024). Instead, I relied on the proof sketch in the main body, which highlights the differences due to the different rule distribution.
Relation To Broader Scientific Literature: There is a substantial body of empirical papers on scaling laws. As also listed in the paper, many previous works explain the scaling laws from a theoretical perspective in simplified setups like linear/kernel regression.
This paper builds on the previous works by Cagnetta & Wyart (2024) and Cagnetta et al. (2024), which use the RHM to study learning from hierarchical data. These earlier works derived loss behavior and scaling laws under the assumption of a uniform rule distribution. This paper extends that analysis to consider non-uniform rule distributions.
Essential References Not Discussed: None that I’m aware of.
Other Strengths And Weaknesses: **Strengths**:
The synthetic setup is nice: compared to previous papers that focus on regression tasks to derive scaling laws, the RHM provides a closer model to language, where the problem consists of discrete vocabularies and sequential data. While it cannot perfectly capture all aspects of natural language, it offers a nice, controllable framework that models the hierarchical and tokenized nature of language.
**Weaknesses**:
1. One aspect that needs to be emphasized in the main body is the training setup. Scaling laws (e.g, as in Kaplan et al. (2020)) can be derived in different setups depending on whether data, model size, or FLOPs is the bottleneck.
I assume that in this paper data is the bottleneck, while model size and FLOPs are flexible (can be unbounded). It is mentioned in App. A that the classification experiments are run for a large number of epochs. However, this detail seems to be missing for the next-token prediction experiments. Whether the model is trained for a single pass or multiple epochs, as well as the size of the model, might influence the behavior of the scaling laws. Specifically, it would be useful to know whether the training length was long enough and the model size large enough for the training loss to reach its lower bound, or whether the experiments were conducted in the under-trained regime. Additionally, it would be helpful to know whether the test loss for each given dataset size $P$ has converged.
2. The emergence of scaling laws for NTP in the case of RHM, due to the hierarchical structure of the data, was already demonstrated in Cagnetta & Wyart (2024). In that sense, this paper does not offer additional insights for NTP. However, in the case of classification, the observation that changing the distribution of the rules can alter the scaling behavior is an interesting observation.
Other Comments Or Suggestions: Minor typos:
- line 217 first column, Theorem in "Assumption Theorem 3.1" is extra.
- Line 159, second column, have you defined $n_c$ in the expression for $P^*(\mathbf{\mu})$?
- From what I understand, the analysis is for the joint expectation of the loss over the realization of the RHM and the sequences $x$. Is that also the case in Eq. (14) (and (15))? The current notation in the equation only takes the expectation over the sequences.
Questions For Authors: In Sec 3.1, the different loss behavior in the case of classification is attributed to the difficulty of resolving the correlations for the rare production rules. A few questions regarding this argument:
1. In the figures, the larger $a$ is (further away from the uniform distribution), the error/loss decays faster in both cases. So, is having the Zipf's distribution over the rules making the task easier?
2. The same argument about the rare rules and correlation should hold true in the case of NTP as well. But why isn't the behaviour changing?
3. Given that your analysis shows the scaling behavior is unchanged, can you explain the source of the gap between the loss curves for different values of $a$ in the NTP task? Based on the analysis, the only thing that should change with $a$ is the cross-entropy lower bound, which is not surprising since changes in the distribution affect the entropy of the data.
4. Can you explain what happens with $a=-1$ (uniform rule distribution) in the classification case? Eq (7) cannot be evaluated at this value.
**Minor question:**
5. In Equation (6), is the right-hand side (RHS) an equality, or is it only an upper bound on the error? The RHS measures when all patches are resolvable, but a model could still make a correct prediction by resolving the majority of patches correctly, right?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## Weaknesses
1. As the reviewer points out, we work in the data-limited regime, which is suitable for studying learning curves. In all our experiments, the models are large enough and trained for long enough for the training loss to reach its lower bound. In particular, for consistency, we increase the number of parameters (width for CNNs and number of heads for transformers) until the performance no longer depends on this number. Notice that, since our models are overparametrized, their test loss increases towards the end of training due to overfitting. To solve this problem, we employ early stopping, i.e. select the training step where the model gave the best performance over a validation set of size between ~32K and ~131K (depending on the Zipf exponent $a$). Studying how many parameters are needed is interesting, but we view it as less fundamental than the present work. The number of parameters will presumably depend on the detailed choice of architecture (transformers vs CNNs, depth vs width, etc.). In contrast, we expect our present results to hold for deep enough architectures, including CNNs and transformers (as shown in the absence of a Zipf distribution in Cagnetta et al., 2024). ***In the revised manuscript***, we will emphasize further that our results apply to the data-limited regime, and comment in the conclusions about the possibility of studying the dependence on FLOPs and model size.
2. We think that the fact that the scaling law of NTP does not depend on the Zipf law is a surprising and interesting result. This result is particularly relevant for a whole portion of the theoretical literature on scaling laws, which assumes that scaling results from the Zipf distribution of the data features. In particular, our result suggests that the approach based on the feature distribution is not the right one for NTP, although it may be suitable for classification.
## Questions
1. Rather than making the task easier, having Zipf-distributed production rules results in the separation of data into different categories. Those having only the most frequent production rules ($k=1,2,\dots$) are easier to learn compared to the uniform case $f_k=1/m$, whereas those having some rare production rules $k=m,m-1,\dots$ are harder. As a result, the sigmoidal learning curve of the uniform case turns into a power law. ***In the revised manuscript***, we will add the learning curves of classification in the uniform case to Figures 2 and 7 to clarify this point.
2. The behaviour of the individual steps is indeed changing, as highlighted in Figure 4. However, when considering a sequence of steps, the overall scaling is determined by the distance between the steps (which is controlled by the hierarchical structure) rather than the behaviour of the single step (controlled by the Zipf law).
3. We should stress that our result is asymptotic, so it does not apply to the very first step. That said, what changes with $a$ is not only the cross-entropy lower bound (which causes a vertical shift of the curves), but also the sample complexities of the steps from Eq. (13) (causing a horizontal shift of the curves). However, the way that such sample complexities depend on the level of the step is independent of $a$: that's why the scaling law remains unchanged.
4. In the uniform case $f_k=1/m$ for all $k$'s. There is only one sample complexity $P^*=v m^L$, and the test error crosses over from $\simeq 1$ to $\simeq 0$ around $P^*$. Following the suggestion of Reviewer gXnS, we will add further background material on Cagnetta et al., 2024, that clarifies this point.
5. The reviewer is correct that it is an upper bound. Depending on the value of $m/v^{s-1}$, the root can still be inferred correctly without resolving all of the patches. The probability for this to happen is derived in (Sclocchi et al., PNAS 2024) for an optimal decoder. However, notice that, if $m=v^{s-1}$ as in figures 2 and 3, changing one patch changes the root label with high probability, implying that all patches need to be resolved. ***In the revised manuscript***, we will add a sentence after equation (6) to clarify this point.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification.
I’m increasing my score to 3. The discussion in the rebuttal would be a valuable addition to the paper, particularly in clarifying the exact setup for the scaling law and making the discussions more self-contained, as suggested by Reviewer gXnS.
The setup and methodology rely heavily on previous work (Cagnetta & Wyart, 2024; Cagnetta et al., 2024), making the contribution somewhat incremental. However, the results are sound and relevant to ongoing theoretical discussions. The contrast between classification and NTP is interesting, and studying the impact of long-tailed feature distributions in these setups could be useful to the community. | Summary: - Main algorithmic/conceptual ideas:
- Mostly based on the framework of Cagnetta et al., 2024.
- Main results/findings:
- This paper shows that when the features follow a Zipf distribution (which is common in real-world scenarios with long-tailed distributions), the learning curve of classification takes a power-law shape instead of the sigmoidal shape obtained under a uniform distribution.
- The curve of next-token prediction does not change under the Zipf assumption.
Claims And Evidence: Empirical results support all theoretical claims, which is good. For the discussion of theoretical claims, see relation to broader scientific literature section.
Methods And Evaluation Criteria: - Methods: N/A because this is a theory paper.
- Evaluation Criteria: The scaling law is verified with synthetic dataset. Since it's designed for the purpose of verifying theoretical statements, this is acceptable.
Theoretical Claims: Yes, for all claims made in the main text. The writing of the paper makes it a bit hard to judge theoretical claims made in the paper. Also see relation to broader scientific literature section.
A couple of questions about the theory part:
- What does "compatible" mean? Line 86 says: "training data is compatible with both a PCFG and a nonhierarchical generative model", Line 343 says "the set of terminal tokens compatible with the context of size ...", and there are 13 mentions of "compatibility" in this submission. They don't seem to share the same meaning. What is "compatible" with what?
- Line 70: Even if power laws exist everywhere in practice, what does it imply to model a Zipf distribution at each layer of your tree?
- Line 130: "In this case,... last token $x_d$" which 2-tuples on the figure are considered in the next-token prediction task?
- Line 132: Why does $f_{k(\mu)}$ depend on the rank (in your subscript; this is also inferred from line 148, and no information is given before line 132)? Also, what are you ranking exactly? Is it related to line 161, where you later write "Ranking all the low-level tuples by the probability of the corresponding production rules"?
Experimental Designs Or Analyses: Experiments mostly look good to me. One question: for Figure 6, what is the black dashed line? Why is this black dashed line a better prediction for the top three colored curves compared to the red dashed line representing Eq. 11?
Supplementary Material: No.
Relation To Broader Scientific Literature: - Based on the whole paper, it seems that this paper is mainly built on two papers, Cagnetta et al., 2024 (learning curve theory based on RHM) and Hutter, 2021 (standard learning curve theory).
- However, I feel the current description of these two papers are insufficient to let the reader understand the current submission without checking these two background papers carefully.
- For example, more background for these papers can be very helpful to solve the following questions:
- How did you derive Eq. (4)?
- Line 147: where does the quadratic term of $f_{k(\mu)}$ come from?
- Line 155: Why do you need to divide $P$ by $f_{k(\mu)}m$?
- Line 192: Could you elaborate on how to compute two mentioned correlations with Cagnetta et al., 2024? What claims in Cagnetta et al., 2024 are used?
- Line 262: What's the framework in Hutter, 2021 specifically and the relationship to your paper?
- Equation 13: This seems to come from rearranging some terms in Eq. 12, but how do you derive Eq. 13 exactly, and what is the meaning of $P_{l,k}$?
Essential References Not Discussed: No I'm not aware of any missing essential references.
Other Strengths And Weaknesses: - Originality: This paper extended Cagnetta et al., 2024 and I believe this is a novel contribution. The motivation is clear, although I'm not sure how is your specific choice to inject Zipf distribution related to the practical problem you want to solve. See the question for Line 70.
- Significance:
- Given the limitation the authors themselves acknowledge, namely that their RHM model might still be a simplification of real-world data, I want to ask how your conclusions can guide practitioners.
- We see that all proved scaling laws are verified empirically, so I don't have too many concerns about the correctness of the statements. This is a big strength of the paper.
- Clarity: has some room for improvement. See Relation To Broader Scientific Literature. Also, I noticed that you do have space at the end of the 8th page, so it would be better to elaborate more on the background papers. I would recommend the authors make sure all knowledge required to understand this paper is self-contained.
Other Comments Or Suggestions: - Line 125: It's better to clarify $s$ represents a binary tree and $L$ is the height of the root node $-1$ if my understanding is correct.
- Line 339: replace the normalization notation, as $\mathcal{N}$ is often used to denote the Gaussian distribution.
- Comment on writing: It would be better to add a preliminary section before Section 3 to incorporate all material preceding the nonuniform scenario, because these are all background materials. The current sections are a bit confusing: when I read the paragraph starting from line 110, it is hard for me to tell whether this is your contribution or comes from Cagnetta et al., 2024, which you only cite in the early sentences of the paragraph. Alternatively, use one sentence before Section 3 to tell readers what to expect in the following section.
Questions For Authors: See all comments above. The current score is mainly based on the writing issues, but this could be addressed in the rebuttal session if the authors can provide convincing explanations for the questions about theoretical technical details.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: ## Theoretical claims
1. **compatible**. In line 86, compatible means that the sentences in the training data can be generated both by a PCFG and by a non-hierarchical generative model. The next 4 occurrences of "compatible" in the main refer to compatibility of a single token ($x_d$) with the context ($x_1,\dots,x_{d-1}$), where compatibility is defined by the full string ($x_1,\dots,x_d$) belonging to the set of strings generated by the RHM. The next two occurrences refer to a different kind of compatibility, i.e. the one of an empirical curve that follows a theoretical prediction within the error bars. ***In the revised manuscript***, we will define the compatibility of a token with the context as above, and we will use "agrees with the theory" instead of "is compatible with the theory" when referring to empirical curves.
2. **Zipf for each layer**. Having Zipf production rules at all levels would change the distribution of the input features, as their probability will be given by the product of several Zipf-distributed frequencies. Nevertheless, the learning curve can still be described via Assumption 3.1. In particular, we believe that the main conclusion of our paper would still hold: the exponent of the scaling law is controlled by the Zipf distribution of production rules for classification and by the hierarchical structure for next-token prediction.
3. **Which $2$-tuples**. All the complete $2$-tuples are considered: $(x_5,x_6)$ for reconstructing $\mu_3^{1}$, $(x_3,x_4)$ for $\mu_2^{1}$ and $(x_1,x_2)$ for $\mu_1^{1}$. However, unlike the classification case, the correlations are not equal for all tuples: $(x_5,x_6)$ is closer in tree distance to the missing token, so it has a higher correlation. As a result, the hidden variable $\mu_3^{1}$ can be inferred with less samples than $\mu_1^{1}$ and $\mu_2^{1}$.
4. **Dependency on the rank**. The rank $k$ referred to here was introduced in assumption *iv)* of Section 2, $k\,{=}\,1,\dots,m$ from the most to the last likely production rule. Due to the non-ambiguity assumption, the rank determines uniquely the low-level tuple $\mathbf{\mu}$ produced, thus $k=k(\mathbf{\mu})$. This is indeed connected to the sentence "Ranking all the low-level tuples by the probability of the corresponding production rules". ***In the revised manuscript***, we will clarify this point while discussing the derivation of Eq. (4) (see relation to literature paragraph).
## Experimental designs
The black dashed line in Figure 6 represents the prediction in the uniform case and should only describe the top coloured curve (solid blue line). However, as we claim in the paper, all curves display the same asymptotic decay with $P$, as highlighted by the red dashed curve. This curve is only supposed to represent the asymptotic power-law decay emerging from the sequence of steps. Its absolute position has been shifted by adding a $P$-independent constant for visualisation purposes.
## Relation to broader literature
As the reviewer suggests, ***in the revised manuscript***, we will add a section before 3, including,
1. Further background on Cagnetta et al., 2024, in particular concerning the calculation of correlations joint probabilities like Eq. (4) using the CFG structure, and the derivation of the sample complexities required for the accurate measurement of said correlations. In simple terms, the joint probability of the leaves conditioned on the class is given by the product of the probabilities of all the production rules used. Then, correlations involving a specific tuple of leaves or a specific production rule are obtained by marginalisation. The corresponding sample complexity is derived by studying how the empirical correlations measured with $P$ training data converge to the true correlations as $P$ increases. This addition would solve questions about Eq. (4), line 147, line 155, line 192 and Eq. (13).
2. Further background on Hutter, 2021. Hutter ranks all the input data according to their probability, and assumes that a datum is correctly classified only if it appears in the training set (memorisation). We, instead, rank data according to the probability of the associated production rules, and assume that data can be correctly classified when all these production rules can be resolved.
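The calculation sketched in point 1 above — the joint leaf probability as a product of production-rule probabilities, with correlations obtained by marginalisation — can be illustrated with a toy two-level grammar (all symbols and probabilities invented for this sketch, not taken from the paper):

```python
# Toy two-level grammar: class -> pair of hidden symbols, each hidden
# symbol -> pair of leaves. The joint probability of a leaf tuple is the
# product of the probabilities of all production rules used.
rules_top = {("A", "B"): 0.6, ("B", "A"): 0.4}
rules_hidden = {
    "A": {("x", "y"): 0.7, ("y", "x"): 0.3},
    "B": {("x", "x"): 0.5, ("y", "y"): 0.5},
}

leaf_probs = {}
for (h1, h2), p_top in rules_top.items():
    for t1, p1 in rules_hidden[h1].items():
        for t2, p2 in rules_hidden[h2].items():
            leaves = t1 + t2
            leaf_probs[leaves] = leaf_probs.get(leaves, 0.0) + p_top * p1 * p2

assert abs(sum(leaf_probs.values()) - 1.0) < 1e-12  # proper distribution

# Correlations involving a specific leaf follow by marginalising the rest.
p_first_x = sum(p for leaves, p in leaf_probs.items() if leaves[0] == "x")
assert abs(p_first_x - 0.62) < 1e-12  # 0.6*0.7 + 0.4*0.5
```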
## Significance for practitioners
Despite the simplicity of the data model, our results can be directly tested in real scenarios, for instance, by training LLMs on real data where the rarest tokens/words/grammatical constructions have been removed. Such empirical studies could suggest practical directions to explore, for example, curriculum learning approaches where the fraction of rare words or grammatical constructions would be varied during training. For RHM data, such a procedure will not alter the learning curve exponent but may change the prefactor.
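The empirical test proposed above — removing the rarest tokens before training — could be sketched along these lines (toy corpus and cutoff invented; one would then retrain and compare learning curves):

```python
from collections import Counter

# Toy sketch: drop tokens rarer than a frequency threshold. Our prediction
# is that retraining on the filtered corpus leaves the learning-curve
# exponent unchanged and only moves the prefactor.
corpus = "the cat sat on the mat while the cat ran".split()
counts = Counter(corpus)

min_count = 2  # invented cutoff: keep tokens seen at least twice
filtered = [tok for tok in corpus if counts[tok] >= min_count]

assert "the" in filtered and "cat" in filtered
assert "sat" not in filtered and "ran" not in filtered
```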
## Other comments
We will clarify line 125 and replace $\mathcal{N}$ in line 339.
---
Rebuttal Comment 1.1:
Comment: Thanks for authors' rebuttal! All my questions for technical details were answered, but
1. since the work is heavily based on previous work, it's hard for me to justify the relative significance of your contribution, even if it's novel.
2. rewriting to include further background as you listed requires a nontrivial amount of work
3. Significance for practitioners is not proved
Therefore, I hold the current score. Feel free to leave comments and I'm open to discussion.
---
Reply to Comment 1.1.1:
Comment: We are surprised by the reviewer's answer: they agree with all our technical points yet maintain a mark below acceptance based on generic comments.
**Concerning 1**, our main results explaining how rare tokens or grammatical rules affect learning curves are entirely new. In particular, our result on next-token prediction questions many other theoretical works assuming that scaling results from the Zipf distribution of the data features (see reply to Reviewer xKcs). Considering how scaling laws have driven the technological advancement of LLMs, it is key to understand what controls scaling exponents. The significance of our results is thus clear, as agreed by all other reviewers.
**Concerning 2**, the additional background can be integrated efficiently without significant effort, as it comes from already published material, and will enhance the manuscript's clarity.
**Concerning 3**, ICML not only publishes practical works with proven applications, but also fundamental works that help the community as a whole establish understanding, which can lead to new technology on a longer time scale.
As a result, we believe that there are no specific scientific reasons that justify the below-acceptance mark. | Summary: This paper presents a theoretical model to explain the emergence of power-law learning curves in deep neural networks trained on data with Zipfian feature distributions and hierarchical compositional structure. By parameterizing the Random Hierarchy Model (RHM) with a probabilistic context-free grammar (PCFG), the authors obtain asymptotic scaling laws for classification and next-token prediction tasks. For classification, they show that if the production rules satisfy a Zipf law with exponent a, then test error asymptotically decreases as ε(P) ∼ P^(–a⁄(1+a)) (to within a large multiplicative constant depending on the combinatorial structure). For next-token prediction, while the fine-grained shape of the representation learning curve depends on the production rule distribution, large-scale scaling is only dictated by the hierarchical form. These theoretical results are verified by large-scale numerical experiments on deep convolutional neural networks and transformers on artificial data from the RHM.
## Update after rebuttal
I stay with my original rating, as I agree with the points made in the authors' rebuttal.
Claims And Evidence: Its main arguments are two-pronged. First, the paper states that in classification tasks the learning curve attains the Zipf exponent, giving a power-law decay with exponent a⁄(1+a) after a pre-asymptotic phase ruled by the hierarchy. Second, for next-token prediction tasks the asymptotic decay is independent of the Zipf exponent, with the hierarchical structure being the primary determinant of the scaling behavior. These claims are supported by asymptotic derivations (see Equations 6–7 and 11 in the paper) and are validated through systematic experiments (illustrated in Figures 2–7) that show agreement between theory and empirical learning curves. Overall, the evidence seems mathematically rigorous and empirically convincing, though the experiments are conducted solely on synthetic data.
Methods And Evaluation Criteria: They derive their scaling laws by considering the statistics of correlations in RHM-generated data. They assume that a production rule is "learned" when an effect of a production rule on correlations is seen (Assumption 3.1) and use it to infer sample complexities at different levels of the hierarchy. In experiments, the paper employs deep CNNs for classification and multi-layer transformers for next token prediction, and tracks test error and cross-entropy loss as functions of training set size. The test criteria—i.e., the asymptotic decay of loss and error, and the collapse of rescaled learning curves—are appropriate for demonstrating the predicted power-law behavior.
Theoretical Claims: The paper presents several theoretical claims, including the derivation of the sample complexity P* that scales as vm^L, and the asymptotic decay ε(P) ∼ P^(–a⁄(1+a)) for classification. The derivations are detailed and build on top of previous work on neural scaling laws and learning curve theory. Although the derivations make strong assumptions (e.g., Assumption 3.1), the mathematical arguments appear sound and well-connected to prior results in the literature. The paper also formally derives the scaling law for next-token prediction (Equation 11) and describes how the hierarchical structure allows deep models to capture increasingly longer-range correlations.
Experimental Designs Or Analyses: Experiments are performed to confirm the theoretical predictions in controlled settings. The authors generate synthetic data with the RHM for varying values of the Zipf exponent and the number of production rules. They then train deep CNNs and transformers, reporting both classification error and cross-entropy loss. The experimental design effectively isolates the role of the hierarchical structure and the nonuniform (Zipf) distribution. The empirical learning curves—especially their collapse after rescaling—strongly support the theoretical analysis. One potential concern perhaps is that the experiments are limited to synthetic data; additional validation on real-world datasets might further strengthen the findings.
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: The paper situates its contribution in the broader framework of neural scaling laws and theoretical knowledge of deep learning. The paper offers a new perspective on why power law scaling is achieved in deep learning. The connection to both empirical and theoretical work is made explicit throughout the manuscript.
Essential References Not Discussed: All essential references seem to be discussed.
Other Strengths And Weaknesses: Strengths:
The paper provides a good theoretical framework that nicely bridges the hierarchical organization of data and power-law learning curves.
Derivations are mathematically accurate and relatively comprehensive.
Experimental evidence on synthetic data is in good agreement with theory predictions.
The paper provides useful insights which can potentially explain the behavior of large-scale deep learning models.
Weaknesses:
The reliance on a model of synthetic data (RHM) prevents direct extension to real data.
Assumption 3.1, while analytically convenient, may be too idealized, and its behavior under more realistic conditions is not explored sufficiently.
Additional experiments with natural datasets would strengthen the contribution.
Other Comments Or Suggestions: .
Questions For Authors: .
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our study's strengths. Regarding the connection with real data, please see the reply to Reviewer GeDB.
Concerning experiments on natural datasets, our work predicts that removing rare words from a data set should not affect the scaling law of the learning curve. It also predicts that grammatical structures that involve a large number of tokens require more data to be learned. Studying these questions systematically during training, beyond the existing analyses of how pre-trained LLMs represent grammar, will require a whole new project. We believe that presenting the theory clearly and convincingly warrants a dedicated paper, as we do here. | Summary: This paper extends the findings in (Cagnetta et al., 2024) to a probabilistic context-free grammars (PCFGs) case. The authors investigate how the structure of data influences learning curves, focusing on datasets with hierarchical compositional structures and features distributed according to power laws (Zipf distributions). By using probabilistic context-free grammars (PCFGs) to generate synthetic data, the authors unify two theoretical perspectives explaining neural scaling laws.
- For classification tasks, a Zipf distribution of features transforms learning curves from sigmoidal to power-law shapes.
- For next-token prediction tasks, instead, the hierarchical structure primarily dictates the power-law scaling exponent, while the Zipf distribution plays only a minor role in the asymptotic scaling.
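A quick numeric sanity check of the classification claim $\varepsilon(P)\sim P^{-a/(1+a)}$ can be run with a toy heuristic (all parameters invented): a production rule is treated as resolved once its expected count in $P$ samples exceeds one, and the error is the leftover probability mass.

```python
import math

# Hedged numeric sketch: with Zipf rule probabilities p_k proportional to
# k^(-(1+a)), count as "unresolved" any rule whose expected count
# P * p_k < 1; the leftover mass then decays roughly as P^(-a/(1+a)).
a = 1.0
m = 10_000
raw = [k ** -(1 + a) for k in range(1, m + 1)]
Z = sum(raw)
probs = [p / Z for p in raw]

def unresolved_mass(P):
    return sum(p for p in probs if P * p < 1.0)

# log-log slope between P = 100 and P = 10,000
slope = (math.log(unresolved_mass(10_000)) - math.log(unresolved_mass(100))) / math.log(100.0)
assert -0.65 < slope < -0.35  # predicted exponent -a/(1+a) = -0.5
```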
## update after rebuttal
I stay with the original rating.
The merits of the theoretical contribution incline me toward weak acceptance.
The common concerns about its applicability to real-world data (also mentioned by Reviewer 1eLH) and applications stopped me from further increasing my rating.
Claims And Evidence: Two major claims are supported by both theoretical analysis and experimental results on synthetic data.
Methods And Evaluation Criteria: The evaluation is based on the theoretical framework proposed in the paper. The experiments are mainly designed to verify this theoretical framework, which makes the data modeling dependent on certain assumptions, e.g., the PCFG assumption. But this makes sense, as the phenomenon of leaf-level nodes following Zipf's law has been observed in textual data.
Theoretical Claims: I checked Eq. (7) and Eq. (12), corresponding to two major claims of this paper, their proofs in Appendix B and D are correct from my understanding.
Experimental Designs Or Analyses: The experimental design is a natural extension of (Cagnetta et al., 2024), which looks sound and valid. One concern would be that the model size and data size are too small for claiming a ``scaling law``.
Supplementary Material: I checked Appendices A, B, C, and D.
Appendix B and Appendix D are theoretical derivations, which seem correct to me.
Appendix C contains more empirical results on the synthetic data.
Appendix A raises some concerns about the data size and model size used in the paper, which can be too small for claiming the scaling law.
Relation To Broader Scientific Literature: The paper contains proper related literature, most prominently (Cagnetta et al., 2024).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: One potential weakness would be the application power of the proposed scaling law. The results (both theoretical and experimental) rely on the assumptions of PCFGs, which is not necessarily true for the data we observed in today's applications. I further raise questions in the ``Question for Authors`` section.
Other Comments Or Suggestions: N/A
Questions For Authors: How far are the results proposed by the authors from guiding the applications like LLMs? This question can be further divided into the following parts:
- What is the exact size of the model proposed in the paper? The description in Appendix A refers to the model size implicitly using $H$ for CNNs or $n_h$ for self-attentions, but the exact size of the model is not reported.
- The proposed synthetic dataset is relatively small, at the scale of $\sim 10^6$; is this enough for claiming "the scaling law", as today's scaling laws usually refer to much larger data sizes?
- How different the synthetic data from the natural data? As mentioned in (Tomasini & Wyart, 2024), a context-free grammar modeled data is applicable for hierarchical data, but how about the case of natural data? References or comparisons with textual, vision, and audio would be appreciated.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **1st question:** For deep CNNs, $H=512$ (1M parameters in total, see last sentence of the second paragraph in appendix A), and we checked that our conclusions do not change up to $H=1024$ (~4M parameters). For transformers, $n_h=16$ (resulting in ~3M parameters), and we checked that our conclusions do not change up to $n_h=64$. In simple terms, we set the number of parameters to be large enough that it does not impact performance. As a result, we can focus our study on the dependency on the size of the training set (see also reply to Reviewer xKcS). We will clarify the text accordingly.
**2nd question:** Our predictions in the synthetic data setting are valid for arbitrarily large datasets. Fixing the RHM parameters (in particular $v$, $m$ and $L$) to the values considered in the paper allows us to test the theory with $\leq 10^7$ data, which is the range accessible to our experiments.
**3rd question:** Concerning language, CFGs are believed to provide an accurate representation of syntax, except for only a few languages and constructions (see e.g. Gerhard Jäger and James Rogers, *Formal language theory: refining the Chomsky hierarchy*, Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1598):1956–1970, 2012). Some aspects of semantics can also be modelled with trees via "attribute grammars" (Knuth, *Semantics of context-free languages*, Mathematical systems theory 2.2 (1968): 127-145), where hidden non-terminal symbols are complemented by attributes that influence the choice of production rule and are transmitted along the tree. Natural images can also be described with stochastic grammars, as done in pattern theory (Ulf Grenander, *Elements of pattern theory*, JHU Press, 1996). The RHM considered in this paper makes several simplifying assumptions to be tractable: uniform production rule probabilities, frozen tree topology and random production rules. Despite the approximations, the RHM already captures non-trivial phenomena observed in real data, as shown in (Cagnetta et al. 2024) and (Sclocchi et al, 2025). Here we showed how including a broad distribution of production rules results in power-law learning curves for classification, as observed in real data, whereas, surprisingly, it doesn't affect the scaling law of next-token prediction. Extending these results to the case of non-random rules and varying tree topology is an important direction for future studies.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarification. Considering both the theoretical elegance and the gap to real-world scaling laws. I have no more questions and would like to retain my rating. | null | null | null | null | null | null |
G-Designer: Architecting Multi-agent Communication Topologies via Graph Neural Networks | Accept (spotlight poster) | Summary: Recent research has shown that large language model (LLM)-based multi-agent systems outperform traditional single-agent methods. This paper focuses on the challenge of choosing effective communication topologies for multi-agent systems. It puts forward the LLM-based Multi-agent Communication Protocol (MACP), which serves as a guiding principle for topology design, and presents G-Designer, an adaptable solution for multi-agent deployment.
G-Designer represents the multi-agent system as a graph. It uses a variational graph auto-encoder to encode agents, along with a task-specific virtual node, and then decodes a communication topology that is tailored to the task at hand. The study finds that the performance of different topologies varies with task complexity.
Experimental evaluations across six benchmarks verify that G-Designer exhibits high performance, task adaptability, and adversarial robustness. In conclusion, G-Designer offers a practical and efficient approach for multi-agent system deployment, paving the way for further research in collective intelligence systems.
## update after rebuttal
Claims And Evidence: This paper mainly puts forward the following ideas. All of these claims seem to be adequately supported:
1. It posits that LLM-based multi-agent systems can surpass traditional single-agent approaches. Moreover, the topologies of multi-agent systems can be optimized to yield superior outcomes across a variety of tasks. This assertion is substantiated by prior research and the experiments conducted in this study.
2. The paper regulates multi-agent topology design across three dimensions: effectiveness, complexity-adaptiveness, and adversarial robustness. These logics are validated through in-depth analysis, demonstrating their effectiveness in optimizing communication structures.
3. The study introduces G-Designer, a novel solution for multi-agent deployment. G-Designer models the multi-agent system as a graph and employs a variational graph auto-encoder to encode agents and a task-specific virtual node. The algorithmic details of G-Designer are clearly presented and supported by theoretical derivations and experimental results.
Methods And Evaluation Criteria: I believe that the methods and ideas proposed in this paper hold significant practical value for the advancement and application of multi-agent systems. The article introduces a novel multi-agent communication protocol based on large language models, termed the LLM-based Multi-agent Communication Protocol (MACP), which can serve as a guiding principle for the topological design of multi-agent systems. Furthermore, the paper presents an effective, adaptive, and robust LLM-driven multi-agent communication graph designer, G-Designer, to facilitate the automated design of collaborative AI systems. For me, G-Designer has the potential to inspire future research on the emergence of self-organizing and self-evolving collective intelligence.
Theoretical Claims: I have reviewed the formulas and derivations in Section 4 and found no explicit flaws. Formula (9) employs the variational graph auto-encoder (VGAE) framework to generate the multi-agent interaction topology. Formula (10) presents the specific form of the encoder module within the VGAE, where the innovative introduction of a task-specific virtual node and agent nodes, along with the use of GNN to learn the representations of agent nodes, results in a reasonable representation of the agent collaboration graph.
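For intuition only, the encode-decode pattern of Formulas (9)-(10) can be sketched numerically. This is NOT the paper's implementation: the dimensions, random weights, and anchor choice below are invented; it only shows agent embeddings plus a task-specific virtual node, one round of message passing, and an inner-product decoder producing soft edges.

```python
import numpy as np

# Hedged sketch of a VGAE-style encoder/decoder over an agent graph.
rng = np.random.default_rng(0)
n_agents, d = 4, 8
X_agents = rng.normal(size=(n_agents, d))  # agent profile embeddings
x_task = rng.normal(size=(1, d))           # task-specific virtual node

X = np.vstack([X_agents, x_task])
# anchor graph: all nodes connected, virtual node linked to every agent
A = np.ones((n_agents + 1, n_agents + 1)) - np.eye(n_agents + 1)

# one GNN layer: degree-normalized aggregation + linear map + nonlinearity
W = rng.normal(size=(d, d)) / np.sqrt(d)
deg = A.sum(axis=1, keepdims=True)
Z = np.tanh((A / deg) @ X @ W)

# inner-product decoder over the agent nodes -> soft communication graph
logits = Z[:n_agents] @ Z[:n_agents].T
A_comm = 1.0 / (1.0 + np.exp(-logits))

assert A_comm.shape == (n_agents, n_agents)
assert np.all((A_comm > 0) & (A_comm < 1))
```

Thresholding or sparsifying `A_comm` would then give a discrete topology, which is where a task-adaptive design can prune unneeded edges.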
Experimental Designs Or Analyses: The experimental design of the paper is thorough and comprehensive. The authors conducted extensive experimental evaluations of G-Designer across six benchmarks, validating its high performance, task adaptability, and adversarial robustness in multi-agent systems. It is worth noting that the authors performed adversarial attack experiments to verify the robustness of G-Designer, which expands the application prospects of the method. This ensures that in scenarios where there are poisoned agents or less intelligent agents, G-Designer can detect them and maintain usability.
Supplementary Material: I have examined the supplementary material of the paper. The demonstrations are helpful.
Relation To Broader Scientific Literature: I believe this paper can be primarily linked to the Agent Scaling Law mentioned in Macnet[1]. Macnet suggests that as the number of agents increases, the performance of multi-agent systems gradually improves, but this improvement is associated with the collaboration patterns among the agents. The G-Designer proposed in this paper can automatically design appropriate collaboration patterns based on the complexity of the task, thereby better adapting to the requirements of different tasks and enhancing the performance of multi-agent systems. Therefore, I think applying G-Designer to a larger number of agents could potentially better validate the effectiveness of the Agent Scaling Law.
[1] Scaling Large Language Model-based Multi-Agent Collaboration ICLR'25
Essential References Not Discussed: To the best of my knowledge, all relevant related works that should be mentioned have been included.
Other Strengths And Weaknesses: 1. The methodology is innovative. While the abstraction of multi-agent collaboration into graph structures has been mentioned in previous papers, this paper introduces VGAE on top of that. The combination of these two aligns with intuition and leverages the powerful graph representation capabilities of GNNs, making it highly novel.
2. The paper is clearly articulated. Particularly, the illustrations and case studies are visually appealing.
3. The training burden is relatively light. I observed that the experimental section not only discusses the effectiveness and efficiency during inference but also mentions the training overhead in `Table 2`. This is crucial for large-scale applications, as excessive training costs can limit the model's applicability. The overhead of this model appears to be lower compared to previous methods.
However:
1. The paper does not seem to provide specific examples of agent profiles. I am aware that the profile generation method in the paper follows the classical configurations in LLM-MA systems. However, I am curious about the impact of different qualities and contents of profiles on the results. Is the dynamic collaboration graph design sufficiently adaptable to different profiles?
Other Comments Or Suggestions: Typos:
1. In Table 1, the average accuracy of `MetaGPT` appears to be miscalculated.
2. In Figure 3, 'Ouput solution' -> 'Output solution'.
Questions For Authors: See weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for your careful comments and thorough understanding of our paper! Here we give point-by-point responses to your comments and describe the revisions we made to address them.
---
> **`Weakness 1`**: Would switching between different agent roles have an impact on the experimental results?
Thank you for the insightful comment! To address your concerns, we analyze the transferability of G-Designer to unseen agent roles as follows:
*Table A. Generalization of G-Designer to unseen agent roles. We train G-Designer on the "dataset for optimization" and evaluate it on the "dataset for test," where the test set may include agent roles unseen during training.*
| Dataset for Optimization | Roles | Dataset for Test | (Unseen) Roles | Performance |
|--------------------------|-------|------------------|----------------|-------------|
| GSM8K | - | GSM8K | - | 95.07 |
| HumanEval | Project Manager, Algorithm Designer, etc. | GSM8K | Math Solver, Mathematical Analyst, etc. | 93.98 |
| HumanEval | - | HumanEval | - | 89.90 |
| MMLU | Knowledgeable Expert, Mathematician, etc. | HumanEval | Project Manager, Algorithm Designer, etc. | 88.72 |
The results indicate that G-Designer demonstrates strong transferability across datasets, effectively adapting from HumanEval to GSM8K and from MMLU to HumanEval, among others.
---
> **`Weakness 2`**: Typos
Thank you very much for your careful review of our manuscript. We have carefully addressed the typos you pointed out and made the necessary corrections throughout the document. Your feedback has been very helpful in improving the clarity and accuracy of the paper.
We appreciate your attention to detail and your valuable contribution to the review process.
---
Rebuttal Comment 1.1:
Comment: I have read the author's response and I will maintain my score
---
Reply to Comment 1.1.1:
Comment: **Dear Reviewer e1AK**,
Thank you for your detailed, thoughtful feedback and for taking the time to review our responses. We truly appreciate your constructive insights and your willingness to engage with our clarifications. Your recognition of the **innovative implementation of G-Designer** and of its **lightweight, resource-saving design**, has truly inspired us!
Best Regards,
Authors | Summary: This paper introduces G-Designer, an framework for designing task-aware, adaptive, and robust communication topologies for multi-agent systems powered by large language models (LLMs). G-Designer dynamically generates communication structures for specific user requests using a variational graph auto-encoder (VGAE), which encodes both agent profiles and task-specific information to decode an optimal interaction graph. This method aims to balance performance, adaptability, and robustness by minimizing communication overhead while maintaining high accuracy and resilience against adversarial attacks.
Claims And Evidence: The main claims by this paper (i.e., task-awareness, adaptiveness, robustness, high-performance) are well supported by the methodology and experiments. I particularly appreciate the emphasis on task-adaptiveness in G-Designer, a feature that has been largely absent in prior LLM-based multi-agent systems. Systems such as MetaGPT and ChatDev adhere to static SOPs, and MacNet exhibits poor scaling capabilities due to its fixed structure. Although GPTSwarm incorporates dynamic training, it relies on a fixed topology during testing. In the latest ICLR 2025 papers, works like AFlow, AgentPrune, and ADAS fail to dynamically allocate inference resources based on task complexity. Therefore, I think G-Designer’s advocacy for task-adaptiveness and its proposed Multi-agent Communication Protocol (MACP) represent a significant contribution that warrants attention from the research community.
Methods And Evaluation Criteria: The authors primarily employed three categories of benchmarks. Overall, the results on these benchmarks support the superiority of their approach. However, I recommend that the authors include more agent-specific benchmarks, such as API-Bank [1] and AgentBench [2].
[1] API-Bank: A Comprehensive Benchmark for Tool-Augmented LLMs
[2] AgentBench: Evaluating LLMs as Agents
Theoretical Claims: No significant theoretical issues identified.
Experimental Designs Or Analyses: G-Designer is compared against vanilla LLM, single-agent, and multi-agent methods. In the cost-efficiency scatter plot (Figure 4), G-Designer achieves the best performance with the lowest token cost among all methods. Additionally, I highly appreciate the case study and scalability study presented in Figure 6 and Table 6, which I strongly suggest be moved to the main text. Figure 6 provides an intuitive understanding of G-Designer’s operational logic, and Table 6 suggests that G-Designer may offer preliminary insights into agent scaling laws. Current graph-based multi-agent systems, such as MacNet and GPTSwarm, suffer from poor scaling laws, with marginal performance improvements as the number of agents increases (MacNet is especially criticized by reviewers for this issue). G-Designer explicitly highlights this challenge, namely the quadratic growth in communication edges, for which the authors propose a lightweight solution.
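The quadratic-growth concern mentioned above can be made concrete with a two-line count (illustrative only):

```python
# A complete topology over n agents has n(n-1)/2 undirected edges
# (quadratic growth), while a chain keeps the count linear at n - 1.
def complete_edges(n: int) -> int:
    return n * (n - 1) // 2

def chain_edges(n: int) -> int:
    return n - 1

assert complete_edges(10) == 45 and chain_edges(10) == 9
assert complete_edges(100) == 4950  # 50x the chain's 99 edges
```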
Supplementary Material: Yes, I reviewed the case study and scalability experiments.
Relation To Broader Scientific Literature: This paper builds on the research trajectory of GPTSwarm, MacNet, and AgentPrune, which use DAGs to model multi-agent systems, and is highly relevant to the current line of research on (automated) LLM-based multi-agent systems.
Essential References Not Discussed: I recommend that the authors include the following recent multi-agent papers. Incorporating these references would further emphasize the novelty of their contribution:
[1] AFlow: Automating Agentic Workflow Generation, ICLR 2025
[2] AgentSquare: Automatic LLM Agent Search in Modular Design Space, ICLR 2025
[3] Automated Design of Agentic Systems, ICLR 2025
[4] Flow: Modularized Agentic Workflow Automation, ICLR 2025
[5] Talk Structurally, Act Hierarchically: A Collaborative Framework for LLM Multi-Agent Systems
[6] Model Swarms: Collaborative Search to Adapt LLM Experts via Swarm Intelligence
Other Strengths And Weaknesses: Strength:
- The motivation and methodology presented in this paper are clear and accessible. Figure 3 is particularly well-designed and illustrative.
- The proposed method is well-structured and methodologically rigorous, ensuring robustness, adaptability, and high performance.
Weakness:
- The experiments were conducted only on GPT-4 and GPT-3.5, which are no longer state-of-the-art models. Are the authors planning to test more advanced or open-source LLMs?
- The system prompt attack described in Section 5.3 requires more detailed elaboration. Moreover, there are more advanced agent attack methods available, such as those in [1]. Do the authors plan to evaluate G-Designer’s defense capabilities against a broader range of attacks?
[1] Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents
Other Comments Or Suggestions: No specific comments.
Questions For Authors: - Do the authors plan to test more advanced or open-source LLMs?
- In Section 5.3, do the authors intend to incorporate more advanced agent attack methods?
- Are the authors considering using more agent-specific benchmarks to evaluate G-Designer?
- G-Designer employs a chain-based anchor graph as its starting point. Do the authors plan to explore alternative anchor graph initialization?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to express our deepest respect for your meticulous review! In response to your efforts, we have carefully prepared a point-by-point reply:
---
> **`Weakness 1`**: Supplement more agent-specific benchmarks.
Thank you very much for your insights. We conducted experiments on the more agent-specific GAIA benchmark to assess the effectiveness of our method. The experimental results are as follows:
*Table A. Performance comparison on GAIA benchmark.*
|Method|Level 1|Level 2|Level 3|Avg.|
|-|-|-|-|-|
|Vanilla GPT-4|9.68|1.89|2.08|4.05|
|DyLAN|13.98|4.40|0|6.69|
|GPTSwarm|23.66|16.35|2.04|16.33|
|**G-Designer**|**25.16**|**18.87**|**2.04**|**16.94**|
This demonstrates that our method effectively outperforms baselines even in more agentic benchmarks like GAIA. We sincerely hope this addresses your concerns.
---
> **`Weakness 2`**: Include the recent multi-agent papers.
Your erudition and wisdom have been immensely helpful to us! We commit to including the papers you mentioned in the revised manuscript.
---
> **`Weakness 3`**: Are the authors planning to test more advanced or open-source LLMs?
To address your concern, we use the newer and popular closed-source model GPT-4o-mini and the open-source model DeepSeek-V3 as base models, and compare the experimental results of our method with Vanilla and GPTSwarm in Table B.
*Table B. Performance results with more advanced LLMs.*
| Method | LLM | MMLU | GSM8K | HumanEval |
|:-:|:-:|:-:|:-:|:-:|
| Vanilla | GPT-4o-mini | 77.12 | 92.67 | 85.71 |
| Vanilla | DeepSeek-V3 | 86.27 | 93.25 | 87.67 |
| GPTSwarm | GPT-4o-mini | 78.43 | 93.78 | 86.28 |
| GPTSwarm | DeepSeek-V3 | 86.93 | 94.38 | 89.72 |
| G-Designer | GPT-4o-mini | 79.73 | 94.23 | 90.32 |
| G-Designer | DeepSeek-V3 | 89.54 | 95.52 | 91.93 |
Due to the time constraints of the rebuttal, we were unable to compare our method with more baselines. However, the experimental results presented above demonstrate that our method is also applicable to newer LLMs.
---
> **`Weakness 4`**: Do the authors plan to evaluate G-Designer’s defense capabilities against a broader range of attacks?
To address your concerns, we evaluated G-Designer under the PoisonRAG [1] attack, with results presented in the table below:
*Table C. Performance comparison on MMLU. We employ the configuration from PoisonRAG, inserting erroneous messages into the contextual memory of the attacker agent, enabling them to disseminate conclusions derived from these messages to others.*
|Method|Chain|Star|Complete Graph|DyLAN|GPTSWarm|G-Designer|
|-|-|-|-|-|-|-|
|Before attack|82.3|80.7|83.1|83.4|84.0|84.5|
|After attack|73.6|65.8|75.2|77.9|81.0|83.6|
The results indicate that even under memory attacks, G-Designer effectively mitigates interference from malicious agents through its dedicated sparsification mechanism.
[1] Poison-rag: Adversarial data poisoning attacks on retrieval-augmented generation in recommender systems.
---
> **`Weakness 5`**: Do the authors plan to explore alternative anchor graph initialization?
Your suggestion is very thoughtful! In Table D below, we explore the impact of different initial topologies on the experimental results.
*Table D: Different Topologies for anchor graph initialization on GSM8K dataset.*
|Anchor|Accuracy|
|-|-|
|Chain|95.07|
|Layered|94.92|
|Star|94.47|
|Random|91.15|
|FullConnected|95.22|
Using different topologies as initial anchors shows minimal performance differences, although the cost is higher. This aligns with our analysis in Section 5.4, where we concluded that the success of G-Designer is mainly attributed to its efficient communication graph generation.
---
Rebuttal Comment 1.1:
Comment: I have read the authors' rebuttal and the review of the other reviewer. I'd like to maintain my accept score.
---
Reply to Comment 1.1.1:
Comment: **Dear Reviewer gFC8**,
Thank you for your thoughtful feedback and continuous support of our work. Your comments helped us refine the presentation, enhance the experimentation, and strengthen the manuscript. We particularly appreciate your recognition of **the novelty of G-Designer** and **the importance of our proposed Multi-agent Communication Protocol (MACP) to the MAS community**.
Thanks again for the time you spent on this insightful review!
Best regards,
Authors | Summary: This paper introduces G-Designer, an innovative solution for multi-agent communication topology design in LLM-MAS. The authors first propose the Multi-agent Communication Protocol (MACP), which sets standards for LLM-MAS topology design in terms of effectiveness, complexity-adaptiveness, and adversarial robustness.
G-Designer models MAS as a graph, using a variational graph auto-encoder. It constructs a multi-agent network with a task-specific virtual node and then decodes an optimized communication topology. This is achieved through a process of encoding agents and task information, decoding a sketched graph, and refining it with anchor and sparsity regularization.
Extensive experiments on six benchmarks demonstrate G-Designer's superiority.
## update after rebuttal
The authors address most of my concerns. So I raised my score.
Claims And Evidence: The core claim of this paper is that the topology of MAS should be task-dynamic. The authors first support this through the method design: G-Designer is trained to design a task-customized communication graph for different domains and user queries of varying difficulties. Secondly, experimental verification supports this: Fig 4 shows that different graphs for different queries can significantly reduce the average token consumption while maintaining good performance. Finally, case studies verify this: Fig 6 shows that G-Designer can indeed design different topologies for different tasks.
Methods And Evaluation Criteria: This paper is the first to use graph neural networks for topology design in LLM-MAS, which is novel. It uses the most commonly used datasets and evaluation metrics.
Theoretical Claims: No new theorems or proofs provided.
Experimental Designs Or Analyses: The benchmarks used in this paper are standard, and the authors comprehensively compared single-agent and multi-agent baselines. I have the following concerns:
- In Section 5.1, why is the temperature set to 0 for single agents and 1 for other MAS? Could this introduce inconsistencies when reporting the performance?
- Section 5.1 mentions that the number of dialogue rounds is K=3. Can the authors explain this? Would other values significantly affect performance? Can this value be automated rather than manually defined?
- The authors state that the anchor topology is set as a chain structure. Have the authors tried other structures?
Supplementary Material: I have reviewed all the content in the appendix. They are helpful for my review.
Relation To Broader Scientific Literature: This paper is highly relevant to collaborative AI and multi-agent systems. It is the first to advocate for the task dynamics of LLM-MAS and proposes a unified protocol to regulate future LLM-MAS designs, which I believe is important for the community.
Essential References Not Discussed: The authors have comprehensively discussed relevant papers.
Other Strengths And Weaknesses: ### Strength
- Using graph neural networks and variational autoencoding for adaptive MAS topology design seems quite novel.
- The visualizations are all well-designed and clearly presented.
- The experimental results of the method are good. Detailed ablation studies validate the method's advantages in performance, token efficiency, and adversarial robustness.
### Weakness
- The authors only used GPT-3.5 and GPT-4. Have they considered using the latest GPT or LRMs?
- Table 1 is limited to five agents. Have the authors considered testing with more agents (100 or 1000)? Can G-Designer maintain its cost efficiency with more agents?
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for the thoughtful and constructive reviews of our manuscript! Based on your questions and recommendations, we give point-by-point responses to your comments and describe the revisions we made to address them.
---
> **`Weakness 1`**: Temperature Settings in the Experiments
Thank you very much for your thorough review! In the multi-agent system, to facilitate more diverse communication among different agents, we followed DyLAN and set the temperature to 1. For the Single Agent scenario, to ensure more reproducible experimental results, we set the temperature to 0.
For a fairer comparison, we have also presented the performance of the Single Agent method with the temperature set to 1 in `Table A`. Experimental results show that the change in temperature from 0 to 1 does not significantly affect the average accuracy of single-agent methods.
*Table A. Performance of single agent methods with temperature set to 1.*
| Method | MMLU | GSM8K | HumanEval |
|:-:|:-:|:-:|:-:|
| Vanilla | 82.14 | 85.40 | 71.68 |
| CoT | 82.55 | 86.81 | 75.80 |
| ComplexCoT | 83.70 | 87.04 | 75.66 |
---
> **`Weakness 2`**: The reason for setting the number of dialogue rounds.
Your question is very valuable! In Table B, we present the changes in accuracy and cost with different numbers of dialogue rounds on the HumanEval dataset. Intuitively, increasing the number of optimization rounds leads to more refined and accurate results, yielding substantial performance improvements, but also comes with increased cost. To balance performance with token savings, we consistently set $K = 3$.
*Table B. We report the performance of G-Designer on HumanEval benchmark with different $K$ values.*
| K | 1 | 3 | 5 | 7 |
|:-:|:-:|:-:|:-:|:-:|
| Acc | 88.21 | 89.90 | 90.66 | 91.08 |
| Cost | 0.1325 | 0.2298 | 0.3339 | 0.4097 |
As regards the automation of $K$, we respectfully argue that G-Designer can easily achieve this by incorporating existing early-stopping mechanisms, as in MacNet or DyLAN. Nevertheless, G-Designer does not particularly leverage these, as its core contribution lies in the automation of MAS topology.
---
> **`Weakness 3`**: The impact of different anchor settings on the results.
The suggestion you proposed is very thoughtful! In Table C, we explored the impact of different initial topologies on the experimental results.
*Table C: Different Topologies for anchor graph initialization on the GSM8K dataset.*
| Anchor | Chain | Layered | Star | Random | FullConnected |
| :-: | :-: | :-: | :-: | :-: | :-: |
| Accuracy | 95.07 | 94.92 | 94.47 | 95.15 | 95.22 |
| Token | $8.5\times 10^6$ | $9.3\times 10^6$ | $8.9\times 10^6$ | $9.2\times 10^6$ | $1.2\times 10^7$ |
Using different topologies as initial anchors shows minimal performance differences, although the cost is higher. This aligns with our analysis in Section 5.4, where we concluded that the success of G-Designer is mainly attributed to its efficient communication graph generation.
---
> **`Weakness 4`**: Have the authors considered using the latest GPT or LRMs?
To address your concern, we use the newer and popular closed-source model GPT-4o-mini and the open-source model DeepSeek-V3 as base models, and compare the experimental results of our method with Vanilla and GPTSwarm in Table D.
*Table D. Performance results with more advanced LLMs.*
| Method | LLM | MMLU | GSM8K | HumanEval |
|:-:|:-:|:-:|:-:|:-:|
| Vanilla | GPT-4o-mini | 77.12 | 92.67 | 85.71 |
| Vanilla | DeepSeek-V3 | 86.27 | 93.25 | 87.67 |
| GPTSwarm | GPT-4o-mini | 78.43 | 93.78 | 86.28 |
| GPTSwarm | DeepSeek-V3 | 86.93 | 94.38 | 89.72 |
| G-Designer | GPT-4o-mini | 79.73 | 94.23 | 90.32 |
| G-Designer | DeepSeek-V3 | 89.54 | 95.52 | 91.93 |
Due to the time constraints of the rebuttal, we were unable to compare our method with more baselines. However, the experimental results presented above demonstrate that our method is also applicable to newer LLMs.
---
> **`Weakness 5`**: Have the authors considered testing with more agents?
Thank you for this thoughtful inquiry! In Table 6 of our paper, we provide a comparison of accuracy, time, token consumption, and cost across different agent number configurations. The experimental results show that G-Designer is still able to maintain its cost efficiency with more agents.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough rebuttal. Your detailed responses and additional experiments in Tables A-D, effectively address my concerns about temperature settings, dialogue rounds, anchor topologies, and the use of newer LLMs. The results demonstrate the robustness and adaptability of G-Designer, reinforcing the paper’s overall solidity. Based on this, I’m happy to raise my score.
---
Reply to Comment 1.1.1:
Comment: **Dear Reviewer LUTJ**,
We sincerely appreciate your invaluable support for our research. Your insightful suggestions regarding the anchor topology, temperature setting, and LLM backbone setting of G-Designer have significantly contributed to improving the depth and precision of our manuscript. We would also like to express our gratitude for your recognition of G-Designer's **novelty, clarity, robustness, and adaptability**. It has been an honor to incorporate your comments and strengthen our work accordingly.
Thank you once again for your time, expertise, and constructive review.
Best regards,
Authors | Summary: The paper discusses the advancements in collective intelligence among large language model-based agents, highlighting the challenge of selecting effective communication topologies for specific tasks. To address this, the authors introduce G-Designer, an adaptive solution that dynamically creates task-aware communication topologies using a variational graph auto-encoder. This approach aims to balance efficiency and performance by customizing the inter-agent communication network according to the task requirements.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: NA
Experimental Designs Or Analyses: Yes
Supplementary Material: No
Relation To Broader Scientific Literature: Yes, the contributions of the paper are related to the broader scientific literature of agentic frameworks of LLMs.
Essential References Not Discussed: All the essential references are discussed.
Other Strengths And Weaknesses: Strengths:
1. It addresses an important problem of learning the communication network structure in a multi-agent system of LLMs.
2. Proposing a formal multiagent communication protocol is important. This paper attempts it.
3. Some analysis done in the paper w.r.t. the number of agents and number of tokens are very insightful.
Weaknesses / Confusions:
1. In Section 3.2, what happened to the communication protocol if a graph has a directed cycle?
2. It is not clear why these 3 criteria are considered as the multi-agent communication protocol. Is this list exhaustive?
3. The need for the simple starting anchor matrix $\tilde{A}$ is not clear. If $\tilde{A}$ is an almost random guess, why do you want your final topology to be close to it in Equation 14?
4. In Line 279, the terms in Equation 14 are not correctly referenced.
5. Equations 14 and 16 are both trainable optimization objectives with hyperparameters. A good amount of training data will be needed to optimize them. The authors just mention that limited training data is required, without explaining the details.
6. The overall paper looks like overkill and is often made complex without proper motivation or intuition about where existing, simpler approaches would fail.
7. In the implementation details, no details about the training of the proposed method is given. You need to provide training data name, size, validation and hyperparameter tuning details. Also, it is not clear if the baseline approaches need additional training. If yes, are they also being trained on the same datasets? If no, how do you ensure fairness in your experiments?
8. Can you clarify which agents are being used in Table 1? If they are all the same (GPT-4 based), will all the nodes in the graph have similar node features, so that the final topology may boil down to a symmetric graph structure w.r.t. the nodes?
9. The variance of G-Designer is high in Table 1. Please shed some light on the statistical significance of the results. Also, is it because of additional uncertainties introduced by the training?
"**Update after rebuttal**"
I have checked the rebuttal. I thank the authors for their rebuttal. Some of my concerns are addressed. However, concerns around the correctness of the equations, motivation about some steps of the algorithm and details of the training are still there. I will hold my overall score.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > **`Weakness 1`**: What happens to the communication protocol if a graph has a directed cycle?
Thank you for the insightful inquiry! In Equation (15), we extract an enforced acyclic graph $\mathcal{G}\_{com}$ from the potentially cyclic topology $\tilde{\mathbf{S}}$. Specifically, in our provided anonymous code repository, the function `check_cycle()` in `GDesigner/graph/graph.py` is responsible for avoiding cycles when constructing $\mathcal{G}\_{com}$. We will emphasize this point more clearly in the revised version.
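For intuition, one standard way to enforce acyclicity when building a directed communication graph is to reject any edge whose addition would close a cycle. The sketch below is purely hypothetical and is not the authors' `check_cycle()` implementation; the function name and adjacency representation are illustrative assumptions:

```python
# Hypothetical sketch of an acyclicity check (not the authors' check_cycle()):
# an edge u -> v closes a directed cycle iff v can already reach u, so such
# edges are rejected while the communication graph is built edge by edge.

def creates_cycle(adj: dict, u: str, v: str) -> bool:
    """Return True if adding edge u -> v would close a directed cycle."""
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:  # v reaches u, so u -> v would close a cycle
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(adj.get(node, ()))
    return False

adj = {"a": ["b"], "b": ["c"], "c": []}  # chain a -> b -> c
print(creates_cycle(adj, "a", "c"))  # False: c has no path back to a
print(creates_cycle(adj, "c", "a"))  # True: a -> b -> c already exists
```

Edges flagged by such a check would simply be dropped when extracting $\mathcal{G}\_{com}$ from a potentially cyclic topology.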
---
> **`Weakness 2`**: Why are these 3 criteria considered?
1. **Effectiveness**: Extraordinary problem-solving is one of the core motivations behind MAS.
2. **Complexity Adaptiveness** aims to achieve optimal performance with minimal resource consumption. This is emphasized or pursued by many recent multi-agent papers, such as AgentPrune (ICLR 2025), RouterDC (NeurIPS 2024), and GraphRouter (ICLR 2025).
3. **Adversarial Robustness**: As attack & defense mechanisms in Agent/MAS gain increasing attention (e.g., TrustAgent (EMNLP 2024)), we believe safety and robustness will be critical for the trustworthy deployment of MAS in real-world applications.
We respectfully acknowledge that while we have highlighted what we consider critical, we do not claim this framework to be exhaustive. We would greatly appreciate any additional insights you might suggest and would be happy to incorporate them into our future work.
---
> **`Weakness 3`**: The starting anchor is simple.
We would like to respectfully clarify that the anchor is not an "almost random guess". Instead, as stated in line 247, it incorporates prior knowledge of the workflow sequence (for example, a simple but standard coding procedure). In Section 5.4 and Table 3, we demonstrate that introducing the starting anchor results in a practical performance improvement.
---
> **`Weakness 4`**: Equation 14 are not correctly referred.
Thank you for your thorough review! We will correct the reference to Eq. (14) in Lines 278–288 in the revised version.
---
> **`Weakness 5 & 7`**: No details about the training of the proposed method are given.
- Training set: For both our method and the baselines that require training, we use the same training set. As mentioned in Line 321 (Right) of the paper: 40 samples for MMLU/MultiArith/HumanEval, 80 samples for GSM8K/SVAMP/AQuA;
- Test set: provided in Table 4;
- Hyperparameter tuning: We use a validation set of size 100 for each benchmark and apply a grid search to find the optimal parameters. To reduce the complexity of hyperparameter tuning for users, we provide a set of generalized hyperparameters in Section 5.1.
---
> **`Weakness 6`**: The paper lacks proper motivation.
Thank you for bringing this issue to our attention! We humbly provide a more intuitive explanation of each component:
- Section 4.1: Makes it possible to process MAS with VGAE.
- VGAE encoder $q(\cdot)$: Captures the semantic relationships among different agents regarding a given task.
- VGAE decoder $p(\cdot)$: Directly leveraging complex MetaGPT/LLM-Debate/GPTSwarm will cause unnecessary costs on simple queries.
- Equation 14: If the optimized topology is not regularized, it may suffer from malicious edges or unnecessary edges. This practice has been well-validated in the traditional graph structure learning literature, such as NeuralSparse (ICML 2020), PTDNet (WSDM 2021).
---
> **`Weakness 7`**: If the baseline approaches need additional training?
In the baselines of our paper, DyLAN, LLM-Blender, and GPTSwarm require training.
We compare with other training-free methods because **all previously published training-required methods (including DyLAN (COLM'24), LLM-Blender (ACL'23) and GPTSwarm (ICML'24)) have compared with training-free methods**. Therefore, we follow the same setting as theirs for a comprehensive evaluation.
Besides, we respectfully argue our approach has the following key advantages **in a fair way**:
(1) G-Designer has **better performance** than training-free/required topologies; (2) its **training cost** is an order of magnitude lower than all training-required methods (Table 2). Additionally, our **inference cost** is among the lowest (Figure 4).
---
> **`Weakness 8`**: Clarify the details of all the agents used in Table 1.
The profiles and tools of different agents vary significantly, resulting in distinct encoded node features. Consequently, the constructed graph varies significantly. We provide a case study (Figure 6) to illustrate this. Examples of different agent role/tool descriptions can be found in our provided code in `GDesigner/prompt/gsm8k_prompt_set.py`.
---
> **Weakness 9**: The variance of G-Designer is high in Table 1?
We would like to clarify a possible misunderstanding: the subscripts in Table 1 indicate improvements over the vanilla baseline rather than variance.
Besides, we would like to emphasize that all results in Table 1 are averaged over three runs. | null | null | null | null | null | null |
Learning to Generate Projections for Reducing Dimensionality of Heterogeneous Linear Programming Problems | Accept (poster) | Summary: Projection is a key methodology to understand polyhedral structure and reduce dimensionality in linear programming, with recent works proposing it as a heuristic to address large-scale models. The novelty of this work is in applying a machine learning technique to learn effective projection operators. The authors present structural results associated with the sample complexity of the approach, as well as perform experiments assessing the performance against more traditional methods (e.g., PCA).
## update after rebuttal
Thank you for the rebuttal; most of my comments have been addressed. However, I am concerned with the points raised by Reviewer oitp, and prefer to slightly decrease my score as they do not seem to be addressed very clearly.
Claims And Evidence: Experiments are thorough and the paper is well written and rigorously formalized.
Methods And Evaluation Criteria: In general, every optimization approach involving a heuristic component is suitable to learning, which has fostered several ICML submissions under this same spirit. However, the differential in this paper is the nice methodology to derive an appropriate embedding for the projection matrix (Section 4.2), which I found to be non-trivial and suitable to this application. I also appreciate the paper's theoretical analysis to assess how much data is needed to derive good results.
The benchmarks were appropriately selected; in particular, they include recent work in column-randomized methods, which would be the primary methodological approach to consider within this context.
Theoretical Claims: Theorem 5.1 is the key contribution. It is based on a few seminal results in the area, and I believe it is correct to the best of my analysis. However, the text tends to overly summarize the theoretical claims, both in the main text and in the proofs (e.g., when applying Warren's theorem, or when discussing the many assumptions of the theorem in the main text). Although there are space issues in the main text, the proof in the Appendix could be extended to be less obscure.
My major concern is that the authors have not interpreted much of the intricate generalization bound from Section 5.2, which can be difficult to understand. For example, the text could provide a table with a (rough) estimate of the bound for some notion of large-scale LPs, and discuss how it connects to the training sets used in the experimental analysis. That is, provide further evidence of how the bound can be useful to understand the required training set sizes.
Experimental Designs Or Analyses: The experimental analysis is thorough. However, my minor comment is that I found the problem sizes too small (less than 2,000 variables for packing and 900 nodes for netflow). For example, state-of-the-art LP solvers can handle problems such as MaxFlow or MinCost-Flow with millions of nodes, especially as networks can be relatively sparse in many practical applications.
Although one could always argue that the time limit to solve a problem can be small, the work could better highlight the benefit of projection methods (and learning) by showcasing LPs that could not even be represented in reasonably sized memory.
Supplementary Material: Proofs and the additional experiments in Section C.
Relation To Broader Scientific Literature: Projection methods are very classical, and their use in multiple heuristic approaches has received growing attention in these last two years. The authors fundamentally improve heuristic performance by incorporating learning.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: Could you please provide further interpretation of the generalization bound and training sizes that would be required for certain (carefully selected) LPs?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your positive and constructive comments.
> My major concern is that the authors have not interpreted much of the intricate generalization bound from Section 5.2, which can be difficult to understand. For example, the text could provide a table with a (rough) estimate of the bound for some notion of large-scale LPs, and discuss how it connects to the training sets used in the experimental analysis. That is, provide further evidence of how the bound can be useful to understand the required training set sizes.
Thank you for your suggestion. First, we must clarify that our generalization bound, as is often the case in statistical learning theory, merely provides a theoretical guarantee that the generalization error asymptotically approaches zero as the dataset size $D$ increases. We will explicitly state this point in the main text. However, with the recent rise of data-driven approaches to optimization problems, it is likely that increasingly large amounts of data will become available. Therefore, even if our bound appears impractical at present, ensuring its asymptotic convergence remains valuable.
For reference, we provide the requested table of $\mathrm{Pdim}$ values corresponding to the LP sizes $M, N$ and the reduced dimensionality $K$. Given an allowable generalization error $\epsilon$ and an upper bound $B$ on the LP's optimal value, an estimate for the dataset size $D$ is approximately $\frac{B^2}{\epsilon^2}\,\mathrm{Pdim}$, as discussed in Section 5.2. Here, we used $L=4$, $K=30$, $N=500$, and $M=50$ as baseline values, varying one parameter at a time in the tables below.
|$N$|100|200|300|400|500|600|700|800|900|1000|
|----|----|----|----|----|----|----|----|----|----|----|
|Pdim|77039526|154435341|232369487|310654224|389199991|467953928|546881065|625956523|705161691|784482117|
|$M$|100|200|300|400|500|600|700|800|900|1000|
|----|----|----|----|----|----|----|----|----|----|----|
|Pdim|672945915|1263418755|1874759352|2500683030|3137794899|3783981206|4437800128|5098205390|5764403103|6435770256|
|$K$|5|10|15|20|25|30|35|40|45|50|
|----|----|----|----|----|----|----|----|----|----|----|
|Pdim|59319255|107887333|165114494|231061207|305746474|389199991|481459510|582566831|692565290|811498343|
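As a rough illustration of how these numbers translate into training-set sizes, the estimate $D \approx \frac{B^2}{\epsilon^2}\,\mathrm{Pdim}$ from Section 5.2 can be evaluated directly. The snippet below is a back-of-the-envelope sketch, with $B$ and $\epsilon$ chosen as hypothetical values rather than taken from the paper:

```python
# Illustrative back-of-the-envelope estimate D ~ (B^2 / eps^2) * Pdim, where
# B bounds the LP's optimal value, eps is the allowable generalization error,
# and Pdim is the pseudo-dimension. B and eps below are hypothetical choices.

def dataset_size_estimate(B: float, eps: float, pdim: int) -> float:
    """Rough sample-size estimate from the bound D ~ B^2/eps^2 * Pdim."""
    return (B ** 2 / eps ** 2) * pdim

# Using the baseline Pdim for N=500 from the table above.
print(f"{dataset_size_estimate(B=1.0, eps=0.1, pdim=389199991):.3e}")  # ~3.892e+10
```

This illustrates the point made above: the bound is an asymptotic guarantee rather than a practical prescription for current dataset sizes.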
> my minor comment is that I found the problem sizes too small (less then 2,000 variables for packing and 900 nodes for netflow).
We newly conducted experiments on larger LPs (packing problems with 100,000 variables). The results for a reduced dimension of $K=50$ and number of constraints $M=50$ are shown in the tables below. Our method achieved a higher objective ratio than the other methods (Rand, PCA, SharedP, and FCNN), and significantly reduced computational time compared to solving the original LPs (Full) on these larger instances.
Objective ratio
|Ours|Rand|PCA|SharedP|FCNN|
|----|----|----|----|----|
|0.924|0.230|0.066|0.070|0.257|
Computational time in seconds
|Full|Ours|Rand|PCA|SharedP|FCNN|
|----|----|----|----|----|----|
|23.487|0.055|0.013|0.029|0.063|0.049|
> Could you please provide further interpretation of the generalization bound and training sizes that would be required for certain (carefully selected) LPs?
As mentioned above, if the training size $D$ is larger than $\frac{B^2}{\epsilon^2}\,\mathrm{Pdim}$, the generalization error is approximately bounded by $\epsilon$. Our theoretical analysis guarantees this decrease in the generalization error as the training size increases, potentially bringing it arbitrarily close to zero.
> the proof in the Appendix could be extended to be less obscure.
We will extend the proof in the Appendix to make it more accessible to readers.
Claims And Evidence: In comparison to prior work, this paper proposes a machine learning method that takes into account instance-specific information. In other words, the projection is not only based on the model but also on the specific coefficients of the LP to be projected and solve. In comparison to an approach that is not instance-specific, as well as more standard approaches, they report solutions of considerably better quality.
Methods And Evaluation Criteria: I believe that evaluating the proposed NN architecture with random initialization and no training is an important benchmark. This would be different from the random projection used.
Why? Because, in recent years, some papers on ML for optimization have either spotted or reflected on the fact that ML can sometimes be overkill. Therefore, I believe it would be a good practice for papers in this area to evaluate the impact of an NN producing pure noise in their benchmarks.
Theoretical Claims: Only in the appendix; not evaluated.
Experimental Designs Or Analyses: See two items above.
Supplementary Material: No.
Relation To Broader Scientific Literature: As claimed by the authors, there is a clear missing gap in using instance-specific information for obtaining a better projection for LP models.
In particular, I found the description of the PELP layers quite clear. Kudos!
Essential References Not Discussed: Nothing comes to mind.
Other Strengths And Weaknesses: Strength:
- The paper is clearly written.
- The contribution is clearly stated.
Weakness:
- Figure 3 is not very clear
- Some comments on the mathematical optimization side do not seem entirely accurate (see next items)
Other Comments Or Suggestions: Page 1:
- "is widely used in many [areas of] application" (no plural in application)
- "most of which are based": remove "are"
Page 2:
- "After obtaining [an] optimal solution"
- "if the projected LP is feasible, [the] recovered solution"
Page 3:
- "although it may not be optimal": add a comma before this excerpt
- "can be different from the training LPs, [i.e.,]"
- "Our aim is to [efficiently] obtain a high-quality solution of the test LP" (move"efficiently" from the end)
- "outputs [a] projection matrix"
- "is model parameters" -> "are the model parameters"
- "Optimal solution" -> Add "An" or "Any" in front of the sentence (LPs may have multiple optimal solutions)
- "Therefore, [an] appropriate projection"
- "which makes training [more] efficient"
Equation 6:
- Should ":" be "m"?
Questions For Authors: 1) When you say that "[w]e focus on decreasing the number of variables since the recovered solutions are always feasible for the original LPs", you are ignoring duality: by decreasing the number of constraints, you would find a dual feasible solution that could provide you an optimality gap. Saying that one is preferable to the other is a bit misleading. Even if not intended, I believe that addressing that by mentioning the case of decreasing the number of constraints, even if not carried out in the paper, would make it more accurate and self-contained. Remember: some people may start working on this from your paper, and such absences may affect their future work. Can you please address this?
2) What do you mean when you say that "[a]n LP with equality constraints can be transformed into an inequality-form LP if a (trivially) feasible solution is available"? Why is that even necessary? To be clear, if $S = \{x : Ax = b\}$, then we can reformulate it as $S = \{x : Ax \ge b,\ Ax \le b\} = \{x : -Ax \le -b,\ Ax \le b\}$. Isn't that right?
3) You have both minimization and maximization problems, but the objective ratio always peaks at 1.0 in Figure 3. My understanding is that you take the ratio between the smaller and the larger of two values: the objective of the solution obtained through the projection and the optimal value of the LP. Is that the case? If so, can you please clarify that in the paper?
4) Please comment on what I wrote in "Methods And Evaluation Criteria".
5) Please comment on what I wrote in "Other Strengths And Weaknesses".
6) Please comment on what I wrote in "Other Comments Or Suggestions".
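For concreteness, the reformulation referenced in question (2) amounts to:

$$\{x : Ax = b\} \;=\; \{x : Ax \ge b,\ Ax \le b\} \;=\; \{x : -Ax \le -b,\ Ax \le b\},$$

so any equality-constrained feasible set already admits an inequality-only representation, with no feasible point required.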
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your positive and constructive comments.
> I believe that evaluating the proposed NN architecture with random initialization and no training is an important benchmark. This would be different from the random projection used.
We compared against the proposed NN with random initialization and no training.
The objective ratios of our method (Ours) and of our NN without training (NoTrain) are shown in the following tables. Without training, the proposed NN does not perform well. This result demonstrates the importance of our proposed training procedures. We will add this discussion.
Packing: Objective ratio
|$K$|10|15|20|25|30|35|40|45|50|
|----|----|----|----|----|----|----|----|----|----|
|Ours | 0.922 | 0.938 | 0.953 | 0.970 | 0.978 | 0.986 | 0.987 | 0.990 | 0.985 |
|NoTrain | 0.097 | 0.098 | 0.098 | 0.098 | 0.099 | 0.099 | 0.099 | 0.099 | 0.099 |
MaxFlow: Objective ratio
|$K$|10|15|20|25|30|35|40|45|50|
|----|----|----|----|----|----|----|----|----|----|
|Ours | 0.677 | 0.768 | 0.862 | 0.919 | 0.983 | 0.983 | 0.982 | 0.990 | 0.994 |
|NoTrain | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 |
MinCostFlow: Objective ratio
|$K$|10|15|20|25|30|35|40|45|50|
|----|----|----|----|----|----|----|----|----|----|
|Ours | 0.844 | 0.859 | 0.862 | 0.875 | 0.878 | 0.879 | 0.886 | 0.889 | 0.894 |
|NoTrain | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
> Figure 3 is not very clear
We will make the figure clearer.
> Some comments on the mathematical optimization side do not seem entirely accurate
We will revise the sentences according to your comments.
> When you say that "[w]e focus on decreasing the number of variables since the recovered solutions are always feasible for the original LPs", you are ignoring duality: by decreasing the number of constraints, you would find a dual feasible solution that could provide you an optimality gap. Saying that one is preferable to the other is a bit misleading. Even if not intended, I believe that addressing that by mentioning the case of decreasing the number of constraints, even if not carried out in the paper, would make it more accurate and self-contained. Remember: some people may start working on this from your paper, and such absences may affect their future work. Can you please address this?
Thank you for your insightful comment. As you correctly pointed out, reducing the number of constraints and reducing the number of variables are equivalent from the perspective of LP duality. In our paper, we have focused on reducing the number of variables purely for simplicity - to avoid intricate discussions on infeasibility. However, we acknowledge that, from a duality perspective, one could also consider reducing the number of constraints. We will make this explicit in the paper to ensure clarity and completeness.
> What do you mean when you say that "[a]n LP with equality constraints can be transformed into an inequality-form LP if a (trivially) feasible solution is available"? Why is that even necessary? To be clear, if S = {x : A x = b}, then we can reformulate it as S = {x : A x >= b, A x <= b} = {x : - A x <= - b, A x <= b}. Isn't it?
We can reformulate it as you mentioned. However, by this reformulation, projected LPs are more likely to be infeasible, and the number of inequality constraints increases. On the other hand, the reformulation using a trivially feasible solution can ease the feasibility issue, and the number of inequality constraints does not change. We will clarify this point in the paper.
> You have both minimization and maximization problems, but the objective ratio always peaks at 1.0 in Figure 3. My understanding is that you take the ratio between the smallest and the largest values between the solution you obtain through the projection and the optimal solution of the LP. Is that the case? If so, can you please clarify that in the paper?
In our experiments, all problems are converted into maximization problems, and $x=0$ (with an objective value of zero) is always a trivially feasible solution. Therefore, the objective ratio is calculated as the obtained objective value divided by the optimal value of the original LP. We will clarify this point in the paper.
---
Rebuttal Comment 1.1:
Comment: I do not understand your answer to one of my questions:
_What do you mean when you say that "[a]n LP with equality constraints can be transformed into an inequality-form LP if a (trivially) feasible solution is available"? Why is that even necessary? To be clear, if S = {x : A x = b}, then we can reformulate it as S = {x : A x >= b, A x <= b} = {x : - A x <= - b, A x <= b}. Isn't it?_
A reformulation on the same space of variables does not make the formulation more or less likely to be infeasible: the solution set remains the same. I would appreciate if you explain your reasoning for this part in more clear terms.
---
Reply to Comment 1.1.1:
Comment: Thank you for seeking further clarification regarding equality constraints. We would like to address your concern with a more detailed explanation.
You are absolutely correct that the equality-constrained set, S = {x : Ax = b}, can be mathematically reformulated as S = {x : Ax ≤ b, Ax ≥ b}. However, in our setting of learning to generate projections, there are practical considerations that have motivated our transformation based on trivial feasible solutions:
1. As is also discussed by Sakaue and Oki (2024), learning to generate projections that satisfy equality constraints varying across instances is challenging. Thus, following their setup, we focused on inequality-form LPs to mitigate this difficulty.
2. Still, when a trivially feasible solution is available, we can transform an equality-constrained LP into an equivalent inequality form using the procedure described in Appendix C of Sakaue and Oki. Technically, this simply restricts the movement of variables to translations within the subspace that satisfies the equality constraints, originating from the feasible solution.
3. The resulting inequality-constrained LP typically has a full-dimensional feasible region within the linear subspace defined by the equality constraints. This empirically reduces concerns about infeasibility when learning to generate projections.
4. In contrast, reformulating Ax = b as Ax ≤ b and Ax ≥ b creates feasible regions that are not full-dimensional, making it more challenging to learn to generate feasible projected LPs.
5. We adopted the approach of Sakaue and Oki due to the above empirical advantage and convenience of handling equality-constrained problems uniformly as inequality-constrained LPs.
It should be noted that having full-dimensional feasible regions is not strictly necessary. In some cases, neural networks can empirically learn projections onto appropriate linear subspaces. (Hence, unlike Sakaue and Oki, the equality constraints are not necessarily identical across instances.) Additionally, learning to generate projections for equality-constrained LP instances without trivially feasible solutions remains a more challenging task.
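The translation-based reformulation sketched in point 2 above can be illustrated concretely. Below is a minimal sketch (our own, not the authors' implementation, and function/variable names are hypothetical): given a feasible point `x0` with `A x0 = b`, every feasible `x` is written as `x0 + N z` with `N` spanning the null space of `A`, so only the inequality constraints remain.

```python
import numpy as np
from scipy.linalg import null_space

def to_inequality_form(A, b, G, h, x0):
    """Reparametrize {x : A x = b, G x <= h} around a feasible point x0.

    Writes x = x0 + N z, where the columns of N span the null space of A,
    so the equality constraints hold by construction and only the
    inequality constraints G (x0 + N z) <= h remain, i.e. (G N) z <= h - G x0.
    """
    assert np.allclose(A @ x0, b), "x0 must satisfy the equality constraints"
    N = null_space(A)                   # columns satisfy A @ N == 0
    G_new = G @ N                       # inequality matrix in z-space
    h_new = h - G @ x0                  # shifted right-hand side
    recover = lambda z: x0 + N @ z      # map a z-solution back to x-space
    return G_new, h_new, recover
```

Any `z` satisfying the reduced inequalities maps back to a point satisfying the original equalities exactly, which is the sense in which the equality constraints are absorbed rather than duplicated as opposing inequalities.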
We hope this clarifies our reasoning. We will expand the discussion on this point in revision. Thank you again for prompting us to provide a more thorough explanation. Please let us know should any questions or concerns remain. | Summary: This paper presents a data-driven method for reducing the dimensionality of linear programming (LP) problems by generating instance-specific projection matrices using a neural network-based model. The proposed approach aims to improve the efficiency of LP solvers by projecting high-dimensional LP instances into a lower-dimensional space while maintaining solution feasibility and quality. The authors conclude that their method offers an efficient and solver-agnostic approach for solving large-scale LPs, making it a practical tool for accelerating LP solvers in real-world applications.
Claims And Evidence: The paper provides strong empirical and theoretical evidence for its claims, but it lacks a clear comparison with existing MILP generation methods to highlight its distinct advantages.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for LP dimensionality reduction, but the lack of direct comparison with existing MILP generation approaches limits the clarity of its broader impact.
Theoretical Claims: The paper provides a solid theoretical analysis, including a generalization bound for the learned projection matrices, and the proofs appear correct, though their practical implications could be further clarified.
Experimental Designs Or Analyses: The experimental design is well-structured with diverse LP benchmarks and strong empirical validation, but it lacks direct comparisons with existing MILP generation methods to contextualize its advantages.
Supplementary Material: No
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: This paper presents a well-structured and theoretically sound approach for dimensionality reduction in linear programming (LP) problems by generating instance-specific projection matrices using a neural network. One of its key strengths is the solid theoretical foundation, particularly the generalization bound analysis, which ensures the reliability of the generated projection matrices across different LP instances. Additionally, the permutation equivariant and invariant model design enhances its flexibility, allowing it to handle LPs of varying sizes effectively. Empirical evaluations on multiple LP benchmarks demonstrate that the method achieves high-quality solutions while significantly reducing computational time compared to solving the original LPs.
However, despite these strengths, the novelty of the approach is somewhat limited. The idea of using projection-based techniques for MILP/LP problem transformation has been widely explored in the literature, and the paper does not sufficiently differentiate its method from existing MILP instance generation approaches. While the theoretical analysis is a valuable contribution, the lack of a direct comparison with other MILP generation methods makes it difficult to fully assess the impact of this work within the broader optimization community. Additionally, the practical robustness of the learned projection matrices, especially when encountering out-of-distribution LP instances, could have been further analyzed.
Overall, this paper provides a rigorous and well-executed extension of projection-based LP optimization methods, but its contribution would be stronger if it explicitly addressed its uniqueness compared to prior MILP generation methods and included more discussions on practical deployment and generalization to MILPs.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your constructive comments.
> it lacks a clear comparison with existing MILP generation methods to highlight its distinct advantages.
Since MILP generation methods and our approach serve significantly different purposes, we cannot make direct comparison. We plan to add a discussion to clarify this point, citing relevant references, such as [A Deep Instance Generative Framework for MILP Solvers Under Limited Data Availability, NeurIPS2023], [MILP-StuDio: MILP Instance Generation via Block Structure Decomposition, NeurIPS2024], [DIG-MILP: a Deep Instance Generator for Mixed-Integer Linear Programming with Feasibility Guarantee, TMLR, 2024]. Below is the detailed discussion.
MILP instance generation approaches generate optimization problem instances. The generated instances can be used for training machine learning-based solvers, tuning hyperparameters of solvers, or evaluating solvers. On the other hand, our method transforms a given optimization problem instance into another, reduced-size instance in order to solve the given instance efficiently. That is, MILP instance generation approaches do not transform a given instance, while our purpose is to solve a given instance efficiently, making the two incomparable.
> proofs appear correct, though their practical implications could be further clarified.
If the training-data size $D$ is larger than $(B^2/\epsilon^2)\,\mathrm{Pdim}$, the generalization error is bounded by $\epsilon$. Our theoretical analysis guarantees that the generalization error decreases as the training-data size increases, potentially bringing it arbitrarily close to zero. We will clarify this practical implication.
> Additionally, the practical robustness of the learned projection matrices, especially when encountering out-of-distribution LP instances, could have been further analyzed.
We have analyzed the robustness of our learned neural-network-based model in Figure 5 (p. 8) and Table 1 (p. 14 in the appendix) when encountering out-of-distribution LP instances.
In Figure 5, Ours (red line) represents the performance of our method on test LP instances whose sizes differ from those of the training LP instances. Figures 5 (a, b) show that, on packing problems, our method works well for LPs whose number of variables is close to or smaller than that of the training LPs, even when the number of constraints differs.
In MaxFlow, our method works well when the number of nodes is close to or smaller than that of the training LPs (c) and when the number of edges is close to that of the training LPs (d). In MinCostFlow, our method works well when the number of nodes is small (e), and its performance does not vary much with the number of edges (f).
Table 1 shows that our method's performance is high when the same type of LP instances is used for both training and test LPs, and low when instances of the same type are not included. Although our method trained on the Mix data performed slightly worse than when trained on the same type's instances, it performed much better than when trained on different types' instances. This result indicates that it is important to include LPs in training that are similar to the test LPs.
> included more discussions on practical deployment
Regarding practical deployment: by synthesizing LPs according to the parameter distribution of the target LPs and training our model on the synthesized LPs beforehand, we can efficiently find high-quality feasible solutions for target LPs that will be given in the future, as described in Section 1.
Potentially, we may be able to use (MI)LP instance generation techniques for synthesizing LPs. We will add discussions on practical deployment.
> generalization to MILPs
By applying the LP relaxation to a given MILP, we can apply our method to MILPs. Our method cannot be directly used for reducing MILPs because we designed our training procedures tailored to LPs. However, our high-level idea of learning to generate projections could be extended to solve MILP instances efficiently. Developing such extensions to MILPs is an interesting direction for future work. We will add discussions on generalization to MILPs.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. Your response is strong enough to persuade me. I have decided to raise my score. | Summary: This paper proposed a new data-driven method to train a machine learning model to reduce the dimensionalities of large-scale linear programming problems (LP). This model not only can provide reduced dimension LP, by solving which one could obtain a high-quality feasible solution of the original LP, but can reduce dimensionality of LPs with different sizes. The authors also use theory to show the generalization of the model, and illustrate the performance of this model by numerical experiments.
Claims And Evidence: Yes. The proof of generalization bound looks good, and the numerical experiments also support the conclusion.
Methods And Evaluation Criteria: Regarding the theoretical part, the authors only show the generalization bound but do not show the optimality of this model. However, it seems that this type of generalization bound follows many existing TCS analyses about the generalization of NNs, so this result might not be very novel. For this problem, potentially, an optimality guarantee would be more interesting.
Additionally, for the numerical experiments, the test cases might not be general enough. It would be better to apply this new method to benchmark problems used to compare commercial solvers. Such a comparison might show the true performance of this new method.
Theoretical Claims: Yes. This type of bound should be similar to some existing bounds.
Experimental Designs Or Analyses: Please see "Methods And Evaluation Criteria".
Supplementary Material: I did not check the proof, but I think this type of bounds can be established with similar proof of other TCS papers.
Relation To Broader Scientific Literature: There are many different methods to reduce dimension for large-scale optimization, or linear programming, such as column generation. It seems that this paper proposed a new approach to design some novel NN layers to learn the reduced-dimension LPs, which can also provide high quality solutions.
Essential References Not Discussed: I am not familiar with using NN to reduce dimensionality for optimization problems. However, it might be important to cite some traditional methods to reduce the dimensionality of LPs, such as column generation.
Other Strengths And Weaknesses: Please see "Methods And Evaluation Criteria" for my main concern.
Additionally, it would be great if the authors could discuss the difference between this new approach and some traditional dimensionality reduction methods for LPs.
Other Comments Or Suggestions: Please see "Methods And Evaluation Criteria" for my main concern.
Additionally, it would be great if the authors could discuss the difference between this new approach and some traditional dimensionality reduction methods for LPs.
Questions For Authors: Please see "Methods And Evaluation Criteria" for my main concern.
Additionally, it would be great if the authors could discuss the difference between this new approach and some traditional dimensionality reduction methods for LPs.
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your constructive comments.
> the authors only show the generalization bound but do not show the optimality of this model.
Since the objective function for training our model is non-convex, it is challenging to show the optimality of the trained model. Note that we can improve the quality of solutions by maximizing our objective function, although it may not be optimal, and our method can obtain high-quality solutions as demonstrated in our experiments. The generalization bound ensures that the empirical high performance generalizes to unseen future instances.
Our model is based on deep sets [Deep sets, NeurIPS2017], and deep sets are known to be universal for approximating invariant and equivariant functions [On Universal Equivariant Set Networks, ICLR2020]. Moreover, the model search space of our model is significantly reduced by a factor of N!M! by its invariant and equivariant properties as described in our paper. Therefore, although we cannot say it is the optimal model, it is a data-efficient model with high representation power for generating projections for LPs.
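The permutation-equivariance property the rebuttal invokes can be made concrete with a minimal sketch of one Deep Sets-style layer (our own illustration, not the paper's architecture; `W_self` and `W_pool` are hypothetical weight names): each set element is transformed by the same map plus a term that depends only on a permutation-invariant pooling over the set, so permuting the input rows permutes the output rows identically.

```python
import numpy as np

def equivariant_layer(X, W_self, W_pool):
    """One permutation-equivariant Deep Sets layer (Zaheer et al., 2017 style).

    Each row of X is one set element. The output combines a per-element
    linear map with a mean-pooled (permutation-invariant) summary, so
    permuting the rows of X permutes the rows of the output the same way.
    """
    pooled = X.mean(axis=0, keepdims=True)                 # invariant summary
    return np.maximum(X @ W_self + pooled @ W_pool, 0.0)   # ReLU nonlinearity
```

This is the structural reason the model's search space shrinks by the claimed factor of $N!M!$: the layer cannot distinguish reorderings of variables or constraints, so it need not learn them.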
> It would be better to apply this new method to some problems for the competition of commercial solvers.
We newly conducted experiments using GROW7, an LP from the Netlib repository [The Netlib Mathematical Software Repository, D-Lib Magazine, 1995], which is a collection of software for scientific computing. We generated LP instances from GROW7 by multiplying all the LP parameters (in the objective function and constraints) with uniform random values ranging from 0.75 to 1.25 and permuting the variables and constraints. As shown in the table below, our method achieved better objective ratios compared to the other methods, and its computational time is significantly shorter than that of solving the original LPs (Full).
Objective ratio
|$K$|10|15|20|25|30|35|40|45|50|
|----|----|----|----|----|----|----|----|----|----|
|Ours | 0.469 | 0.618 | 0.669 | 0.709 | 0.780 | 0.814 | 0.786 | 0.841 | 0.852 |
|Rand | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
|PCA | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
|SharedP | 0.000 | 0.002 | 0.003 | 0.004 | 0.004 | 0.005 | 0.006 | 0.008 | 0.009 |
|FCNN | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
|Direct | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
Computational time in seconds
|$K$|10|15|20|25|30|35|40|45|50|
|----|----|----|----|----|----|----|----|----|----|
|Full | 1.236 | 1.236 | 1.236 | 1.236 | 1.236 | 1.236 | 1.236 | 1.236 | 1.236 |
|Ours | 0.058 | 0.067 | 0.084 | 0.104 | 0.120 | 0.141 | 0.160 | 0.184 | 0.202 |
|Rand | 0.044 | 0.060 | 0.078 | 0.096 | 0.114 | 0.130 | 0.148 | 0.169 | 0.188 |
|PCA | 0.044 | 0.062 | 0.078 | 0.099 | 0.113 | 0.132 | 0.152 | 0.176 | 0.192 |
|SharedP | 0.043 | 0.059 | 0.078 | 0.097 | 0.139 | 0.133 | 0.153 | 0.170 | 0.189 |
|FCNN | 0.045 | 0.062 | 0.080 | 0.097 | 0.116 | 0.133 | 0.148 | 0.198 | 0.189 |
|Direct | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 |
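The instance-generation recipe described above (scaling every LP parameter by an independent uniform factor in [0.75, 1.25], then permuting variables and constraints) can be sketched as follows. This is our own sketch of the stated procedure, not the authors' code; names are hypothetical.

```python
import numpy as np

def perturb_instance(c, A, b, rng):
    """Generate a new LP instance (c', A', b') from a seed instance.

    Every parameter is multiplied by an independent uniform factor in
    [0.75, 1.25]; then the variables (columns) and constraints (rows)
    are randomly permuted.
    """
    scale = lambda M: M * rng.uniform(0.75, 1.25, size=M.shape)
    c2, A2, b2 = scale(c), scale(A), scale(b)
    col = rng.permutation(len(c2))   # permute variables
    row = rng.permutation(len(b2))   # permute constraints
    return c2[col], A2[row][:, col], b2[row]
```

Applying this repeatedly to a seed such as GROW7 yields a family of structurally related but non-identical instances for training and testing.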
> I am not familiar with using NN to reduce dimensionality for optimization problems.
To our knowledge, there is no existing work that uses NN to reduce dimensionality for optimization problems.
> it might be important to cite some traditional methods to reduce the dimensionality of LPs, such as column generation. Additionally, it would be great if the authors could discuss the difference between this new approach and some traditional dimensionalty reduction methods for LPs.
We appreciate this comment. We will cite traditional methods that reduce the dimensionality of LPs, including column generation, and add discussions on them. Below we clarify the relation to column generation.
Column generation is an iterative method for solving LPs: starting from an LP with a small number of variables, it iteratively selects relevant variables until the optimality is confirmed via the LP duality. A crucial distinction is that column generation is an LP solver, whereas our model serves as a data-driven preprocessing step for reducing the dimensionality of LPs. Therefore, our method can be used to accelerate column generation by benefiting from data of past LPs. From the computational perspective, column generation repetitively solves reduced-size LPs for solving an original LP instance. On the other hand, our method finds an appropriate projection matrix by a single forwarding pass of our neural-network-based model, and the resulting reduced-size LP is solved only once. Exploring the collaboration of the algorithmic (like column generation) and data-driven (like ours) approaches to reducing LP sizes will be an exciting future direction.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I am satisfied with it and would like to raise my rating. | null | null | null | null | null | null |
AlphaPO: Reward Shape Matters for LLM Alignment | Accept (poster) | Summary: Authors step from likelihood displacement, a phenomenon in which both preferred and dispreferred responses drop likelihood during optimization. Derived from f-DPO, authors add length normalization to the alpha-divergence and show that alpha controls the likelihood displacement strengths through gradient analysis: larger alpha will be less aggressive.
Results show that AlphaPO outperforms DPO and SimPO, with a proper positive alpha achieving the best performance.
Claims And Evidence: ### alpha in AlphaPO controls the strength of likelihood displacement: larger alpha gives less aggressive likelihood displacement
Evidence: experiment figure 2 and Theorem 3.1.
* The theorem does not provide a proof of monotonicity. Instead, it analyzes the extreme cases where alpha reaches -inf or inf. Therefore, the claim is not fully supported by the theorem. However, the experiment looks good.
### AlphaPO has less constraint on the range of alpha
Evidence: Not found
* It is unclear why the range of alpha in AlphaPO is wider, given its theory foundation is the same as Wang et al. 2024a
### AlphaPO performs better than DPO and SimPO, and corresponding side claims in ablation study
Evidence: Table 1, and other related figures
* This is well supported by the experiment.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I check Thm 3.1, 3.2, and Cor 3.3. I read Thm 3.1 proof and had a quick glance at Thm 3.2 and Cor 3.3.
The proof is correct, to my knowledge. For the claim correctness, please see above sections.
Experimental Designs Or Analyses: The experiment is overall sound and complete.
Several things that are unclear to me:
* Does the author apply hyperparam tuning for SimPO too?
* Is the hyperparam searching applied to each dataset? Or do authors apply hyperparam searching on one dataset and apply it on another dataset without searching again?
Supplementary Material: N/A
Relation To Broader Scientific Literature: Prior works solve this problem either by adding NLL loss [1], reducing learning rate [2], and adaptive margin [3].
[1] https://arxiv.org/pdf/2404.19733
[2] https://arxiv.org/pdf/2409.02392
[3] https://arxiv.org/pdf/2402.13228
Essential References Not Discussed: Reducing the learning rate can mitigate preferred responses probability drop[2], which is not discussed in the paper.
Other Strengths And Weaknesses: Other Strengths:
* The gradient analysis is interesting and gives a picture of how AlphaPo is working.
Other Weakness:
* The introduction of length normalization is intuitive but less theoretical. Specifically, it is unclear what divergence the modified objective is representing, and whether it makes sense from the theory perspective.
* The authors say that prior work (Wang et al. 2024) shows that $\alpha$ does not bring much performance gain, but their experiment shows that different $\alpha$ values affect results substantially. What causes the difference? Is it because of length normalization? More discussion is needed here.
* Experiments are only conducted on chat tasks, making it unclear if the methods are still good for other tasks such as reasoning.
Other Comments Or Suggestions: * It is better to add labels to the y-axis in Figure 3 for easier understanding without reading the text
## update after rebuttal
I raise my score to 3 since the authors address my major concerns.
Questions For Authors: My main concern is that the AlphaPO is less theoretically convincing (See above reviews). Also, given that the experiment was conducted on chat tasks, it is hard to convince me that the AlphaPO is generally useful.
I would be happy to raise scores if the authors can address my concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > The theorem does not provide proof ...
We acknowledge that the theorem can be made clearer. In particular, monotonicity can be proved for $T_1(\alpha)$; see the summary table [here](https://i.imgur.com/aW2jjkf.png). In general, the gradient magnitude is not monotonic, as presented in the illustrations. We present a 3D plot of the relation between the gradient, response length, and $\alpha$ [here](https://i.imgur.com/cxM7PFX.png), which demonstrates the non-monotonicity clearly.
We will update the draft with the proof of $T_1(\alpha)$ monotonicity.
> It is unclear why the range of alpha in ...
In order to derive an explicit formula for the reward as a function of $\pi$ without normalization, Wang et al. require that 0 is not in the domain of $f'(u)$, which constrains the allowable range of $\alpha$. In contrast, our reward construction—similar to SimPO—is not based on divergences and is thus not subject to that same constraint. Moreover, the AlphaPO reward function involves length normalization—something Wang et al. do not consider.
> Does the author apply hyperparam ...
Yes, we performed comprehensive tuning. The published HPs from the SimPO authors proved optimal for all three models, and our AE and AH results closely match.
>Is the hyperparam searching ...
HP search was done independently for every model. The instruct models use on-policy data, so datasets across models are not directly comparable (details in section 4.1).
HPs introduced in SimPO are required to be tuned for every model. However, extensive tuning of α is not required. See response to reviewer **2gDT** for additional details.
> Reducing the learning rate ....
Although a small LR can mitigate the drop in the probabilities of preferred responses, it may result in worse generalization. We verify this by decreasing the optimal LR for Mistral to $6.0e-7$. Result: LC 29.36 / WR 30.28, compared to LC 33.03 / WR 34.12 obtained with LR $= 7.0e-7$. A similar behavior was observed in the SimPO paper (Table 17) for the Gemma-2-9b model.
From reference [2], the authors propose a smaller LR based on gradient analysis in the paragraph “On-policy sampling and small learning ....” Furthermore, they explicitly confirm: “A more comprehensive understanding of the training dynamic of the direct preference learning algorithms remains largely open.” - this is a key contribution of our paper.
Additionally, incorporating an NLL loss term [1] harms generalization because it inhibits likelihood displacement of preferred responses. Methods such as RRHF, SLiC-HF, CPO, and ORPO include an NLL loss term, yet their LC / WR scores in Table 12 are considerably lower than those of SimPO. Moreover, as noted in Appendix H of SimPO under the paragraph “Incorporating SFT regularization in SimPO,” while NLL regularization may benefit certain tasks, it leads to a decrease in performance on AE2.
Adaptive Margin [3] is an interesting idea. We will acknowledge this in the paper.
> The introduction of length normalization is intuitive but...
The reward function is designed—rather than derived from a divergence—based on its impact on training dynamics and generalization. This approach is similar to SimPO [1] and ORPO [2]. R-DPO [3] use length penalty - similar in spirit to [1][2]. Tulu3 [4] uses the length normalized variant of DPO successfully. Although length normalization is appealing and empirically beneficial, literature has not yet succeeded in deriving it as an emergent property from any specific divergence.
**References:**
[1] "Simpo"
[2] "Orpo"
[3] "Disentangling length from quality in direct preference optimization."
[4] "Tulu 3: Pushing frontiers..."
> Authors say that prior work (Wang et al 2024) shows that α ....
Due to length normalization, the effective α' in the fDPO exponent becomes α divided by response length. Based on Theorem 3.1 and our empirical study, a small positive α leads to improved generalization. In the context of fDPO, this translates to a tiny effective α'—especially considering that responses can be as long as 1000 tokens, a setting the authors of fDPO did not consider.
We also refer the reviewer to Figure 2 of the SimPO paper, which also applies to AlphaPO. Due to length normalization, AlphaPO, like SimPO, consistently achieves a positive reward margin for all response pairs (irrespective of length differences) and consistently improves the margin over the SFT model. Therefore, this is another key reason AlphaPO does better than the variant without length normalization (fDPO).
> Experiments are only conducted on chat tasks..
Details + additional experiments can be found in the response to Reviewer **G3bY**.
> It is better to add label to y axis...
We have fixed this.
> My main concern is that the AlphaPO is less...
Please see above our response to your specific questions on the theoretical aspects of AlphaPO. Also, please see our answer to Reviewer **G3bY** for experiments on reasoning and other tasks where AlphaPO still outperforms SimPO.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal.
If my understanding is correct, the AlphaPO formulation is less theoretically supported and leans more on empirical evidence, such as the range of alpha and length normalization.
I've raised my score to 3 accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging important aspects of our work, such as the choice of the alpha range and length normalization. Although AlphaPO does not rely on a divergence-based formulation, it is grounded in training dynamics and supported by both theoretical insights and empirical findings. We will further clarify the theoretical aspects in the camera-ready version.
---
Summary: The paper introduces AlphaPO, a novel preference training algorithm designed to improve the alignment of large language models (LLMs) by modifying the shape of the reward function used in preference optimization.
Unlike existing methods like Direct Preference Optimization (DPO) and Simple Preference Optimization (SimPO), which often suffer from likelihood displacement and reward over-optimization, AlphaPO leverages an \(\alpha\)-parameter to adjust the reward function shape, allowing for better control over these issues.
The authors demonstrate that AlphaPO achieves improvements in alignment performance, with relative gains of 7% to 10% over SimPO and 15% to 50% over DPO on models like Mistral-7B and Llama3-8B. Comparable performance with SimPO is obtained on Gemma models.
Through gradient analysis and experiments, the paper shows that AlphaPO's reward function shapes influence the training dynamics, leading to better generalization and reduced over-optimization. The results highlight the importance of reward function design in LLM alignment and suggest that AlphaPO's approach can be extended to other alignment methods.
## update after rebuttal
My position still stands as weak accept.
Claims And Evidence: * Reward shaping affects the training dynamics, and a properly shaped reward can alleviate the likelihood displacement issue.
This claim is supported by the analysis in Figure 2 tracking the likelihood for different alpha values.
* The paper claims better generalizability after reward shaping for the SimPO method. The claim is supported by superior results on widely acknowledged benchmarks in Table 1.
Methods And Evaluation Criteria: Makes sense. AlpacaEval and ArenaHard are the right benchmarks for preference alignment.
Theoretical Claims: I checked the proof of Theorem 3.1, and it looks good.
Experimental Designs Or Analyses: The experiment design is good.
Supplementary Material: NA
Relation To Broader Scientific Literature: It relates to offline preference training literature.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: The main weakness is that the absolute improvement over SimPO is limited. If the paper could address the stability issues of SimPO-like methods (choice of hyper-parameters), it would make a more impactful contribution.
Other Comments Or Suggestions: NA
Questions For Authors: Can you discuss more how you made the hyper-parameter choices for your experiments? Given that your results are very close to SimPO's, could hyper-parameters be the differing factor?
Also, does reward shaping help with stabilizing training? Finding the right hyper-parameters for SimPO has been a pain point.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: > The main weakness is that the absolute improvement over SimPO is limited. If the paper could address the stability issues of SimPO-like methods (choice of hyper-parameters), it would make a more impactful contribution.
We thank the reviewer for the comment. We would like to point out that our improvements over SimPO are actually significant and in line with other well-established papers:
- Reviewer **G3bY** pointed out that **AlphaPO** provides a significant improvement on **AlpacaEval 2** and **Arena-Hard** over methods like **DPO** and **SimPO**.
- We also compared our gains to the gains of established, popular methods like **ORPO** [1] and **SimPO** [2].
- **ORPO** achieves a relative gain of ~10% over the **Zephyr** model, which was the SOTA model at the time of publication (Table 1 in the ORPO paper).
- From Table 4 of the **SimPO** paper, it is clear that many established methods such as **SLiC-HF** [3], **IPO** [4], **ORPO** [1], and **KTO** [5] perform at par or worse than both **DPO** and **SimPO** on many benchmarks.
- Thus, improving upon **SimPO**'s performance is a challenging problem.
We thank the reviewer for the important comment about stability issues for SimPO related to choice of hyperparameters. We address this in the responses below.
> Can you discuss more on how you made the hyper-parameter choices for your experiment. Given that the differences between your result is very close to SimPO, could hyper-parameters be the differing factor?
As discussed above, we would like to emphasize that our improvements over SimPO are significant and consistent with observations from other papers (please refer to the discussion above).
From **Table 1** in our paper, we observe the following:
- For **Mistral-Instruct**, AlphaPO achieves an LCWR of **33.03**, while SimPO results in **29.71**. This is a substantial improvement.
- For **LLaMA-Instruct**, AlphaPO yields an LCWR of **45.37**, compared to **42.05** from SimPO — again, a significant improvement.
To tune AlphaPO, we started with the best-tuned SimPO baseline and then lightly tuned the `gamma` and learning rate (LR). In many cases, we did not need to modify `gamma` and LR at all. As for `alpha`, both theoretical insights and ablation studies indicate that extremely high positive or negative values suppress the gradient. In practice, a small positive value — typically around **0.1** or **0.25** — consistently improves performance over SimPO.
We believe AlphaPO could perform even better with more extensive tuning, which we were unable to pursue due to compute limitations.
We appreciate the reviewer's question and conducted an additional experiment to investigate further. Specifically, we took the SimPO-tuned hyperparameters, changed only `alpha` to a positive value, and trained an AlphaPO model. Note that this setup is not necessarily optimal for AlphaPO. The results are:
**LLaMA-3-Instruct**:
- AlphaPO (SimPO HPs, `α = 0.25`) → **LCWR: 43.33**, **WR: 38.51**
- SimPO→ **LCWR: 42.05**, **WR: 36.90**
**LLaMA-3-Instruct ArmoRM**:
- AlphaPO (SimPO HPs, `α = 0.25`) → **LCWR: 52.91**, **WR: 47.21**
- SimPO → **LCWR: 51.66**, **WR: 46.54**
These results demonstrate that simply modifying `alpha`, without changing any other hyperparameters, can lead to clear performance gains over SimPO.
> Also, does reward shaping help with stabilizing training, because find right hyper-parameter for SimPO has been a pain point for people.
Tuning the hyperparameters for SimPO has long been a challenge. While the reward shaping introduced in our method does help stabilize training by regularizing the gradient (see Theorem 3.1), empirically, SimPO with α = 0 typically exhibits the most aggressive increase in the margin (as demonstrated in Figures 14 and 15). However, despite its stabilizing effect against reward over-optimization, this does not always translate into improved generalization. Therefore, while it aids training stability, reward shaping does not fully alleviate the need for hyperparameter tuning to achieve optimal generalization performance; rather, it further enhances generalization on top of SimPO.
[1] Hong, Jiwoo, Noah Lee, and James Thorne. "Orpo: Monolithic preference optimization without reference model." arXiv preprint arXiv:2403.07691 (2024).
[2] Meng, Yu, Mengzhou Xia, and Danqi Chen. "Simpo: Simple preference optimization with a reference-free reward." Advances in Neural Information Processing Systems 37 (2024): 124198-124235.
[3] Zhao, Yao, et al. "Slic-hf: Sequence likelihood calibration with human feedback." arXiv preprint arXiv:2305.10425 (2023).
[4] Azar, Mohammad Gheshlaghi, et al. "A general theoretical paradigm to understand learning from human preferences." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.
[5] Ethayarajh, Kawin, et al. "Kto: Model alignment as prospect theoretic optimization." arXiv preprint arXiv:2402.01306 (2024).
---
Summary: The paper introduces AlphaPO, a variant of f-DPO that adopts $\alpha$-divergence and length normalization. The paper shows that varying $\alpha$ affects the shape of the implicit reward. With an appropriate value of $\alpha$, AlphaPO can mitigate the over-optimization issue of Direct Alignment Methods.
Claims And Evidence: Yes.
1. The authors perform extensive experiments over 2 RLHF benchmarks and 3 base models. The effects of crucial hyper-parameters ( $\alpha$ and margin $\gamma$ ) are studied.
2. The authors theoretically analyze the effect of varying $\alpha$ on the gradient scale and the likelihood of preferred responses.
There are a few questions though:
1. In section 3.3 the authors mention "large $|\alpha|$ values impose a regularization effect on alignment training due to the vanishing gradient for samples with positive length-normalized margins". Methods like IPO [1] and SLiC [2] also limit the margin between preferred and dispreferred responses through L2-loss or hinge loss. It would be better if the paper included a discussion of these works.
2. Section 3.2 states that length normalization is a crucial element, but its importance is not thoroughly discussed in the paper. For example, in Illustration 1 the length is fixed to 1.
[1] Azar, Mohammad Gheshlaghi et al. “A General Theoretical Paradigm to Understand Learning from Human Preferences.” ArXiv abs/2310.12036 (2023): n. pag.
[2] Zhao, Yao et al. “SLiC-HF: Sequence Likelihood Calibration with Human Feedback.” ArXiv abs/2305.10425 (2023): n. pag.
Methods And Evaluation Criteria: Yes. The authors demonstrate that AlphaPO with an appropriate $\alpha$ regularizes the magnitude of the positive margin, thus mitigating over-optimization.
Theoretical Claims: I have checked the main claims, which looks alright to me.
Experimental Designs Or Analyses: The paper follows standard evaluation protocol of RLHF. The effects of crucial hyper-parameters ( $\alpha$ and margin $\gamma$ ) are studied.
Supplementary Material: I have checked the additional experiments in the appendix. The results support the claim of the paper.
Relation To Broader Scientific Literature: The paper studies the problem of over-optimization, a well known issue in Direct Alignment Algorithms.
Essential References Not Discussed: See "Claims And Evidence".
Other Strengths And Weaknesses: **Strengths**
- There is a significant improvement on AlpacaEval 2 and Arena-Hard over previous methods like DPO and SimPO.
**Weaknesses**
- AlphaPO introduces an additional hyper-parameter $\alpha$ . It seems that the optimal value of $\alpha$ depends on the base model and must be found using grid search.
Other Comments Or Suggestions: I don't have other comments for this draft.
Questions For Authors: 1. The paper mainly focuses on RLHF tasks. How does AlphaPO perform on reasoning tasks, e.g., math or coding?
2. According to figure 2, the median of the margin stays around 0. But a well-trained model should have a positive margin between preferred and dispreferred responses, if I understand correctly.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: > In section 3.3 the authors mention "large values impose a regularization effect ...
We thank the reviewer for their insightful comment. The reviewer is right to point out that IPO and SLiC are important methods. The reasons we did not include them in the paper are (1) the SimPO paper already compares to SLiC and IPO, and demonstrates that SimPO is substantially better. (2) Both SLiC and IPO do not contain length normalization, which is key for generalization. (3) SLiC contains a separate term for log likelihood for the winning response, which can affect likelihood displacement (some likelihood displacement is actually necessary for good generalization). IPO, on the other hand, enforces its margin constraints solely via the L2-loss, which lacks a mechanism to encourage the controlled likelihood displacement.
We will update the paper to include these details in the draft.
> Section 3.2 states that length normalization is a crucial element.....
We do want to highlight that we indeed study the effect of generation length on generalization: (1) We show in Figure 3 (left panel) and Figure 8 that longer responses usually result in lower quality (AE 2.0 length-controlled win rate) as we vary $\alpha$. (2) Section 2.2 of the SimPO paper comprehensively discusses that length normalization is a key element in preventing the reward formulation from developing a bias towards generating longer but lower-quality sequences.
The reviewer is right to point out that Illustration 1 uses a fixed length of 1. We created a few more illustrations with length > 1: a 3D plot with the gradient norm in log scale on the z-axis, alpha on the y-axis, and the length of $y_w, y_l$ on the x-axis. We use the following parameters for the plot: $\log \pi_w = -5$, $\log \pi_l = -10$, $\beta = 5$, $\partial \pi_w / \partial v = \partial \pi_l / \partial v = 1$. The resulting plot can be found [here](https://i.imgur.com/cxM7PFX.png). From the plot: (a) the gradient goes to zero whenever alpha goes to $\pm\infty$, as proved in Theorem 3.1; (b) in general, the gradient is not a monotonic function of alpha.
We will be happy to include this improved illustration in the camera-ready version.
> AlphaPO introduces an additional hyper-parameter α. ....
Please see our response for Reviewer **2gDt** for a detailed treatment.
> The paper mainly focuses on RLHF tasks. How does AlphaPO perform on reasoning tasks, e.g., math or coding?
Thank you for the question. Our evaluation is on **AlpacaEval 2.0 (AE2.0)** and **ArenaHard**. Both benchmarks include math and coding questions. Below are one example for each category from each benchmark.
## AE2.0
### Coding
- **Question 1:**
Write a C++ function that takes a reference to a `std::string` containing markdown formatted text and returns a `std::string` containing html formatted text.
### Math
- **Question 1:**
Given that *f(x) = 5x³ - 2x + 3*, find the value of *f(2)*.
---
## ArenaHard
### Coding
- **Question 1:**
I have a Python script that scrapes a webpage using Playwright. Now I want to start ten instances of that script in parallel on one AWS EC2 instance, but so that each script binds to a different IP address. How can I do that with Terraform?
### Math
- **Question 1:**
What is the 95% confidence interval for the sum of 100 fair six-sided dice?
Based on the reviewer’s questions, we conducted additional experiments:
- **HellaSwag** – A commonsense reasoning benchmark.
- **TruthfulQA** – Carefully designed questions to test models' susceptibility to common misconceptions and their factual accuracy.
We compare **AlphaPO** to **SimPO** on these datasets:
### HellaSwag (10 Shot)
| Model | AlphaPO | SimPO |
|------------------------|---------|--------|
| Llama3-8b-instruct | 0.7694 | 0.7576 |
| Mistral-7b-instruct | 0.8638 | 0.8610 |
### TruthfulQA
| Model | AlphaPO | SimPO |
|------------------------|---------|--------|
| Llama3-8b-instruct | 0.6142 | 0.6078 |
| Mistral-7b-instruct | 0.7127 | 0.7061 |
Based on these results, it is clear that **AlphaPO outperforms SimPO**.
> According to figure 2, the median of the margin stays around 0 ..
In Figure 2, we plot the margin of length-normalized log-probability, i.e., $\frac{\log \pi_w}{|y_w|} - \frac{\log \pi_l}{|y_l|}$, instead of the reward margin (which carries an additional factor of $\beta$), to better track the training dynamics. That is why the absolute value of the margin is small. For $\alpha=0$, our results align with the training curve open-sourced by the SimPO authors [link](https://wandb.ai/yumeng0818/simpo/runs/4w25j650?nw=nwuseryumeng0818) for the Gemma2 model (the Mistral training curve was not published by the authors).
At the end of training, the SimPO authors reported the average eval margin of 0.48 and our average eval margin is 0.528 and the median is 0.372 (Figure 6). Similarly, the positive but small margin of length-normalized probability is as expected. | null | null | null | null | null | null | null | null |
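For concreteness, the quantity tracked in Figure 2, the margin of length-normalized log-probabilities, can be sketched as follows. This is a minimal illustration, not code from the paper; the sequence log-probabilities and lengths are placeholder inputs.

```python
def ln_margin(logp_w: float, len_w: int, logp_l: float, len_l: int) -> float:
    """Margin of length-normalized log-probabilities:
    log(pi_w)/|y_w| - log(pi_l)/|y_l|."""
    return logp_w / len_w - logp_l / len_l

# Example: a preferred response with total log-prob -50 over 100 tokens
# vs. a dispreferred one with -120 over 200 tokens; the margin is small
# and positive (roughly 0.1, up to floating-point rounding), which matches
# the scale of the curves discussed above.
print(ln_margin(-50.0, 100, -120.0, 200))
```

Note that without the division by length, the raw log-probability margin of the same pair would be -50 - (-120) = 70, which illustrates why the length-normalized quantity stays near zero even for well-trained models.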
Efficient Multivariate Robust Mean Estimation Under Mean-Shift Contamination | Accept (poster)
---
Summary: In this paper the authors look at the problem of robust mean estimation under mean-shift contamination. Here one receives samples from a standard d-variate Gaussian with mean m such that m is $\mu$ with 2/3 probability and can be something else otherwise. The goal is to recover $\mu$ up to an $\epsilon$ error in L2 norm. Such a guarantee is information-theoretically impossible under Huber's contamination model. However, in this case one can use the fact that the contamination is not arbitrary but only a mean shift to solve the problem.
Claims And Evidence: The authors achieve a sample complexity of $poly(d,2^{\epsilon^{-2}})$ and a time complexity of $poly(n,d)$ to solve this problem.
The claims look clear and convincing to me.
Methods And Evaluation Criteria: The authors give sound theoretical arguments to justify their claims.
Theoretical Claims: I haven't gone through the theoretical proofs given in the paper. But based on the overview given in 1.2 and the prior results on this topic, the results look correct to me.
The main idea is to find a direction where the mean has a large projection and once found it suffices to run an one-dimensional estimation along that direction. Along the way, the authors manage to address several technical hurdles, in particular, while obtaining the aforesaid direction.
Experimental Designs Or Analyses: However, experimental evaluation is missing. Experimental verification of the proposed algorithm would have added further value to the main claims of the paper.
Supplementary Material: I have gone through the additional related work
Relation To Broader Scientific Literature: This particular spectral filtering algorithm and its variations have been extensively studied in numerous results over the last decade. This paper builds on this technique, and one of its main contributions is identifying this problem of mean estimation under mean-shift contamination, where the aforesaid technique can be applied.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths: as mentioned before, one of the main contributions is identifying this problem to which the recently proposed spectral techniques can be applied.
Weakness: the main weakness I see is that the result looks incremental. As I said previously, this particular algorithm (the spectral filtering algorithm) and its analysis have been investigated many times in recent years. The lack of experimental evidence is another weakness of the paper, given that it is easy to experiment with synthetic Gaussian data. This makes me less excited about the paper for a top ML venue such as ICML.
Other Comments Or Suggestions: NA
Questions For Authors: Where is the dependence on $\alpha$ hiding in Theorem 1.2? I would guess the problem becomes harder as $\alpha$ gets larger.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We thank the reviewer for their effort and time in assessing our work. We respond to the points raised individually below:
(**Experiments**) We would like to emphasize that our primary contribution is to characterize the computational-statistical landscape for this fundamental learning task—in terms of error guarantee, sample complexity, and runtime. That being said, our algorithm is already fairly simple and potentially practical, as it uses a dimension-reduction procedure with each step essentially being an SVD of a reweighted covariance matrix. Also, while it is easy to generate synthetic Gaussian data, it is not clear what distribution on errors should be considered, making the meaningfulness of experiments questionable.
(**Novelty**) We would like to stress that our work develops a novel spectral dimension-reduction technique, rather than applying/adapting methods from prior work. To the best of our knowledge, there are two “spectral methods” proposed in prior work for robust mean estimation (in Huber’s model). Both of them are significantly different than our algorithm, as we discuss below:
* (*Spectral filtering from Diakonikolas et. al (2016)*) This method consists of iteratively computing the top eigenvector of the empirical covariance and removing points whose projected deviation from the mean along that direction is higher than a carefully chosen threshold. This has the effect of removing more inliers than outliers until the dataset becomes so clean that the empirical mean is accurate. Importantly, this iterative filtering approach cannot yield consistent estimation (for the mean-shift model that we study), essentially because it does not take into account the additional structure of the outliers.
The algorithm we propose is fundamentally different. Rather than removing points based solely on the top eigenvector of the empirical covariance, our approach performs dimension reduction. It uses multiple eigenvectors of a reweighted covariance matrix as the subspace of the next iteration. In our method, each point is assigned an exponential weight, with carefully designed reweighting factors that are updated throughout the execution of the algorithm (see lines 126-144, second column and 175-186, first column for intuition on choosing the reweighting factor). Finally, once the problem’s dimension has been reduced sufficiently, we employ an inefficient estimator on the lower-dimensional data.
* (*Dimension reduction from Lai et al. (2016)*) Although the goal in that technique is also to reduce the dimension, as we explain in lines 662-671 in Appendix A of the paper and in our response to Reviewer BKkQ, the similarities are only superficial. The method from Lai et al. (2016) uses the standard covariance matrix to identify the subspace for the next iteration. This method also fails to leverage the special nature of the outliers. In contrast, our method reweights the space with an exponential function that encodes our prior knowledge on the outliers in order to truncate their effect in the second moment matrix. Although one could broadly view this reweighting as a kind of “soft filtering”, this is fundamentally different from the filtering of the previous paragraph: there, points were removed based solely on their deviation along a single direction, where here the weight assigned to each point is based on the norm of the vector. Moreover, as we comment in lines 662-671 of the paper and in our response to Reviewer BKkQ, our dimension reduction comes with a number of technical obstacles that are due to the nature of our reweighting and need to be carefully addressed.
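To make the contrast concrete, below is a toy sketch of one spectral dimension-reduction step of the kind described above. The Gaussian-style weight function and the parameter `tau` are illustrative assumptions only; the paper's actual reweighting factors are carefully tuned and updated across iterations.

```python
import numpy as np

def dim_reduce_step(X: np.ndarray, k: int, tau: float = 10.0):
    """One toy spectral dimension-reduction step: exponentially downweight
    points with large norm (diluting outlier influence in the second-moment
    matrix), then keep the top-k eigenvectors of the reweighted matrix.
    X is an (n, d) data matrix; returns projected data and the (d, k) basis."""
    w = np.exp(-np.linalg.norm(X, axis=1) ** 2 / tau)  # illustrative weights
    M = (w[:, None] * X).T @ X / w.sum()               # reweighted second moment
    _, eigvecs = np.linalg.eigh(M)                     # eigh: ascending order
    V = eigvecs[:, -k:]                                # top-k eigenvectors
    return X @ V, V
```

Iterating such a step shrinks the ambient dimension while, in the paper's analysis, preserving most of the projection of the target mean; once the dimension is small enough, an inefficient low-dimensional estimator takes over.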
(**Dependence on $\alpha$**) Theorem 1.2 holds for any $\alpha \leq 0.49$. As we reply to Reviewer oS19, the most interesting parameter regime in our opinion is when the fraction of outliers $\alpha$ is a positive constant. Essentially, this is the hardest setting for efficient estimation. In that regime, no sub-exponential time algorithm was known to achieve error lower than $\Omega(1)$, while our algorithm can obtain arbitrarily small error in sample-polynomial time; and its sample complexity is information-theoretically near-optimal (as follows from the lower bound of Kotekal & Gao, 2025). As we show in Appendix F, the parameter $0.49$ in the condition of Theorem 1.2 can be replaced by any other absolute constant strictly smaller than $½$ (and the error will degrade by an absolute constant factor). Furthermore, if one prefers to use the reparameterization $\alpha = ½ - c$, where $c$ is a parameter (rather than an absolute constant), the error degrades in a way that increases as $c$ approaches zero. We refer to the guarantee in Claim F.2 for the precise functional form of this dependence, which quantifies how the problem becomes more difficult as $\alpha$ increases.
---
Rebuttal Comment 1.1:
Comment: I have carefully read the reply from the authors. Thanks. My major concerns regarding the acceptance of the paper still remain:
- Lack of experimental contribution: I don't agree that simple experiments with robust statistics algorithms can't be done; see arXiv:2002.01432 or arXiv:1506.02428, for example.
- The authors could not answer my question regarding the explicit dependence on alpha.
- I still feel that the technical contribution is quite limited. In particular, the 1D case has been well understood, and this paper falls back to it as a black box after a dimension-reduction step. The data-dependent dimension-reduction step itself goes back to earlier works such as [arXiv:2006.12476] and its follow-ups.
I will therefore maintain my assessment.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their time and for expressing their opinion on our submission. We respond to each point made by the reviewer (in some cases reiterating points made in our first response).
**Experiments**: We note that the ICML call for papers welcomes papers in the “Theory of Machine Learning” category. Typically, such papers design algorithms with provable performance guarantees (as our work) without providing experimental evaluations of these algorithms. In other words, the lack of experiments per se should not be counted against a learning theory submission. That said, we believe that experimental evaluation of our algorithm is possible, and would be interesting to do in future work.
**Dependence on $\alpha$**: Reiterating our first response, our main theorem was stated for $\alpha \leq 0.49$. Since the problem becomes harder as $\alpha$ increases, the worst-case occurs when $\alpha = 0.49$. Hence the “dependence on $\alpha$” is a universal constant that appears in the big-O of the sample complexity. If the reviewer was asking what happens when $\alpha$ becomes arbitrarily close to $1/2$, then we point out that the sample complexity would have a term that scales with $1/(1-2\alpha)$.
**Technical Contribution**: The reviewer’s first concern appears to be based on the fact that the one-dimensional version of our problem was previously solved, and that we use a dimension-reduction procedure to reduce to that case. The reviewer is also concerned about similarities with the dimension-reduction method in [arXiv:2006.12476].
On the first point, we would like to remark that essentially every single paper in algorithmic robust statistics over the past decade has developed efficient algorithms for high-dimensional robust estimation tasks whose one-dimensional versions were already solved. The whole contribution is the design of a computationally efficient algorithm in high dimensions. As an example, Lai et al. (2016)---one of the first papers that initiated the field—as well as many more recent works, also reduces to the one-dimensional case.
On the second point: The similarity of our method to the method in [arXiv:2006.12476] (which studies a supervised learning problem without corruptions) is that we look at large eigenvalues of some kind of moment matrix in order to do dimension-reduction. This is in fact a classical generic idea in the literature that did not originate with [arXiv:2006.12476]. That said, that is where the similarities end. Specifically:
* In our setting, we needed to develop a carefully tuned *exponential term* in our moment computations to dilute the effect of outliers without substantially altering the contributions from the non-outliers. This idea is new and it is crucial for the correctness of our algorithm. This exponential reweighting was developed specifically to leverage the structure of the outliers in our contamination model.
* The aforementioned exponential reweighting meant that our “mean” was no longer an unbiased estimator of the true mean. To address this, we needed to develop a new analysis to show that the component of the bias in the direction of small eigenvalues was small.
* In order to develop a sample-near optimal algorithm, we needed to do a much more careful analysis of the convergence of the error matrix.
* Importantly, our result requires a *recursive* dimension-reduction rather than a one-shot method.
* Once we have reduced to a low-dimensional problem, we need a vastly different method of solving the lower dimensional problem efficiently. | Summary: The authors consider the problem of robust mean estimation of an identity covariance Gaussian in the presence of mean-shift contamination. They specifically give the first computationally efficient algorithm for high-dimensional robust mean estimation with mean-shift contamination. Their algorithm has near-optimal sample complexity, runs in sample-polynomial time, and approximates the target mean to any desired accuracy. It does so by using a data dependent dimension reduction technique which recovers a low-dimensional subspace on which μ has a large projection, and then running a low-dimensional robust estimator on that subspace.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods make sense for the problem or application at hand.
Theoretical Claims: I checked the correctness of the claims in the main body and the full proof of Theorem 1.2 in the appendix, skipping all other technical claims in the appendix.
Experimental Designs Or Analyses: There are no experimental designs in the paper.
Supplementary Material: I read the proof of the main theorem and skipped the technical proofs.
Relation To Broader Scientific Literature: The paper formulates the first computationally efficient algorithm for this problem that runs in time polynomial in the sample complexity.
Essential References Not Discussed: According to my knowledge, there are no essential works not currently cited in the paper.
Other Strengths And Weaknesses: The strength is the algorithm construction, which makes it run in sample polynomial time. The discussion on the algorithm construction is thorough, breaking it down to multiple intuitive steps.
Other Comments Or Suggestions: In the paper, it is implicitly assumed that $\|\mu\| = O(1)$ while, in general, $\|\mu\|$ can be $O(d)$. The authors should elaborate on why that is a reasonable assumption, and on what happens to the algorithm in case this assumption doesn't hold.
Questions For Authors: I have no questions for the authors.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We thank the reviewer for their time and for their positive assessment of our work. Below, we provide a clarification in response to their comment.
The introductory proof sketch mentions that we can assume $\lVert\mu\rVert = O(1)$ due to naive outlier removal. This means that even if $\lVert\mu\rVert$ is arbitrary, we can preprocess the data to reduce it to the case where $\lVert\mu\rVert = O(1)$; a step that is formally carried out in the subsequent sections containing the full proofs. The argument is quite simple: At the start of the algorithm (line 6 of Algorithm 1), we perform a preprocessing step to obtain a rough estimate $\hat{\mu}_0$ satisfying $\lVert\hat{\mu}_0 - \mu\rVert = O(1)$. This can be achieved using any black-box outlier removal estimator designed for the Huber model from prior work. We then draw a new dataset and replace each point $x$ in that dataset by $x - \hat{\mu}_0$. This transformation effectively shifts the data distribution so that its true mean becomes $\mu - \hat{\mu}_0$, which has norm $O(1)$. After the algorithm produces an $\epsilon$-approximation $\hat{\mu}_1$ for this shifted mean, we simply add back $\hat{\mu}_0$ to $\hat{\mu}_1$, obtaining an $\epsilon$-approximation of $\mu$ (line 26 of Algorithm 1).
---
Summary: This paper continues the recent line of work on robust mean estimation in high dimensions, with a focus on getting arbitrarily small (eps) error in the mean despite an alpha-fraction (for alpha in [0, 0.49]) of outliers. The more commonly studied Huber model of outlier contamination can only achieve O(alpha) error, so this method has more requirements:
- the inliers are of form N(mu, I) [this is very common], and outliers x_i of form N(z_i, I), notably with same covariance as inliers
- the n = Omega(2^O(1/eps^2)) samples are required
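The two-stage recentering argument sketched in the rebuttal above (subtract a rough $O(1)$-accurate estimate from a fresh dataset, estimate the shifted mean, add the rough estimate back) can be illustrated in code. This is only a schematic of the reduction, not the paper's algorithm; the estimator arguments are hypothetical placeholders:

```python
import numpy as np

def two_stage_mean(data_a, data_b, rough_estimator, fine_estimator):
    # Rough estimate from the first dataset: ||mu0 - mu|| = O(1).
    mu0 = rough_estimator(data_a)
    # Shift a fresh dataset so its true mean becomes mu - mu0, of norm O(1).
    shifted = data_b - mu0
    # An eps-accurate estimate of the shifted mean ...
    mu1 = fine_estimator(shifted)
    # ... plus the rough estimate gives an eps-accurate estimate of mu.
    return mu0 + mu1
```

Any black-box Huber-model estimator can play the role of `rough_estimator`; the point of the reduction is only that it brings the unknown mean into the $\lVert\mu\rVert = O(1)$ regime.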
The fact that the outliers have the same covariance as the inliers seems like a mild assumption, and could maybe be relaxed further; it seems mostly needed to show that the outlier covariance is full rank and psd. But the form considered is natural.
The required sample size is a large increase from Theta(d / eps^2) for robust methods that obtain O(alpha) error (with eps <= alpha). But the 2^Omega(1/eps^2) term appears necessary (from Kotekal+Gao 2025) even for d=1.
Moreover, this second point means that if we can reduce the problem to k = 1/eps^2 dimensions, then we can build a net over the space of directions (with 2^O(1/eps^2) directions), apply the 1-d version in each one, and return a point in the intersection of the resulting intervals. This is what the proposed method does.
What remains is to project to k dimensions so that the true inlier mean incurs at most eps error in this projection. This is done iteratively somehow using the now-standard insight that the directions with most covariance contain both the true inlier mean, and any meaningfully biased set of outliers. As such, the algorithm iteratively finds the top eigenvectors of the covariance matrix, and projects onto them -- reminiscent of Lai, Rao, Vempala 2016.
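The iterative projection step described above can be illustrated schematically (this uses the plain centered second moment for illustration only; it is not the paper's reweighted construction):

```python
import numpy as np

def project_top_directions(X, k):
    # Center the data and form the empirical second moment.
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(X)
    # Top-k eigenvectors: the high-variance directions that must contain
    # both the inlier mean and any meaningfully biased set of outliers.
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    top = vecs[:, -k:]
    return X @ top, top
```

Repeating this on the projected data is the Lai-Rao-Vempala-style loop the review alludes to; the paper's contribution lies in how the second moment is reweighted before this step.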
That said, some new ideas seem needed here -- especially ones that make use of the specific structure of the outlier distributions.
There are numerous details to get this to work, and tighten the runtime. The details are technical, and appear to be handled correctly, but do not seem to use wholesale new insights.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I followed the main arguments, and feel reasonable confident that they work. But I did not verify all of the details.
Experimental Designs Or Analyses: This is a theory paper, and no experiments are presented. That is not a problem.
Supplementary Material: I skimmed it, but did not go over all proofs in detail. It seems comprehensive.
Relation To Broader Scientific Literature: The paper seems to have done a good job.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strength: I think the arbitrary smaller error variant of this problem is interesting to study. This paper makes a major advancement in that sense going from d=1 to high dimensions d.
Weakness: the runtime of 2^O(1/eps^2) is quite large, which limits the general relevance of the result. That said, recent prior work claims this is needed even in d=1.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their effort and their positive assessment of our work. We respond to the comments below:
* (**Comparison with Lai, Rao, and Vempala 2016**) As we discuss in detail in Appendix (lines 662–671), although Lai, Rao, and Vempala (2016) iteratively reduce the dimension using the centered second moment of the data, this approach can only achieve an error of order $\alpha$, at best (in fact, their algorithm additionally incurs error with a logarithmic dependence in the dimension). Our dimension-reduction technique is fundamentally different, with the key innovation being the use of exponential reweighting for each sample. This mitigates the impact of outliers (in our model) and enables consistent estimation. Implementing this approach introduces additional challenges that must be addressed: (i) our dimension reduction method requires the space to have at least $1/\epsilon^2$ dimensions (as opposed to just one dimension in Lai, Rao, and Vempala 2016); (ii) the parameter used in exponential reweighting influences both the centering applied in the computation of the centered second moment and the sample complexity. As explained in Section 1.2, the first attempt for setting this parameter correctly does not immediately allow for reduction all the way down to the desired level. Instead, our algorithm operates in two stages: the first stage performs the initial dimension reduction from the last sentence, and the second stage resets that parameter to a refined value that allows for further reduction.
* (**Runtime**) As the reviewer points out, the runtime of $2^{O(1/\epsilon^2)}$ is unavoidable, even in one dimension. This is because, even in the one-dimensional case, estimating the mean up to error $\epsilon$ information-theoretically requires $2^{\Omega(1/\epsilon^2)}$ samples, as shown in Kotekal & Gao (2025). The key conceptual point of our paper is that, prior to our work, it was unknown whether the high-dimensional problem incurs a significantly higher runtime (e.g., $2^{\Omega(d/\epsilon^2)}$). We rule out this possibility by showing that runtime $\text{poly}(d) \cdot 2^{O(1/\epsilon^2)}$ is enough (i.e., the dependence on the dimension is polynomial instead of exponential). Notably, this implies that setting $\epsilon = \Theta(1/\sqrt{\log d})$ allows for error tending to zero as $d \to \infty$ within an overall $\text{poly}(d)$ runtime. In contrast, every prior algorithm for the Huber model cannot achieve an error smaller than $\Omega(\alpha)$, which could be an absolute constant.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Not applicable
Supplementary Material: Yes, I reviewed all supplementary material
Relation To Broader Scientific Literature: This work extends the literature on robust mean estimation, studies a high-dimensional setting of the mean-shift contamination. This is the first result which looks at the computationally efficient algorithms for such contamination model.
Essential References Not Discussed: No to the best of my knowledge.
Other Strengths And Weaknesses: Strengths: studies an interesting setting of robust mean estimation where consistent estimators exist. Proposes a computationally efficient algorithm for the case where inlier samples follow $\mathcal{N}(\mu, I)$.
Weaknesses:
1. The work only covers identity covariance case.
2. The dependency on the corruption rate $\alpha$ does not appear in the main result.
Other Comments Or Suggestions: I suggest adding more details on why the $2^{O(1 / \varepsilon^2)}$ term is necessary in the sample complexity. I do not think it is true in general (see Questions).
Questions For Authors: It seems that the work implicitly assumes that $\alpha = \Omega(1)$?
What happens when $\alpha = 0$ or $\alpha = \varepsilon$?
In both cases there exist much more sample-efficient algorithms than what is provided by the authors.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time, effort and their clarifying questions. We comment on the points mentioned and respond to the reviewers questions below:
(**Identity covariance**) As pointed out by the reviewer, our algorithm works under the assumption that the covariance of the inliers is known. Please note that our algorithm does not require the covariance to be exactly equal to the identity; it is sufficient for the covariance to be *known*. (As pointed out in lines 85-88, for known covariance, by applying an appropriate transformation to the samples, we can reduce the problem to the identity covariance case.) We study the known covariance case, as it is the most fundamental setting for which we demonstrate the main conceptual point of the paper, which is an examination of how more structured adversaries in robust statistics can alter the statistical and computational landscape of estimation. As discussed in Section 3, extending our algorithmic result to the case of unknown covariance is an interesting problem for future work. There are several nuances in defining the problem for the unknown covariance case. For the sake of space, we point the reviewer to lines 629-654 for a thorough discussion.
(**Dependence on alpha**). We discuss the two relevant points below:
* (*Dependence in the main theorem*). The result in Theorem 1.2 holds for any $\alpha < 0.49$ (where $0.49$ can be replaced by any constant near $1/2$, as shown in Appendix F). The main conceptual point of the theorem is that even when nearly half of the samples are outliers, the algorithm efficiently achieves any *desired* error $\epsilon$ with the stated sample complexity of roughly $d/\epsilon^{2 + o(1)} + 2^{O(1/\epsilon^2)}$. This sample complexity is tight in this setting. Any prior algorithm would either require exponential runtime or incur a significantly larger error of $\Omega(1)$ (see lines 72-80 of the paper where this is explained), whereas our result allows for arbitrarily small error. Regarding other regimes for $\alpha$, prior work on one dimension (Kotekal & Gao (2025)) has fully specified the sample complexity as a function of both $\alpha$ and $\epsilon$. It would be interesting to investigate the exact dependence in high dimensions. However, as with most prior work on robust statistics, the most interesting regime is the one where the fraction of outliers $\alpha$ is a positive constant, since this is the hardest scenario for estimation (with the smallest amount of information about the target mean).
* (*Special cases for $\alpha$*). Formally speaking, setting $\alpha = \epsilon$ is insufficient to obtain error $\epsilon$ using a Huber robust estimator, as any estimator for the Huber model incurs an error of at least $C\epsilon$ for some constant $C > 1$ (as established in the robust statistics literature). Ignoring this nuance regarding absolute constants, the regime $\alpha = \Theta(\epsilon)$ (and even more so $\alpha = 0$) corresponds to special cases where existing algorithms can be directly applied to achieve error $O(\epsilon)$—specifically, any algorithm designed for Huber contamination or the sample mean setting, respectively. Thus, these cases are orthogonal to our work as they are special cases that can be handled by existing algorithms. As mentioned earlier, the main question we focus on is obtaining a result for the case when $\epsilon \ll \alpha$, i.e., we aim to estimate with error significantly smaller than the fraction of outliers. This is an objective that was impossible to achieve with algorithms designed for the Huber model, and our algorithmic result for constant fraction of outliers has tight sample complexity. | null | null | null | null | null | null |
Hyperband-based Bayesian Optimization for Black-box Prompt Selection | Accept (poster) | Summary: This paper assumes a candidate instruction set and a candidate example set and aims to optimize prompts by automatically selecting the best instruction + example combination. The method combines Bayesian Optimization with Hyperband, where the main contribution compared to previous work is using Hyperband to discard poor-performing prompts early, reducing computational cost.
Claims And Evidence: The paper makes two main claims:
1. Modeling the structural information of instructions and examples separately and using Gaussian Process for prompt selection optimisation (they call it structural-aware deep kernel GP) is better.
->The paper provides dimensionality reduction comparison experiments (t-SNE visualization), showing that DK-GP aligns the prompt performance better in the low-dimensional space. They also did ablation experiments, demonstrating that structural-aware deep kernel GP outperforms a GP that does not model the structure.
2. HbBoPs improves efficiency by reducing computational cost through Hyperband’s early stopping mechanism.
->The experimental results are solid, and the fact that Hyperband improves efficiency is also expected.
Methods And Evaluation Criteria: The setup assumes that we have a large candidate instruction set and candidate example set, which already includes the high-quality choices we are looking for. Then, BO is used for efficiency to find the best combination. This application scenario feels somewhat limited because if a human were writing prompts, the number of candidates wouldn't be that large. Essentially, we would first need some method to generate a large set of instructions and examples to choose from, but the quality of this set itself is unknown. In 3.2 Structural-aware Deep Kernel, the feature extractor is trained separately for the instruction set and example set. If we later realize that both sets are of poor quality and want to update them, then the previous training becomes wasted and difficult to adjust.
When running the experiments, HbBoPs does achieve a lower error rate with fewer LLM calls, but I am curious why the metric here is calls. Typically, we measure time and tokens instead. Also, it seems like the cost does not include the feature extractor’s computation overhead, but generally, comparison methods should consider the total cost.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: The experiments corresponding to the main claims of the paper are conducted clearly, and the results are well-presented.
However, the paper does not explicitly compare different n-shot settings and instead fixes 5-shot throughout the experiments. I am curious about how factors such as candidate set size, quality, and n-shot choices affect the method’s performance and convergence. As mentioned earlier, this method seems to be designed specifically for scenarios where we already have a large, high-quality candidate set, since the candidate set is fixed and cannot be updated in the current framework. This raises concerns about its applicability to situations where prompt refinement is iterative. Additionally, the dimensionality reduction step does not explain why 10 dimensions were chosen, and many hyperparameters feel somewhat arbitrarily set without justification.
Supplementary Material: No supplementary material provided
Relation To Broader Scientific Literature: Overall, this work feels somewhat narrow in scope, focusing on a specific setup where a high-quality, fixed candidate set is assumed, without addressing broader prompt optimisation techniques that involve iterative refinement or generation of new candidates.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The paper is easy to follow.
Other Comments Or Suggestions: It would be useful to discuss whether and how the approach could handle updates to the candidate set. Maybe compare with baselines that can handle updates, to make this work more convincing.
Questions For Authors: As I discussed above.
Ethical Review Concerns: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive review.
We are pleased that the reviewer finds our method efficient, acknowledges the strength of our experimental results, and appreciates the clarity of the paper.
Below, we respond to the concerns raised in the review.
`1. The setup assumes that we have a large candidate instruction set and candidate example set, which already includes the high-quality choices we are looking for.`
Our method is designed for the static black-box prompt selection setting, as discussed in Section 6, which is distinct from iterative prompt optimization frameworks.
That said, while our experiments use a fixed candidate set to ensure fair comparison across methods, our approach does not require a fixed candidate set.
It can be readily integrated into dynamic or iterative pipelines, where new prompts are generated - e.g., via evolutionary strategies ([Fernando et al., 2024, ICML](https://proceedings.mlr.press/v235/fernando24a.html)) - which update the candidate set.
`2. This application scenario feels somewhat limited because if a human were writing prompts, the number of candidates wouldn't be that large.`
In human-in-the-loop scenarios, combining a small number of candidate instructions with few-shot exemplars of different examples and permutations quickly leads to a combinatorially large search space. Therefore, moderately sized initial sets can produce rich candidate pools that benefit from principled selection.
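As a back-of-the-envelope illustration of this combinatorial growth (the pool sizes below are hypothetical, not taken from the paper), the number of joint candidates is |instructions| x P(|examples|, k) when exemplar order matters:

```python
from math import perm

n_instructions = 10   # hypothetical candidate instructions
n_examples = 20       # hypothetical candidate few-shot examples
k = 5                 # exemplars per prompt, order matters

# Each prompt pairs one instruction with an ordered selection of
# k distinct examples, so even small pools explode combinatorially.
n_prompts = n_instructions * perm(n_examples, k)
print(n_prompts)  # 10 * (20*19*18*17*16) = 18,604,800 candidate prompts
```

Even these modest pools yield millions of distinct prompts, which is why exhaustive evaluation is out of reach and principled selection pays off.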
`3. The feature extractor is trained separately for the instruction set and example set. If we later realize that both sets are of poor quality and want to update them, then the previous training becomes wasted and difficult to adjust.`
We acknowledge that changing the candidate set mid-optimization requires embedding new prompts. However, prior evaluations are not wasted. We retain all prior evaluations and use them to learn the relationship between latent prompt representation and performance. If newly added candidates are drawn from a different distribution and improve upon earlier ones, the GP will reflect this in its updated posterior enabling our method to adapt.
`4. LLM calls used as cost metric`
Overall wall clock time for the entire pipeline is less informative in the context of prompt selection, as prompt evaluations can be parallelized via asynchronous API calls.
In the case of black-box LLMs accessed via APIs, cost is naturally tied to the actual monetary API cost.
While token count could be used, API pricing distinguishes between input and output tokens (e.g., cost per 1000 input/output tokens), and these rates vary across LLMs.
Aggregating results across different models using token-based cost introduces additional complexity.
In contrast, the number of LLM calls offers a clean, model-agnostic measure of cost and it was positively noted by reviewers vVkq, rbkb and NvHi that our method reduces this cost.
`5. Cost of feature extractor`
We observe an overhead (on an M1 processor) of approximately 1-4 seconds per BO proposal, including training the deep kernel GP and selecting the next candidate prompt.
This is similar to overhead induced by EASE or TRIPLE-GSE.
We consider this overhead minimal in practice relative to the latency and monetary cost incurred by multiple LLM queries required for prompt evaluation.
Moreover, the overhead of the encoder is small, since these encoders only have on the order of millions of parameters.
`6. Fixed 5-shot setting`
We conducted preliminary investigations into the effect of the number of few-shot examples.
Results indicated that increasing the number of examples improves the performance of prompts - similar to [Wan et al. (2024, NeurIPS)](https://proceedings.neurips.cc/paper_files/paper/2024/hash/6b031defd145b02bed031093d8797bb3-Abstract-Conference.html).
EASE [(Wu et al., 2024, NeurIPS)](https://proceedings.neurips.cc/paper_files/paper/2024/hash/dd8e7dae18cecd7c9137840161e1bf62-Abstract-Conference.html), a competitor, uses $k=5$ in the experiments which we mimicked.
From a methodological standpoint, we expect that increasing the number of few-shot examples further amplifies the benefits of our method.
Competitors, in contrast, are not designed to disentangle the contributions of the instructions and few-shot exemplars.
`7. Hyperparameters of the deep kernel`
The latent dimensionality $d=10$ was motivated by widely observed limitations in the BO literature, where vanilla GPs tend to perform poorly in moderate to high-dimensional input spaces. Therefore, we selected $d=10$ as a conservative upper bound.
With respect to the architecture and hyperparameters of the feature extractor, we acknowledge that these choices were not exhaustively tuned. However, we adopted widely accepted defaults: feedforward MLPs with ReLU activations, trained using AdamW with common hyperparameters.
We hope that our responses have adequately addressed the concerns raised by the reviewer, and we remain happy to clarify any follow-up questions. | Summary: This paper contributes in two main ways for prompt optimization in the context of maximizing total scores over datasets (e.g. GSM8K):
* Introducing a deep kernel GP, i.e. project high-dimensional prompt embeddings into a lower dimension before sending vectors to kernel
* Introducing standard Hyperband technique for multi-fidelity scheduling (i.e. avoid evaluating using the full validation set in GSM8K).
Experiments show this method outperforms previous prompt optimization methods (TRIPLE-GSE, TRIPLE-SH, MIPROv2, EASE), and ablations are done to analyze the effects when using different base models (Claude, LLaMA, Mistral) and different embedders (BERT, MPNet, DistillRoBERTa).
Additionally Appendix A contains important ablations on understanding the behavior of the projected embeddings (i.e. if they match with the objective function).
Claims And Evidence: Yes, the experiments are fairly comprehensive. I was most interested in making sure the projected embeddings on validation settings made sense (i.e. new objective landscape is smooth). It's possible that the high parameter-count of the embedding projector could've led to overfitting when performing GP logprob maximization, but apparently not.
Methods And Evaluation Criteria: Yes, the method makes complete intuitive sense and matches standard intuitions from Bayesian Optimization (e.g. making sure the Gaussian Process works well for regression) + using Hyperband for multifidelity optimization.
Theoretical Claims: N/A - no theoretical claims.
Experimental Designs Or Analyses: As mentioned earlier, I just wanted to make sure that the new deep kernel-GP provided accurate regression results, and according to Appendix A's visualizations, this confirmed it.
Everything else (i.e. optimization performance) follows directly.
Supplementary Material: Appendix A mostly.
Relation To Broader Scientific Literature: The deep kernel GP idea using LLM embeddings might be much more broadly applicable than just prompt optimization. For example, since inputs are strings, one can also perhaps use such techniques to speed up evolutionary code searches (e.g. [1]) as well, or even generally, any optimization problem involving mapping a string to a score. Recently, it has been discovered [2] that even embedding tabular inputs leads to embeddings which have good objective landscape properties.
[1] (Romera-Paredes, 2023) Mathematical discoveries from program search with large language models.
[2] (Tang, 2024) Understanding LLM Embeddings for Regression
Essential References Not Discussed: Since prompt optimization is a form of bandit optimization, there have also been other works involving combining embedding, GPs and Hyperband for regression (in more other optimization settings however), that the authors should consider citing:
[1] (Rankovic, 2023) BoChemian: Large language model embeddings for Bayesian optimization of chemical reactions.
[2] (Falkner, 2017) Combining Hyperband and Bayesian Optimization.
[3] (Kristiadi, 2024) A Sober Look at LLMs for Material Discovery: Are They Actually Good for Bayesian Optimization Over Molecules?
[4] (Tang, 2024): Understanding LLM Embeddings for Regression.
[5] (Nguyen, 2024): Predicting from Strings: Language Model Embeddings for Bayesian Optimization
Other Strengths And Weaknesses: As mentioned before, I suspect this deep kernel GP idea using LLM embeddings will be much more broadly applicable to many different problems (any objective mapping a string to a score) than just prompt optimization. It would be worth investigating this in future work.
This is especially true if one can pretrain the GP over a massive amount of offline data first for better warm-starting.
Other Comments Or Suggestions: * Just making sure (forgive me if I missed it) - in terms of amount of data, is the deep kernel GP retrained online at every optimization iteration only on the current optimization task's trials, or is it pretrained from offline data before the start of optimization?
* Why EI for acquisition, as opposed to UCB? EI can be quite flat, making acquisition optimization difficult sometimes, unless you use tricks like log-EI [1].
[1] (Ament, 2023) Unexpected Improvements to Expected Improvement for Bayesian Optimization.
Questions For Authors: N/A - paper is overall solid.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive review.
We are pleased that the reviewer recognizes our structure-aware deep kernel GP as a novel and meaningful contribution and that the integration with Hyperband is viewed as sound and well-justified.
We also appreciate their acknowledgment of the comprehensiveness of our experiments and ablation studies.
Below, we respond to weaknesses, feedback and questions raised by the reviewer:
`1. Possibility of Overfitting in Deep Kernel GP`
Indeed, prior work, such as [Ober et al. (2021, UAI)](https://proceedings.mlr.press/v161/ober21a.html) has highlighted scenarios where deep kernel models may overfit.
However, in our setting, this risk is mitigated.
The encoder component (e.g., BERT) is a large, frozen, pre-trained language model.
The only trainable component in our deep kernel GP consists of the comparably small feature extractor applied on top of the encoder, with approximately 100k trainable parameters.
This is significantly smaller than typical deep kernel learning applications, where models with 5-10x more trainable parameters are used.
Additionally, when optimizing the GP marginal likelihood, the log-determinant term in the objective acts as a form of complexity regularization, penalizing overly flexible kernel functions.
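The regularizing log-determinant term mentioned above appears in the standard GP marginal likelihood; a generic sketch of that objective (textbook zero-mean GP algebra, not the paper's implementation) is:

```python
import numpy as np

def gp_neg_log_marginal_likelihood(K, y, noise=1e-2):
    # K: kernel matrix produced by the (deep) kernel on the training
    # inputs; y: observed targets (e.g., validation errors of prompts).
    Kn = K + noise * np.eye(len(y))
    L = np.linalg.cholesky(Kn)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # Kn^{-1} y
    data_fit = 0.5 * y @ alpha
    # log det(Kn) = 2 * sum(log diag(L)); this complexity term grows
    # with kernel flexibility and so penalizes overfitting kernels.
    complexity = np.sum(np.log(np.diag(L)))
    const = 0.5 * len(y) * np.log(2 * np.pi)
    return data_fit + complexity + const
```

Minimizing this objective over the feature-extractor weights trades off data fit against the complexity penalty, which is the regularization effect described in the rebuttal.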
`2. EI as acquisition function`
We chose EI as our acquisition function since it is widely regarded as the de facto standard in BO.
While UCB is a popular alternative, it requires the specification or tuning of a mean-variance trade-off parameter, which introduces an additional hyperparameter whose optimal value can be problem-dependent.
The reviewer is correct in referring to [Ament et al. (2024, NeurIPS)](https://proceedings.neurips.cc/paper_files/paper/2023/hash/419f72cbd568ad62183f8132a3605a2a-Abstract-Conference.html), who discuss challenges associated with optimizing EI using gradient-based methods.
These difficulties are primarily due to EI being flat in large regions of the input space and often numerically close to zero.
This results in first- or second-order methods having serious challenges during optimization, which can be mitigated by optimizing the EI on log scale (and performing other numerical tricks for robust computation of the EI).
However, these challenges do not arise in our setting.
In particular, we do not rely on gradient-based optimization of the acquisition function.
Instead, since our candidate set of prompts is discrete, we evaluate the EI exhaustively over all candidates and select the argmax directly.
As such, issues of flatness, vanishing gradients, or numerical instability in gradient-based optimization are not applicable in our case.
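A minimal sketch of this discrete-candidate EI argmax (closed-form EI for minimizing validation error; the posterior numbers below are made up purely for illustration):

```python
import math

def norm_pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_improvement(mu, sigma, best):
    # Closed-form EI for minimization: expected improvement over the
    # incumbent `best` error under a Gaussian posterior N(mu, sigma^2).
    sigma = max(sigma, 1e-12)
    z = (best - mu) / sigma
    return (best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

# Hypothetical GP posterior (mean error, std) for three unevaluated prompts.
posterior = [(0.30, 0.05), (0.25, 0.10), (0.40, 0.02)]
best = 0.28  # best validation error observed so far

ei = [expected_improvement(m, s, best) for m, s in posterior]
next_idx = ei.index(max(ei))  # exhaustive argmax over the discrete set
```

Because the argmax is taken exhaustively over the finite candidate set, the flat-gradient pathologies of continuous EI optimization never arise.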
`3. Is the deep kernel GP trained only on the current task’s trials or pre-trained on offline data?`
The deep kernel GP is trained entirely online, using only the observations collected during the current optimization run.
We do not pre-train the deep kernel feature extractor on offline data before the task.
That said, we agree that exploring pre-training strategies for the deep kernel - particularly leveraging ideas from transfer learning - represents a promising direction for future research!
`4. Related Work`
We thank the reviewer for pointing us to relevant work that focuses on obtaining meaningful representations of non-standard input to make BO applicable in more complex domains.
We appreciate these suggestions and will ensure that these works are included and appropriately discussed in the camera-ready version of the paper.
Concerning the directly related work of [Falkner et al. (2018, ICML)](https://proceedings.mlr.press/v80/falkner18a.html) on combining Hyperband with BO, we would like to clarify that this reference is already cited and discussed in Section 3.4 of our paper.
We thank the reviewer again for their feedback and we remain happy to clarify any follow-up questions. | Summary: The paper introduces HbBoPs, a novel method for optimizing prompt selection in large language models (LLMs) in black-box settings. The method combines a structural-aware deep kernel Gaussian Process with Hyperband, a multi-fidelity scheduler, to efficiently select prompts. The approach is designed to handle large, combinatorial search spaces and high evaluation costs associated with black-box LLMs. Extensive experiments demonstrate that HbBoPs outperforms state-of-the-art methods in both performance and efficiency.
Claims And Evidence: Claim: HbBoPs improves query-efficiency by adaptively allocating resources across different fidelity levels.
Evidence: The paper presents experimental results across ten benchmarks and three LLMs, showing that HbBoPs outperforms existing methods in terms of both performance and efficiency.
Claim: The structural-aware deep kernel GP enhances the surrogate model’s ability to predict prompt performance.
Evidence: The paper includes an ablation study demonstrating the effectiveness of the structural-aware deep kernel GP compared to non-structural-aware models.
Methods And Evaluation Criteria: The paper uses a combination of Gaussian Processes and Hyperband for prompt selection. The structural-aware deep kernel GP is used to learn a low-dimensional prompt representation, while Hyperband governs the number of validation instances for prompt evaluation.
The methods are evaluated based on their ability to identify well-performing prompts with a limited number of LLM calls. The performance is measured using validation and test errors across various tasks and LLMs.
Theoretical Claims: The paper claims that the structural-aware deep kernel GP can effectively use structural and semantic differences in prompts to improve prediction accuracy.
It also claims that Hyperband can efficiently explore the search space by terminating poor-performing prompts early.
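The early-termination mechanics can be illustrated with a plain successive-halving round, the building block of Hyperband (a schematic only, not the paper's exact schedule or bracket structure):

```python
def successive_halving(prompts, evaluate, n_start, eta=3, rounds=3):
    # Evaluate every surviving prompt on n validation instances,
    # keep the best 1/eta fraction, and promote them to eta*n instances.
    survivors = list(prompts)
    n = n_start
    for _ in range(rounds):
        scores = {p: evaluate(p, n) for p in survivors}  # validation error
        survivors.sort(key=scores.get)
        survivors = survivors[:max(1, len(survivors) // eta)]
        n *= eta  # surviving prompts earn a larger evaluation budget
    return survivors[0]
```

Poorly performing prompts are discarded after cheap low-fidelity evaluations, so most of the LLM-call budget is spent on the promising ones.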
Experimental Designs Or Analyses: The experiments are conducted on ten tasks commonly used for LLM evaluation, using three different LLMs.
The paper compares HbBoPs against several baselines and state-of-the-art methods, using a total budget of 25 full-fidelity evaluations for each task.
Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: The method is highly efficient in terms of both sample and query efficiency. The integration of Hyperband with a structural-aware GP is novel and well-justified.
The method's performance depends on the choice of encoder model for embeddings, although the paper claims robustness to this choice.
Other Comments Or Suggestions: In line 51, the authors claim that EASE does not consider the joint optimization of instructions and exemplars, which is not true. Please check the EASE paper carefully again.
Questions For Authors: How does the choice of encoder model affect the performance of HbBoPs in practice?
Can the method be extended to handle other components of prompts, such as output guidance or formatting constraints?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback and for recognizing the novelty and efficiency of our proposed method.
While we appreciate the reviewer’s feedback, we note that some contributions of the paper may not have been fully recognized.
Below, we respond to the two concerns raised in the review.
`1. In line 51, the authors claim that EASE does not consider the joint optimization of instruction and exemplars, which is not true. Please check the EASE paper carefully again.`
EASE can be used for joint instruction and few-shot exemplar selection by treating the prompt as a single block of text.
We ran EASE in exactly this way, as intended by the authors.
However, in contrast to our method, EASE does not make use of the structural information of the prompt being composed of different building blocks.
Therefore, compared to our method, EASE is "not explicitly designed for the problem" which is what we stated in line 51.
In our experiments, we observed that this structural unawareness leads to lower optimization performance compared to our approach.
Still, to prevent misunderstanding, we will rewrite line 51 as: "EASE can be used for joint instruction and few-shot exemplar selection by treating the composed prompt as a block of text".
`2. The method's performance depends on the choice of encoder model for embeddings, although the paper claims robustness to this choice. How does the choice of encoder model affect the performance of HbBoPs in practice?`
To assess the impact of the encoder model on the performance of our method, we conducted an ablation study using three distinct encoder models: BERT, MPNet, and DistillRoBERTa.
These experiments were carried out across all benchmark tasks and LLMs, and the results are reported in Section 5.2 of the paper.
As shown in Table 3, the anytime validation and test performance remains stable across the different encoders, suggesting that our method is robust to the choice of encoder.
We further confirm this observation with a statistical analysis provided in Appendix E.3.
This evidence supports our claim that the overall effectiveness of our method is not sensitive to the particular encoder.
We note that this robustness was also recognized as a positive aspect by reviewer Nvhi.
We hope that our responses above have adequately addressed the two concerns raised by the reviewer responsible for the comparatively low rating.
The reviewer further had the following question:
`Can the method be extended to handle other components of prompts, such as output guidance or formatting constraints?`
HbBoPs is sufficiently flexible to be extended to handle additional prompt components, such as output guidance or formatting constraints.
One natural extension would be to introduce categorical hyperparameters that encode the presence or type of such components - for example, binary indicators for the inclusion of output guidance or formatting constraints or categorical variables specifying formatting styles (e.g., none, JSON, XML, Markdown).
This would expand the search space beyond instructions and few-shot exemplars along these dimensions.
Incorporating these into our framework involves augmenting the input representation to the deep kernel GP.
Specifically, the additional parameters, after suitable encoding, can be concatenated with the learned representations of the instructions and few-shot exemplars.
For example, let $\mathbf{\eta} = \phi\left(\phi_{\mathrm{enc}}(i), \phi_{\mathrm{enc}}(e)\right)$ be the $d$-dimensional output of our feature extractor operating on embeddings of instructions $i$ and few-shot exemplars $e$. We can then concatenate additional hyperparameters $\mathbf{\lambda}$ that indicate or encode the inclusion of output guidance or formatting constraints to obtain $[\mathbf{\eta}, \mathbf{\lambda}]$, which makes up the features used within the GP.
The surrogate model will then jointly learn to predict performance based on the prompt structure including these new components.
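As a hypothetical sketch of this augmentation (every name and encoding below is invented for illustration, not taken from the paper), the extra prompt components could be encoded and concatenated with the learned features like this:

```python
def build_gp_features(eta, formatting_style, has_output_guidance,
                      styles=("none", "json", "xml", "markdown")):
    """Concatenate deep-kernel features eta with encoded extra components:
    a one-hot formatting style and a binary output-guidance flag,
    forming the GP input [eta, lambda]."""
    one_hot = [1.0 if s == formatting_style else 0.0 for s in styles]
    lam = one_hot + [1.0 if has_output_guidance else 0.0]
    return list(eta) + lam
```

The GP kernel then operates on the joint vector, so correlations between learned prompt features and the categorical design choices are captured in one surrogate.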
We view this as a promising direction for future work.
Finally, we observed that the reviewer listed the design of our structural-aware deep kernel GP and the incorporation of Hyperband as a multi-fidelity scheduler under Theoretical Claims. We would like to clarify that our work does not propose novel theoretical contributions related to Hyperband or BO. As such, we believe that any critique in this regard is not applicable.
We thank the reviewer again and are happy to clarify any further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have raised my score accordingly. | Summary: The paper uses hyperband, combined with a Gaussian process surrogate model with deep kernel learning, to optimize the prompt for black-box LLMs. The method optimizes both the instruction and exemplars, and specifically aims to optimize the efficiency in terms of the total number of LLM calls. Extensive experiments and comparisons with previous baseline methods demonstrate the efficacy of the proposed method.
Claims And Evidence: The claims made in the paper are supported by clear and convincing empirical evidence. Extensive experiments and comparisons with previous baseline methods demonstrate the efficacy of the proposed method. The experimental results clearly show that the proposed method performs better than previous full-fidelity and multi-fidelity methods. Additionally, the ablation studies are well-designed and reveal the importance of different components of the proposed method, further supporting the claims.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are reasonable for the studied problem. Applying hyperband to prompt optimization is a natural and suitable idea. The paper specifically considers minimizing the total number of LLM calls on the validation set, which is an important yet often overlooked aspect in previous works on prompt optimization. This focus on efficiency (in terms of LLM calls) is commendable and aligns well with the goal in practice. The structural-aware deep kernel design is very interesting. It separately embeds the instruction and exemplars before combining them via another MLP.
Theoretical Claims: The paper does not have theoretical claims.
Experimental Designs Or Analyses: The experimental designs and analyses appear sound and valid. The experimental results are a strength of the paper, showing the superiority of the proposed method over previous approaches. The ablation studies are well-designed, revealing the importance of different components such as the structural-aware deep kernel design. One potential drawback is that the search space in the experiments may be a little too small, as the space contains only 5 instructions (although the entire space has 250 input prompts).
Supplementary Material: I briefly checked the additional results in the appendix.
Relation To Broader Scientific Literature: The paper contributes to the existing broader scientific literature, particularly in the areas of prompt optimization for black-box LLMs. The use of hyperband for prompt optimization is a natural and effective idea.
Essential References Not Discussed: I'm not aware of any essential missing reference.
Other Strengths And Weaknesses: Strengths:
- The design of the structural-aware deep kernel is elegant and innovative.
- The experimental results are strong and clearly demonstrate the efficacy of the proposed method.
- The focus on minimizing the number of LLM calls on the validation set is a commendable and practical aspect.
Weaknesses:
- The search space in the experiments may be a little too small, as it only contains five instructions.
Other Comments Or Suggestions: An interesting (but optional) further experiment to explore is to optimize the prompt for the most powerful LLMs nowadays (such as reasoning models like DeepSeek R1), to see whether the proposed method can improve the performance of these strong models.
Questions For Authors: Please see my individual comments above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive review.
We appreciate the reviewer’s recognition of the extensiveness of our experiments, as well as their assessment that our claims are supported by clear and convincing empirical evidence.
Moreover, we are happy to hear that they find the structural-aware deep kernel to be elegant.
We are also pleased that the design and insightfulness of our ablation studies were positively noted.
Lastly, we thank the reviewer for highlighting that our method is highly efficient in reducing the number of LLM calls required to identify a well-performing prompt - a practical aspect that we believe is crucial, yet was not explicitly recognized by all reviewers.
Below, we respond to weaknesses, feedback and questions raised by the reviewer:
`1. Limited instruction search space`
While our experiments consider a fixed set of 5 instructions, the total prompt search space spans 250 candidates due to the combinatorial selection of instructions and few-shot exemplars.
This results in a search space comparable in size to those used in prior work concerned with black-box prompt selection, such as [Shi et al. (2024, NeurIPS)](https://proceedings.neurips.cc/paper_files/paper/2024/hash/b46bc1449205888e1883f692aff1a252-Abstract-Conference.html).
From a methodological standpoint, our approach is expected to scale effectively with larger instruction spaces.
This is due to the structural-aware surrogate model, which encodes instructions and exemplars separately.
Thus, increasing the instruction set size would likely further amplify the benefits of our modeling choices rather than hinder them.
`2. Interesting optional experiments on the most powerful LLMs, such as DeepSeeks R1`
While applying our method to cutting-edge models such as DeepSeek R1 is indeed a promising direction for future work, we believe that our current experimental setup already provides strong evidence for the method’s applicability across diverse LLMs.
In particular, we evaluate our approach on a representative and heterogeneous set of models, including Claude 3 Haiku, LLaMA 3 8B Instruct, and Mistral 7B Instruct.
The consistent improvements achieved by our method across these models suggest that it is robust to model-specific variations and well-suited for deployment in real-world scenarios.
We thank the reviewer again for their feedback and we remain happy to clarify any follow-up questions. | null | null | null | null | null | null |
Joint Localization and Activation Editing for Low-Resource Fine-Tuning | Accept (poster) | Summary: The paper proposes JOLA, a novel parameter-efficient fine-tuning (PEFT) method that dynamically selects and edits the outputs of specific Transformer attention heads. JOLA jointly learns: (1) which heads to target, (2) the intervention type (additive, multiplicative, or both), and (3) the corresponding parameters (additive offsets/multiplicative scalings). Evaluated on three NLP tasks—commonsense reasoning, language understanding, and generation—JOLA outperforms baselines under low-resource settings, demonstrating improved stability and efficiency through its unified framework of localization and modular activation editing.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: Yes, no issues.
Supplementary Material: Yes, all parts.
Relation To Broader Scientific Literature: While the paper introduces JOLA as a novel PEFT method for joint localization and activation editing, there is insufficient clarity in distinguishing it from LOFIT, a closely related prior work that also intervenes on specific Transformer heads with additive/multiplicative editing. Both JOLA and LOFIT aim to (1) identify select heads for editing and (2) apply combined additive/multiplicative interventions. The paper does not clearly articulate what differentiates JOLA’s joint optimization framework from LOFIT’s approach, although the performance of JOLA is higher than LOFIT.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The article has a clear and coherent structure with well-organized content.
2. The article provides rich visual and analytical insights. The figures and tables effectively clarify complex concepts. The visualization of gate status provides valuable empirical evidence of JOLA's mechanism.
3. JOLA outperforms baselines across multiple NLP tasks, particularly under low-resource settings. The results are well-documented, and the performance gains are consistent, supporting the method's efficacy.
Weaknesses:
1. In Figure 1, you can draw a red box structure diagram of LoFIT like RED and JOLA.
2. In Figure 1, LoFIT's Step 2, it's a "+", not a "x" sign.
3. The article mentions: "However, the effectiveness of standard PEFT methods is limited in low-resource scenarios with only a few hundred examples." PEFT methods are a family of fine-tuning approaches proposed to reduce computing resources. The article uses this sentence to motivate a shortcoming of PEFT, but the low-resource setting is not a defect of PEFT itself, so the logical connection to the following text is not clear.
4. I hope the article can provide a detailed comparative analysis between JOLA and LOFIT. From the current description of the article, LOFIT's m is learnable, but the maximum value of the value range is a hyperparameter and needs to be set manually. So, is LoFIT's m the product of JoLA's g and m? If LoFIT's m is 0, does it mean that the head is not selected? Similarly, is LoFIT's a the sum of JoLA's g and a? In addition, the article mentions that LoFIT determines the selected head by multiplication, but JOLA also uses multiplication to determine the selected head. What is the specific difference here? Or are the two stages of LOFIT not distributed as shown in Figure 1, because at present, the equation for synthesizing one stage is also acceptable.
Other Comments Or Suggestions: 1. To enhance clarity in Figure 1, I suggest utilizing distinct colors for the boxes representing different methods. This would allow for easier differentiation and improve the overall visual appeal of the figure.
2. In Table 2, it would be beneficial to bold the maximum value to draw attention to this important data point, thereby facilitating a quicker understanding of the results presented.
Questions For Authors: 1. It is worth exploring why JoLA incorporates two additional gates while maintaining the same overall number of parameters as LOFIT. A thorough explanation of this architectural choice would enhance the reader's understanding of how JoLA optimally manages its parameters.
2. In Figure 2, the performance of editing attention modules on the Physics task appears to be the lowest. An analysis of this outcome is warranted.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and suggestions
**Q1: Presentation in Figure 1, Table 2.**
**A1:** Thank you for your suggestion. We will make the necessary changes in the next version.
**Q2: Relationship to PEFT?**
**A2:** In the abstract, we introduced PEFT and its limitations in low-resource scenarios. Activation editing is a form of PEFT but updates fewer parameters compared to traditional methods like Adapter and LoRA. Traditional methods are less suitable for low-resource scenarios, and our approach aims to tackle this issue. We will clarify this logic in the next version to avoid any confusion.
**Q3: Provide a detailed comparative analysis between JOLA and LOFIT.**
**A3:** Below, we provide a comparison focusing on methodological differences, intervention strategies, and empirical performance.
(1) Methodological differences: Unlike LoFiT’s rigid two-stage pipeline, JoLA unifies localization and intervention into a single end-to-end framework. This allows dynamic adaptation to task requirements, avoiding suboptimal head selection caused by decoupled optimization.
|| LoFIT|JoLA|
|-|-|-|
|Localization|Two-stage process: 1. Select heads via learning with multiplicative interventions 2. Freeze selection and train additive interventions (bias vectors). | Joint optimization: Dynamically selects heads while learning intervention parameters.|
|Intervention Type|Additive only (bias vectors applied to selected heads).|Hybrid intervention: Combines additive biases and multiplicative scaling via adaptive gating.|
|Sparsity Control|L1 regularization on scaling factors to select top-K heads.|Hard Concrete gates with expected-L0 regularization, enabling probabilistic head pruning during training.|
|Flexibility|Fixed intervention strategy (additive) after head selection.|Learns task-specific intervention types (e.g., scaling vs. bias) per head.|
(2) Formula-level comparison
**LoFIT:**
$$z^{(l,i)}_t \leftarrow z^{(l,i)}_t + v^{(l,i)} \quad \text{(Additive bias)}$$
**Limitation:** Static additive edits may be insufficient to adapt to complex tasks requiring both amplification and suppression of features.
**JoLA:**
$$
z^{(l,i)}_t = (1 + g_m^{(l,i)} \cdot m^{(l,i)}) \odot z^{(l,i)}_t + g_a^{(l,i)} \cdot a^{(l,i)}
$$
where,
Scaling: $(1 + g_m^{(l,i)} \cdot m^{(l,i)})$,
Bias: $g_a^{(l,i)} \cdot a^{(l,i)}$
**Advantage:** The hybrid operation enables fine-grained control over activations—scaling amplifies/suppresses existing features, while biases shift representations.
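A minimal, scalar-level sketch of this hybrid edit is given below, assuming the standard Hard-Concrete parameterization (Louizos et al., 2018) for the gates $g_m$ and $g_a$; the constants and noise handling are illustrative, not the paper's exact implementation.

```python
import math
import random

def hard_concrete_gate(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1, training=True):
    """Hard-Concrete gate in [0, 1]; stochastic during training, deterministic at eval."""
    if training:
        u = random.uniform(1e-6, 1 - 1e-6)
        s = 1 / (1 + math.exp(-((math.log(u) - math.log(1 - u) + log_alpha) / beta)))
    else:
        s = 1 / (1 + math.exp(-log_alpha))
    s_bar = s * (zeta - gamma) + gamma      # stretch (0, 1) to (gamma, zeta)
    return min(1.0, max(0.0, s_bar))        # hard-rectify back into [0, 1]

def expected_l0(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1):
    """P(gate != 0): the per-gate term of the expected-L0 sparsity penalty."""
    return 1 / (1 + math.exp(-(log_alpha - beta * math.log(-gamma / zeta))))

def jola_edit(z, log_alpha_m, log_alpha_a, m, a, training=False):
    """Hybrid edit of one head's output vector z: gated scaling plus gated bias."""
    g_m = hard_concrete_gate(log_alpha_m, training=training)
    g_a = hard_concrete_gate(log_alpha_a, training=training)
    return [(1 + g_m * mi) * zi + g_a * ai for zi, mi, ai in zip(z, m, a)]
```

Gates driven to 0 by the expected-L0 penalty prune that head's intervention entirely, which is how localization and editing are learned jointly rather than in two stages.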
(3) Empirical Performance
+ JoLA shows robustness across 26 tasks under low-resource settings.
+ JoLA is more parameter efficient than LoFIT, we will update the parameter counting as discussed in Review 2Bjt [Q2] in the new version of the paper.
**Q4: Parameter issues**
**A4:** As noted in the original papers for each baseline method, these methods are sensitive to hyperparameters, and we detail this in Appendix D.3. The following table shows the sensitivity of these methods, with the results highlighting the lack of robustness, which motivates our proposal for a more stable approach.
**Different learning rate in RED**
||SIQA|WinoGrande|Law|Physics|e2e_nlg|Web_nlg|
|-|-|-|-|-|-|-|
|5e-5|48.47|48.25|28.00|21.00|8.27|12.74|
|2e-4|51.23|50.87|32.00|18.00|11.33|14.53|
|6e-2|50.16|47.14|29.00|23.00|10.14|13.59|
**Different prefix and suffix positions in ReFT**
||SIQA|WinoGrande|Law|Physics|e2e_nlg|Web_nlg|
|-|-|-|-|-|-|-|
|p7+s7|62.51|53.96|28.00|40.00|11.44|12.57|
|p11+s11|58.24|51.48|30.00|41.00|12.02|11.35|
**Number of Attention Heads in LoFIT**
||SIQA|WinoGrande|Law|Physics|e2e_nlg|Web_nlg|
|-|-|-|-|-|-|-|
|32|54.21|52.34|33.00|10.00|13.18|17.27|
|64|56.52|54.56|35.00|12.00|15.56|16.34|
|128|51.26|55.42|30.00|8.00|14.60|18.07|
**Q5: What happens on the physics task in Figure 2?**
**A5:** Upon reviewing the generated text for the physics task in Figure 2, we found that in the low-resource setting, the generated outputs did not consistently follow the required format, which impacted the results. Specifically, the generated text lacked the necessary result options, leading to lower accuracy when evaluated using exact match criteria. | Summary: This paper introduces JOLA, a novel approach to efficiently adapt large language models in low-resource settings. The method jointly learns which attention heads to modify and determines optimal activation interventions using both additive and multiplicative adjustments. By incorporating a dynamic gating mechanism based on Hard-Concrete distributions with expected-L0 regularization, JOLA selectively edits a minimal set of components, thereby significantly reducing the number of trainable parameters. Extensive experiments on tasks spanning commonsense reasoning, natural language understanding, and language generation demonstrate that JOLA consistently outperforms existing methods such as LoRA and other activation editing techniques, achieving robust and scalable improvements even with limited training data.
Claims And Evidence: The claims proposed in the paper are substantiated by extensive experimental evidence, demonstrating that the JOLA method not only consistently outperforms baseline activation editing and parameter-efficient fine-tuning techniques in low-resource scenarios, but also significantly enhances performance across a diverse set of tasks including commonsense reasoning, natural language understanding, and generation. Moreover, the ablation studies provide clear and convincing evidence that the dynamic gating mechanism and selective attention head modifications are key factors driving these improvements, effectively validating the paper's central claims.
Methods And Evaluation Criteria: The Joint Localization and Activation Editing (JOLA) approach presented in this paper is soundly designed. The challenge of efficiently adapting large language models with scarce data stems from the difficulty in discerning which internal components yield the greatest impact. JOLA overcomes this by simultaneously determining the most influential attention mechanisms to modify and selecting the appropriate adjustment strategy—be it scaling, shifting, or a combination—while fine-tuning the corresponding parameters, thereby ensuring that the targeted modifications are optimally calibrated for low-resource environments.
Theoretical Claims: The paper does not include theoretical proofs.
Experimental Designs Or Analyses: The paper presents a thorough experimental design to evaluate the proposed Joint Localization and Activation Editing (JOLA) method for low-resource fine-tuning. In addition, the inclusion of extensive ablation studies (on gate mechanisms, the number of gates, and head selection strategies) and analyses across different data and model sizes strengthens the validity of their claims. The authors mention that there is a huge difference in hyperparameter selection among the baseline methods. Could the extreme-value variance for JoLA and the other methods be displayed? Furthermore, providing a more detailed explanation of the statistical significance of the results and the reproducibility of the experiments would strengthen confidence in the research findings.
Supplementary Material: Yes, the supplementary materials include experimental configuration, hyperparameters, detailed experimental results for different datasets, as well as detailed examples of dataset input and output.
Relation To Broader Scientific Literature: To improve low-resource adaptation of large language models, prior works have explored both parameter-efficient fine-tuning methods, such as LoRA and BitFit, and activation editing techniques like RED, REPE, and LoFIT. The JoLA method proposed in this paper innovatively combines dynamic attention head localization with joint optimization of additive and multiplicative interventions. Drawing on ideas from network pruning—specifically, HardConcrete gating and expected-L0 regularization—JoLA overcomes the static limitations of previous methods, resulting in a more adaptive and robust fine-tuning strategy that aligns well with contemporary advances in model adaptation.
Essential References Not Discussed: na
Other Strengths And Weaknesses: Strengths:
Originality:
JOLA introduces an innovative approach by jointly determining which attention heads to modify and optimizing both scaling and shifting interventions, thereby creatively combining insights from activation editing and network pruning.
Significance:
The extensive experiments across commonsense reasoning, language understanding, and generation tasks clearly demonstrate that JOLA substantially outperforms existing fine-tuning methods in low-resource settings, underscoring its potential impact on efficient model adaptation.
Clarity:
The paper is well-structured and offers detailed explanations of the dynamic gating mechanism and head selection strategy, with comprehensive ablation studies that effectively illustrate the contribution of each component.
Weaknesses:
The approach’s reliance on specific gating mechanisms and parameter settings may introduce additional complexity and sensitivity to hyperparameter tuning, potentially affecting reproducibility.
Other Comments Or Suggestions: na
Questions For Authors: 1. How does the JOLA method perform when given larger resources? Can this activation-based editing approach be applied to larger datasets?
2. The paper shows that the activation editing method performs best when applied to the Attention layer. Is this also the case for JOLA? Could the effectiveness of JOLA on the MLP layer be verified?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your comments and the opportunity to clarify and improve our work.
Q1: Huge difference in hyperparameter selection among the baseline methods and JoLA. The reproducibility of our proposed JoLA
A1: As noted in the original papers for each baseline method, these methods are sensitive to hyperparameters, and we detail this in Appendix D.3. The following table shows the sensitivity of these methods, with the results highlighting the lack of robustness, which motivates our proposal for a more stable approach.
**Different learning rate in RED**
||SIQA|WinoGrande|Law|Physics|e2e_nlg|Web_nlg|
|-|-|-|-|-|-|-|
|5e-5|48.47|48.25|28.00|21.00|8.27|12.74|
|2e-4|51.23|50.87|32.00|18.00|11.33|14.53|
|6e-2|50.16|47.14|29.00|23.00|10.14|13.59|
**Different prefix and suffix positions in ReFT**
||SIQA|WinoGrande|Law|Physics|e2e_nlg|Web_nlg|
|-|-|-|-|-|-|-|
|p7+s7|62.51|53.96|28.00|40.00|11.44|12.57|
|p11+s11|58.24|51.48|30.00|41.00|12.02|11.35|
**Number of Attention Heads in LoFIT**
||SIQA|WinoGrande|Law|Physics|e2e_nlg|Web_nlg|
|-|-|-|-|-|-|-|
|32|54.21|52.34|33.00|10.00|13.18|17.27|
|64|56.52|54.56|35.00|12.00|15.56|16.34|
|128|51.26|55.42|30.00|8.00|14.60|18.07|
Q2: Providing a more detailed explanation of the statistical significance of the results.
A2: Thank you for your suggestion. We will incorporate statistical significance tests; note, though, that the performance gap is large.
We agree that conducting a statistical significance analysis is important to strengthen the credibility of our results. The significance test shows a meaningful difference between our method and the baseline methods.
Q3: How does the JoLA method perform when given larger resources?
A3: Thank you for your suggestion. We conducted experiments on larger datasets, and as shown in the table, our method remains effective with 5,000 and 10,000 samples. However, from 20,000 to 100,000 samples, there is a slight gap between our method and LoRA. These differences are acceptable, as we update fewer parameters compared to LoRA.
|Samples|SIQA (JoLA)|SIQA (LoRA)|WinoGrande (JoLA)|WinoGrande (LoRA)|
|-|-|-|-|-|
|1000|74.77|70.04|74.30|71.16|
|2000|75.24|73.12|74.92|71.75|
|3000|75.43|73.85|75.37|72.32|
|5000|75.88|74.35|75.91|73.57|
|8000|75.91|75.14|76.20|74.69|
|10000|75.96|75.69|76.31|75.92|
|20000|76.02|76.48|76.55|76.47|
|30000|76.08|76.94|76.68|77.24|
|50000|76.15|77.23|76.84|78.33|
|80000|76.21|77.56|76.92|78.52|
|100000|76.26|77.81|77.04|78.96|
Q4: Could the effectiveness of JoLA on the MLP layer be verified?
A4: We also applied the gating mechanism-based strategy to the MLP layer. As shown in the following table, our experimental results demonstrate that the JoLA method is also effective in the MLP layer. However, it empirically tends to be more effective when used in the attention layer.
|Setting|SIQA (Reasoning)|WinoGrande (Reasoning)|Law (Understanding)|Physics (Understanding)|e2e_nlg (Generation)|Web_nlg (Generation)|
|-|-|-|-|-|-|-|
|MLP w/o gate|50.10|51.62|34.00|20.00|10.31|14.45|
|MLP with gate|52.46|52.43|36.00|23.00|11.23|16.25|
|Attention w/o gate|55.94|55.33|36.00|7.00|14.77|18.12|
|Attention with gate|66.22|58.33|40.00|46.00|15.54|24.39|
|Attention+MLP w/o gate|52.17|48.74|23.00|13.00|8.23|12.36|
|Attention+MLP with gate|53.28|52.07|27.00|16.00|10.42|14.83| | Summary: The paper proposes JORA, an interpretability-inspied parameter-effecient tuning methods. JOLA intervenes on the attention activations with both scaling and offsetting. In addition, JOLA uses HardConcrete gates with expected-L0 regularization to learns the localization together with intervention in an end to end way.
The paper conducts evaluation on multiple datasets and mainly compares against other activation editing baselines (ReFT and LoFiT). The results suggest the proposed approach is effective.
## update after rebuttal
Thank you for your response. I've updated the score.
Claims And Evidence: The claims are mostly valid. JoLA extends previous methods by jointly learning to localize and edit activations, which seems effective. The paper also provides abundant analysis.
Though I have some concerns about the way the paper sets up the comparison with the baselines, and the way it discusses the number of parameters.
Methods And Evaluation Criteria: The proposed method jointly optimizes localization and editing, which is well motivated and makes sense.
The paper also provides experiments across different model families, datasets, and model sizes
Theoretical Claims: N/A as there aren't theoretical claims in the main paper.
Experimental Designs Or Analyses: First, I have some concerns about the way that the paper sets up the baselines.
The baseline results seem either under-tuned or too weak to compare with: in Table 1, many of the fine-tuning baselines underperform zero-shot (e.g., for Qwen2.5 7B on the reasoning tasks, none of the 6 baselines outperforms the zero-shot baseline).
I wonder how the baseline is set up. The paper claims that baseline results were obtained by "selecting five hyperparameters and **averaging** the results" (ln 246). A more common approach is to report the best result over all five hyperparameter settings as the baseline.
Second, I wonder how the number of learnable parameters is counted in Table 3. Is it only counting the gating variables, or is it counting the "activated" attention heads? If I understand correctly, JOLA incorporates two gating parameters, one scaling parameter, and one additive parameter for each attention head. All of these are learned end to end and need **gradient back-propagation**. Are all of these counted in the number of parameters?
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: The methods improves on previous methods by jointly localizing and editing the activations, which is learnt end-to-end with gating.
Essential References Not Discussed: Not that I am aware of.
Other Strengths And Weaknesses: The paper provides abundant ablations on the design choice of the method.
Other Comments Or Suggestions: Some figures are hard to read, e.g., Figure 4 and Figure 8.
Questions For Authors: See Experimental Designs Or Analyses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and provide valuable insights.
Q1: Baseline setup seem either under-tuned or too weak to compare with?
A1: Thank you for your feedback on our baseline system setup.
(1) As noted in the original papers for each baseline method, these methods are sensitive to hyperparameters, and we detail this in Appendix D.3. The following table shows the sensitivity of these methods, with the results highlighting the lack of robustness, which motivates our proposal for a more stable approach.
**Different learning rate in RED**
||SIQA|WinoGrande|Law|Physics|e2e_nlg|Web_nlg|
|-|-|-|-|-|-|-|
|5e-5|48.47|48.25|28.00|21.00|8.27|12.74|
|2e-4|51.23|50.87|32.00|18.00|11.33|14.53|
|6e-2|50.16|47.14|29.00|23.00|10.14|13.59|
**Different prefix and suffix positions in ReFT**
||SIQA|WinoGrande|Law|Physics|e2e_nlg|Web_nlg|
|-|-|-|-|-|-|-|
|p7+s7|62.51|53.96|28.00|40.00|11.44|12.57|
|p11+s11|58.24|51.48|30.00|41.00|12.02|11.35|
**Number of Attention Heads in LoFIT**
||SIQA|WinoGrande|Law|Physics|e2e_nlg|Web_nlg|
|-|-|-|-|-|-|-|
|32|54.21|52.34|33.00|10.00|13.18|17.27|
|64|56.52|54.56|35.00|12.00|15.56|16.34|
|128|51.26|55.42|30.00|8.00|14.60|18.07|
(2) Baseline methods require specific fine-tuning for each task, and we have 26 tasks to evaluate in our method. Fine-tuning all tasks individually is impractical due to the large hyperparameter search space. Therefore, we performed hyperparameter selection using a grid search approach. For each task, we ran a grid search with five different hyperparameter configurations, which were chosen to explore a diverse range of parameter settings that could provide the best model performance. We performed this search over key hyperparameters (e.g., learning rate, selected head/layers/positions etc. highlighted in Appendix D.3), using a validation set to select the configuration that resulted in the best performance. The results presented here correspond to the best hyperparameter configuration selected for each task based on the validation set performance. The final model was evaluated with these hyperparameters, and we averaged the results across all tasks. As observed, this had little effect on their relative performance, with our method continuing to outperform the others.
Table: best hyperparameter configuration in LLaMA
||Reasoning|Understanding|Generation||
|-|-|-|-|-|
||ACC|ACC|BLEU|Rouge-L|
|zero_shot|53.70|40.00|12.56|36.70|
|BitFit|65.37|36.14|10.23|32.59|
|RED|50.26|37.86|12.77|34.19|
|REPE|66.04|37.43|11.49|31.04|
|REFT|67.12|42.29|13.05|38.25|
|LOFIT|57.74|32.71|13.14|35.51|
|Our|70.55|47.00|17.07|40.65|
(3) Given the small dataset (e.g., 200 samples in our setting), overfitting was a concern. To reduce overfitting's impact on the baseline, we used early stopping, which was not applied in the original implementation of the baseline systems. We also found that learning rate adjustment significantly affected the results. We evaluated four strategies, including linear schedule[1], Cyclic Learning Rate Schedule[2], Adaptive Heuristic Schedule[3] and Exponential Decay Schedule[4]. As shown in the following table, the exponential decay strategy proved most stable, so we used it for both the baseline and our method, as explained in Appendix D.1. The comparison of different learning rate decay strategies for JoLA and LoFIT is as follows.
[1] Human-level control through deep reinforcement learning (Mnih et al., 2015)
[2] Cyclical Learning Rates for Training Neural Networks (Smith et al., 2017)
[3] A disciplined approach to neural network hyper-parameters (Smith et al., 2018)
[4] An exponential learning rate schedule for deep learning (Li et al., 2019)
**Different learning rate strategies in JoLA**
|Strategy|SIQA|WinoGrande|law|physics|e2e_nlg|web_nlg|
|-|-|-|-|-|-|-|
|Linear|62.71|56.49|38.00|42.00|14.05|22.83|
|Cycle|64.25|57.26|39.00|43.00|14.37|23.44|
|Adaptive|65.47|58.60|39.00|44.00|15.02|23.86|
|Exponential|66.22|58.33|40.00|46.00|15.54|24.39|
**Different learning rate strategies in LoFIT**
|Strategy|SIQA|WinoGrande|law|physics|e2e_nlg|web_nlg|
|-|-|-|-|-|-|-|
|Linear|54.13|53.36|35.00|6.00|13.84|16.95|
|Cycle|54.32|54.25|34.00|6.00|14.37|17.83|
|Adaptive|55.18|55.57|36.00|7.00|15.24|17.64|
|Exponential|55.94|55.33|36.00|7.00|14.77|18.12|
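For concreteness, the linear and exponential schedules compared above can be sketched in a few lines; the specific values of `lr0` and `gamma` below are illustrative, not the paper's settings (those are given in Appendix D.1).

```python
def linear_decay(lr0, total_steps, step):
    """Linear schedule: decays from lr0 to zero over total_steps."""
    return lr0 * max(0.0, 1.0 - step / total_steps)

def exponential_decay(lr0, gamma, step):
    """Exponential decay schedule: lr_t = lr0 * gamma ** step."""
    return lr0 * gamma ** step
```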
Q2: (same questions as Review QWX1 Review m3pE) parameter counting of the gating mechanism?
A2: We mentioned the parameter counting in Table 3 of Appendix A, using the same calculation method as ReFT and LoFIT, where trainable parameters are divided by the parameters of the base LLM. For example, we calculated the parameters for the SIQA task using the LLaMA-3-8B model. JoLA’s parameter count considered only the interventions. Interestingly, JoLA’s count matches LoFIT’s.
Q3: Some figures are hard to read, e.g., Figure 4 and Figure 8.
A3: Thank you for your feedback. Due to space constraints, we combined multiple images into one, which affected readability. In the next version, we will adjust the layout to improve clarity and readability.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification on the hyper-parameters. Regarding the parameter counting, I feel this way of counting might not reveal the full picture.
I have raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for the insightful feedback. We appreciate the opportunity to clarify our parameter counting in JoLA and address your concerns in detail.
**1. Parameter Count Clarification**
In JoLA, the trainable parameters include:
+ Multiplicative scaling vectors $m^{(l,i)}$ and additive bias vectors $a^{(l,i)}$ for every attention head.
+ HardConcrete gate parameters $\phi_{m}^{(l,i)}$ and $\phi_{a}^{(l,i)}$ for each head.
During training, all these parameters are updated via gradient descent. However, the $L_0$ regularization encourages most of the gate parameters to drive their corresponding gates toward zero, effectively "pruning" the majority of the heads. At inference time, only the heads with non-zero gate expectations contribute to the model’s computation—meaning only their $m^{(l,i)}$ and $a^{(l,i)}$ are applied.
**2. Comparison to LOFIT**
LOFIT pre-selects a fixed subset of attention heads in a two-step process: (1) Updating parameters for all heads; (2) Fine-tuning only the selected ones.
In contrast, JoLA continuously updates parameters across all heads during training. Thanks to the dynamic gating mechanism, the number of active parameters at inference is comparable to that in LOFIT.
For example, consider the LLaMA-3-8B model. The training parameters can be computed as:
$$P_{\text{trainable}} = \frac{D_{\text{attn}} \times (N_{\text{multi}} + N_{\text{add}} + N_{\text{gate}})}{P_{\text{LLMs}}}$$
Where:
+ $D_{\text{attn}}$ is the dimension of each attention head,
+ $N_{\text{multi}}$, $N_{\text{add}}$, and $N_{\text{gate}}$ are the numbers of multiplicative, additive, and gating parameters, respectively.
+ $P_{\text{LLMs}}$ is the total number of parameters in the base LLM.
**3. Simplified Calculation from Table 3:**
+ LOFIT
$$\text{Trainable Parameters} = \frac{128 \times (32 \times 32 + 32 \times 32)}{8,030,257,152} + \frac{128 \times (64 + 64)}{8,030,257,152} \approx 0.00003468481 \ (\text{or } 0.003468481\%)$$
+ JoLA
$$\text{Trainable Parameters} = \frac{128 \times (32 \times 32 + 32 \times 32 + 32 \times 32 + 32 \times 32)}{8,030,257,152} \approx 0.00006528906 \ (\text{or } 0.006528906\%)$$
Here, $32 \times 32$ reflects that each of the 32 layers has 32 attention heads, and 64 denotes the number of selected heads in LOFIT, which can optionally be set to 128.
JoLA and LOFIT maintain similar levels of activated parameters at inference. The small variations across tasks are expected: JoLA's activated heads vary dynamically by task. LOFIT's activated heads are determined by a fixed, manually set value.
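As a sanity check, the two fractions above can be reproduced with a few lines of arithmetic (a sketch of the calculation only, not the actual counting code):

```python
# Trainable-parameter fractions for LLaMA-3-8B as computed above:
# 32 layers x 32 heads, head dimension 128, ~8.03B base parameters.
HEAD_DIM = 128
ALL_HEADS = 32 * 32
BASE_PARAMS = 8_030_257_152

# LOFIT: scaling + bias vectors for all heads in step 1, plus vectors
# for the 64 selected heads (the 64 + 64 term) fine-tuned in step 2.
lofit = (HEAD_DIM * (ALL_HEADS + ALL_HEADS) + HEAD_DIM * (64 + 64)) / BASE_PARAMS

# JoLA: four per-head parameter sets (multiplicative, additive, two gates).
jola = (HEAD_DIM * 4 * ALL_HEADS) / BASE_PARAMS

print(f"LOFIT: {lofit:.11f} ({lofit * 100:.9f}%)")
print(f"JoLA:  {jola:.11f} ({jola * 100:.9f}%)")
```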
We plan to update Table 3 in the next version to clearly distinguish between trainable and activated parameters as follows. We believe this will provide a clearer and more comprehensive understanding of our parameter counting methodology.
| Method | Total Params (%) | Active Params (%) |
|--------|------------------|-------------------|
| LOFIT | 0.003468481 | 0.0002 |
| JoLA | 0.006528906 | 0.0002 |
We hope this explanation addresses your concerns and helps clarify our approach. | Summary: This paper presents a novel extension of the activation editing approach. Its primary contribution lies in integrating localization and editing into a single process using a gating mechanism, unlike previous two-stage methods that first manually locate and then edit model components. This makes the proposed method more practical and adaptive. Extensive experiments on multiple LLMs demonstrate the effectiveness of this approach, particularly in low-resource scenarios, where it outperforms existing methods.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem and application at hand:
1. The use of a gating mechanism to jointly perform localization and editing is a reasonable approach.
2. The experiments are well-controlled by ensuring a fair comparison of trainable parameters.
Theoretical Claims: NA
Experimental Designs Or Analyses: The soundness and validity of the experimental designs and analyses were checked.
Supplementary Material: NA
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: I did not find any essential references that were missing from the discussion.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: I find the description of the gating mechanism not entirely clear. Specifically, the parameterized form of the Hard-Concrete distribution is not explicitly detailed. The trainable parameters of the gating mechanism are not clearly stated. It would be helpful if the authors could clarify these aspects to provide a more precise understanding of how the gating mechanism operates and how it is optimized during training.
Questions For Authors: 1. What exactly do the Hidden States and bias term in Fig. 2 refer to?
2. Could the reason for multiple components performing worse in Fig. 2 be overfitting? Similarly, the authors' gating mechanism demonstrates that selectively editing attention heads is more effective than editing all heads, which is somewhat counterintuitive. Could this also be due to overfitting?
3. By the end of training, how many attention heads are typically completely shut off?
4. Since the gating mechanism is modeled as a distribution, how does its randomness manifest during inference?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your feedback and suggestions.
**Comment:** The parameterized form of the hard-concrete distribution is not explicitly detailed. The trainable parameters of the gating mechanism are not clearly stated.
**Response:** The hard concrete distribution has two associated scalar parameters: a scale parameter and a temperature parameter. Following prior work on sparsification (e.g., Voita et al., 2019; Louizos et al., 2017), we train only the scale parameter and fix the temperature to 0.33. To clarify, these gates do not take any input – each gate is simply an instance of the hard concrete distribution with a single learnable parameter. We will clarify this in the new version of the paper.
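To make this concrete, below is a minimal sketch of one such gate. The temperature of 0.33 is from the response above; the stretch limits `GAMMA`/`ZETA` are the standard values from Louizos et al. (2017) and are assumed here, as are the function names and the clamped-expectation rule used at inference.

```python
import math
import random

BETA = 0.33              # temperature, fixed as stated above
GAMMA, ZETA = -0.1, 1.1  # stretch limits (assumed standard Louizos et al. values)

def sample_gate(log_alpha, rng=random.random):
    """Stochastic hard-concrete gate used during training.

    The gate takes no input: log_alpha is its single learnable parameter.
    """
    u = rng()
    s = 1.0 / (1.0 + math.exp(-(math.log(u) - math.log(1.0 - u) + log_alpha) / BETA))
    s_bar = s * (ZETA - GAMMA) + GAMMA  # stretch sigmoid output to (GAMMA, ZETA)
    return min(1.0, max(0.0, s_bar))    # hard clamp: exact 0 or 1 is possible

def expected_gate(log_alpha):
    """Deterministic surrogate used at inference (clamped expectation)."""
    s = 1.0 / (1.0 + math.exp(-log_alpha))
    return min(1.0, max(0.0, s * (ZETA - GAMMA) + GAMMA))
```

A strongly negative `log_alpha` drives the expected gate to exactly 0, recovering the original head's computation; a strongly positive one yields exactly 1.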
+ **Q1: What exactly do the hidden states and bias term in Fig. 2 refer to?**
- **A1:** Figure 2 illustrates the previously proposed forms of intervening in Transformer modules. The “Hidden states” approach follows REFT, applying interventions directly to the MLP hidden states. The “Bias” approach follows BitFit, modifying only the bias terms of attention, dropout, and layer norm activations. We will elaborate on these in Section 3.1 (currently lines 86–88) to clarify further.
+ **Q2: Could the reason for multiple components performing worse in Fig. 2 be overfitting?**
- **A2:** We think it can be interpreted as a form of overfitting, though we do mitigate this using early stopping. More broadly, the results highlight that both deciding where we intervene (i.e., the choice and location of components) and having fewer interventions are important in our low-resource setting. As shown below, even when provided with more data (500 examples), the MLP+Attention intervention fails to match the performance of Attention-only using 200 examples across most datasets.
**The same sample size as in Figure 2 (200)**
||SIQA|WinoGrande|Law|Physics|E2E_NLG|WEB_NLG|
|-|-|-|-|-|-|-|
|Attention|55.94|55.33|36.00|7.00|14.77|18.12|
|MLP|50.10|51.62|34.00|20.00|10.31|14.45|
**Sample size (300)**
||SIQA|WinoGrande|Law|Physics|E2E_NLG|WEB_NLG|
|-|-|-|-|-|-|-|
|Attention|56.34|55.85|36.00|18.00|15.25|18.46|
|MLP|53.07|51.93|35.00|20.00|10.68|14.91|
**Sample size (500)**
||SIQA|WinoGrande|Law|Physics|E2E_NLG|WEB_NLG|
|-|-|-|-|-|-|-|
|Attention|56.86|56.37|37.00|20.00|15.87|18.85|
|MLP|53.92|52.45|36.00|21.00|11.22|15.26|
**Different Sample Size (Attention+MLP)**
|Sample size|SIQA|WinoGrande|Law|Physics|E2E_NLG|WEB_NLG|
|-|-|-|-|-|-|-|
|200|52.17|48.74|23.00|13.00|8.23|12.36|
|300|52.82|49.51|25.00|14.00|8.85|12.74|
|500|53.24|50.36|28.00|16.00|9.39|13.03|
+ **Q3: How many attention heads are typically completely shut off by the end of training?**
+ **A3:** To clarify, we are not shutting down any attention heads. Instead, we selectively choose which heads to apply interventions to. When both the offset gate $g_a$ and the multiplicative update gate $g_m$ are set to $0$, the head’s computation becomes identical to that of the original model. By the end of training, most gates are closed – for example, on OBQA, 86\% of attention heads have $g_a = 0$, and 94\% have $g_m = 0$ (See Figure 8).
+ **Q4: How does the gating mechanism's randomness manifest during inference?**
+ **A4:** During training, we model each gating variable (e.g., $g_a^{(l,i)}$ and $g_m^{(l,i)}$) as random, sampled from a hard-concrete distribution. However, during inference, we use their expected values, $\mathbb{E}[g_a^{(l,i)}]$ and $\mathbb{E}[g_m^{(l,i)}]$, instead of sampling from the distributions. This removes randomness, ensuring consistency and stability in inference. We briefly mention this in lines 165-166 and will clarify in the next version.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response and the additional experiments, which have addressed all of my concerns. Therefore, I have decided to raise my score. I hope the authors will include these clarifications from the rebuttal in the next version of the paper, as I believe they are important for understanding your work.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback and for raising the score. We are glad the additional experiments addressed your concerns. We will include the clarifications from the rebuttal in the next version of the paper for better clarity. | null | null | null | null | null | null |
When and How Does CLIP Enable Domain and Compositional Generalization? | Accept (spotlight poster) | Summary: This paper investigates the domain generalization and compositional generalization capabilities of CLIP models, focusing on how the diversity of training domains affects their ability to generalize to unseen domains and unseen class-domain combinations. The authors systematically construct training distributions with controlled domain diversity and object class exposure to evaluate CLIP's performance in various settings.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: The paper puts forward four findings based on the experimental results; they are reasonably inferred and supported by sufficient evidence.
Experimental Designs Or Analyses: The authors designed a comprehensive set of experiments to investigate CLIP's domain and compositional generalization capabilities. A key component of their experimental design is the use of controlled training distributions: the authors systematically varied the training data by constructing four different setups (Natural-only, Leave-out-domain, CG low-diversity, and CG high-diversity). This approach allows them to isolate the effects of domain diversity and class exposure on CLIP's generalization performance.
Supplementary Material: No supplementary material
Relation To Broader Scientific Literature: The generalization ability of CLIP under different conditions is systematically studied in this paper.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Adv:
1. By constructing a variety of controllable training data distributions, the paper systematically studies CLIP's performance in domain generalization and compositional generalization, further revealing its generalization ability.
2.The consistency of the experimental results is verified on different model architectures and a large number of data, which enhances the reliability and universality of the conclusions.
3. The key effects of domain diversity and class exposure on CLIP's generalization ability are revealed, especially the finding that partial class exposure may weaken compositional generalization. Four well-supported insights about CLIP are proposed based on the experiments.
Dis:
1. The paper focuses mainly on the generalization ability of the CLIP model and does not compare with other similar vision-language models (such as ALIGN, BLIP, etc.). This makes it difficult for readers to fully understand CLIP's strengths and weaknesses in domain generalization and compositional generalization.
2. Despite the use of multiple datasets, such as DomainNet, these datasets may still be limited in domain diversity and category richness. Dataset size and quality may also affect the model's generalization ability, but this is little discussed in the article.
3. The paper mainly uses top-1 accuracy as the evaluation metric. It is suggested to introduce more metrics, such as top-5 accuracy and F1 score, to evaluate the model's performance more comprehensively.
4. The current experiments focus on the ImageNet-Captions and DomainNet datasets. It is recommended to extend the experiments to more datasets, especially those containing more diverse domains (such as LAION), to verify the generality of the conclusions.
Other Comments Or Suggestions: Nothing
Questions For Authors: Nothing
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and constructive feedback. Below we address remaining concerns.
> W1: Comparisons with other vision-language models (VLMs)
We focused on CLIP due to its wide adoption and use in prior related work (e.g., [1, 2]).
We had already verified the consistency of our results with SigLIP (which became more popular recently) in Figure 11 in Appendix B.1. As suggested, we additionally verified consistency of results with BLIP (which are also consistent). We chose to omit ALIGN due to its close similarity to CLIP.
> W2: Discussion of dataset size and quality
We agree that both dataset size and especially quality influence generalization. We will include this to our discussion.
Regarding dataset size: While Figure 10 in Appendix B.1 shows that large base datasets seem to improve generalization performance, there remains a gap to higher diversity settings. We would like to note that the higher generalization performance on CC12M may also be a confounding factor of a less clean and, thus, more diverse base dataset (see the main finding of [2]). As a result, the performance improvement when adding the diverse DomainNet samples is likely smaller because of this.
Regarding quality: Unfortunately, quality remains only loosely addressed in much of the current literature (e.g., [3]), since it is not clear how to quantify it best. Recently, Schrodi et al [4] showed that an information imbalance leads to the modality gap. When we interpret information imbalance as one aspect of data quality, we might be able to indirectly measure this aspect of quality via the modality gap. However, we leave further investigation on how to quantify data quality for future work.
> W3: Additional evaluation metrics
As suggested, we additionally evaluated with balanced top-5 accuracy and multi-class F1 score. The results are consistent with those based on balanced top-1 accuracy.
> W4: Extensions to additional datasets beyond ImageNet-Captions and DomainNet
Figure 10 in Appendix B.1 provides consistent results for the more diverse CC3M & CC12M datasets. However, CC3M & CC12M comprise images from a mix of domains, which creates confounders. Thus, for controlled experiments and clearer analysis, we primarily relied on the cleaner ImageNet-Captions dataset, following prior work [1]. The uncontrolled mix of domains is even more pronounced in the LAION dataset. Verifying our findings on LAION would therefore require robust automated class & domain annotation/filtering methods (which is technically challenging in itself) and large computational resources (which we lack).
---
[1] Fang, Alex, et al. "Data determines distributional robustness in contrastive language image pre-training (clip)." ICML 2022.
[2] Mayilvahanan, Prasanna, et al. "In search of forgotten domain generalization." ICLR 2025.
[3] Nguyen, Thao, et al. "Quality not quantity: On the interaction between dataset design and robustness of clip." NeurIPS 2022.
[4] Schrodi, Simon, et al. "Two effects, one trigger: on the modality gap, object bias, and information imbalance in contrastive vision-language models." ICLR 2025.
---
Rebuttal Comment 1.1:
Comment: The authors answered my questions well and resolved my doubts, so I finally gave 4.
Claims And Evidence: I'm not convinced that the experiments alone back the claims. CLIP's pretraining data is massive and very diverse, so it's unclear if the results are truly due to domain diversity or just the sheer size of the dataset. Additionally, the paper offers no theoretical evidence to support its claim.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A
Experimental Designs Or Analyses: One issue is that DomainNet is a long-tail dataset, which may affect the claim.
Supplementary Material: Yes, I check all the part of the supplementary material.
Relation To Broader Scientific Literature: Improving domain diversity and learning shared information to improve domain generalization have already been widely investigated in prior work.
Essential References Not Discussed: Include two related works in the introduction—one that enhances domain diversity [1] and another that develops domain-invariant representations in CLIP [2].
[1] Yu, Xi, Shinjae Yoo, and Yuewei Lin. "CLIPCEIL: Domain Generalization through CLIP via Channel rEfinement and Image-text aLignment." Advances in Neural Information Processing Systems 37 (2024): 4267-4294.
[2] Bose, Shirsha, et al. "Stylip: Multi-scale style-conditioned prompt learning for clip-based domain generalization." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: Figure 1A is confusing; please consider redrawing it and replacing the abstract symbols with actual image samples.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. Below, we try to address the concerns. However, the review has been very brief, and some points remained unclear to us. We would greatly appreciate further clarification during the discussion phase so we can fully address them, if we have not already done so below.
> Are the results due to domain diversity or just the sheer size of the dataset? (our interpretation of the reviewer’s concern)
To disentangle the effects of dataset size and domain diversity (two typically entangled factors), we conducted controlled experiments, where we fixed dataset size while varying domain diversity. This setup allows us to isolate the impact of diversity. For example, we observe that adding a single domain yields often negligible gains, while greater domain diversity significantly improves generalization performance (see Fig. 2).
In addition, recent work by Mayilvahanan et al [1] showed that “domain contamination [in web-scale datasets] contributes substantially to CLIP’s strong [generalization] performance” [1, p. 9]. Our work goes beyond theirs by analyzing when and how different domain mixtures influence domain and compositional generalization, e.g., we investigate why CLIP sometimes fails to generalize even with high diversity, see our (mechanistic) analysis in Sec. 6.2.
> No theoretical evidence to support its claim
We would like to clarify that this is an empirical analysis paper, with our claims grounded in controlled and well-motivated experiments – as also noted positively by both other reviewers. While we do not offer theoretical analysis, we follow the growing body of work that investigates model behavior through carefully controlled experiments, such as the popular “Physics of Language Models” series (https://physics.allen-zhu.com/).
> DomainNet is a long-tail dataset, which may affect the claim
DomainNet is long-tail; yet this mirrors the nature of web-scale datasets like LAION-400M (e.g., see Fig. 1a in [2]). As such, we view this as a strength of our experimental setup, which aims to closely replicate CLIP’s training while allowing for controlled dataset manipulations.
To mitigate the effect of the long tail in the evaluations, we report *balanced* accuracy throughout (as noted positively by reviewer jK9M). Following reviewer H9Z3’s suggestion, we also verified consistency of our results for additional metrics, such as (standard) accuracy or F1 score.
> Actually, improve domain diversity or learning shared information which could improve domain generalization have already been widely investigated in prior work.
Domain diversity has been broadly studied, but we respectfully note that to the best of our knowledge, ours is the first systematic analysis of domain and compositional generalization in CLIP under OOD conditions. Besides this, we (i) analyzed the impact of partial class exposure on compositional generalization (Finding 2), and/or (ii) conducted mechanistic analyses demonstrating that feature and circuit sharing are needed for generalization (Findings 3 & 4). We would be grateful if the reviewer could provide references that have already explored this.
The references mentioned by the reviewer in “Essential References Not Discussed” enhance CLIP’s domain generalization via adapters [3] or style projectors [4]. However, they primarily focus on method improvements and do not offer in-depth analysis (as also positively noted by reviewer jK9M).
> Include related works
We will add both references [3,4].
> Figure 1A is confusing; please consider redrawing it and replacing the abstract symbols with actual image samples.
We used abstract symbols in Figure 1A to compactly highlight the difference between class sets $C_1$ and $C_2$. We believe that actual image samples may obscure this important distinction. That said, if other reviewers share this concern, we are happy to revise the figure accordingly.
---
[1] Mayilvahanan, Prasanna, et al. "In search of forgotten domain generalization." ICLR 2025.
[2] Wen, Xin, et al. "What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights." NeurIPS 2024.
[3] Yu, Xi, Shinjae Yoo, and Yuewei Lin. "CLIPCEIL: Domain Generalization through CLIP via Channel rEfinement and Image-text aLignment." NeurIPS 2024.
[4] Bose, Shirsha, et al. "Stylip: Multi-scale style-conditioned prompt learning for clip-based domain generalization." WACV 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you so much for your detailed response, it has addressed most of my concerns. However, I’m still a bit confused by Figure 1. Could you please clarify what the different colors represent and what the various shapes of the abstract symbols indicate? Specifically, in Figure 1A, which section corresponds to the training set and which to the test set? Additionally, references [3,4] appear to enhance CLIP’s generalizability by incorporating a more diverse range of style representations and emphasizing invariant features, which aligns well with your findings. I believe it’s worth mentioning that prior work has already improved CLIP’s generalizability in this way.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your positive feedback. We are happy to hear that our previous response addressed most of your concerns. Below, we aim to clarify your remaining questions regarding Figure 1 and how we plan to incorporate references [3,4].
> Clarification on Figure 1
In Figure 1, the *colors* represent different domains (red = natural domain; blue = test domain; green/orange = additional training domains) and the *shapes* represent the two distinct class subsets (square = $c_1,...,c_k=C_1$; circle = $c_{k+1},...,c_n=C_2$). Note that we need these subsets for testing compositional generalization, where some classes of the test domain are seen during training (the squares) and others are not (the circles).
The *opacity* of each symbol indicates whether a class subset from a domain (e.g., blue squares) is included (opaque) or excluded (more transparent) from the training set. Inclusion or exclusion depends on the specific setup, e.g., in the natural-only setup, only natural images from both class subsets are included in training.
Across *all* setups, the *test set* corresponds to the blue circles (see also Figure 1B), which are held out from training. All other colored shapes, including blue squares, may be included in the training, depending on the specific setup.
We will incorporate these clarifications into Figure 1’s caption to improve its clarity.
> Incorporation of references [3,4]
We agree that both works make significant contributions toward enhancing CLIP’s generalization through diverse styles or invariant features. We will therefore add these references, as well as those suggested by reviewer jK9M in “Relation to Broader Scientific Literature”. We will clarify how our analysis extends these works by offering a *more detailed analysis* of the factors that enable generalization - and, crucially, why generalization may *still fail*.
> Final note
We hope that our discussion has addressed your concerns and questions. We would greatly appreciate it if you could consider updating your recommendation to reflect your assessment of the paper following our constructive exchange. Thank you again for your valuable feedback - it has helped us strengthen our work. | Summary: The paper studies the generalization capabilities of CLIP in the Domain Generalization setting. Specifically, the authors study when and how clip exhibits domain generalization - when a model generalizes to unseen domains, and compositional generalization - when a model generalizes to classes from partially seen domains during training. To facilitate this study, the authors conduct carefully crafted experiments on CLIP with DomainNet and ImageNet to understand the influence of domain diversity, language descriptions, and shared representations on the generalization capabilities of CLIP.
Claims And Evidence: Yes. All the claims made in the submission are supported by well-motivated and clear experiments.
Methods And Evaluation Criteria: The proposed method and evaluations use relevant benchmark datasets, and the authors have justified their choice of datasets and evaluation settings.
- **Datasets**: For their experimental evaluation, the authors have chosen ImageNet-Captions, ConceptualCaptions 3M, or ConceptualCaptions 12M as their base dataset, along with domains from the DomainNet dataset for the domain-specific image-text pairs. Given that DomainNet remains the largest domain-shift benchmark for domain generalization, this choice is verified.
- **Evaluation metrics**: The authors use balanced top-1 accuracy as their evaluation metric, which mitigates the effect of the long-tail nature of the DomainNet dataset (L192-195).
Similarly, the authors have clearly outlined and justified their setups for evaluating the effect of shared representations on DG performance, and the behavior of CLIP on domains such as Quickdraw via model circuit similarities across pairs of domains.
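For reference, the balanced top-1 accuracy used as the main metric above is the mean of per-class accuracies; a minimal sketch follows (an illustrative implementation, not the authors' evaluation code):

```python
from collections import defaultdict

def balanced_top1_accuracy(y_true, y_pred):
    """Mean of per-class top-1 accuracies; each class contributes equally,
    which mitigates long-tail class distributions such as DomainNet's."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)
```

Unlike plain accuracy, a predictor that is always right on a frequent class and always wrong on a rare one scores only 0.5 here.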
Theoretical Claims: The paper is an empirical study of the generalization properties of CLIP. As a result, there are no significant theoretical claims or proofs in the submission.
Experimental Designs Or Analyses: - **General setups:** The authors consider two experimental setups: domain generalization - where the model is required to generalize to an unseen domain, and compositional generalization - where the model has partial access to classes from the unseen target domain. The authors train CLIP in both of these settings by merging data from ImageNet-Captions and DomainNet. These experimental designs follow the general setup of generalization works.
- **Experiments on the role of visual embeddings:** In Sec. 6.1, the authors discuss their experimental setup and results in determining the role of the shared visual embeddings across domains in facilitating generalization in CLIP. Specifically, the authors train a Sparse Autoencoder (SAE) on the CLIP visual representations across domains and consider the overlap between the top-k embeddings across domains. However, the details of this experiment are unclear (please see the weaknesses section below).
Overall, the experiments and analyses presented in the submission are valid for the study of CLIP's generalizability across domains.
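The top-k embedding-overlap measurement described for Sec. 6.1 could be sketched roughly as follows; the function name, the use of mean activations per latent, and the normalization by k are illustrative assumptions rather than details from the paper:

```python
def topk_latent_overlap(mean_acts_a, mean_acts_b, k):
    """Fraction of shared indices among the top-k mean-activated SAE
    latents of two domains (data layout is an assumed simplification)."""
    def topk(values):
        # indices of the k largest mean activations
        return set(sorted(range(len(values)), key=values.__getitem__)[-k:])
    return len(topk(mean_acts_a) & topk(mean_acts_b)) / k
```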
Supplementary Material: Yes. While the supplementary material includes comprehensive details about various experimental setups used throughout the paper, there is a minor concern:
- The Weisfeiler-Lehman kernel has been used here to compute a similarity measure between the circuitry for various domains, specifically to analyze the behavior of CLIP with the Quickdraw domain from DomainNet. This similarity measure is known to produce false positives, i.e., certain non-isomorphic graphs could be incorrectly identified as isomorphic. The authors have not discussed this aspect and how it may or may not affect the study of shared representations in CLIP.
Relation To Broader Scientific Literature: - The contributions of the paper are highly relevant to domain generalization literature. Specifically, several papers utilize the generalization capabilities of CLIP [1, 2, 3] for a typical domain generalization without delving into the analysis of CLIP's generalizability.
- Moreover, the general consensus for the lack of performance of CLIP on the Quickdraw domain in DomainNet has largely been attributed to the lack of images from this domain in the pre-training data of CLIP. However, there has been no work that studies the behavior of CLIP with such domains, which this work explores in great detail.
1. Addepalli, Sravanti, et al. "Leveraging vision-language models for improving domain generalization in image classification." CVPR 2024.
2. Huang, Zeyi, et al. "A sentence speaks a thousand images: Domain generalization through distilling clip with language guidance." ICCV 2023.
3. Yu, Xi, Shinjae Yoo, and Yuewei Lin. "CLIPCEIL: Domain Generalization through CLIP via Channel refinement and Image-text alignment." NeurIPS 2024.
Essential References Not Discussed: The authors have referenced the relevant literature and discussed the same adequately.
Other Strengths And Weaknesses: ### Strengths
- **Relevance to prior works:** The major strength of this submission is its relevance to prior works that utilize pre-trained Vision-Language Models (VLMs) such as CLIP for domain generalization. Specifically, as mentioned above, most works in DG directly consider the generalizability of CLIP as a given and design frameworks using CLIP while the submission delves deeper into the mechanics behind CLIP's generalization capabilities. A noteworthy contribution is the submission's study on the quickdraw domain from DomainNet, which prior works often dismiss as unseen by CLIP.
- **Experiments and results:** The submission presents well-motivated and simple experiments to analyze the properties of CLIP in the domain and compositional generalization settings. Specifically, the experiments on the role of shared intermediate features and similarity between model circuitry across domains is quite interesting.
- Overall, the paper is well-written, easy to follow and understand. All the experiments are well-motivated, outlined clearly and analyzed well.
### Weaknesses
(a) As mentioned above, while the authors have presented an experiment to substantiate the claim that generalizable models share representations across domains, the experiment itself is unclear.
- How is CLIP evaluated using this trained SAE? Are these evaluations conducted in the leave-one-out or CG settings? Which pairs of domains are considered in the setup, i.e., all domains or only the training domains?
- How are the top-k features chosen across domains? What is the objective used to compute the top-k features (eg: cosine similarity, L2 loss, etc.)?
- How is the performance delta collected (in other words, how do the authors evaluate the top-k representations to present the improvement using the shared embeddings)?
Other Comments Or Suggestions: Some of the tables and their captions are unclear and can be better presented for enhanced readability. For example, Tables 3 and 4 do not mention the quantity being presented (whether it is the improvement in accuracy or the accuracy itself). Additionally, as mentioned in the weaknesses section, some experimental settings have not been clearly explained. The reviewer suggests the authors review these details and rewrite them for better understanding.
Questions For Authors: - In Sec. 6.1, the authors present an experiment to analyze the role of shared representations in generalization. Specifically, they train a SAE in the representation space and consider the top-k features. Can the CLIP visual features directly be considered for the top-k computation and further analyses?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and constructive feedback. Below, we address the remaining concerns and questions.
> Can the CLIP visual features directly be considered for the top-k computation and further analyses?
Yes, this is possible. However, we would like to note that CLIP’s visual features are in superposition and not directly interpretable w.r.t. an object class. For example, Schrodi et al [1] showed that some of CLIP’s visual feature dimensions are related to the semantically non-meaningful modality gap. Thus, recent work [2] and ours opt for dictionary learning methods like SAEs (i.e., linear vectors that need not be axis-aligned) to extract more interpretable and class-relevant features.
> Missing details about the SAE experiment (including the questions raised by the reviewer)
We thank the reviewer for pointing this out. Below, we provide a detailed explanation of our SAE experiment, which we will add to the paper.
We trained a separate SAE on CLIP’s visual features for each of the following 5 models: a natural-only baseline, two leave-out-domain models (where either clipart or sketch is the test domain), and two CG high-diversity setting (also using clipart or sketch as test domain).
For each model, we analyzed the extent to which the top-k most activating SAE features are shared between the test domain (clipart or sketch) and all other domains (not just the training domains). To identify the top-k SAE features, we computed the SAE hidden representations (after applying the non-linearity) for each sample of each class and domain. We then selected the top-k SAE features (with $k \in \lbrace 5, 10, 15, 20\rbrace$) per class-domain pair based on how frequently they ranked among the top-20 most activating features (i.e., largest activation magnitudes).
To measure feature sharing, we calculated the percentage overlap between the top-k SAE features of the test domain (clipart or sketch) and those of each of the other domains, using the top-k SAE features obtained above. To yield a single overlap score per model, we averaged across classes, these pairs of domains, and the four values of k. Finally, we calculated the deltas in Table 3 by comparing the overlap score of the natural-only baseline model with the corresponding leave-out-domain and CG high-diversity models.
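The top-k selection and overlap computation described above can be sketched in a few lines. This is a toy numpy sketch under our own naming (`topk_sae_features`, `overlap_percent`, and the `pool` parameter, which plays the role of the top-20 ranking window, are all illustrative):

```python
import numpy as np

def topk_sae_features(acts, k, pool=20):
    """acts: (n_samples, n_features) SAE hidden activations for one
    class-domain pair. Rank features by how often they appear among each
    sample's `pool` largest activations, then keep the k most frequent."""
    counts = np.zeros(acts.shape[1], dtype=int)
    for row in acts:
        counts[np.argsort(row)[-pool:]] += 1
    return set(np.argsort(counts)[-k:])

def overlap_percent(acts_a, acts_b, k, pool=20):
    """Percentage of top-k SAE features shared between two class-domain pairs."""
    a = topk_sae_features(acts_a, k, pool)
    b = topk_sae_features(acts_b, k, pool)
    return 100.0 * len(a & b) / k
```

In the actual experiment, this overlap would then be averaged over classes, domain pairs, and k ∈ {5, 10, 15, 20} to yield one score per model.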
> False positives of Weisfeiler-Lehman (WL) kernel
Thank you for pointing this out. We will add this aspect.
While the WL kernel can yield false positives, this is unlikely in our case: The node names in the circuits encode global topological information (i.e., layer and neuron indices), which reduces the chance of false positives. Besides that, we employed the 3-WL kernel, which is more powerful than, e.g., 1-WL, and also reduces the chance of false positives.
In addition, we verified that the graphs were distinguishable at the node level (0-th iteration of the WL kernel), as indicated by Figure 6b (the figure just shows aggregated results, but we verified it for each set of nodes of each pair of circuits). Note that false positives can only occur if two graphs remain indistinguishable across all iterations of the WL kernel, which is empirically not the case.
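For intuition, the iteration-wise distinguishability check described above can be sketched with plain 1-WL label refinement (the rebuttal actually uses the stronger 3-WL kernel; this pure-Python sketch only illustrates how per-iteration label histograms separate graphs, with node labels standing in for the layer/neuron indices):

```python
from collections import Counter

def wl_histograms(adj, labels, iters=3):
    """adj: {node: [neighbours]}; labels: {node: hashable label}.
    Returns the multiset of node labels at each refinement iteration
    (iteration 0 is the raw node-label histogram)."""
    hists = [Counter(labels.values())]
    for _ in range(iters):
        labels = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
                  for v in adj}
        hists.append(Counter(labels.values()))
    return hists

def wl_distinguishable(adj1, l1, adj2, l2, iters=3):
    """True if any iteration's label histogram differs, i.e. the WL test
    separates the two graphs (no false positive for this pair)."""
    return any(a != b for a, b in
               zip(wl_histograms(adj1, l1, iters), wl_histograms(adj2, l2, iters)))
```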
> Revise some of the tables, captions, and experimental settings
We will revise them.
---
[1] Schrodi, Simon, et al. "Two effects, one trigger: on the modality gap, object bias, and information imbalance in contrastive vision-language models." ICLR 2025.
[2] Rao, Sukrut, et al. "Discover-then-name: Task-agnostic concept bottlenecks via automated concept discovery." ECCV 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The points discussed in the rebuttal have addressed all of my concerns. Thus, I maintain my score at 4. | null | null | null | null | null | null | null | null |
Self-Disentanglement and Re-Composition for Cross-Domain Few-Shot Segmentation | Accept (poster) | Summary: The paper finds that previous approaches have an entanglement problem, which tends to bind source domain patterns together, making each one difficult to transfer. They analyzed and explained the feature entanglement problem from a new perspective of the natural decomposition of ViT. On this basis, self-disentanglement and re-composition of CD-FSS are proposed.
### update after rebuttal
I would like to thank the authors for taking the time to address my questions and for providing the additional numerical experiments. I am satisfied with the authors' rebuttal. So I will keep my positive rating.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: In fact, the authors present no new theories.
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes, the authors provide a comparison with the domain transfer methods and more visualization results in the supplementary material. But the source code is not available.
Relation To Broader Scientific Literature: They analyzed and explained the feature entanglement problem from a new perspective of the natural decomposition of ViT. On this basis, self-disentanglement and re-composition of CD-FSS are proposed.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
The motivation is clear, and the approach is sensible, straightforward, and highly applicable. The paper is easy to understand, the results are easy to reproduce, and a large number of ablation experiments have been done to prove the effectiveness of the results. The paper is well organized. The proposed approach for self-disentanglement and re-composition of CD-FSS from the perspective of the natural decomposition of ViT is novel.
Weaknesses:
1. It seems that the network decouples the CD-FSS task into two tasks, Cross-Domain and FSS, and the model focuses more on the FSS part, while there is not much special design for Cross-Domain.
2. I am curious about the portability of the proposed method; the authors did not test its feasibility on other existing methods.
3. Failure cases are not provided, which makes a complete discussion of the approach lacking.
4. The article uses the background prototype in CPC; I am doubtful about the role of the background prototype, because unlike the foreground, the backgrounds of the support image and the query image are not constrained to the same class.
5. The $W_{in}$ in Equation 11 should be $W_{out}$.
6. The paper does not describe how to use the network under the 5-shot setting.
Other Comments Or Suggestions: I suggest the authors test the performance of the model in the FSS setting as well, to see whether the model can achieve good results in different settings.
Questions For Authors: See Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## 1. Design for the cross-domain part
Our method design also highlights the cross-domain part of CDFSS in the following aspects.
(1) Intuitively, for cross-domain transferability, the overall semantics, like the whole bat in Fig.1, is much harder to transfer than its separated parts, like wings and claws. Based on this intuition, our method disentangles the overall semantics into small and transferable parts, and recomposes them on the target domain through efficient tuning of AFW.
(2) Quantitatively, we validated in Fig.4 and Tab.1, by measuring domain similarity with the CKA metric, that it is the mismatch between layers that harms transferability between the source and target domains. Therefore, our design addresses the mismatch problem and encourages the model to focus on more meaningful matches (typically with higher CKA values); Fig.9 verifies that our model can indeed highlight these matches, improving cross-domain transferability.
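The CKA similarity referenced here can be reproduced with the standard linear-CKA formula (a numpy sketch under our own naming; `X` and `Y` are feature matrices of the same n samples taken from two layers or domains):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between (n, d1) and (n, d2) feature matrices.
    Returns a value in [0, 1]; 1 means identical up to rotation/scale."""
    X = X - X.mean(axis=0)  # center features
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, 'fro') ** 2
    return hsic / (np.linalg.norm(X.T @ X, 'fro') *
                   np.linalg.norm(Y.T @ Y, 'fro'))
```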
## 2. Portability of our methods
Our method has excellent adaptability to other approaches. To verify this, we combined our method with APM and PATNet. Additionally, in response to reviewer HFkz and PmpS, we applied our method to Swin Transformer, demonstrating its suitability for different transformer architectures.
| 1-shot(ViT-B) | FSS1000 | Deepglobe | ISIC | ChestX | Mean |
| :-------------: | :-------: | :-------: | :-------: | :-------: | --------- |
| APM | 79.84 | 39.78 | 50.37 | 79.29 | 62.32 |
| APM + Ours | **80.73** | **43.67** | **52.82** | **83.01** | **65.06** |
| PATNet | 72.03 | 22.37 | 44.25 | 76.43 | 53.77 |
| PATNet + Ours | **80.50** | **43.18** | **46.62** | **82.49** | **63.20** |
## 3. Failure cases
Here https://anonymous.4open.science/r/Self-Disentangle-Rebuttal-for-ICML25-706E/failure_cases.PNG, we analyze some failure cases. We found that in DeepGlobe, areas with large contiguous regions tend to experience incomplete segmentation, particularly at the edges, where the segmentation granularity may not be fine enough. The main reasons are twofold: 1) DeepGlobe consists of remote sensing images, which are typically high-resolution, yet for computational efficiency, we standardize the images to 400x400; 2) DeepGlobe demands high local feature recognition capability from the model. ViT-based methods, due to their attention mechanism, are strong in global modeling but relatively weaker in recognizing local features compared to models like ResNet, which have inherent local priors. Therefore, we believe that future research could focus on enhancing ViT's ability to recognize local features in cross-domain scenarios.
## 4. Background prototype
#### (1) Role of the background prototype
The final result of segmentation is based on the probability map derived from both the support-set foreground prototype and the support-set background prototype compared with the query feature, rather than relying solely on the background prototype. Thus, the background prototype is complementary to the foreground prototype.
#### (2) Support-set image and query-set image in the same background class
Although the backgrounds of the support and query images are not in the same class, most current prototype-based methods (PANet, SSP) utilize a single background prototype to model the background patterns, and this has been verified to perform well both in prior works and in our paper.
Indeed, it is better to consider different background classes for different images. Therefore, here we further utilize clustering to obtain multiple background prototypes. By comparing the single-background prototype versus multi-background prototypes, we observed a slight performance improvement. However, the gains were not substantial enough to justify the computational overhead of clustering. Although this is not the main focus of our paper, we still appreciate your suggestion.
| 1-shot | FSS1000 | Deepglobe | ISIC | ChestX | Mean |
| :-----------------: | :-----: | :-------: | :---: | :----: | ----- |
| single BG prototype | 80.31 | 43.15 | 46.57 | 82.86 | 63.22 |
| multi BG prototype | 80.96 | 43.53 | 46.78 | 83.09 | 63.59 |
## 5. Utilization under 5-shot setting
For the 5-shot setting, the methodology is consistent with the 1-shot approach, with the following differences: 1) During fine-tuning, each task has 5 support images available for learning; 2) Five support foreground prototypes are calculated for the same class and then aggregated by averaging.
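The prototype aggregation in point 2) above can be sketched as masked average pooling per support shot, followed by a mean over shots (function names are ours, for illustration):

```python
import numpy as np

def masked_avg_prototype(feat, mask, eps=1e-6):
    """feat: (H, W, C) features; mask: (H, W) binary foreground mask.
    Masked average pooling over the foreground region."""
    w = mask[..., None]
    return (feat * w).sum(axis=(0, 1)) / max(w.sum(), eps)

def kshot_prototype(feats, masks):
    """Average the per-shot foreground prototypes (k = 5 in the 5-shot setting)."""
    return np.mean([masked_avg_prototype(f, m) for f, m in zip(feats, masks)],
                   axis=0)
```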
## 6. Minor writing error
Thank you very much for your careful correction. Equation 11 should indeed be $W_{out}$ instead of $W_{in}$. We will take extra care to correct this in the final version to prevent any minor writing errors.
## 7. FSS performance
In our response to reviewer PmpS's third point, we validated the effectiveness of our method in the FSS setting. Thank you for your suggestion.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for taking the time to address my questions and for providing the additional numerical experiments. I am satisfied with the authors' rebuttal.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your time in reviewing our paper. We also sincerely appreciate your acknowledgment of our rebuttal responses. As you indicated no remaining questions and comments, we would be truly grateful if you could reconsider to raise your evaluation score. Your generous reconsideration would mean a lot to our research team. | Summary: This paper addresses the challenge of feature entanglement in Cross-Domain Few-Shot Segmentation by leveraging the inherent structure of Vision Transformers. The authors identify that current methods often suffer from entangled semantic patterns, which hinder the transferability of features across domains. To tackle this, the paper proposes a framework that disentangles semantic patterns by analyzing ViT natural decomposition and re-composing these disentangled features to improve generalization and adaptation. The approach introduces mechanisms to reduce feature correlation, learns meaningful comparisons across layers, and dynamically adjusts feature weights during fine-tuning for better cross-domain segmentation. The method achieves state-of-the-art results across multiple benchmarks. Extensive experiments demonstrate the effectiveness of the proposed approach.
Claims And Evidence: 1. Mutual information analysis shows reduced correlations between disentangled features after applying OSD.
2. Quantitative results on four benchmarks demonstrate significant improvements, supported by qualitative visualizations.
3. Ablation studies show consistent performance gains when CPC and AFW are added.
Methods And Evaluation Criteria: 1. The proposed methods are well-motivated.
2. The experimental setup is comprehensive, with comparisons to strong baselines and state-of-the-art methods.
Theoretical Claims: The theoretical analysis of feature entanglement is convincing.
Experimental Designs Or Analyses: 1. The experimental design is robust, with extensive ablation studies to isolate the contributions of each module.
2. The use of CKA similarity and mutual information to validate disentanglement is effective and aligns with the claims.
Supplementary Material: The supplementary material includes additional details on datasets, ablation studies, and visualizations.
Relation To Broader Scientific Literature: The proposed approach adds a novel perspective by leveraging ViT’s inherent structure for disentanglement and re-composition.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
1. The idea of leveraging ViT’s structural decomposition for disentanglement is novel and insightful.
2. The proposed method achieves state-of-the-art performance with significant improvements across benchmarks.
Weakness:
1. The orthogonal loss weight is manually tuned, which may limit adaptability.
Other Comments Or Suggestions: None
Questions For Authors: Can the proposed framework be applied to other ViT architectures (e.g., Swin Transformer)? If so, what modifications are required?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## 1. Orthogonal loss weight
We validated the impact of the weight of the orthogonal loss on performance. The results indicate that the optimal choice of the weight is in a wide interval, which means the tuning of this hyper-parameter is not difficult. Additionally, we used the same orthogonal loss weight in the Swin Transformer architecture as we did in the ViT architecture. Its performance indicates that our method design is not sensitive to this weight.
| Orth Loss Weight | 0.01 | 0.05 | 0.1 | 0.2 | 0.5 |
| :--------------: | :---: | :---: | :---: | :---: | :---: |
| 1-shot Avg mIoU | 62.59 | 63.01 | 63.22 | 63.18 | 62.87 |
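For reference, a plain-numpy sketch of the kind of orthogonality penalty this weight scales: the squared off-diagonal entries of the Gram matrix of normalized per-layer features. This is our illustrative form, not the paper's exact loss:

```python
import numpy as np

def orthogonal_loss(feats, eps=1e-8):
    """feats: (L, C), one pooled feature vector per ViT layer.
    Penalizes pairwise cosine similarity between different layers."""
    F = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + eps)
    gram = F @ F.T
    off = gram - np.diag(np.diag(gram))  # zero out the diagonal
    L = F.shape[0]
    return (off ** 2).sum() / (L * (L - 1))

def total_loss(task_loss, feats, weight=0.1):
    """`weight` is the hyper-parameter swept in the table above."""
    return task_loss + weight * orthogonal_loss(feats)
```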
## 2. Apply to Swin Transformer
| Swin-B | Stage1 | Stage2 | Stage3 | Stage4 |
| :-------: | :-----------: | :------------: | :--------------: | :--------------: |
| shape | H/4 * W/4 * C | H/8 * W/8 * 2C | H/16 * W/16 * 4C | H/32 * W/32 * 8C |
| layer num | 2 | 2 | 18 | 2 |
Thank you for your suggestion. We further validated the effectiveness of the method on Swin Transformer. Since the layers in Swin Transformer do not reside in the same feature space, two additional steps are required: 1) for features from stages 2 to 4, we upsample them spatially to H/4 × W/4; 2) for features from stages 1 to 3, we add three linear mapping layers, (C, 8C), (2C, 8C), and (4C, 8C), to map them into the same feature space as stage 4. The mapping layers are trained alongside the model during the source-domain training phase. The performance results are as follows: our method is well-suited to Swin Transformer, and Swin Transformer shows a significant improvement over ViT.
| 1-shot | FSS1000 | Deepglobe | ISIC | ChestX | Mean |
| :-----------: | :-------: | :-------: | :-------: | :-------: | --------- |
| ViT-B | 77.80 | 33.18 | 36.99 | 51.54 | 49.88 |
| Swin-B | 79.85 | 37.24 | 39.90 | 66.73 | 55.93 |
| Swin-B + Ours | **81.02** | **46.63** | **49.19** | **83.85** | **65.17** | | Summary: This paper addresses feature entanglement in Cross-Domain Few-Shot Segmentation (CD-FSS), discovering that ViT features assign equal weights to both meaningful and meaningless pattern matches when comparing images. To solve this, the authors propose a self-disentanglement and re-composition framework with three modules: OSD to reduce feature correlation, CPC for pattern re-composition, and AFW for target-domain adaptation. Their approach outperforms state-of-the-art methods by 1.92% and 1.88% in 1-shot and 5-shot settings respectively.
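The two alignment steps above (spatial upsampling to H/4 × W/4, then a linear channel map into the 8C stage-4 space) can be sketched as follows; in the actual model the projection matrices are learned during source-domain training, while here a random matrix is used purely for shape illustration:

```python
import numpy as np

def align_stage(feat, proj, target_hw):
    """feat: (h, w, c) stage features; proj: (c, 8C) mapping matrix.
    Nearest-neighbour upsample to target_hw, then project channels."""
    H, W = target_hw
    rh, rw = H // feat.shape[0], W // feat.shape[1]
    up = feat.repeat(rh, axis=0).repeat(rw, axis=1)
    return up @ proj
```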
Claims And Evidence: The claims in this submission are generally well-supported by evidence.
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A - This is an experimental paper with no theoretical claims or proofs that require verification.
Experimental Designs Or Analyses: All experiments use common benchmarks in the CD-FSS field. The experimental design looks rational, no significant issues.
Supplementary Material: I've checked all the supplementary materials.
Relation To Broader Scientific Literature: **CD-FSS**: Extends the benchmark established by PATNet (Lei et al., 2022), addressing the feature entanglement problem in existing methods (e.g., PFENet, HSNet) to improve cross-domain transferability.
**ViT Structural Decomposition**: Builds on ViT interpretability studies (Gandelsman et al., 2023), proposing the first application of ViT’s residual stream decomposition for feature disentanglement—unlike classical approaches (e.g., InfoGAN), it requires no auxiliary networks.
Essential References Not Discussed: Among disentangled representation methods, Slot Attention-based methods [1, 2, 3] are not discussed.
[1] Locatello, Francesco, et al. "Object-centric learning with slot attention." Advances in neural information processing systems 33 (2020): 11525-11538.
[2] Seitzer, Maximilian, et al. "Bridging the gap to real-world object-centric learning." arXiv preprint arXiv:2209.14860 (2022).
[3] Chen, Chaoqi, Luyao Tang, and Hui Huang. "Reconstruct and Match: Out-of-Distribution Robustness via Topological Homogeneity." Advances in Neural Information Processing Systems 37 (2024): 125588-125607.
Other Strengths And Weaknesses: Pros:
1. The paper tackles disentangled representations, which is an important research area. It's the first to use disentangled features for CD-FSS tasks.
2. Well-organized and easy to follow, with clear figures.
3. They propose a novel pipeline, explain their motivation clearly, and back it up with experiments that validate their approach.
4. Experiments are thorough and show excellent performance.
Cons:
1. The paper assumes that comparing different layers of ViT is mostly meaningless. But in reality, ViT layers work together through dynamic self-attention: shallow layers focus on details (like edges), while deeper layers handle the big picture (like the whole bird). In Fig. 4, the CKA metric used in the paper only checks feature-distribution similarity, but it misses how these layers complement each other. Also, in L197-L198, the authors themselves acknowledge that dynamic self-attention mechanisms may introduce certain meaningful cross-layer associations.
2. In Tab. 1, the CKA value for Top-12 Avg. (shifted matches) (0.8126) is significantly higher than that of Layer-wise Avg. (diagonal matches, 0.6107), suggesting that some cross-layer comparisons might be more effective. However, the authors did not exploit this observation to optimize their method; instead, they directly suppressed non-diagonal comparisons via the OSD module.
3. The author did not compare with existing disentangled representations methods [1, 2].
[1] Locatello, Francesco, et al. "Object-centric learning with slot attention." Advances in neural information processing systems 33 (2020): 11525-11538.
[2] Seitzer, Maximilian, et al. "Bridging the gap to real-world object-centric learning." arXiv preprint arXiv:2209.14860 (2022).
Other Comments Or Suggestions: In the related work section, should discuss some studies based on Slot Attention (SA) representations, since SA's motivation is also about disentangled and then reconstructing.
Questions For Authors: The paper mentions a low parameter count, but how does it perform in terms of FPS and overall time efficiency?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## 1. Compare with more disentangle-based methods and Discussion on slot attention
Our approach differs from slot attention-based methods [1, 2] in two fundamental aspects: 1) Slot attention primarily disentangles distinct objects through object-centric representation optimization, exhibiting coarser granularity, whereas our method focuses on disentangling different patterns within the same object at a finer granularity level. 2) While slot attention employs additional iterative attention modules (external networks) for disentanglement, we leverage the inherent property of ViT layers that naturally attend to distinct spatial regions, augmented with orthogonality constraints to reinforce semantic separation, without requiring additional networks. To validate our method's effectiveness, we conduct comparative evaluations with two representative slot attention-based disentanglement approaches [1, 2]:
| 1-shot | FSS1000 | Deepglobe | ISIC | ChestX | Mean |
| :------: | :-------: | :-------: | :-------: | :-------: | --------- |
| Baseline | 77.80 | 33.18 | 36.99 | 51.54 | 49.88 |
| [1] | 79.05 | 37.83 | 41.22 | 69.59 | 56.92 |
| [2] | 79.62 | 38.57 | 40.89 | 72.33 | 57.85 |
| Ours | **80.31** | **43.15** | **46.57** | **82.86** | **63.22** |
We promise to add the discussion on Slot Attention in the final version for both the related work and experiments.
[1] Locatello, Francesco, et al. "Object-centric learning with slot attention." Advances in neural information processing systems 33 (2020): 11525-11538.
[2] Seitzer, Maximilian, et al. "Bridging the gap to real-world object-centric learning." arXiv preprint arXiv:2209.14860 (2022).
## 2. Misunderstanding of our cross-comparison design
We would like to point out that we did not deny the effectiveness of cross-comparison of layers. Instead, **we surely acknowledge the importance of cross-comparisons, and our method is majorly built upon cross-comparisons**. Specifically,
(1) We hold that different layers tend to focus on different patterns that are complementary to each other. Our analysis is based on this characteristic, utilizing it to achieve self-disentanglement, and the complementarity of patterns is achieved by the re-composition. The comparison of feature-similarity distributions measured by CKA is intended to demonstrate this dynamic nature, **indicating that different layers may still have meaningful connections, making it possible to complement different layers via cross-comparisons, instead of denying them**.
(2) It is because of the meaningful connections across different layers that **we design AFW to dynamically assign weights for these cross-comparisons for recomposition**, instead of simply forcing the position-wise comparison that ignores the cross comparison. In our experiments (Fig.9 and Tab.5), we also validated the effectiveness of the cross comparison.
(3) Also, as validated in Fig.9, our **OSD does not suppress non-diagonal comparisons**; its role is to encourage layers to focus on more complementary patterns instead of repeated ones already attended to by other layers. With the AFW module, our model still encourages cross-comparisons across layers, while the complementarity of patterns is enhanced by the OSD module.
## 3. Computational efficiency
We compared our method with PATNet, HSNet, and SSP. Our method demonstrated higher computational efficiency compared to the other methods. This is because we do not require additional networks but instead leverage ViT’s inherent structure for feature separation.
| | PATNet | HSNet | SSP | Ours |
| ----------------- | :----: | :---: | :---: | :-------: |
| FLOPs (G) | 22.63 | 20.12 | 18.97 | **18.86** |
| Training Time (h) | 6.32 | 5.61 | 5.12 | **5.07** | | Summary: The paper addresses the feature entanglement problem in Cross-Domain Few-Shot Segmentation (CD-FSS) by leveraging the structural decomposition of Vision Transformers (ViTs). The authors identify that cross-layer comparisons in ViTs entangle meaningful and irrelevant patterns, leading to reduced transferability. To resolve this, they propose three modules: (1) Orthogonal Space Decoupling (OSD) to disentangle features via orthogonal constraints, (2) Cross-Pattern Comparison (CPC) to enable adaptive cross-layer feature matching, and (3) Adaptive Fusion Weight (AFW) for target-domain fine-tuning. Experiments on four CD-FSS datasets demonstrate that the proposed approach outperforms state-of-the-art methods.
Claims And Evidence: The claims are well-supported by empirical evidence:
1. Feature entanglement: Validated via Centered Kernel Alignment (CKA) analysis, showing lower domain similarity for cross-layer vs. layer-wise matches (Fig. 4, Table 1).
2. Module effectiveness: Ablation studies confirm the contributions of OSD, CPC, and AFW (Tables 3–7).
3. Superior performance: Results on diverse datasets (Table 2) and visualizations (Figs. 6–9) substantiate improvements.
Methods And Evaluation Criteria: Methods: The ViT decomposition and proposed modules (OSD, CPC, AFW) are conceptually sound, leveraging ViT’s residual structure for disentanglement. The use of orthogonal constraints and adaptive weighting aligns with the goal of reducing feature entanglement.
Evaluation: Standard CD-FSS benchmarks and metrics (mIoU) are appropriate. The inclusion of both natural and medical imaging datasets strengthens validity.
Theoretical Claims: The paper does not present formal theoretical proofs but provides a mathematical interpretation of ViT decomposition (Eqs. 1–7) and entanglement analysis. The hypothesis about cross-layer comparisons is empirically validated through CKA and ablation studies. The derivations are logically consistent and align with the experimental results.
Experimental Designs Or Analyses: The experiments are well-designed and cover multiple aspects:
1. Comparison with SOTA on four CD-FSS datasets, demonstrating superior generalization ability.
2. Ablation studies to verify the contribution of each module (OSD, CPC, AFW).
3. Comparison of different distance metrics (Euclidean, Cosine, EMD, Dot Product).
4. Feature visualization to support the claim that ViT decomposition enables disentanglement.
Overall, the experimental methodology is robust and comprehensive.
Supplementary Material: The supplementary material provides detailed information about:
1. Datasets and preprocessing
2. Additional ablation studies
3. Comparison with domain adaptation and feature fusion methods
4. Extra visualization results
Relation To Broader Scientific Literature: The paper is well-connected to related work in CD-FSS and feature disentanglement.
1. Compared to existing CD-FSS approaches (e.g., PATNet, APSeg, DRA, ABCDFSS), this work introduces a novel ViT decomposition-based perspective.
2. Compared to disentanglement methods (e.g., InfoGAN, Disentangled-VAE, DFR), the proposed method does not require additional networks but instead leverages ViT’s inherent structure for feature separation.
However, additional discussion on ViT feature disentanglement in other contexts (e.g., domain generalization) could further strengthen the paper.
Essential References Not Discussed: The paper includes most relevant references, but it would be beneficial to cite works that analyze ViT feature disentanglement and its impact on domain generalization.
Other Strengths And Weaknesses: Strengths:
1. Novel perspective: The paper introduces a unique way of handling feature entanglement in ViTs for CD-FSS.
2. Methodological soundness: The proposed modules (OSD, CPC, AFW) are well-justified and experimentally validated.
3. Strong experimental results: The method consistently outperforms SOTA across multiple datasets and settings.
4. Computational efficiency: Unlike decoder-based architectures like APSeg, the proposed method only requires an encoder, making it more efficient.
Weaknesses:
1. Theoretical justification of ViT decomposition: While the empirical results support the method, a more formal theoretical analysis of why ViT decomposition is optimal for disentanglement could be provided.
2. The method, being limited to ViT, may be outdated; its applicability to more recent architectures like Swin Transformer is questionable. This is something I care about deeply. If the authors could answer this question, I would be very willing to significantly improve my score.
3. Limited discussion on broader applicability: The method is focused on CD-FSS, but could it benefit general few-shot segmentation tasks?
4. Computational efficiency (e.g., inference speed) is not deeply analyzed.
Other Comments Or Suggestions: The method section is dense; a flowchart or pseudocode could improve readability.
Questions For Authors: No questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## 1. The theoretical analysis of the effectiveness of ViT disentanglement
In cross-domain few-shot segmentation tasks, models need to transfer knowledge from the **source domain $\mathcal{S}$** with abundant annotations to the **target domain $\mathcal{T}$** with limited data. Let $\mathcal{H}$ represent the hypothesis space of the segmentation model. The upper bound of the generalization error for target domain risk $\epsilon_{\mathcal{T}}(h)$ is defined as:
$$
\epsilon_{\mathcal{T}}(h) \leq \epsilon_{\mathcal{S}}(h) + d_{\mathcal{H}}(\mathcal{S}, \mathcal{T}) + \lambda,
$$
where $h$ denotes features extracted by the encoder, $\epsilon_{\mathcal{S}}(h)$ is the source domain risk, $d_{\mathcal{H}}(\mathcal{S}, \mathcal{T})$ represents the $\mathcal{H}$-divergence (domain gap) between the source and target domains, and $\lambda$ is the irreducible ideal joint risk.
Our approach reduces $\epsilon_{\mathcal{T}}(h)$ through two mechanisms:
1) **Adaptive Fusion Weights (AFW):** adaptively assigns higher weights to semantically appropriate matches, leading to better alignment (as confirmed by the experiments on the source domain in Answer 3), thereby optimizing the source domain output and reducing $\epsilon_{\mathcal{S}}(h)$.
2) **Domain-Invariant Component Isolation:** minimizes $d_{\mathcal{H}}(\mathcal{S}, \mathcal{T})$ by isolating domain-invariant patterns (e.g., object parts) via:
$$
d_{\mathcal{H}}(\mathcal{S}, \mathcal{T}) \approx \sum_{i=1}^L \sum_{j=1}^L w_{ij}d_{\mathcal{H}}^{(ij)}(\mathcal{S}, \mathcal{T}) \leq \sum_{i=1}^L \sum_{j=1}^L d_{\mathcal{H}}^{(ij)}(\mathcal{S}, \mathcal{T})
$$
where $d_{\mathcal{H}}^{(ij)}$ denotes the inter-layer domain discrepancy. By leveraging the self-disentangle property and the orthogonal constraints from the OSD module, inappropriate matches are learned to have small $w_{ij}$, reducing the inter-layer mutual information $\mathbb{I}[\mathbf{h}_i \mathbf{h}_j^T]$, thereby tightening the bound.
## 2. Effectiveness in Swin Transformers
| Swin-B | Stage1 | Stage2 | Stage3 | Stage4 |
| :-------: | :-----------: | :------------: | :--------------: | :--------------: |
| shape | H/4 * W/4 * C | H/8 * W/8 * 2C | H/16 * W/16 * 4C | H/32 * W/32 * 8C |
| layer num | 2 | 2 | 18 | 2 |
Thank you for your suggestion. We further validated the effectiveness of the method on Swin Transformer. Since the layers in Swin Transformer do not reside in the same feature space, two additional steps are required: 1) features from stages 2 to 4 are spatially upsampled to H/4 * W/4; 2) features from stages 1 to 3 are passed through three linear mapping layers, (C, 8C), (2C, 8C), and (4C, 8C), projecting them into the same feature space as stage 4. The mapping layers are trained alongside the model during the source domain training phase. The performance results are as follows: our method is well-suited to Swin Transformer, and Swin Transformer shows a significant improvement in performance compared to ViT. Thank you again for your suggestion!
| 1-shot | FSS1000 | Deepglobe | ISIC | ChestX | Mean |
| :-----------: | :-------: | :-------: | :-------: | :-------: | :-------: |
| ViT-B | 77.80 | 33.18 | 36.99 | 51.54 | 49.88 |
| Swin-B | 79.85 | 37.24 | 39.90 | 66.73 | 55.93 |
| Swin-B + Ours | **81.02** | **46.63** | **49.19** | **83.85** | **65.17** |
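The two adaptation steps above (spatial upsampling to H/4 * W/4 and linear mapping into the stage-4 feature space) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation; the channel width `C` and all weights are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
C = 8  # illustrative base channel width (Swin-B actually uses C = 128)

# Stage outputs for a 32x32 input: H/4*W/4*C, H/8*W/8*2C, H/16*W/16*4C, H/32*W/32*8C
stages = [
    rng.standard_normal((8, 8, 1 * C)),
    rng.standard_normal((4, 4, 2 * C)),
    rng.standard_normal((2, 2, 4 * C)),
    rng.standard_normal((1, 1, 8 * C)),
]

def upsample(x, h, w):
    """Nearest-neighbour upsampling to the common H/4 * W/4 resolution."""
    return np.repeat(np.repeat(x, h // x.shape[0], axis=0), w // x.shape[1], axis=1)

# One linear mapping per earlier stage: (C,8C), (2C,8C), (4C,8C); trained jointly in practice.
w_maps = [0.01 * rng.standard_normal((s.shape[-1], 8 * C)) for s in stages[:3]]

aligned = [upsample(s, 8, 8) for s in stages]
aligned = [a @ w for a, w in zip(aligned, w_maps)] + [aligned[3]]

for a in aligned:
    print(a.shape)  # all four stages now share the (8, 8, 8C) feature space
```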
## 3. Our methods can benefit general few-shot segmentation tasks and domain generalization tasks
We measure the FSS performance of our methods on Pascal. Pascal consists of 20 classes and is set to a 4-fold configuration in the FSS setup. This means training is conducted on 5 classes, while testing is performed on 15 classes that were not seen during the training phase. The experimental results show that our method can also effectively improve the performance of general FSS tasks.
|1shot|Fold0|Fold1|Fold2|Fold3|Mean|
| ------- | :------: | :------: | :------: | :------: | :------: |
| Baseline|61.5|68.2|66.7|52.5|62.2|
| Ours |**63.1**|**70.3**|**67.8**|**55.4**|**64.2**|
Under the domain generalization setting, our method, trained on Pascal and tested on FSS1000, demonstrates an improvement in domain generalization.
| | baseline |ours|
| --- | :-----: | --- |
|FSS|72.6|75.3|
## 4. Discussion on computational efficiency
We compared our method with PATNet, HSNet, and SSP. Our method demonstrated higher computational efficiency compared to the other methods. This is because we do not require additional networks but instead leverage ViT’s inherent structure for feature separation.
| |PATNet|HSNet|SSP|Ours|
| ------------ | :----: | :---: | :---: | :-------: |
| FLOPs (G) |22.63|20.12|18.97|**18.86**|
| Training Time (h)|6.32|5.61|5.12|**5.07**|
## 5. Method section
Thank you for your suggestion; we will work on improving it in the final version. | null | null | null | null | null | null |
Causal Attribution Analysis for Continuous Outcomes | Accept (spotlight poster) | Summary: Previous studies have focused on attribution problems for binary outcomes, but binarizing continuous outcomes can lead to information loss or bias. To address this, the study introduces posterior causal estimands for evaluating multiple correlated causes in continuous outcomes. These estimands include posterior intervention effects, posterior total causal effects, and posterior natural direct effects. Under assumptions like sequential ignorability, monotonicity, and perfect positive rank, the study establishes the identifiability of these estimands and derives corresponding identification equations. A simple yet effective estimation procedure is proposed, with theoretical guarantees on asymptotic properties. The method is demonstrated through an artificial hypertension example and a real developmental toxicity dataset.
Claims And Evidence: Yes, the claims are very clear.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I have checked the correctness of theoretical claims. They are all correct.
Experimental Designs Or Analyses: It is advisable to include confidence intervals.
Supplementary Material: I have review all supplementary material.
Relation To Broader Scientific Literature: As the authors indicate, their findings have broad applications in social science, health risk assessment, legal contexts, and explainable AI. They showed in Introduction section.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: This paper is well-structured. They address a naturally arising causal question that researchers have thus far not explored.
Other Comments Or Suggestions: You have examined posterior natural direct and indirect effects for the variables (X_{k+1},...,X_p). Can you also identify the path‐specific effect, as described by Pearl (2001)? (May be future work)
Judea Pearl. 2001. Direct and indirect effects. In Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence (UAI'01). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 411–420.
Questions For Authors: 1. In line 165, you wrote, “However, this monotonic relationship may not be applicable when the outcome variable is continuous.” However, I believe that the monotonicity assumption is indeed directly applicable to continuous outcomes. Are you suggesting that posterior causal estimands cannot be identified through this monotonicity assumption?
2. Does Assumption 3.3 imply Assumption 3.2? It seems Assumption 3.3 imposes strict monotonicity, whereas Assumption 3.2 only requires non-strict monotonicity.
3. Do you requires all Assumptions 3.1, 3.2, and 3.3 for identifying postNDE, postNIE, and postTCE? The statement Theorem 3.6 may look that postNDE, postNIE, and postTCE are idenrtified from Assumptions 3.1, 3.2.
4. I would like additional clarification regarding Lemma 4.2. Specifically, what does each property guarantee?
5. Do you presume that the probability of the evidence $pr(X=x,{\cal E})\ne 0$?
Ethical Review Concerns: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: ---
We sincerely thank the reviewer for the thoughtful feedback and strong accept recommendation. Your recognition of our contribution is very encouraging, and your suggestions are highly valuable for improving the paper.
---
**Q1: Path-Specific Effects and Pearl (2001)**
**A1:** Thank you for the interesting comments. Following your suggestion, we revisited Pearl (2001) and examined how path-specific effects can be incorporated into our framework.
Given a causal graph $G$ over $(X, Y)$ and a subgraph $l \subseteq G$ representing the path(s) of interest, our goal is to isolate the effect of $X_k$ on $Y$ transmitted specifically through path $l$. Let $\mathrm{Pa}_j$ denote the set of parent nodes of $X_j$, which we decompose as $\mathrm{Pa}_j = \mathrm{Pa}_j(l) \cup \mathrm{Pa}_j(\bar{l})$, where $\mathrm{Pa}_j(l)$ includes parents along path $l$, and $\mathrm{Pa}_j(\bar{l})$ includes parents outside of it. To activate only path $l$, we fix the non-path parents $\mathrm{Pa}_j(\bar{l})$ to their counterfactual values under the reference setting $X_k = 0.$
The $l$-specific counterfactual outcome is defined as $Y^l_{X_k = x_k} = (X_{p+1})^l_{X_k = x_k}$, computed under the modified system with only path $l$ active. When $X_k = 0$, this reduces to the usual counterfactual: $Y^l_{X_k = 0} = Y_{X_k = 0}$, since all variables follow the reference path.
We define the posterior path-specific effect as
$$
\mathrm{postPSE}(X_k \Rightarrow Y; x, \mathcal{E}, l) = E\left( Y^l_{X_k = 1} - Y_{X_k= 0} \mid x, \mathcal{E}\right).
$$
The second term is identifiable via Theorem 3.6. Identifying the first term requires a finer strategy that accounts for node-level structures along path $l$ and their interaction with the evidence $ \mathcal{E} $. We leave this technical development for future work.
---
**Q2: Monotonicity with Continuous Outcomes**
**A2:** Thank you for the insightful question. Our intention was not to suggest that monotonicity is inappropriate for continuous outcomes, but rather to note that a strong condition such as $Y_x\leq Y_{x'}$ for all $x \preceq x'$ may be restrictive in certain settings. To illustrate this, consider a linear model:
$$Y = \alpha_0 + \alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_p X_p + \epsilon.$$
If we assume $Y_x \leq Y_{x'}$ whenever $x \preceq x'$, then this implies that increasing any component of the exposure variables $x$ (with others held fixed or increased) must not decrease $Y$. In a linear model, this requirement implies $\alpha_j \geq 0$ for all $j = 1, \ldots, p$, which may limit model flexibility.
We hence adopt the perfect rank assumption as a more general alternative. It includes linear models and permits identification without requiring all coefficients to be non-negative.
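This contrast can be checked numerically. In the sketch below (an assumed two-treatment linear model with a negative coefficient and shared additive noise, purely illustrative), outcome monotonicity fails for every unit, yet individuals' ranks are preserved across treatment levels:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical linear model Y = x1 - 0.5*x2 + eps; the same eps enters every
# counterfactual outcome (additive, rank-preserving noise).
eps = rng.normal(0.0, 1.0, 1_000)
y_00 = 1.0 * 0 - 0.5 * 0 + eps  # Y under x  = (0, 0)
y_01 = 1.0 * 0 - 0.5 * 1 + eps  # Y under x' = (0, 1), where x <= x' componentwise

# Outcome monotonicity Y_x <= Y_{x'} is violated for every unit ...
print((y_01 < y_00).mean())  # → 1.0
# ... but individuals keep the same ordering under both treatments (perfect rank).
print(np.array_equal(np.argsort(y_00), np.argsort(y_01)))  # → True
```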
We acknowledge that this point was not clearly explained and will revise the manuscript to clarify it.
---
**Q3: Relationship between Assumptions 3.2 and 3.3**
**A3:** Thank you for this insightful question. Assumption 3.3 pertains solely to the outcome variable $Y$ (denoted as $W_{p+1}$), whereas Assumption 3.2 pertains to the treatment variables $W_2, \ldots, W_p$. Since they apply to different variable sets, Assumption 3.3 does not imply Assumption 3.2. We will make this distinction clearer in the revised text.
---
**Q4: Required Assumptions for Theorem 3.6**
**A4:** Thank you for this careful and insightful observation. You are correct—our intention was to show that the identification expression of postNDE, postNIE, and postTCE in the *first part* of Theorem 3.6 relies only on Assumptions 3.1 and 3.2. However, identification of the counterfactual term $\mathbb{E}(Y_{x_k^\ast, d_k^\ast} \mid x, \mathcal{E})$ in the *second part* of the theorem does require Assumption 3.3 via Lemma 3.4.
We appreciate your comments and will revise the statement of Theorem 3.6 to explicitly include Assumption 3.3.
---
**Q5: Clarification on Lemma 4.2**
**A5:** Thank you for your thoughtful question. The properties of the function $\rho_{x\to x'}(\cdot;y)$ serve important theoretical roles:
- **Continuous differentiability** ensures that gradient-based optimization is feasible and reliable.
- **Weak convexity** guarantees the existence of at least one global minimizer.
- **Strict convexity** (within the interior of the support $S_{Y_{x'}}$) guarantees uniqueness of the minimizer, which ensures that the counterfactual mapping $\phi_{x \to x'}(y)$ is well-defined and stable.
Together, these properties underpin the theoretical soundness of our estimation method and justify the nonparametric approach adopted. We will elaborate on this in the revised manuscript.
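One standard way to realize such a counterfactual mapping under rank preservation is quantile matching: send $y$ to the value at the same quantile of the other outcome distribution, $F_{Y_{x'}}^{-1}(F_{Y_x}(y))$. A minimal NumPy sketch (the Gaussian outcome distributions are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative samples of the outcome under two treatment configurations.
y_x = rng.normal(0.0, 1.0, 100_000)   # Y_x   ~ N(0, 1)
y_xp = rng.normal(2.0, 1.0, 100_000)  # Y_x'  ~ N(2, 1)

def quantile_match(y, from_samples, to_samples):
    """Map y to the value at the same quantile: F_to^{-1}(F_from(y))."""
    q = (from_samples <= y).mean()
    return np.quantile(to_samples, q)

# Under perfect positive rank, a unit observed at y = 1 under x keeps the same
# location within its distribution under x' (roughly 3 here).
print(quantile_match(1.0, y_x, y_xp))
```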
---
**Q6: On the Support of the Conditioning Set**
**A6:** Thank you for the helpful comment! Yes, we require $\mathrm{pr}(X = x, \mathcal{E}) \neq 0$ to ensure that the conditional distributions are well-defined. We have made this assumption explicit in the revised manuscript for clarity.
--- | Summary: This paper focuses on the causal attribution analysis, which answers retrospective questions like "given that an outcome has occurred, how can we figure out how much each potential cause contributed to it?" Most existing literature on this are introduced with binary outcome variables, which is not the case in real-world analysis.
Therefore, in this paper, authors define a new set of "posterior causal estimands" for continuous outcomes. These include: posterior total causal effect, posterior natural direct effect, posterior natural indirect effect, and posterior intervention causal effect. The authors develop identifiability of these measures under certain assumptions. Further, for estimation of these quantities, a two-step approach is proposed.
---
## update after rebuttal:
I thank the authors for the response. I keep my score of acceptance.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes. I read the measures' formulations, the assumptions, and the theoretical claims about identifiability. I skimmed through the proofs. The results seem correct to me but I cannot guarantee.
Experimental Designs Or Analyses: Yes.
Supplementary Material: I skimmed through the proofs and the real data analysis of risk factors on abnormal weight.
Relation To Broader Scientific Literature: /
Essential References Not Discussed: /
Other Strengths And Weaknesses: Strengths:
1. The problem studied is necessary and crucial. It addresses a clear gap by extending causal attribution analysis to continuous outcomes, which is quite novel.
2. The technical development is well-grounded and rigorous. The paper is also well structured, with formulations of quantities built first, and then the identifiability guarantees under certain assumptions.
3. The real-world examples on hypertension and abnormal weights are interesting. One of my questions is: when only observational data is available, do we have any metrics to evaluate the quality of the estimated quantities, in terms of accuracy to the true quantities from counterfactual experiments?
Weaknesses:
1. The assumptions are strong, especially for the Assumption 3.2 (Monotonicity) so that there are "no prevention" relations. Are these assumptions in any way testable?
2. Beyond listing the four posterior causal effect estimands, a discussion and practical guide on which to use under different scenarios is expected. On top of that, an analysis showcasing the connections between them, and to their counterparts in the binary outcome case, is also expected.
Other Comments Or Suggestions: /
Questions For Authors: /
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the constructive feedback and the positive recommendation. We appreciate your recognition of our contribution and your valuable suggestions, which we will address in the revised manuscript.
---
**Q1.** *The assumptions are strong, especially for Assumption 3.2 (Monotonicity), so that there are “no prevention” relations. Are these assumptions in any way testable?*
**A1.** Thank you for this insightful question. We agree that Assumption 3.2 is strong and may not always hold in practice. As discussed in Section 3.2 of the manuscript, although this assumption is not confirmed in the statistical sense—that is, it cannot be verified solely from observed data—it can be rejected under Assumption 3.1 (sequential ignorability), by checking whether certain inequality constraints on observed conditional probabilities hold in the data.
To illustrate, consider the case with two binary treatment variables $X_1$ and $X_2$. The monotonicity assumption implies that if $X_1 = 1$, then $X_2$ must be 1; whereas if $X_1 = 0$, then $X_2$ can be either 0 or 1. This leads to the testable implication:
$\mathrm{pr}(X_2 = 1 \mid X_1 = 0) \leq \mathrm{pr}(X_2 = 1 \mid X_1 = 1).$
If we observe in the data that $\mathrm{pr}(X_2 = 1 \mid X_1 = 0) = 0.7$ and $\mathrm{pr}(X_2 = 1 \mid X_1 = 1) = 0.5$, this inequality is violated, providing empirical evidence against monotonicity.
We will incorporate these testable implications and further concrete illustrations into the revised manuscript.
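A minimal sketch of this falsification check on synthetic data (the treatment probabilities are illustrative and chosen to be consistent with monotonicity):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic binary treatments: X1 = 1 raises the chance of X2 = 1.
n = 10_000
x1 = rng.binomial(1, 0.5, n)
x2 = rng.binomial(1, np.where(x1 == 1, 0.8, 0.3))

# Testable implication of monotonicity: pr(X2=1 | X1=0) <= pr(X2=1 | X1=1).
p0 = x2[x1 == 0].mean()
p1 = x2[x1 == 1].mean()
print(p0, p1, p0 <= p1)  # a violation (p0 > p1) would falsify the assumption
```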
---
**Q2.** *Except for listing the four posterior causal effect estimands, a discussion and practical guide on which to use under different scenarios is expected.*
**A2.** Thank you for the helpful suggestion. To clarify the roles of the proposed posterior causal effect estimands, we have added a summary table in the revised manuscript. A simplified version is provided below:
| Estimand | Definition | Interpretation |
|----------|------------|----------------|
| **PostTCE** | $E(Y_{X_k=1} - Y_{X_k=0} \mid x, \mathcal{E})$ | Total effect of switching $X_k$ from 0 to 1, given treatments $x$ and observed event $\mathcal{E}$. |
| **PostNDE** | $E(Y_{X_k=1, D_k(a_k, 0)} - Y_{X_k=0} \mid x, \mathcal{E})$ | Direct effect of changing $X_k$ while holding mediator fixed at its value under the reference value $X_k = 0$. |
| **PostNIE** | $E(Y_{X_k=1} - Y_{X_k=1, D_k(a_k, 0)} \mid x, \mathcal{E})$ | Indirect effect induced by changes in the intermediate variables $D_k$. |
| **PostICE** | $E(Y_{X=x'} - Y \mid x, \mathcal{E})$ | Effect of changing the entire treatment vector from $x$ to $x'$. |
| **ITE** | $Y_{x'} - Y_{x^*}$ | Individual-level contrast between two treatment configurations. |
We have also revised the illustrative example. Suppose an individual has exposure profile $x = (\text{E}, \text{D}, \text{Hb}, \text{HD}, \text{CP})$ and observed outcome $Y > 140$. Given the observed evidence, the posterior estimands enable retrospective attribution in the following way:
- PostTCE assesses the overall effect of a single factor (e.g., lack of exercise) on blood pressure.
- PostNDE identifies the portion of the effect that is direct.
- PostNIE captures the indirect path through variables like heart disease.
- PostICE quantifies the joint impact of multiple exposures (e.g., poor diet and no exercise).
- ITE compares the individual's outcome under two hypothetical exposure profiles.
We will include this practical guidance in the revised manuscript to assist interpretation and application.
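As an illustration of the postTCE row above, the estimand can be approximated by simulation in a toy linear SCM with shared additive noise (all coefficients and the evidence threshold are illustrative; this is not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy SCM: X1 -> X2 -> Y and X1 -> Y, with illustrative coefficients.
n = 50_000
a1, a2 = 2.0, 1.5
x1 = rng.binomial(1, 0.5, n)
u2 = rng.normal(0.0, 1.0, n)             # mediator noise, shared across worlds
x2_do0 = (u2 > 0.5).astype(float)        # X2 when X1 is set to 0
x2_do1 = (0.8 + u2 > 0.5).astype(float)  # X2 when X1 is set to 1
x2 = np.where(x1 == 1, x2_do1, x2_do0)

eps = rng.normal(0.0, 1.0, n)            # outcome noise, shared across worlds
y = a1 * x1 + a2 * x2 + eps              # factual outcome
y_do0 = a2 * x2_do0 + eps                # counterfactual Y_{X1=0}
y_do1 = a1 + a2 * x2_do1 + eps           # counterfactual Y_{X1=1}

# postTCE(X1 => Y | x1 = 1, evidence Y > 3): average counterfactual contrast
# within the subpopulation matching the observed treatment and evidence.
sub = (x1 == 1) & (y > 3.0)
post_tce = (y_do1[sub] - y_do0[sub]).mean()
print(post_tce)  # lies between a1 = 2.0 and a1 + a2 = 3.5 by construction
```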
---
**Q3.** *An analysis to show the connections between the estimands, and their counterparts in the binary outcome case, is expected.*
**A3.** Thank you for your helpful comments. In the revised manuscript, we provide a unified set of definitions for the five posterior causal effect estimands under both continuous and binary outcomes, and clarify their applicability and identifiability.
Specifically, let $Y^*$ denote the binary outcome. Following the framework of Lu et al. (2023), we define the binary versions of the posterior causal estimands as follows:
- PostTCE: $\operatorname{postTCE}^*(X_k \Rightarrow\mathcal{E} \mid x,Y^* = 1) = E(Y^*_{X_k=1} - Y^*_{X_k=0} \mid x, Y^* = 1)$
- PostNDE: $\operatorname{postNDE}^*(X_k \Rightarrow\mathcal{E} \mid x,Y^* = 1) = E(Y^*_{X_k=1, D_k(a_k, 0)} - Y^*_{X_k=0} \mid x, Y^* = 1) $
- PostNIE: $\operatorname{postNIE}^*(X_k \Rightarrow \mathcal{E} \mid x, Y^* = 1) = E(Y^*_{X_k=1} - Y^*_{X_k=1, D_k(a_k, 0)} \mid x, Y^* = 1) $
- PostICE: $\operatorname{postICE}^*(Y^*_{x'} \mid x, Y^* = 1) = E(Y^*_{x'} - Y^* \mid x, Y^* = 1) $
While the definitions of $\operatorname{postNDE}^*$ and $\operatorname{postNIE}^*$ are conceptually consistent with their continuous-outcome counterparts, their identifiability under the binary setting has not yet been fully established in the literature.
We will provide a unified presentation and discuss this point in the revised manuscript.
--- | Summary: This paper addresses the causal attribution problem for continuous outcome variables, an interesting and realistic scenario compared to binary outcomes. It introduces a set of posterior causal estimands to retrospectively analyze causal attribution: PostTCE, PostNDE, PostNIE, PostICE. Under assumptions of sequential ignorability, monotonicity, and perfect positive rank, the authors demonstrate the identifiability of these estimands, and propose an efficient two-step estimation method based on quantile matching. The theories are validated through hypertention dataset and toxicity risk dataset.
Claims And Evidence: The claims regarding the identifiability of posterior causal estimands seem to be sound under stated assumptions.
Methods And Evaluation Criteria: The proposed evaluation methods and datasets are relevant and standard for the issue being discussed.
Theoretical Claims: I mainly checked the proofs of Theorem 3.6, but not for other corollaries. The part that I read is correct.
Experimental Designs Or Analyses: Yes, I checked the experimental designs. They adequately satisfy the assumptions and the analysis is valid.
Supplementary Material: Yes, I checked the supplementary part for simulation details.
Relation To Broader Scientific Literature: This paper mainly focus on extending attribution analysis from binary outcomes to continuous outcomes. It builds upon the line of work by Pearl (2000), Dawid et al. (2014), and especially recent studies by Lu et al. (2023) and Li et al. (2023).
Essential References Not Discussed: There are related work discussing causal attribution problems in the context of Directed Acyclic Graphs (DAG) [1] and using Shapley Values [2]. It might be interesting to discuss the relation between them and posterior estimands.
[1] Schamberg, Gabriel, William Chapman, Shang-Ping Xie, and Todd P. Coleman. "Direct and indirect effects—An information theoretic perspective." *Entropy* 22, no. 8 (2020): 854.
[2] Jung, Yonghan, Shiva Kasiviswanathan, Jin Tian, Dominik Janzing, Patrick Blöbaum, and Elias Bareinboim. "On measuring causal contributions via do-interventions." In *International Conference on Machine Learning*, pp. 10476-10501. PMLR, 2022.
Other Strengths And Weaknesses: Overall, I think the paper addresses an important and underexplored issue in causal inference for continuous outcomes. There are certain parts of the proofs that I didn't follow, since posterior causal estimands are not my area of expertise and the proofs are largely built upon [Lu 2023] and [Li 2023] and quite dense, but based on my current understanding, the claims seem to be sound.
Other Comments Or Suggestions: Typo: L106 on the right column: PostDNE -> PostNDE
Questions For Authors: Given that the assumptions are similar to those of [Lu 2023] and [Li 2023] for binary outcomes, (i) how likely are Assumptions 3.2 and 3.3 to be satisfied in general for continuous outcomes? My worry is that the assumptions would be too strong. (ii) How difficult is it to justify or test them?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the positive evaluation and constructive suggestions, which have been very helpful in improving the clarity and quality of our manuscript.
---
**Q1.** Discuss relation to essential references.
**A1.** Thank you for pointing out the related work by Schamberg et al. (2020) and Jung et al. (2022). Both are highly relevant to our goal of understanding and quantifying causal contributions, and—like our study—they offer principled frameworks for attributing the effects of multiple variables. While we share a common objective, our approach differs in several important ways.
First, our method adopts a retrospective **causal attribution perspective**, conditioning on the observed outcome $Y$ to assess how each treatment component contributed to that specific realization. In contrast, Schamberg et al. and Jung et al. take an **effects-of-causes** perspective, focusing on how interventions influence the distribution of outcomes. Second, we introduce posterior causal estimands, defined based on observed treatment–outcome pairs, which allow for individual-level attribution. In comparison, Schamberg et al. rely on population-level, information-theoretic measures, while Jung et al. use do-Shapley values to quantify feature contributions under hypothetical interventions.
We will cite both papers and discuss these connections in the revised manuscript.
---
**Q2.** *How likely are Assumptions 3.2 and 3.3 to be satisfied in general for continuous outcomes?*
**A2.** Thank you for raising this interesting question. We believe that in many practical applications with temporal structures, progressive interventions, or latent rank-based heterogeneity, Assumptions 3.2 and 3.3 are generally plausible, or at least serve as reasonable modeling approximations.
- **Assumption 3.2 (Monotonicity)** concerns the relationship among the treatment variables $X_1, \ldots, X_p$, rather than between treatments and the continuous outcome. Specifically, it assumes that earlier treatment components have non-negative effects on subsequent ones. This assumption is likely to be satisfied in sequentially administered or progressive treatments, where prior treatments increase the chance or intensity of receiving later ones. For example, in multi-stage educational or medical programs, earlier stages (e.g., early screening or primary education) often facilitate participation in later ones (e.g., advanced therapies or higher education).
- **Assumption 3.3 (Perfect Positive Rank)** is more abstract but remains plausible in many applications. It assumes that the individual-level outcome $Y$ is determined by a stable, unobserved factor in a strictly increasing way. This holds approximately when treatment primarily shifts the overall level of the outcome without changing individuals' relative ranking. For example, if a person has the lowest blood pressure under treatment, he is also expected to have the lowest blood pressure without treatment. Such rank-preserving behavior is commonly assumed in structural equation models, linear models, and additive models such as
$$
Y = \alpha_0 + \alpha_1 X_1 + \cdots + \alpha_p X_p + \epsilon, \quad \text{or} \quad Y = f(X_1, \ldots, X_p) + \epsilon,
$$
where $\epsilon$ captures unobserved heterogeneity.
We will incorporate the above discussions and examples in the revised manuscript.
---
**Q3.** *How difficult is it to justify or test these assumptions?*
**A3.** Thank you for raising this insightful point. While Assumptions 3.2 and 3.3 are not statistically testable in the strict sense, their plausibility can be assessed through empirical strategies and domain expertise.
- **Assumption 3.2** is **falsifiable** under Assumption 3.1 (sequential ignorability), by verifying inequality constraints on observed conditional probabilities. For example, in a setting with binary treatment variables $X_1$ and $X_2$, monotonicity implies $\operatorname{pr}(X_2 = 1\mid X_1 = 0) \leq \operatorname{pr}(X_2 = 1\mid X_1 = 1)$. If we observe in the data that $\operatorname{pr}(X_2 = 1\mid X_1 = 0) = 0.7$ and $\operatorname{pr}(X_2 = 1\mid X_1 = 1) = 0.5$, then monotonicity is violated, offering empirical evidence against the assumption.
- For **Assumption 3.3**, we first suggest assessing, in real data, whether a common unobserved rank variable exists across different treatment conditions. This can serve as an empirical check of the assumption’s plausibility. In addition, one may fit linear or additive models, examine residual patterns, and evaluate model fit using criteria such as AIC, BIC, or R$^2$. If these models can explain a substantial proportion of the outcome variation, this would provide indirect support for the reasonableness of the assumption.
We will incorporate these discussions in the revised manuscript.
---
**Q4.** *Line 106: “PostDNE” → “PostNDE”*
**A4.** Thank you for this careful observation. We will correct it in the revised manuscript.
Summary: The submission describes a method for counterfactual analysis for continuous outcomes.
Identifiability conditions and results are given. Moreover, based on minimization of a certain loss, the authors propose an estimator and derive some theoretical properties.
A practical example using simulated data shows the results of the proposed methods; moreover, a real-data problem is analyzed.
Claims And Evidence: Yes clear
Methods And Evaluation Criteria: Yes they make sense
Theoretical Claims: I did not check proofs in the supplementary, the theoretical claims seems sounds.
Experimental Designs Or Analyses: The experimental design and analysis seem sound and correct.
They mainly follow the experiments of Lu et al. (2023).
Supplementary Material: No
Relation To Broader Scientific Literature: The submission extends the work of Lu et al. to the case of continuous outcomes.
The setting, definitions, and experiments are mainly taken from Lu et al.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The paper is well written and clear, it can be followed well.
The originality is somehow restricted to the extension of a previous approach to continuous outcomes.
Other Comments Or Suggestions: None
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ---
**Q1.** *The paper is well written and clear; it can be followed well. The originality is somehow restricted to the extension of a previous approach to continuous outcomes.*
**A1.** We sincerely thank the reviewer for the positive and encouraging feedback. We truly appreciate your kind recognition of the manuscript’s clarity, the soundness of the proposed methodology, and the overall quality of both the theoretical and empirical components.
While our work builds upon the framework introduced by Lu et al. (2023), our aim is not simply to apply it to continuous outcomes, but to extend it in several meaningful directions. In particular, we
- formulate a new class of posterior causal attribution estimands suitable for continuous responses,
- establish their identification under structured assumptions tailored to the continuous setting, and
- design a two-step estimation strategy that facilitates practical implementation and retrospective interpretation.
We are grateful for your comments and will revise the manuscript to better highlight these contributions and more clearly position our work in relation to the existing literature.
---
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the responses.
I will keep my positive score
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the positive evaluation and continued support of our work. | null | null | null | null | null | null |
Discovering Global False Negatives On the Fly for Self-supervised Contrastive Learning | Accept (poster) | Summary: This paper presents an approach to detect false negative pairs in contrastive learning. Contrary to previous approaches, it detects FN globally on the pretraining dataset, and it is computationally efficient as it applies SGD for each anchor and does not require computing a clustering of the whole dataset. The authors conduct extensive experiments to compare and validate the approach.
Claims And Evidence: This paper claims to have an efficient and effective approach for FN detection in contrastive learning. The authors do not overclaim, and they conduct the necessary experiments to show that their approach works well.
It would have been nice to see other downstream tasks than classification to validate further the quality of the learned representation. The contributions should state clearly that the findings are only validated in the context of classification and that further experiments are needed to evaluate performance of representation learning for other tasks.
Methods And Evaluation Criteria: Proposed datasets and metrics are sufficient and appropriate for evaluation of the method.
Theoretical Claims: I did not check the proof in the supplementary material in full details, but it seems correct.
Experimental Designs Or Analyses: Experiments are sufficient to support the claim about the FN detection method being good and efficient.
It would be good to display computational overhead results compared with other approaches in the core of the paper, since it is part of the claimed advantages of the method.
Supplementary Material: I did not read the supplementary material in details.
Relation To Broader Scientific Literature: The literature review is very complete, it flows logically and presents a nice introduction to the topic.
Essential References Not Discussed: None
Other Strengths And Weaknesses: It would be good to provide the reader with a rough idea of the scale of the amount of FN in practical CL scenarios. For example, for practical CL training on ImageNet, how many FN do we have on average? Also, it would be good to give an idea of the impact these FN have on final results (results on a particular downstream task can be shown in an early figure). This should be mentioned explicitly in the introduction, as it would help readers grasp the importance of the proposed line of work.
By removing examples that are close to each other in the latent space as FN, isn't there a risk of removing hard true negatives, thus making the contrastive learning too simple and thus reducing the performance of the representation learned? --> In other words, could it decrease the performance for fine grained classification downstream tasks? This could be tested in the experiments by including fine grained datasets.
"Moreover, we assume that the top α% most similar negative data share similar semantics with the anchor data based on their current representations" --> This is a rather strong assumption. This should be backed up by experiments using the labels. What proportions of detected FN are actually FN in practice?
Other Comments Or Suggestions: See above
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your review and questions.
**Q1: It would have been nice to see other downstream tasks than classification to validate further the quality of the learned representation.**
**A:** Thanks for pointing this out. We agree that the unimodal performance has only been validated in the context of classification. In the bimodal scenario, please note we use the Datacomp Benchmark. This benchmark includes 38 zero-shot downstream tasks, which also include cross-modal image-text retrieval tasks. As suggested, we will modify the contributions to clarify the tasks used for validation.
**Q2: Display computational overhead results in the core of the paper.**
**A:** Thanks for your comment. We will move the computational overhead result to the main paper.
**Q3: Introduction, for practical CL training on ImageNet, how many FN do we have on average? Also, it would be good to give an idea of the impact these FN have on final results (results on a particular downstream tasks can be shown in an early figure).**
**A:** During training on ImageNet100, the empirical average of FN is 1%, that's around 20k FN per batch with a batch size of 1024 and 325 with a batch size of 128. Thanks for your suggestion. We agree both of these questions would provide a better idea of the importance of this line of work. We will make changes to the introduction to address both.
**Q4: GloFND performance for fine-grained classification downstream tasks?**
**A:** Thanks for the question, this is something we could have done a better job pointing out. Table 2 presents the performance on several fine-grained downstream datasets such as Stanford Cars, Oxford 102 Flowers, Oxford-IIIT Pets, Caltech-101, and Food-101.
**Q5: About the top $\alpha$% assumption. What proportions of detected FN are actually FN in practice?**
**A:** Table 1 presents "False Negative Identification" metrics, which evaluate what proportion of detected FN are actually FN in practice. As it can be observed, GloFND achieves much better precision, recall, and f1-score compared to FNC. Answering your question, the last epoch mean for GloFND is 48.40% of predicted FN are true FN, compared to 27.57% for FNC. | Summary: Previous contrastive learning methods may generate negative sample pairs with similar semantics when constructing negative samples. Different from them, this paper introduces a method that automatically learns on the fly the threshold for each anchor data to identify its false negatives during training. Meanwhile, it can globally detect false negative samples rather than locally within the mini-batch. Experiments are done to verify the effectiveness of the method.
Claims And Evidence: The claims made in the opinion are supported by clear and compelling evidence and further validated by experiments.
Methods And Evaluation Criteria: Yes, the method proposed in this paper is a good research direction in the field of contrastive learning.
Theoretical Claims: The false negative sample detection method proposed in this paper has no obvious problems in theory.
Experimental Designs Or Analyses: This paper has basically no problems in terms of experimental design and analysis, but one problem is that the authors claim that GLOFND is not limited to any specific contrastive learning (CL) technique; however, there does not seem to be direct evidence supporting this claim in the experiments.
Supplementary Material: Yes, I read all the supplementary materials.
Relation To Broader Scientific Literature: In this paper, the method "GLOFND" is proposed from the perspective of detecting false negative samples. How to reasonably define false negative samples and eliminate them is an important research direction in contrastive learning, with a real impact on model performance.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
(1) In this paper, a novel method is proposed to automatically find false negative samples, and its claims are validated by experiments. The article has clear ideas. The method is simple and effective.
(2) The structure of this paper is clear.
(3) The proposed method achieves better performance and experimental results in some cases.
Weaknesses:
(1) The authors claim that GLOFND is not limited to any specific contrastive learning (CL) technique. However, they only integrate it with SogCLR and do not demonstrate its effectiveness on classical contrastive learning algorithms, thereby reducing the credibility of their claims.
(2) The novelty of this method is limited, and its generalizability does not seem to be well demonstrated in the paper.
(3) The experiments in this paper seem insufficient. The authors focus on how to find false negative samples but ignore generality studies across specific contrastive or self-supervised learning methods.
(4) The authors' focus does not seem to be on downstream task performance; adding some downstream task evaluations, such as object detection, might be more convincing.
Other Comments Or Suggestions: (1) The authors mention that previous methods require computing cosine similarity across the entire dataset when selecting the negative samples most similar to the anchor. However, it appears that they have not effectively addressed this computational overhead.
(2) The authors should apply their method to a broader range of contrastive learning (CL) approaches to further validate its effectiveness and strengthen its credibility.
Questions For Authors: There's no question.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for your review and questions.
**Q1: About GloFND limitations to a specific contrastive learning (CL) technique and integration on classical contrastive learning algorithms.**
**A:** Thanks for this question, this is something we could have done a better job explaining. To clarify, the computation of $\lambda$ in GloFND is independent of the contrastive loss used, as it relies solely on the embedding similarity of negative pairs. This makes it applicable across different contrastive learning methods. In our experiments, we applied GloFND to unimodal SogCLR, bimodal SogCLR and FastCLIP. As you suggested, we have additionally run unimodal GloFND with SimCLR with results shown in Reviewer tdJR's Q2 answer. However, it is worth noting that prior work [1] has shown that SogCLR outperforms SimCLR. Additionally, SimCLR requires a large batch size, and its performance can be more sensitive to the impact of false negatives as batch size increases.
[1] Yuan, Zhuoning, Yuexin Wu, Zi-Hao Qiu, Xianzhi Du, Lijun Zhang, Denny Zhou, and Tianbao Yang. “Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance.” arXiv, September 20, 2022. http://arxiv.org/abs/2202.12387.
**Q2: The novelty of this method is insufficient, and its generalizability does not seem to be well demonstrated in the paper.**
**A:** We appreciate the reviewer’s feedback and would like to clarify the novelty and generalizability of our approach. Formulating false negative discovery as identifying the k-th largest similarity across the **entire dataset** is, to the best of our knowledge, a novel contribution. This formulation enables a simple yet effective algorithm based on stochastic optimization for efficient computation.
We have conducted experiments to demonstrate its applicability in unimodal CL (semi-supervised and transfer learning) and bimodal CL. Moreover, to further demonstrate generalizability, we have conducted additional experiments with SimCLR, reinforcing its broad applicability (see the previous question). We appreciate the reviewer’s concerns and are open to suggestions that could further strengthen this aspect.
**Q3: The experiments in this paper seem to be insufficient. The author focuses on how to find false negative samples, but ignores the generality research in specific comparative learning or self-supervised learning methods.**
**A:** To address your concern, we have conducted additional experiments using SimCLR (see Reviewer tdJR Q2). We have also conducted an experiment fine-tuning OpenAI's CLIP model on CC3M (see Reviewer tdJR Q3).
**Q4: The author's focus does not seem to be on downstream task performance.**
**A:** We would like to draw the reviewer’s attention to Tables 2 and 3, which present results on downstream task performance. In Table 2, we pretrain on ImageNet100, after which a logistic regression classifier is trained on top of the frozen embeddings for multiple unimodal downstream datasets. In Table 3, we pretrain on CC3M and evaluate performance on 38 zero-shot downstream tasks using the DataComp benchmark. Notably, GloFND improves downstream performance in most scenarios.
**Q5: Addressing computational overhead of computing cosine similarity across the entire dataset when selecting the most similar negative samples to the anchor.**
**A:** Thank you for the question. While GloFND computes a global threshold for the entire dataset, it does not require computing cosine similarity across all data points in the dataset. Instead, all computation is done in the mini-batch. This is what makes GloFND shine.
GloFND frames the problem as a stochastic optimization task, allowing for efficient computation. Equation 4 details how the $\lambda_I$ values are obtained, and as shown, GloFND relies only on mini-batch computations to optimize $\lambda_i$.
Importantly, the pairwise similarities are already computed as part of the contrastive loss, meaning the only additional computational overhead introduced by GloFND is:
1. Updating the $\lambda$ values for samples in the mini-batch, which involves a simple gradient computation (Equation 4).
2. Filtering false negatives, which is also done via matrix operations by comparing similarities against the computed $\lambda$ values and applying masking.
Both can run efficiently on GPUs. Overall, our method consists of basic matrix computations and runs in linear time with respect to the number of pairs in a batch ($O(B^2)$, where $B$ is the batch size). This overhead is minimal compared to the cost of cosine similarity computations and the forward/backward passes. For the unimodal case, the per-epoch computation time increases by only 2% (from 427s to 435s), demonstrating the efficiency of our approach.
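For illustration, the two mini-batch operations above can be sketched in a few lines of numpy (function names are ours, and the threshold step here is a generic quantile-tracking update standing in for the exact Equation 4 update; this is a sketch, not the actual implementation):

```python
import numpy as np

def update_thresholds(lam, sims, alpha, lr):
    """One stochastic step of the per-anchor thresholds lam_i.

    lam  : (B,) thresholds for the anchors appearing in this mini-batch
    sims : (B, M) similarities of each anchor to its M in-batch negatives
    alpha: target fraction of negatives to flag as candidate false negatives
    lr   : step size
    """
    # Fraction of this batch's negatives whose similarity exceeds lam_i.
    frac_above = (sims > lam[:, None]).mean(axis=1)
    # Quantile-style gradient step: raise lam_i while more than an
    # alpha-fraction of negatives lies above it, lower it otherwise.
    return lam + lr * (frac_above - alpha)

def false_negative_mask(lam, sims):
    # Negatives more similar to the anchor than its learned threshold
    # are masked out of the contrastive loss.
    return sims > lam[:, None]
```

Both functions are plain $O(B \cdot M)$ matrix operations on similarities that the contrastive loss already computes, which is what keeps the overhead minimal.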
Authors empirically test their approach on standard vision benchmarks such as ImageNet-100, CIFAR, DTD Caltech and Oxford datasets.
## Update after rebuttal
After reading the author's comments and other reviewers' comments, I maintain my score.
Claims And Evidence: - This work identifies the issue of determining false negative pairs in a global and efficient manner
- The proposed approach is in line with the authors' goal
- The proposed approach can be integrated into to many existing contrastive learning framework
Methods And Evaluation Criteria: - The evaluation criteria are in principle okay; my main concern is the lack of comparisons with well-established baselines [see Experimental Designs Or Analyses].
- The ablation study is thorough
Theoretical Claims: - I think the theoretical claims are well motivated, although I would make a stronger connection to meta-learning literature (as essentially the thresholds $\lambda_i$ will change the main objective function)
Experimental Designs Or Analyses: I think that the main weakness of this work lies in the experimental setting:
- Larger datasets such as ImageNet-1k are missing (I think experiments on it should be doable)
- The only comparison is with FNC, which selects local false negative samples within a minibatch
- GLOFND is only applied to SogCLR
For these reasons I think that:
- Some more experiments are required, showcasing GLOFND applied on other baselines (SimCLR, VICReg, Barlow Twins, etc.)
- GLOFND requires the network to be sufficiently trained to work: this makes sense to me, and I think that applying GLOFND for fine-tuning a large pre-trained model would be an excellent use case. I would like to see some experiments fine-tuning existing large models to reduce the impact of false-negative pairs (authors only tested with SogCLR and FastCLIP)
- A comparison with finding global thresholds in the entire dataset (even if computationally expensive) should be added
Supplementary Material: I briefly read the supplementary material.
Relation To Broader Scientific Literature: This work proposes an adaptive method to find thresholds for filtering out false negative samples. It may represent an interesting contribution to the field.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: - The paper is very well written and the objective is clear
- I think Eq. 2 would benefit from a clearer explanation
Other Comments Or Suggestions: - L190-L191 are not clear
Questions For Authors: - In Fig. 5d, authors show the results of GLOFND with varying starting epochs; I wonder why the performance decreases after epoch 70?
- Improvements in the bimodal setting are less significant than in the unimodal setting, what could be the reason for it?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your review and questions.
**Q1: Experiment on larger dataset than ImageNet**
**A:** We would like to point the reviewer to Table 3, where we tested GloFND on CC3M, which is larger than ImageNet-1k, with 2.7 million image-text pairs.
**Q2: GLOFND applied on other baselines (SimCLR, VICReg, Barlow Twins, etc.)**
**A:** We agree that it would be interesting to apply GloFND to other contrastive losses. To the best of our knowledge, VICReg and Barlow Twins do not use negative pairs, so the issue of false negative detection is not directly applicable. Regarding SimCLR, SogCLR has been previously shown [1] to outperform SimCLR, which requires a large batch size. Nevertheless, we have tried SimCLR using a batch size of 512, with results shown below.
| Method | 100% | 10% | 1% | 0.1% | Average |
| -------- | ----- | ----- | ----- | ----- | ------- |
| Baseline | 76.88 | 73.38 | 66.40 | 33.56 | 62.56 |
| FNC | 76.90 | 73.10 | 64.88 | 34.20 | 62.27 |
| GloFND | **77.14** | **73.66** | **66.50** | **35.58** | **63.22** |
[1] Yuan, Zhuoning, Yuexin Wu, Zi-Hao Qiu, Xianzhi Du, Lijun Zhang, Denny Zhou, and Tianbao Yang. “Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance.” arXiv, September 20, 2022. http://arxiv.org/abs/2202.12387.
**Q3: Fine-tuning large pre-trained model**
**A:** We have fine-tuned OpenAI's CLIP model on CC3M using GloFND. The results on CC3M's validation set are shown below.
| Method | IR@1 | IR@5 | IR@10 | IR Avg | TR@1 | TR@5 | TR@10 | TR Avg |
| -------- | ------- | --------- | ------------ | ------ | ---------- | --------------- | ------------- | ------ |
| Baseline | 36.07 | 58.94 | 67.22 | 54.08 | **35.76** | 59.19 | 67.44 | 54.13 |
| FNC | 33.69 | 58.01 | 67.51 | 53.07 | 33.61 | 57.95 | 67.53 | 53.03 |
| GloFND | **36.52** | **59.44** | **68.02** | **54.66** | 35.71 | **59.27** | **67.97** | **54.32** |
**Q4: Comparison with finding global thresholds in the entire dataset**
**A:** We agree such a comparison would be beneficial. Kindly note that computing the global threshold in the entire dataset would need to be done every iteration, which is intractable. Instead, we freeze the encoder network and compare GloFND with the (estimated) global thresholds. You can find this analysis in Section 4.3 (iii).
**Q5: Why does the performance decrease after epoch 70?**
**A:** Thanks for this question. We hypothesize the reason is that the total number of training epochs is fixed at 200. That is, there is a trade-off between starting at a later epoch (and thus having a better, "sufficiently" trained network) and the number of epochs over which GloFND can have a positive impact. Starting at a later epoch entails less training time with GloFND, which reduces the potential improvement from removing false negatives. FNC shows a similar pattern, with its performance dropping after epoch 110.
**Q6: Improvements in the bimodal setting are less significant than in the unimodal setting**
**A:** Thanks for pointing this out. We agree with your observation. We hypothesize that this is due to the training time because the bimodal dataset is much larger. While we train GloFND for 130 epochs in the unimodal setting, training for only 22 epochs in the bimodal setting makes the value of $\lambda$ less stable. | null | null | null | null | null | null | null | null |
Incremental Gradient Descent with Small Epoch Counts is Surprisingly Slow on Ill-Conditioned Problems | Accept (poster) | Summary: This work investigates the convergence of shuffling gradient methods, especially focusing on the small epoch regime. The authors establish several new upper/lower bounds that are matched to each other, providing new insights into the finite-sum optimization problem.
## update after rebuttal
I keep my positive score as this is a good paper in my opinion.
Claims And Evidence: All claims are proved.
Methods And Evaluation Criteria: N/A.
Theoretical Claims: As far as I can see, theorems are correct.
Experimental Designs Or Analyses: The experiments are enough to demonstrate the correctness of theorems.
Supplementary Material: I went through the appendix quickly but did not examine every detail.
Relation To Broader Scientific Literature: N/A.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: The paper is written clearly and is highly polished. There is no specific weakness I can find.
Other Comments Or Suggestions: I found one inaccurate statement.
Line 216 (right column), [1] also refines the exponential term. Concretely, the exponential term in Theorem 4.6 of [1] is in the order of $LD^2\exp(-K/\kappa)K^{-1}$, which improves a factor of $K$. Although this change doesn't imply [1] is optimal in the small epoch regime, it is better to be accurate.
**Reference**
[1] Liu, Zijian, and Zhengyuan Zhou. "On the last-iterate convergence of shuffling gradient methods." arXiv preprint arXiv:2403.07723 (2024).
Questions For Authors: I only have one question.
Theorem 5 in [1] only requires $\frac{1}{n}\sum_{i=1}^n\left\Vert\nabla f_i(\boldsymbol{x}^*)\right\Vert \leq G_*$ in contrast to the stronger assumption $\left\Vert\nabla f_i(\boldsymbol{x}^*)\right\Vert \leq G_*,\forall i\in\left[n\right]$ assumed in Proposition 3.4. Can this gap be fixed?
**Reference**
[1] Mishchenko, Konstantin, Ahmed Khaled, and Peter Richtárik. "Random reshuffling: Simple analysis with vast improvements." Advances in Neural Information Processing Systems 33 (2020): 17309-17320.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the positive feedback. Below, we address the reviewer’s comments.
1. **The exponential term in Theorem 4.6 of [Liu et al., 2024] is in the order of $LD^2 \exp(-K/\kappa)K^{-1}$, which improves a factor of $K$.**
- Thank you for pointing out this issue. We have revised our original statement to accurately reflect that Theorem 4.6 of [Liu et al., 2024] also refines the exponential term. Specifically, we now write: “Also, a more recent result by [Liu & Zhou, 2024] (Theorem 4.6) refines the polynomial term and also improves the exponential term to $\exp(-K/\kappa) \frac{LD^2}{K}$. However, since the term inside the exponential remains unchanged, this still fails to reveal a tight bound when $K$ is small.”
2. **Can the assumption in Proposition 3.4 be relaxed to $\frac{1}{n} \sum_{i=1}^n \|\| \nabla f_i (x^*) \|\| \le G_*$?**
- Great question. Yes, the reviewer is absolutely right that we can obtain exactly the same convergence result as in Proposition 3.4 under the weaker assumption $\frac{1}{n} \sum_{i=1}^n \|\| \nabla f_i (x^*) \|\| \le G_*$. The reason why we state the stronger assumption is to maintain consistency with the assumptions used throughout the rest of the paper.
If you have any remaining questions or comments, we will happily answer them. | Summary: This paper studies the Incremental Gradient Descent, a permutation-based SGD method. The authors derive a lower bound on the algorithm's progress when the number of epochs is small. This paper provides results for various classes of problems, when 1. all he component functions are strongly convex, 2. all components are strongly convex with the same Hessian, or 3. some functions are non-convex. This is an essential result as modern machine learning applications are ill-conditioned; in most cases, we are in a small epoch regime.
Claims And Evidence: The algorithm's proofs use a step size that depends on the optimal point $x_*$. This is not practical as $x_*$ is unknown.
Methods And Evaluation Criteria: The paper doesn't provide extensive experiments, so it is hard to evaluate whether the IGD algorithm is worse than SGD with uniform sampling for the small epoch regime.
Theoretical Claims: The authors claim "Our lower bounds reveal that for the small epoch regime, IGD can exhibit surprisingly slow convergence even when all component functions are strongly convex."
Experimental Designs Or Analyses: The paper lacks experiments. It fails to show in practice that Incremental Gradient Descent does not perform well in the small epoch regime.
Supplementary Material: I have checked the experiments in the Appendix and the proof of Theorem 3.1.
Relation To Broader Scientific Literature: This is an essential result, as modern machine learning applications are ill-conditioned and, in most cases, we are in a small epoch regime. Therefore, this result helps us understand what makes a good algorithm for training modern neural networks.
Essential References Not Discussed: I am happy with the papers cited in the paper to understand this work.
Other Strengths And Weaknesses: Strength:
1. The paper provides new lower bounds for Incremental Gradient Descent in small epoch regime.
2. This paper provides insight into the computationally constrained setting.
Weakness:
1. I believe similar lower bounds can be derived for other algorithms like SGD (with uniform sampling) in the small epoch regime. This is because you put an upper bound on the number of iterations (in terms of the condition number).
2. The paper requires experiments to evaluate if IGD performs worse than SGD with uniform sampling in the small epoch regime.
3. The step sizes in Theorem 3.2, Proposition 3.4 require the knowledge of $x_*$ which is not known in practice.
Other Comments Or Suggestions: 1. Authors should conduct more experiments to evaluate the performance of these algorithms in the small epoch regime.
Questions For Authors: 1. The IGD algorithm is deterministic. Then why do you need assumptions 2.4, 2.5 to prove convergence?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the constructive feedback. Below, we address the reviewer’s concerns.
1. **Similar lower bounds can be derived for other algorithms like SGD (with uniform sampling) in the small epoch regime.**
- As the reviewer pointed out, it is true that any algorithm (including IGD, SGD with uniform sampling, etc) with a finite number of iterations will naturally have some lower bound. However, our focus is more specific: given a fixed number of epochs $K$, we aim to characterize how fast an algorithm converges to the minimum with respect to $K$.
- It is well known that for SGD with uniform sampling, both the upper and lower bounds are of order $O(1/\mu T)$, where $T$ is the total iterations. Recall that $T = nK$ in our setting. This rate holds uniformly in both the large and small epoch regimes. In contrast, our results for IGD reveal a clear separation between the convergence rates in the large and small epoch regimes. In particular, we show that in the small epoch regime, the LB for IGD is strictly worse than the UB for SGD with uniform sampling. This directly implies that IGD converges more slowly than SGD with uniform sampling in this setting.
- If we have misunderstood the reviewer's question, we would appreciate it if you could kindly clarify.
2. **The algorithm's proofs use a step size that depends on the optimal point $x_*$.**
- The reason why we introduced $x^*$ into the step size is to reduce the dependence of the final convergence rate on the initial optimality gap $f(x_0)-f(x^*)$ from linear to a logarithmic scale. Even if the step size is chosen without $x^*$, the overall dependence on major parameters (e.g., $\mu, L, n, K$) remains unchanged.
- We also would like to note that using step sizes depending on $x^*$ is a common practice in the literature [1, 2, 3].
3. **The paper requires experiments to evaluate if IGD performs worse than SGD with uniform sampling in the small epoch regime.**
- During the rebuttal period, we conducted experiments on a binary classification task using two selected labels from the MNIST dataset. We trained a CNN using IGD, RR, and with-replacement SGD. In our setup, IGD uses all samples from one class, followed by the remaining samples from the other class. We provide the figures for the training loss and test accuracy at the following link: https://anonymous.4open.science/r/ICML2025-IGD-134A. In the figures, confidence intervals are calculated over 10 different runs; for IGD, the variability arises from different random initializations. As expected, we observe that IGD converges much slower than vanilla SGD. We plan to conduct more extensive experiments on broader datasets and model architectures, and will include these results in the next revision. We hope this addresses your concerns.
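For concreteness, the three epoch orderings compared in this experiment can be sketched as follows (a minimal illustration with our own function names, not the actual training code):

```python
import numpy as np

def igd_order(labels):
    """Incremental Gradient: one fixed, deterministic order per epoch; here
    all samples of one class followed by all samples of the other, as in
    the setup described above."""
    return np.argsort(labels, kind="stable")

def rr_order(n, rng):
    """Random Reshuffling: a fresh uniform permutation drawn every epoch."""
    return rng.permutation(n)

def sgd_indices(n, rng):
    """With-replacement SGD: n independent uniform index draws per epoch."""
    return rng.integers(0, n, size=n)
```

An epoch of training then simply iterates gradient steps over the indices returned by the chosen scheme; only `igd_order` is identical across epochs.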
4. **IGD is deterministic. Why do you need assumptions 2.4, 2.5?**
- We note that our upper bound results hold for arbitrary permutation-based SGD, not solely for IGD. As this covers all possible permutation selection schemes, including random choices, we believe that it is natural to have some kind of gradient error bound like Assumptions 2.4 and 2.5.
- However, even when focusing solely on upper bounds for IGD, we still believe that Assumptions 2.4 and 2.5 are essential. While IGD is a deterministic algorithm, the convergence behavior is still significantly influenced by the structure of component gradients. For instance, in the case where all components are identical, i.e., $f_i=f$, IGD can exhibit an exponential convergence rate. On the other hand, when the gradients of component function differ substantially from each other, the iterates can fluctuate significantly, leading to slower convergence.
- As discussed in Section 2.4, a key technique in analyzing the convergence of permutation-based SGD methods is controlling the cumulative error within each epoch. We note that this cumulative error and the fluctuation of iterates are closely related. Therefore, even for IGD—the simplest and fully deterministic permutation-based algorithm—some form of assumption to control gradient error is still necessary to establish proper convergence guarantees.
- Finally, we note that similar assumptions on component gradients are commonly made in prior IGD literature [4, 5].
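The class-ordered IGD setup described in point 3 above (all samples of one class first, then the other) can be sketched in a simplified, hypothetical form. The sketch below uses logistic regression on synthetic two-class data rather than a CNN on MNIST, and all hyperparameters are illustrative assumptions, not the actual experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for the two selected MNIST labels.
n_per_class, d = 50, 5
X = np.vstack([rng.normal(-1.0, 1.0, (n_per_class, d)),
               rng.normal(+1.0, 1.0, (n_per_class, d))])
y = np.array([0.0] * n_per_class + [1.0] * n_per_class)

def grad(w, xi, yi):
    # Gradient of the logistic loss for a single sample.
    p = 1.0 / (1.0 + np.exp(-xi @ w))
    return (p - yi) * xi

def loss(w):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def run(order_fn, epochs=10, lr=0.1, seed=1):
    rng_local = np.random.default_rng(seed)
    w = np.zeros(d)
    for _ in range(epochs):
        for i in order_fn(rng_local):
            w -= lr * grad(w, X[i], y[i])
    return w

n = len(y)
# IGD: one fixed permutation -- all class-0 samples first, then all class-1.
igd_order = lambda r: np.arange(n)
# Random Reshuffling: a fresh permutation every epoch.
rr_order = lambda r: r.permutation(n)

w_igd, w_rr = run(igd_order), run(rr_order)
print(f"IGD loss: {loss(w_igd):.4f}  RR loss: {loss(w_rr):.4f}")
```

Within each IGD epoch, the last block of updates sees only one class, which is exactly the kind of within-epoch drift the cumulative-error analysis has to control.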
If you have any remaining questions or comments, we will happily answer them.
---
[1] Lu, Yucheng, et al. "GraB: Finding provably better data permutations than random reshuffling." NeurIPS 2022
[2] Cha, Jaeyoung, et al. "Tighter lower bounds for shuffling SGD: Random permutations and beyond." ICML 2023
[3] Liu, Zijian, and Zhengyuan Zhou. "On the last-iterate convergence of shuffling gradient methods." ICML 2024
[4] Mishchenko, Konstantin, et al. "Random reshuffling: Simple analysis with vast improvements." NeurIPS 2020
[5] Koloskova, Anastasia, et al. "On convergence of incremental gradient for non-convex smooth functions." ICML 2024 | Summary: The paper is inspired by the common use of shuffling-based methods such as Random Reshuffling in practice, and the authors provide new theoretical results for specific permutations. In particular, they give new lower bounds on the Incremental Gradient (IG) method in the low-epoch regime, which wasn't considered much in the prior literature. The results concern the strongly convex case, but with multiple options on the individual components: either all of them are strongly convex, or just convex, or even potentially nonconvex. Somewhat surprisingly, this affects the provided lower bounds, despite that distinction having smaller importance in the large-epoch regime.
The results can be of interest due to the popularity of shuffling methods in practice and the fact that there are still open questions and gaps. The result concerning Herding is interesting since it shows that potentially better permutations can be obtained to make shuffling methods faster.
## update after rebuttal
I thank the authors for the interesting discussion. I remain positive that this is a good paper, and I hope it gets accepted.
Claims And Evidence: All theoretical claims are supported by rigorous proofs. One of the results is also verified numerically.
Methods And Evaluation Criteria: The used criteria are standard in the optimization community.
Theoretical Claims: I checked the correctness of Theorems 3.1 and 3.7. The former is a simple construction based on three one-dimensional functions, each of which plays a role in a certain stepsize regime: small, medium, or large. The small- and large-regime functions are simple quadratics. The truly interesting case is the medium stepsize regime, where the dynamics are controlled by linear terms that make IG stray away from the optimal point.
The proof of Theorem 3.7 is mostly the same as the proof of Theorem 1 in (Mishchenko et al., 2020), which the authors explicitly state in the appendix. The main difference is the variance bound, which for Herding is improved since it's by definition better than a randomly sampled permutation.
Experimental Designs Or Analyses: The numerical results are pretty simple and their soundness is immediately verifiable.
Supplementary Material: I went through some of the proofs that are in the appendix (Theorems 3.1 and 3.7 specifically).
Relation To Broader Scientific Literature: The work follows up on the series of papers published in the last few years and adds a solid contribution closing some of the theoretical gaps.
Essential References Not Discussed: I don't think there are any crucial papers missed. The paper doesn't discuss some of the papers on random reshuffling in other settings (proximal, distributed, etc.) but I think it's completely reasonable as the main focus on closing the gaps in the most basic setting of stochastic optimization.
Other Strengths And Weaknesses: The main limitation of the work, in my opinion, is that it considers the least interesting of the shuffling methods. I still believe it provides sufficient new intuition to publish the paper, but I just wanted to point out that having new results on random reshuffling would have been more interesting.
Other Comments Or Suggestions: From the proofs, it appears to me that the lower bounds can be immediately extended to also include claims on the distance to the solutions. Am I missing something? If not, I encourage the authors to state those as well.
Minor:
In the equation in Definition 2.1, $x$ and $y$ after the $\forall$ symbol should be bold as well.
"Cha et al.(2023) establishes" -> "Cha et al.(2023) establish".
It would be nice to capitalize names in the citations, for instance "SGD" instead of "Sgd".
Questions For Authors: Please see my suggestion/question on the extension of lower bounds to distances.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful review and for the positive assessment of our paper. Below, we address the reviewer’s concerns and comments.
1. **The main limitation of the work, in my opinion, is that it considers the least interesting of the shuffling methods.**
- We agree with the reviewer’s view that IGD is the least interesting among the shuffling methods, and that deriving new results for RR would be more impactful. While we did attempt to extend our analysis to RR, we were unable to derive new convergence bounds. Nevertheless, given the scarcity of existing literature on permutation-based SGD in the small epoch regime, we believe that IGD offers a good starting point for understanding this regime better.
- Specifically, for IGD, constructing a lower bound requires designing a function specifically tailored to exhibit poor performance under a fixed permutation, and it is acceptable if this function converges quickly under other permutations. In contrast, for RR, we now have to design a function that consistently exhibits slow convergence on average over a wide range of permutations, making the analysis more challenging.
- We believe that obtaining new bounds (both upper and lower) for RR on general functions in the small epoch regime likely requires fundamentally new techniques. Developing such bounds for more complex shuffling schemes remains an important direction for future work.
2. **From the proofs, it appears to me that the lower bounds can be immediately extended to also include claims on the distance to the solutions.**
- Yes, the reviewer is absolutely correct that the proofs can be directly applied to derive lower bounds in terms of the distance to the solution, i.e., $\lVert x_n^K - x^*\rVert$. The reason why we state the lower bounds in terms of the function optimality gap is to match the form of the upper bounds. This consistency allows us to make a direct comparison and claim tightness between the lower and upper bounds. We will add a remark to clarify that our lower bound proofs can directly be used to obtain bounds in terms of the distance metric.
3. **Minor suggestion (vector, plural, capitalize)**
- Thank you for the careful reading. We have revised your suggested corrections.
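The permutation-sensitivity discussed in point 1 above (a fixed ordering tailored to make IGD drift, while the same components in another order nearly cancel) can be illustrated with a minimal 1D toy. This is illustrative only, not the constructions used in the paper's theorems; the stepsize and component choices are assumptions:

```python
import numpy as np

# One epoch of IGD on components f_i(x) = (mu/2) x^2 + a_i x with sum_i a_i = 0,
# so the overall minimizer is x* = 0. Illustrative toy only -- NOT the paper's
# actual lower-bound construction.
def one_epoch(a, x0=0.0, lr=0.01, mu=1.0):
    x = x0
    for ai in a:
        x -= lr * (mu * x + ai)  # gradient step on the i-th component
    return x

n, G = 100, 1.0
adversarial = np.array([+G] * (n // 2) + [-G] * (n // 2))  # all +G first, then all -G
alternating = np.array([+G, -G] * (n // 2))                # signs interleaved

x_adv, x_alt = one_epoch(adversarial), one_epoch(alternating)
print(f"|x - x*| after one epoch: adversarial={abs(x_adv):.4f}, alternating={abs(x_alt):.4f}")
```

Under the adversarial ordering the linear terms accumulate before any cancellation, leaving the iterate far from the optimum, whereas the interleaved ordering cancels them almost immediately.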
We are glad the reviewer took the time to read the proofs of Theorems 3.1 and 3.7. If the reviewer is interested, we would also like to recommend going over Theorem 3.3, which we believe contains the most novel idea in the paper. If there are any further comments or questions, we would be happy to address them.
---
Rebuttal Comment 1.1:
Comment: > In contrast, for RR, we now have to design a function that consistently exhibits slow convergence on average over a wide range of permutations, making the analysis more challenging.
I can see why this is more challenging, IGD seems easier to attack since we can control both the functions and the permutation. My intuition is that the counterexample for RR should be somewhat similar in nature. The cases of small and large stepsizes are equally trivial for RR, while for the mid range of stepsizes, the functions should probably still be roughly linear to make the method stray away from the solution. After all, the upper bound in the strongly convex case is derived using the extra sequence of points $x^*_i = x^* - \eta \sum\_{j=0}^{i-1} \nabla f\_{\pi\_{j}} (x^*)$, so we only care about the values of $\nabla f\_{\pi\_{j}} (x^*)$ and making the associated functions linear would only make it easier to study.
> The reason why we state the lower bounds in terms of the function optimality gap is to match the form of the upper bounds.
I encourage you to state the lower bounds for the distance terms as well, just in case others manage to derive upper bounds on the distances instead of the functional values, you will make their job of comparing the results easier. For instance, the guarantees in Theorem 1 of Mishchenko et al. (2020) are stated in terms of distances, so it's not a far fetched scenario.
> We believe that obtaining new bounds (both upper and lower) for RR on general functions in the small epoch regime likely requires fundamentally new techniques. Developing such bounds for more complex shuffling schemes remains an important direction for future work.
Hmmm, maybe, but I somehow feel that the lower bound construction shouldn't be that different. My intuition is that we can construct high-dimensional functions such that $\nabla f_i(x^*)$ is very hard to cancel unless a lot of other functions are sampled and added to it. In other words, we want to avoid the effect of the law of large numbers by using many dimensions and ensuring that $\Vert \sum\_{j=0}^{i-1} \nabla f\_{\pi\_{j}} (x^*)\Vert$ stays away from 0 with very high probability. Then, a construction similar to yours should do the thing I think.
For instance, let us choose for each coordinate $s$ a pair of indices $i\_s, j\_s$ so that only $\nabla f\_{i\_s}(x^*)$ and $\nabla f\_{j\_s}(x^*)$ have non-zero entries at coordinate $s$ and they cancel each other out. In other words, $[\nabla f\_{i\_s}(x^*)]\_s = -[\nabla f\_{j\_s}(x^*)]\_s$. Then, if $i\_s$ is sampled at the beginning of a permutation, with high probability the coordinate $s$ of $\sum\_{j=0}^{i-1} \nabla f\_{\pi\_{j}} (x^*)$ is going to stay away from 0 for a long time just because it will take a lot of time to sample $j\_s$ to cancel that coordinate. And if we do this for a lot of coordinates, it basically means that whatever functions I sample at the beginning, they are likely to keep increasing the magnitude of $\Vert \sum\_{j=0}^{i-1} \nabla f\_{\pi\_{j}} (x^*)\Vert$ as long as $i\le n/2$. More formally, if we split numbers $1, 2, \dotsc, n$ into $n/2$ pairs, and then sample a permutation, it seems likely that among the first $n/2$ numbers there would be at least $\Omega(n)$ numbers without a pair among them. Maybe something like that would work?
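The combinatorial claim in the last sentence, that among the first $n/2$ entries of a random permutation roughly $\Omega(n)$ indices appear without their partner, can be sanity-checked with a quick simulation (illustrative; the consecutive-index pairing is an assumption):

```python
import numpy as np

# Split 0..n-1 into n/2 fixed pairs (2k, 2k+1), sample random permutations, and
# count how many of the first n/2 entries appear without their partner there.
rng = np.random.default_rng(0)
n = 1000
partner = np.empty(n, dtype=int)
partner[0::2], partner[1::2] = np.arange(1, n, 2), np.arange(0, n, 2)

counts = []
for _ in range(200):
    perm = rng.permutation(n)
    first_half = set(perm[: n // 2])
    unpaired = sum(1 for i in first_half if partner[i] not in first_half)
    counts.append(unpaired)

frac = np.mean(counts) / n
print(f"average fraction of unpaired indices: {frac:.3f}")
```

For each pair, the probability that exactly one member lands in the first half is about 1/2, so the expected number of unpaired indices is roughly $n/4$, i.e. $\Omega(n)$ as claimed.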
> we would also like to recommend going over Theorem 3.3, which we believe contains the most novel idea in the paper.
Thanks, that's indeed quite interesting. Reminded me of the old counterexample (in the sense of slow convergence) for the iterative projection method, where $n$ lines intersecting at 0 are constructed so that the iterates go in a slow spiral around the solutions. Maybe it's even equivalent since your functions are quadratic. I can't find the reference, but the visualization looks roughly like this:
```
\ | /
\ | /
\ | /
----+---
/ | \
/ | \
/ | \
```
---
Reply to Comment 1.1.1:
Comment: Thank you very much for the insightful comments. We fully agree with the reviewer’s points, and would like to briefly share our own thoughts regarding upper and lower bounds for RR in the small epoch regime.
We would like to begin by summarizing the current state of research on RR in the small epoch regime. To the best of our knowledge, there are **two noteworthy results** (under the assumption that the overall function is strongly convex and each component is smooth):
1. [Mishchenko et al., 2020]: When all component functions are also strongly convex, an upper bound of $O(\frac{L^2}{\mu^3 n K^2})$ is provided.
2. [Safran & Shamir, 2021]: When all component functions are quadratic and their Hessians commute, a tight convergence rate of $\Theta(\frac{1}{\mu n K})$ is established.
Unlike scenario (2) where the authors provide matching UB and LB (up to polylogarithmic factor), the lower bound in scenario (1) is unknown, and it remains open whether the rate $O(\frac{L^2}{\mu^3 n K^2})$ can be improved or not.
Given this context, there are **two clear directions for future exploration** in small epoch RR literature:
- [**Upper Bound Direction**]: Improve the existing bound of $O(\frac{L^2}{\mu^3 n K^2})$ under the strongly convex component assumption, or derive new bounds under weaker assumptions (e.g., convexity, or even without convexity).
- [**Lower Bound Direction**]: Develop a matching lower bound (under the strongly convex component case) to close the gap with the existing upper bound $O(\frac{L^2}{\mu^3 n K^2})$.
The primary challenge on the **upper bound** side is that deriving new upper bounds in the small epoch regime appears to require sophisticated analytical techniques (due to challenges discussed in Section 2.4). As can be found in [Safran & Shamir, 2021], even the proof for 1D quadratic is highly technical. One promising technique we explored is from [Koloskova et al., 2024]. In contrast to traditional analyses that group updates within a single epoch (i.e., chunks of size $n$), this method groups updates into chunks of size $\tau:=1/\eta L$. While this chunk-based approach can be successfully applied to derive upper bounds for IGD, it becomes problematic for RR. Specifically, when the chunk size $\tau$ does not align neatly within epochs, handling the dependencies between iterates becomes extremely difficult.
Regarding the **lower bound** direction, we believe any progress beyond current results will likely require more “complicated” constructions that go beyond simple quadratic functions. This is because for simple quadratic functions where the Hessians commute with each other (e.g., $f_i(x_1, x_2) = \frac{L}{2}x_1^2 + a_i x_1 + \frac{\mu}{2}x_2^2 + b_i x_2$), the tight rate of $\Theta(\frac{1}{\mu n K})$ is already established by [Safran & Shamir, 2021]. Therefore, to surpass the existing LB barrier $\Omega(\frac{1}{\mu n K})$, future constructions must involve **quadratic functions with non-commuting Hessians or even non-quadratic functions**, necessitating more advanced analytical techniques. While our own lower bound construction in Theorem 3.3 is based on quadratic functions with non-commuting Hessians, it is tailored to IGD, and we do not see a clear way to extend this idea to RR.
Regarding the reviewer’s intuition about potential lower bound constructions for RR involving high-dimensional pairing, such constructions would similarly require quadratic functions with non-commuting Hessians or non-quadratic functions, thus still posing analytical challenges. Moreover, introducing a pairing-based construction may not necessarily be beneficial. To explain why, consider a simple example: focus on the first dimension (denoted $x_1$), and suppose the first two component functions are $f_1(x_1) = a_1x_1^2 - Gx_1$ and $f_2(x_1) = a_2x_1^2 + Gx_1$, respectively. Now, if the remaining component functions are set to zero, i.e., $f_i(x_1) \equiv 0$, then the overall strong convexity parameter decreases by a factor of $1/n$, negatively affecting the function optimality gap. Conversely, if the component functions are set to $f_i(x_1) = a_ix_1^2$, then the iterates along the first dimension shrink with each step, preventing the function optimality gap from becoming sufficiently large. Due to these reasons, we do not find an immediate way to establish a new lower bound using the reviewer’s suggested construction. Nevertheless, we find the reviewer’s suggestion valuable, and believe it serves as a promising starting point. | Summary: This paper analyzes the Incremental Gradient Descent (IGD) method in various convex settings, and establishes lower bounds for IGD in the small and large epoch regimes. The paper also provides upper bound results for arbitrary permutation-based SGD in several small and large epoch regimes, and designs a new permutation-based SGD method that outperforms with-replacement SGD in the small epoch regime with strongly convex component functions.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I do not have the time to go through all the details. Up to what I have verified, everything looks correct.
Experimental Designs Or Analyses: NA, purely theoretic work.
Supplementary Material: I read part of the proofs.
Relation To Broader Scientific Literature: There are several permutation-based SGD published in past ICML events.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
- This paper is generally well-written.
- The upper bound and lower bound analysis of IGD is comprehensive, complementing existing work.
- The discovered slow convergence of IGD in the small epoch regime is interesting.
Weaknesses:
- The storyline of the paper is not so clear. The current version seems like a collection of upper bounds and lower bounds under various setups, using different permutation schemes, i.e., the results are not strongly connected.
- Several assumptions are very restrictive, e.g., the one-dimensional assumption in Theorem 3.2.
- This paper seems to lack a striking result, imho.
Other Comments Or Suggestions: Adding some simple numerical experiments to justify the theoretical results would greatly boost the confidence in the correctness of the proofs.
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review. Below, we address the reviewer’s concerns.
1. **The storyline of paper is not so clear.**
- While our analysis does involve three different permutation schemes—Incremental Gradient Descent (IGD), arbitrary permutation-based SGD, and Herding at Optimum—we would like to clarify that these schemes form a well-connected framework rather than a disjointed collection of results.
- First, we emphasize that **(1) IGD** appears exclusively in **lower bounds**, and **(2) Arbitrary permutation-based SGD** appears only in **upper bounds**. Importantly, arbitrary permutation-based SGD contains the worst-case permutation scenario. Therefore, when the LB of IGD matches the UB of arbitrary permutation-based SGD, we obtain tight convergence guarantees for worst-case permutation-based SGD (as discussed in Line 171 (left) and Appendix A). Our results establish such bounds across both small and large epoch regimes, and under varying assumptions on the component functions.
- **Herding at Optimum** (Theorem 3.7) illustrates how fast permutation-based SGD can converge under (near-)optimal permutations in the small epoch regime. Together, the three schemes characterize the full spectrum of convergence behavior for permutation-based SGD, from worst-case to (nearly) best-case. One caveat is that as discussed in Line 377 (left), the current Herding at Optimum is not an implementable algorithm in general scenarios; making it practical is a promising direction for future work.
2. **Several assumptions are very restrictive.**
- We would like to clarify several points that may help the reviewer better understand the context behind these choices.
- First, unlike the upper bound analysis, strong assumptions in lower bound theorems strengthen the result. This is because the **narrower** the function class for which a lower bound holds, the **stronger** the applicability when compared with upper bounds.
- Also, we acknowledge that assumptions on the component functions’ Hessians (e.g., identical Hessians or strong convexity) may appear restrictive. However, our results show that the convergence behavior of IGD varies significantly under different settings. Thus, these assumptions serve to illustrate meaningful distinctions, rather than to artificially strengthen the conclusions.
- Lastly, in upper bound analyses, stronger assumptions indeed weaken the result. As pointed out by the reviewer, 1D setting in Theorem 3.2 is quite restrictive. However, deriving upper bounds for permutation-based SGD methods in the small epoch regime is known to be **extremely challenging** without strong assumptions. Specifically, [1] derives an upper bound for Random Reshuffling, but only under strong conditions—quadratic objectives with component Hessians that are mutually commutative, symmetric, and PSD—essentially reducing the problem to 1D.
3. **This paper seems to lack a striking result.**
- We would like to highlight that our main contribution lies in addressing a largely underexplored regime: **the small-epoch behavior of shuffling methods**. To our knowledge, except for the result by [1], no prior work establishes tight convergence bounds in this regime.
- Our paper provides tight lower and upper bounds for IGD in this regime. While IGD is the simplest among permutation-based methods, the theoretical analysis in this regime is highly nontrivial and challenging (see Section 2.4).
- In particular, when the component functions are allowed to be nonconvex, we show that the convergence of IGD can be *exponentially* slow. Our result is the first to rigorously establish this phenomenon, suggesting that permutation-based SGD methods can suffer severe slowdowns in the small epoch regime. This motivates further investigation into whether similar slowdowns occur in other permutation-based SGD methods, including Random Reshuffling (RR).
4. **Adding some simple numerical experiments would boost the confidence in the correctness of the proofs.**
- We would like to note that we included numerical experiments on our lower bound constructions in Appendix G (as mentioned in Line 303 (right)). These experiments confirm that IGD exhibits slow convergence on the constructed objectives from Theorems 3.3 and 3.5. In particular, we observe exponential-type slow convergence in the case of Theorem 3.5.
- In addition, another reviewer suggested evaluating performance in a more practical scenario. During the rebuttal period, we conducted experiments on a binary classification task using two selected labels from the MNIST dataset. We kindly refer the reviewer to our response to Reviewer RSZD for further details.
We hope our responses have fully addressed your concerns. If you have any remaining concerns or questions, we would be happy to respond.
---
[1] Safran, Itay, and Ohad Shamir. "Random shuffling beats SGD only after many epochs on ill-conditioned problems." NeurIPS 2021 | null | null | null | null | null | null |
Self-Bootstrapping for Versatile Test-Time Adaptation | Accept (poster) | Summary: This paper proposes Self-bootstrapping for versatile test-time adaptation, a general TTA framework that adapts models across classification, regression, and dense prediction tasks without requiring source data. SPA introduces weak-to-strong self-bootstrapping learning, ensuring adaptation by aligning a deteriorated (weak) view’s predictions with the original (strong) view using Fourier-based augmentations—low-frequency amplitude masking and high-frequency noise injection.
Claims And Evidence: Yes
Methods And Evaluation Criteria: They mostly make sense.
This paper aims to demonstrate that the proposed Fourier-based augmentations work across various image-based tasks, including classification, segmentation, and 3D detection. However, the evaluation could benefit from a clear primary focus, such as image classification, to better align with existing TTA benchmarks. Expanding the evaluation to include CIFAR-10-C and CIFAR-100-C datasets and incorporating additional backbones like WideResNet and ResNet would strengthen the comparison with prior TTA methods and provide a more comprehensive assessment of generalizability.
Theoretical Claims: No theoretical claims or proofs.
Experimental Designs Or Analyses: Many new TTA works on image classification were published in 2024, and comparing SPA with them on CIFAR-10-C, CIFAR-100-C, DomainNet, and additional architectures (e.g., WideResNet, ResNet) would strengthen its evaluation.
For segmentation, the use of "rounds" is unclear—if the same test samples appear multiple times, this is not true online TTA; if not, the sample split criteria per corruption type need clarification.
For object detection, the MemCLR (WACV 2023) method might be considered to provide a more complete comparison.
Supplementary Material: Yes. All parts.
Relation To Broader Scientific Literature: This paper extends TTA by adapting test samples into a weak-to-strong self-bootstrapping framework for stable adaptation. SPA introduces Fourier-based augmentations—low-frequency masking and high-frequency noise injection—to provide stronger adaptation signals while preserving geometric structure. This augmentation strategy is general across classification, segmentation, and detection tasks, and might also benefit supervised learning (not verified).
Essential References Not Discussed: Several TTA methods for image classification are missing:
1. ROID, published in WACV 2024;
2. ViDA, published in ICLR 2024;
3. A Versatile Framework for Continual Test-Time Domain Adaptation: Balancing Discriminability and Generalizability, published in CVPR 2024;
4. CMF, published in ICLR 2024;
5. SLWI, ICML 2024, etc.
For object detection:
1. IoU-filter, published in CVPRW 2024
2. MemCLR, published in WACV 2023
Other Strengths And Weaknesses: Strengths:
- The Fourier-based augmentation strategy has potential for broader applications in various image-based tasks beyond TTA.
- The paper evaluates the method across different image-based tasks (classification, segmentation, and detection), demonstrating its generalizability.
Weaknesses:
- The method's complexity is limited, as it primarily modifies the augmentation strategy within an existing consistency-based TTA framework, which may not be a major conceptual leap.
- The experiments lack a clear focus, spreading across multiple tasks but not providing in-depth validation on any single one.
- The high-/low-frequency augmentation approach could benefit from stronger empirical analysis or theoretical justification to validate its effectiveness.
- Adaptation efficiency and latency are not discussed, which are crucial for real-world TTA deployment.
- Additional comparisons with recent TTA methods (e.g., 2024 TTA classification works, MemCLR for detection) would strengthen the evaluation.
Other Comments Or Suggestions: See the comments.
Questions For Authors: Have you explored the effectiveness of this augmentation strategy on Vision-Language Models (VLMs) like CLIP and SigLIP?
Your experiments include continual adaptation for segmentation—does the strategy also work for continual or mixed-domain TTA in image classification?
How does the computational cost of the proposed Fourier-based augmentations compare to standard TTA augmentations? Will it slow the adaptation time?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: >Q1. Results on CIFAR-100 and ResNet
In the submission, we compare SPA on ImageNet-C/R/Sketch/A as they are large-scale, more challenging than CIFAR-100, and commonly used. We now report more results on CIFAR100-C below to further validate SPA. **Pls see Reviewer 61cM's Q1 for more ResNet results**.
|Method|Avg Acc on CIFAR100-C|
|-|:-:|
|CoTTA|69.6|
|EATA|72.1|
|ActMAD|74.0|
|DeYO|74.0|
|ROID|76.3|
|SPA(Ours)|75.4|
|SPA+ActMAD+Tent|77.0|
|SPA+ROID|77.4|
>Q2. Clarification on "rounds" in segmentation exps
“Round” refers to the number of times the sequence of corruption datasets is repeated to simulate long-term continual TTA (e.g., Fog-Night-Rain-Snow-Fog-…). This is a standard evaluation protocol in continual TTA, following CoTTA. We will make this clearer.
>Q3. Comparison with more baselines
As you suggested, we compare SPA with more baselines in the table below on classification, showing our efficacy. For detection, we acknowledge that the suggested baselines are designed for 2D tasks, while we focus on the more challenging task of 3D monocular object detection. Due to time limits, it is non-trivial to implement them in our 3D setting. Here, we would like to clarify that MonoTTA (ECCV'24) is tailored for 3D detection and is the current SOTA (more recent than the suggested ones), making it more relevant for comparison. Thus, we believe the current thorough comparisons with MonoTTA have fully validated SPA's effectiveness. We will discuss all mentioned methods in the revision.
Table. More results on ImageNet-C with ViT-B
|Method|Avg Acc|
|-|-|
|ROID|68.0|
|ViDA|60.8|
|CMF|69.2|
|SPA(Ours)|70.1|
|SPA+ActMAD|71.2|
|SPA+ActMAD+TENT|71.6|
>Q4. The method's complexity is limited, ...
SPA's main contribution lies in its overall design for versatile online fully TTA. It comprises an active, deterioration-driven self-bootstrapping scheme and geometry-preserving augmentations inspired by our Fourier analysis of domain shifts.
Unlike prior methods enforcing consistency across different augmentations with a teacher-student scheme, our SPA generates a deteriorated image to create an information gap, enabling self-distillation *in a single model* from strong to weak predictions. The distilled knowledge is then directly fed back to the strong branch via shared parameters, forming a closed-loop TTA process.
Though simple in form, SPA’s insights and overall design are non-trivial and challenging, as reflected in our extensive preliminary experiments, where direct consistency learning or traditional augmentations failed to deliver satisfactory performance. We also believe such method simplicity helps enhance both usability and impact.
>Q5. The experiments lack a clear focus, ...
Our goal is to design a versatile TTA method, so we evaluate SPA across multiple tasks. Actually, our validations are thorough: we verified our superiority both as a standalone method and as a plug-and-play module on different tasks, e.g., outperforming prior SOTAs like DeYO (ICLR'24) in classification and MonoTTA (ECCV'24) in 3D detection. We also provided extensive ablations and analyses for different tasks. As you suggested, we also compare with more baselines to verify our efficacy, pls see Q3 and Q9.
>Q6. The method could benefit from stronger empirical analysis to validate its effectiveness
In Figure 2, we empirically analyze how domain shifts manifest in Fourier domain, and derive insights of which image deteriorations can supply sufficient learning signals for our self-bootstrapping. Extensive experiments (Tables 5,6,7) further validate the conclusions drawn from Figure 2. These analyses are also positively acknowledged by other reviewers, e.g., *“analyses…particularly insightful”* [DdZn] and *“sufficiently supporting the conclusions drawn”* [QKc5].
>Q7. Adaptation efficiency
Our SPA is much more efficient than augmentation-based CoTTA, which uses 2 or 34 augmentations per sample, while SPA needs only 1–2. It also matches the efficiency of entropy-based SAR. FPS on ImageNet-C with ViT-Base (on A100) are: SPA-I(125)>SAR(102)>SPA(79)>CoTTA(36). Here, SPA-I is a variant that applies two deteriorations within a single image (1 aug) and achieves similar ImageNet-C acc: SPA(70.1%) vs SPA-I(69.0%). Partial results were in Appendix C and we will make this clearer.
>Q8. Efficacy on VLM
Our SPA works on VLM, pls see Reviewer DdZn's Q4.
>Q9. Results under continual or mixed-domain TTA in classification
Our SPA works under such scenarios, pls see Reviewer DdZn's Q3.
>Q10. Computation cost of augmentation
Our augmentations are efficient: 1) Fourier masking uses PyTorch's FFT/iFFT, accelerated via parallelized GPU operations; 2) Noise injection is a trivial element-wise addition. Augmentation takes only a tiny (negligible) fraction of the total adaptation time, and the FPS of SPA is almost the same whether using our augs or MoCo/SimCLR’s.
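A minimal numpy sketch of the two Fourier-domain deteriorations described above (low-frequency amplitude masking and high-frequency noise injection). This is an illustrative reconstruction, not the paper's exact implementation; the mask radius, drop probability, and noise scale are assumed values:

```python
import numpy as np

def low_freq_amplitude_mask(img, mask_ratio=0.1, rng=None):
    """Randomly zero out low-frequency amplitudes while keeping the phase.

    mask_ratio (region size) and the 0.5 drop probability are assumptions.
    """
    rng = rng or np.random.default_rng()
    F = np.fft.fftshift(np.fft.fft2(img))
    amp, phase = np.abs(F), np.angle(F)
    h, w = img.shape
    ch, cw = h // 2, w // 2
    r = int(min(h, w) * mask_ratio)
    region = amp[ch - r:ch + r, cw - r:cw + r]          # low-freq center (view)
    region[rng.random(region.shape) < 0.5] = 0.0        # drop half the bins
    return np.real(np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * phase))))

def high_freq_noise(img, sigma=0.2, keep_ratio=0.1, rng=None):
    """Inject Gaussian noise into the spectrum outside the low-freq center.

    keep_ratio (band boundary) and sigma are illustrative assumptions.
    """
    rng = rng or np.random.default_rng()
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    ch, cw = h // 2, w // 2
    r = int(min(h, w) * keep_ratio)
    noise = sigma * (rng.standard_normal(F.shape) + 1j * rng.standard_normal(F.shape))
    noise[ch - r:ch + r, cw - r:cw + r] = 0.0           # leave low freqs untouched
    return np.real(np.fft.ifft2(np.fft.ifftshift(F + noise)))

rng = np.random.default_rng(0)
img = rng.random((32, 32))                              # toy grayscale image
weak_view = high_freq_noise(low_freq_amplitude_mask(img, rng=rng), rng=rng)
```

Both steps amount to one FFT/iFFT pair plus element-wise operations, which is consistent with the negligible augmentation overhead reported above.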
We deeply appreciate your constructive comments. We sincerely hope our clarifications above have addressed your questions and can improve your opinion of our work.
---
Rebuttal Comment 1.1:
Comment: I saw the results on CLIP, the Imagenet-r is for the ood dataset benchmarks. I am wondering:
1. if this could work on cross-dataset benchmark.
2. the setups of the CLIP experiments
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer tvoR,
Thanks for your reply. This is a valuable question, and we would like to answer it below.
**$\bullet$ The setups of CLIP experiments:**
We freeze the text encoder of the CLIP model and treat it as a fixed classifier. Our method is then applied to the image encoder, where we adapt the affine parameters of its normalization layers.
**$\bullet$ On cross-dataset benchmarks:**
Our method primarily focuses on addressing the **OOD issue in the visual modality** through our deterioration-driven self-bootstrapping learning, by **improving the visual feature representation**.
However, for commonly used **cross-dataset benchmarks on the CLIP model**, the image distributions are often relatively more stable compared to OOD datasets, and the image encoder already provides more semantic representations. The **performance bottleneck instead often lies in the text modality (i.e., the text encoder)**—specifically, the quality of the classifier formed from text embeddings. This is also supported by 1) prior VLM TTA methods, such as TPT and C-TPT, which focus on adapting the text branch to improve cross-dataset generalization, and 2) the observation that prior visual-modality-focused TTA methods (like Tent and ETA) achieve limited performance gains in this cross-dataset scenario.
Therefore, the performance gain of our SPA method on cross-dataset scenarios with CLIP is not as competitive as its improvement on OOD scenarios—but it still provides benefits—as we do not adapt the text branch.
Extending our method to also adapt the text branch to further boost performance in cross-dataset scenarios is an interesting and promising direction. We leave this for future work, and believe that our current focus on addressing visual OOD issues already makes meaningful contributions, as this is a fundamental challenge for numerous vision-involved models.
Table. Results on CLIP-ViT-B for cross-dataset generalization.
|Method \ Dataset|DTD|UCF101|Aircraft|Avg. Acc. (%)|
|-|:-:|:-:|:-:|:-:|
|Source|44.3|65.1|23.8|44.4|
|VLM TTA methods:| | | | |
|TPT|46.7|67.3|23.4|45.8|
|C-TPT|46.0|65.7|24.0|45.2|
|Vision TTA methods:| | | | |
|Tent|45.2|66.0|23.4|44.9|
|ETA|44.7|66.1|23.7|44.8|
|SAR|44.6|66.5|23.4|44.8|
|DeYO|44.2|66.0|22.7|44.3|
|SPA (ours)|45.4|66.2|23.6|45.1|
We greatly appreciate your invaluable reviews to improve the quality of our paper and your insightful further comments on cross-dataset evaluation. We sincerely hope our clarifications above have addressed your questions and can improve your opinion of our work. We are happy to continue the discussion if you have further questions.
Best,
The Authors
### Post Discussion Update:
Dear Reviewer tvoR,
Thank you for upgrading your score! Your invaluable review comments are immensely beneficial in improving the quality of our paper!
Best,
The Authors

---

Summary: The paper explores the impact of typical distribution shifts on the information content of images across different spatial frequencies in the Fourier domain. It highlights that low-frequency components dominate in terms of information power, and removing these components provides more effective learning signals compared to masking high-frequency components. Based on this observation, the authors propose a data augmentation strategy that involves randomly masking the low-frequency amplitudes of an image in the Fourier domain. Additionally, they inject noise into the image to enhance the information power of high-frequency components, compensating for the loss of learning signals at lower frequencies. Experimental results demonstrate that this method, whether applied on its own or as a plug-and-play module, improves performance in tasks such as classification, segmentation, and 3D monocular detection across both transformer-based and CNN-based models.
## Update after rebuttal
Thanks to the authors for their response. I do not have any further queries.
Claims And Evidence: The main claim of this paper centers around the validity of the proposed data generation scheme, supported by experiments. Table 1 provides evidence for classification tasks, Table 3 validates the approach on object detection, and Table 4 focuses on segmentation. Additionally, Table 5 includes an ablation study that appears to substantiate the points made, sufficiently supporting the conclusions drawn. However, the novelty of the SPA approach may be somewhat overstated in the paper; it presents ideas that are not particularly new.
Methods And Evaluation Criteria: The TTA benchmark is well-established, and this paper runs most of the convincing datasets, with a sufficiently large testing scale to support its claims.
Theoretical Claims: The augmentation scheme proposed in this paper relies more on empirical experimental validation. While Figure 2 provides some insightful theoretical basis, it is not sufficiently theoretical to fully support the proposed method.
Experimental Designs Or Analyses: The experiments are comprehensive, with no gaps in the ablation studies. All argued applications have been validated through experimentation.
Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: This paper is well-written.
Other Comments Or Suggestions: Please emphasize augmentation more than SPA in future versions.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4

---

Rebuttal 1:
Rebuttal: We deeply appreciate your valuable feedback and your recognition of the novelty and contributions of our work for designing a challenging versatile fully TTA framework. Our SPA incorporates several components, including an active, deterioration-driven self-bootstrapping scheme (distinct from feature-level BYOL), as well as carefully crafted insights into geometry-preserving augmentation strategy design. The overall design of SPA with these innovations is challenging and non-trivial based on our analysis. These innovations collectively ensure the stability and effectiveness of SPA for online fully test-time adaptation. Thank you again for your insightful suggestion. We will make the novelty and contributions of our work clearer in the revised version. | Summary: The paper introduces Self-Bootstrapping for versatile Test-Time Adaptation (SPA), a novel framework that enables TTA across multiple tasks—classification, segmentation, and 3D detection. The authors propose a geometry-preserving augmentation strategy using low-frequency amplitude masking and high-frequency noise injection in the Fourier domain. This approach maintains spatial structure while providing sufficient learning signals for adapting models at test time without source data access.
Claims And Evidence: - The core claim of versatility across task types is well-supported through experiments on ImageNet-C/R/A/Sketch, KITTI-C, and ACDC datasets.
- Results demonstrate consistent improvements over baseline methods across different architectures (ViT, CNN) and tasks.
- The ablation studies effectively validate design choices, particularly regarding frequency-domain augmentation strategies.
Methods And Evaluation Criteria: - The proposed frequency-domain analysis and augmentation approach is novel and well-motivated.
- Benchmarks cover a good range of distribution shifts and tasks.
- Evaluation methodology follows established protocols in the field.
Theoretical Claims: No formal proofs are presented.
Experimental Designs Or Analyses: - Experimental setup is sound with appropriate baselines.
- Missing comparison with certain TTA methods: Notable omissions include MT3, TTT_MAE, and Diffusion-TTA, which would strengthen the evaluation.
- The analyses of how different domain shifts manifest in the frequency domain are particularly insightful.
Diffusion-TTA - https://diffusion-tta.github.io/
TTT-MAE - https://papers.neurips.cc/paper_files/paper/2022/file/bcdec1c2d60f94a93b6e36f937aa0530-Paper-Conference.pdf
Supplementary Material: No; only glanced through it.
Relation To Broader Scientific Literature: The paper builds upon and extends several research directions in domain adaptation and self-supervised learning. It draws conceptually from self-supervised contrastive methods like BYOL and DINO but adapts these approaches specifically for test-time adaptation. The authors' frequency-domain augmentation strategy connects to a growing body of work on Fourier-domain analysis in computer vision, though they uniquely apply it to preserve geometric structure for dense prediction tasks. The paper sits at the intersection of consistency-based TTA methods (like MEMO), entropy-based approaches (TENT, EATA), and structure-preserving adaptation—extending beyond prior work by creating a unifying framework applicable to a wider range of tasks.
Essential References Not Discussed: Diffusion-TTA - https://diffusion-tta.github.io/
TTT-MAE - https://papers.neurips.cc/paper_files/paper/2022/file/bcdec1c2d60f94a93b6e36f937aa0530-Paper-Conference.pdf
Other Strengths And Weaknesses: Strengths:
- The approach's versatility across regression and classification tasks is impressive
- Geometry-preserving augmentations address a key limitation in prior work
- Functions well as both standalone method and plug-and-play module
Weaknesses:
- No comparison with TTT-series methods that might be competitive. I would prefer an apples-to-apples comparison against MT3, so that the numbers reported in their paper can be used directly.
- I'm still missing a good intuition/analysis as to why conventional augmentations are not good. Although Figure 2 shows that RAPSD changes with different shifts, I'm still struggling to understand why standard augmentations such as random masking or blur are not sufficient.
Other Comments Or Suggestions: Nothing in particular
Questions For Authors: - Why were TTT methods like MT3, TTT_MAE and Diffusion-TTA not included in the comparisons? These seem like competitive approaches worth benchmarking against.
- How does SPA perform on more realistic distribution shifts beyond those tested? like for instance objectnet
- Have you explored potential integration with foundation models?
[1] Objectnet - https://papers.nips.cc/paper_files/paper/2019/hash/97af07a14cacba681feacf3012730892-Abstract.html
Code Of Conduct: Affirmed.
Overall Recommendation: 4

---

Rebuttal 1:
Rebuttal: We deeply appreciate your valuable feedback and constructive comments on improving the quality of our paper. We would like to address your questions below.
>Q1. Comparison with TTT-series methods.
Thank you for your suggestion. We follow the comparison setting used in Diffusion-TTA and directly compare our results with the TTT methods reported in their work, by implementing our method using their adopted model and experimental environment. This is because TTT-series methods modify the model training process, making them relatively difficult to implement under our diverse environmental setups. As shown in Table A, our SPA, as a fully TTA method, still achieves superior performance, further demonstrating its effectiveness. We will discuss all mentioned TTT-series methods in the revision.
Table A. Comparisons with TTT-series methods on ImageNet-C w.r.t. acc (%).
|Model+Method|gauss|fog|pixel|snow|contrast|Avg|
|-|:-:|:-:|:-:|:-:|:-:|:-:|
|Customized ViT-L/16 classifier|17.1|38.7|47.1|35.6|6.9|29.1|
| + TTT-MAE|37.9|51.1|65.7|56.5|10.0|44.2|
|ViT-B/32|39.5|35.9|55.0|30.0|31.5|38.4|
| + Diffusion-TTA|46.5|56.2|64.7|50.4|33.6|50.3|
| + SPA (Ours)|49.4|56.9|66.7|49.9|51.5|54.9|
>Q2. Why are conventional augmentations not good?
Conventional augmentations can be broadly categorized into two groups based on whether they preserve geometric information.
- **Geometry-preserving augmentations** (e.g., grayscale conversion, brightness, contrast, blur) typically adjust the global distribution of images—for instance, changing brightness by adding the same constant across all pixels. Such augmentation patterns are relatively simple and lack diversity or uniqueness for different samples, thus providing limited learning signals for our self-bootstrapping learning pipeline. As in Table 6, none of these augmentations, individually or combined, effectively provide TTA with rich learning signals. These augmentations are also often sensitive to the corruption type and struggle to perform stably across all corruptions, leading to limited overall performance.
- **Non-geometry-preserving augmentations** (e.g., random resizing, cropping, image masking, SimCLR, MoCo, AugMix) introduce randomness across samples, randomly deteriorating the local information of different images, and thus can provide more diverse learning signals for SPA. However, these augmentations disrupt the image's geometric structure and directly discard potentially useful image information. As shown in Table 7, SPA with these augmentations achieves considerable performance on image classification but performs poorly on finer dense prediction tasks like 3D monocular detection. In contrast, our Fourier augmentations preserve the overall information and geometry to support dense prediction tasks while still introducing sufficient diversity across samples, supplying rich signals for SPA and achieving improved adaptation performance.
>Q3. Results on more realistic distribution shift scenarios.
Thank you for your valuable suggestion. We have conducted additional experiments on ObjectNet in Table B, based on the codebase of VisDA-2021 challenge. Furthermore, we also evaluate SPA under more 'realistic settings', including continual adaptation (Table C) and mixed domain adaptation (Table D). The results across all these tables further validate the effectiveness of SPA, both as a standalone method and as a plug-and-play module to enhance existing approaches.
Table B. Results on ObjectNet with ResNet-50.
|Method|Source|Tent|ETA|SAR|DeYO|ROID|SPA(ours)|SPA+ETA(ours)|
|-|-|-|-|-|-|-|-|-|
|Acc (%)|27.2|27.5|26.0|28.6|29.8|28.7|30.9|32.3|
Table C. Results of continual adaptation on ImageNet-C (level 5) with ViT-Base. We report avg acc of 15 corruptions.
|Method|Avg Acc (%)|
|-|-|
|Source|55.5|
|EATA|67.3|
|SAR|61.3|
|ROID|67.9|
|ActMAD|59.9|
|SPA (ours)|68.6|
|SPA+ActMAD (ours)|71.5|
|SPA+ActMAD+Tent (ours)|71.8|
Table D. Results on mixture of 15 corruptions of ImageNet-C (level 5) with ViT-Base.
|Method|Acc (%)|
|-|-|
|Source|55.5|
|EATA|63.7|
|SAR|60.7|
|DeYO|55.1|
|ROID|62.0|
|SPA (ours)|67.1|
|SPA+ActMAD (ours)|68.2|
|SPA+ActMAD+Tent (ours)|68.5|
>Q4. Integration with foundation models.
We briefly test our method on the CLIP-ViT model and report results in Table E. With CLIP, our method still works, further suggesting the versatility of SPA.
Table E. Results with CLIP-ViT-B on ImageNet-R.
|Method|Source|TPT|C-TPT|ETA|SAR|DeYO|SPA (ours)|SPA+ETA (ours)|
|-|-|-|-|-|-|-|-|-|
|Acc (%)|74.0|77.1|76.0|76.9|75.6|76.6|77.2|78.2|
We thank you for appreciating our contributions. We sincerely hope our clarifications above have addressed your questions and can improve your opinion of our work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the new experiments, I hope the authors include the new experiments and the new baselines in the final version of the paper. Also it will be good to make a clear distinction between TTT methods that require specialized model training like TTT-MAE and methods such as Diffusion-TTA that do not. As it's unclear to me what falls under the TTT umbrella.
I have increased my rating.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer DdZn,
We are super glad and appreciate that you have increased your score! We will include all these new results and discussions in our revised paper.
The differences between TTT method (TTT-MAE), Diffusion-TTA, and our proposed SPA are: 1) TTT-MAE modifies the original model training process; 2) Diffusion-TTA does not alter the original training process, but it relies on an additional pre-trained diffusion model and jointly trains both the discriminative model (which requires adaptation) and the diffusion model during the testing phase; 3) our SPA does not need additional model and can be directly applied to a single (discriminative) model for adaptation. We will make these distinctions clearer in our revision.
||Modify original training process?|Rely on additional pre-trained (diffusion) model? |Model learned at test time|
|-|-|-|-|
|TTT-MAE|Yes|No|Discriminative model|
|Diffusion-TTA|No|Yes|Discriminative model and Diffusion model|
|SPA (Ours)|No|No|Discriminative model|
Thank you again for your invaluable review comments on improving the quality of our work!
Best,
The Authors | Summary: This paper proposes an image augmentation strategy utilizing the Fourier domain for randomly masking the low-frequency amplitude of an image. Further, it augments the image with noise injection to account for the lack of learning signals at high frequencies.
The paper reports experimental results on classification, segmentation, and 3D monocular detection tasks to show the effectiveness of the proposed approach.
Claims And Evidence: The claims made in the submission are supported by the improved performance in the experiments.
Methods And Evaluation Criteria: The benchmark datasets make sense, but it would be a fairer comparison if the ImageNet-C experiments were also run on ResNet-50, as in prior works such as CoTTA [1], PETAL [2], EcoTTA [3], etc., rather than only on ViT.
**References**
1. Wang, Qin, et al. "Continual test-time domain adaptation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
2. Brahma, Dhanajit, and Piyush Rai. "A probabilistic framework for lifelong test-time adaptation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
3. Song, Junha, et al. "Ecotta: Memory-efficient continual test-time adaptation via self-distilled regularization." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
4. Yang, Yanchao, and Stefano Soatto. "Fda: Fourier domain adaptation for semantic segmentation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.
Theoretical Claims: This work does not involve theoretical claims or proofs.
Experimental Designs Or Analyses: Refer to Questions and Weaknesses.
Supplementary Material: The supplementary material is not included, and thus, the code is not submitted. The Appendix elaborates on the datasets and provides some additional experiments as well as hyperparameter details.
Relation To Broader Scientific Literature: The key contributions of this paper can lead to better test-time adaptation in general. However, the novelty seems limited, as pointed out in "Essential References Not Discussed" and "Questions".
Essential References Not Discussed: A Fourier transform for domain adaptation that also involves masking has already been proposed by [1].
This limits the novelty of the proposed approach utilizing Fourier transform with masking for TTA, which is a setting related to domain adaptation.
**References**
1. Yang, Yanchao, and Stefano Soatto. "Fda: Fourier domain adaptation for semantic segmentation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.
Other Strengths And Weaknesses: **Strengths**
* Performing image augmentation by utilizing Fourier transform for TTA is an interesting idea
* Experimental gains demonstrate the effectiveness of the proposed approach
**Weaknesses**
* Fourier transform for domain adaptation that also involves masking has already been proposed by [4].
* Experiments on ImageNet-C are on ViT, not on ResNet-50 as in prior works such as CoTTA [1], PETAL [2], EcoTTA [3], etc.
**References**
1. Wang, Qin, et al. "Continual test-time domain adaptation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
2. Brahma, Dhanajit, and Piyush Rai. "A probabilistic framework for lifelong test-time adaptation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
3. Song, Junha, et al. "Ecotta: Memory-efficient continual test-time adaptation via self-distilled regularization." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
4. Yang, Yanchao, and Stefano Soatto. "Fda: Fourier domain adaptation for semantic segmentation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.
Other Comments Or Suggestions: Please include the reference and highlight the novelty or difference between [1] and the proposed approach in the paper (appendix if the space is limited).
**References**
1. Yang, Yanchao, and Stefano Soatto. "Fda: Fourier domain adaptation for semantic segmentation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.
Questions For Authors: 1. How is the noise factor γ hyperparameter tuned? Is the performance on the same test data monitored to get the best hyperparameters?
2. A Fourier transform for domain adaptation that also involves masking has already been proposed by [1]. This limits the novelty of the proposed approach utilizing Fourier transform with masking for TTA, which is a setting related to domain adaptation. Can the authors elaborate on how the proposed approach is a non-trivial extension of this prior work?
**References**
1. Yang, Yanchao, and Stefano Soatto. "Fda: Fourier domain adaptation for semantic segmentation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---

Rebuttal 1:
Rebuttal: We deeply appreciate your valuable feedback and constructive comments on improving the quality of our paper. We would like to address your questions below.
>Q1. More classification results on ResNet-50.
Thanks for your suggestion. We conduct additional experiments on ResNet-50 and compare with more SOTA approaches. As in the table below, the improvements observed on ResNet-50 are consistent with our experiments on ViT-Base. This further underscores SPA's effectiveness both as a standalone method and as a plug-and-play module to enhance existing methods.
|Method|Avg Acc (%) over 15 corruptions of ImageNet-C|
|-|:-:|
|NoAdapt (ResNet-50)|31.4|
|CoTTA|34.0|
|EATA|44.4|
|DeYO|45.9|
|ROID|46.8|
|CMF|48.1|
|SPA (ours)|49.2|
|SPA+Tent (ours)|50.9|
|SPA+EATA (ours)|52.5|
>Q2. Differences from FDA [A].
Thank you for pointing out this related work. While FDA also utilizes the Fourier augmentation, SPA's primary contributions lie in the overall design of a versatile TTA scheme. SPA comprises several key components, including an active, deterioration-driven self-bootstrapping scheme (distinct from BYOL), and geometry-preserving augmentations inspired by our Fourier analysis across domain shifts. All these innovations are non-trivial and collectively contribute to SPA’s effectiveness and versatility. Compared with FDA, which focuses on unsupervised domain adaptation, our SPA method mainly differs in the following aspects.
- **Different Augmentations:** FDA augments a source domain image by replacing the low-frequency spectrum of the source image with that from a target image. This augmentation relies on paired source and target data, which is infeasible in TTA as only a single unlabeled target sample is available at test time. Unlike FDA, SPA randomly masks the low-frequency spectrum and injects noise for a single target image, without requiring the source images.
- **Different Methods:** FDA uses Fourier augmentation to transfer the source image style to the target domain, creating a target-style labeled source dataset $D^{s\rightarrow t}$. It then jointly trains the model offline using both $D^{s\rightarrow t}$ and the original target dataset $D^{t}$ to mitigate domain shifts. In contrast, SPA targets the more general and challenging setting of fully online TTA. It establishes a versatile self-bootstrapping framework that performs active weak-to-strong learning from a deteriorated view to the original image, for each given target test sample.
- **Different Design Motivations:** The key motivation of FDA is style transfer ($s \rightarrow t$) for cross-domain training, by swapping spectra in the Fourier domain. Unlike FDA, the online unsupervised learning in SPA is highly unstable, and thus our key motivation is how to design effective deterioration/augmentation strategies to overcome this. To this end, we analyze how information power shifts across frequencies under domain shift and leverage these insights to design augmentations that consistently deliver informative signals across all frequencies for SPA, meanwhile maintaining stability.
We will include the above discussions in the revised paper.
[A] Fda: Fourier domain adaptation for semantic segmentation. CVPR 2020.
>Q3. How is the noise factor $\gamma$ hyperparameter tuned?
We did not carefully tune $\gamma$ on each individual test set as at test time we do not have ground truth labels. In main experiments, we briefly set $\gamma$ to 0.4 for all classification datasets, including ImageNet-C (15 corruptions, ImageNet-R/Sketch/A), and $\gamma=0.1$ for all fine-grained tasks of detection and segmentation, including KITTI-C (13 corruptions) and Cityscape-to-ACDC.
We also show the sensitivity of $\gamma$ in Figure 3 on a single corruption (test set) of ImageNet-C(Gaussian) and KITTI-C(Fog). From the results, $\gamma\in[0.1,0.5]$ works stably on classification, while the stable range of $\gamma$ on mono 3D detection is relatively narrower, with performance slightly declining when $\gamma>0.2$—though it still remains significantly better than without adaptation. This difference arises because, for classification, the task is at the image level and does not strictly require content invariance, allowing for a higher $\gamma$ to provide richer learning signals. In contrast, 3D detection involves dense predictions where high noise levels could significantly disrupt the original image content, making it challenging for our self-bootstrapping learning.
We thank you for appreciating our contributions. We sincerely hope our clarifications above have addressed your questions and can improve your opinion of our work.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response.
I do not have any further queries.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 61cM,
We are very happy to know that you have no further questions. Your invaluable review comments are immensely beneficial in improving the quality of our paper! Thank you!
Best,
The Authors | null | null | null | null | null | null |
---

TRACE Back from the Future: A Probabilistic Reasoning Approach to Controllable Language Generation
Decision: Accept (poster)

---

Summary: This paper introduces TRACE, a framework for controllable text generation that combines a distilled Hidden Markov Model (HMM) with lightweight classifiers to guide language model outputs toward desired attributes. The core idea is to compute the Expected Attribute Probability (EAP) tractably via HMM forward-backward inference, enabling efficient reweighting of next-token probabilities during decoding. TRACE aims to address the limitations of existing methods—such as computational expense, inflexibility, and reliance on sampling—by decoupling generation (via HMM) and control (via classifiers). The method is evaluated on detoxification, personalized language model adaptation, and compositional attribute generation. Empirical results demonstrate state-of-the-art detoxification performance, rapid adaptation to new attributes (e.g., 76 personalized roles trained in seconds), and seamless compositionality, with minimal decoding overhead (1.1× baseline latency).
Claims And Evidence: The claims are generally supported by empirical evidence.
However, the claim that HMM quality directly impacts performance (Section 5.1) lacks rigorous analysis. While Table 1 shows TRACE (↓HMM) underperforms, the paper does not quantify HMM expressiveness (e.g., hidden state size, KL divergence between HMM and LM distributions). Additionally, the paper does not explicitly define "↓HMM."
Methods And Evaluation Criteria: Methods: The HMM distillation + classifier framework is sensible for tractable EAP computation. However, the token-level factorized classifier (Equation 6) oversimplifies semantic attributes (e.g., style) and is not validated for non-factorizable tasks.
Evaluation: Human evaluation on toxicity and fluency may bring insights.
Theoretical Claims: The paper makes no formal theoretical claims. The HMM forward-backward algorithm and EAP derivation (Section 4.2) are standard but correctly applied.
Experimental Designs Or Analyses: Detoxification: Experiments on GPT-2 and Gemma-2B are thorough, but toxicity evaluation lacks diversity (e.g., no human judgments).
Personalization: Training classifiers on 300 samples per role (Section 5.2) may risk overfitting, and the results are only qualitatively validated (Table 2).
Compositionality: The independence assumption for combining attributes (Section 5.4) is untested for correlated and anti-correlated attributes.
Supplementary Material: NA
Relation To Broader Scientific Literature: TRACE builds on:
- HMM-based control: Extends Ctrl-G from lexical to semantic attributes.
- Decoding-time control: Improves upon FUDGE and DExperts by replacing neural discriminators with tractable HMMs.
- Compositionality: Similar to energy-based methods (e.g., COLD) but avoids MCMC sampling.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: While mostly well-structured, the decoding-time transformation (Section 5.1) is under-explained.
Other Comments Or Suggestions: Suggestion: Include ablation studies on HMM Quality.
Questions For Authors: Can TRACE handle non-factorizable attributes (e.g., coherence)? If not, what modifications are needed?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---

Rebuttal 1:
Rebuttal: Thank you for the insightful feedback.
```
Detoxification: Experiments on GPT-2 and Gemma-2B are thorough, but toxicity evaluation lacks diversity (e.g., no human judgments).
```
We ran LLM-as-judge evaluations with GPT-4 to evaluate the toxicity, fluency, and diversity of the generated continuations on a scale of 1-5 (where 1 is toxic/nonfluent/nondiverse and 5 nontoxic/fluent/diverse), and took the average over the dataset. The results for the Gemma-2B based methods can be found in [Table](https://anonymous.4open.science/r/TRACERebuttal-97A4/gpt4_evaluations.pdf); these are consistent with the conclusions from the automated metrics.
```
Personalization: Training classifiers on 300 samples per role (Section 5.2) may risk overfitting, and the results are only qualitatively validated (Table 2).
```
We agree that training classifiers with only 300 samples per role may risk overfitting, but TRACE’s use of a factorized classifier may be less susceptible compared to neural classifiers. Following your suggestion, we added a quantitative evaluation of personalization performance against a prompting baseline, using the prompt “You are a role-playing assistant trained to embody characters with accuracy and authenticity. In this instance, you will assume the persona of {role_name}. Answer the question: {question}”. TRACE performs better in role quality as shown in [Fig](https://anonymous.4open.science/r/TRACERebuttal-97A4/role_quality.png).
```
Compositionality: The independence assumption for combining attributes (Section 5.4) is untested for correlated and anti-correlated attributes.
```
The independence assumption says that two attributes are independent conditional on the full text. This is fundamentally a modeling assumption regarding the uncertainty of the classifier given by the joint distribution over attributes given text. Intuitively, it says that “I believe this text is toxic with probability 0.7, and political with probability 0.4, and so I believe it is toxic and political with probability 0.28”. In order to test the hypothesis empirically, one would need a ground-truth probabilistic classifier that models the joint distribution over attributes given text, which does not typically exist.
In particular, we do not require that two attributes are independent unconditionally (which would be violated with correlated or anti-correlated attributes). We demonstrate this empirically with the politics and nontoxicity attributes, where TRACE is able to enforce both politics and nontoxicity compositionally to the same level as each individually, despite the fact that these attributes appear to be anticorrelated.
```
While mostly well-structured, the decoding-time transformation (Section 5.1) is under-explained.
```
We will include a more detailed description to the revised version. We would be happy to answer any questions the reviewer may have about the decoding-time transformation.
```
Can TRACE handle non-factorizable attributes (e.g., coherence)? If not, what modifications are needed?
```
Please see our response to Reviewer 7o4g regarding the empirical performance on non-factorizable attributes. In summary, all attributes are to some extent non-factorizable, but TRACE obtains good performance even for (mildly) non-factorizable attributes.
Though TRACE cannot currently handle heavily non-factorizable attributes except by approximation, we believe that the methodology of TRACE could be modified to directly capture non-factorized attributes. The key requirement is for the computation of EAP to be tractable with respect to a HMM. One extension could be to utilize a (weighted) sum of factorized classifiers, and then utilize linearity of expectation to compute EAP, though this incurs an increase in computational cost. More generally, the literature on tractable probabilistic models [1] describes more complex functions whose expectation can be computed efficiently with respect to a tractable distribution such as an HMM. Given the strong empirical performance of TRACE, we believe that investigating compute-efficient methods of relaxing the factorization assumption offers highly promising avenues for future work.
[1] Khosravi et al. “On Tractable Computation of Expected Predictions” NeurIPS 2019
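To make the linearity-of-expectation point concrete, here is a minimal sketch (our own toy code, not TRACE's implementation; all sizes and parameters are made up) showing that a factorized score $f(x_{1:n}) = \sum_t g(x_t)$ has a tractable expectation under an HMM, verified against brute-force enumeration:

```python
import numpy as np
from itertools import product

# Toy sketch: by linearity of expectation, E[f] = sum_t sum_v P(x_t = v) * g(v),
# and the token marginals P(x_t = v) follow from the hidden-state marginals.
rng = np.random.default_rng(0)
H, V, n = 3, 5, 4  # hidden states, vocabulary size, sequence length

pi = rng.dirichlet(np.ones(H))         # initial hidden-state distribution
A = rng.dirichlet(np.ones(H), size=H)  # transition matrix (rows sum to 1)
B = rng.dirichlet(np.ones(V), size=H)  # emission matrix (rows sum to 1)
g = rng.normal(size=V)                 # per-token attribute scores

# Tractable computation: O(n * H * V)
expected_f = 0.0
state = pi
for _ in range(n):
    token_marginal = state @ B    # P(x_t = v)
    expected_f += token_marginal @ g
    state = state @ A             # hidden-state marginal at the next step

# Brute-force check over all V**n sequences (feasible only at toy scale)
brute = 0.0
for seq in product(range(V), repeat=n):
    alpha = pi * B[:, seq[0]]     # forward pass for P(x_{1:n})
    for x in seq[1:]:
        alpha = (alpha @ A) * B[:, x]
    brute += alpha.sum() * sum(g[x] for x in seq)

print(np.isclose(expected_f, brute))  # True
```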
```
Suggestion: Include ablation studies on HMM Quality
```
We ran a study analysing the performance of TRACE at different points in the training/distillation process to evaluate the effect of HMM quality on toxicity reduction [Fig](https://anonymous.4open.science/r/TRACERebuttal-97A4/hmm_quality.pdf); the results show that the toxicity reduction improves with HMM log-likelihood. | Summary: The paper proposes a new method for controlled language modeling. Motivated by the compute and data inefficiency of previous solutions, the paper introduces a new method called TRACE, which uses conditional probabilities from an HMM to adjust token probabilities such that the text demonstrates desired attributes. The corresponding attribute classifier uses log-MSE on probabilities obtained through an oracle together with a probability transformation technique to make the distribution more likely to be bimodal. This technique can also be applied during decoding to improve results.
The method is evaluated on detoxification, character role play, and the generation of both political and non-toxic text.
Claims And Evidence: Claim #1: the method reaches superior performance to the baselines. This is supported by Table 1, although the training details of the baselines are unclear.
Claim #2: the method can quickly adapt to new attributes. This is demonstrated on a role-playing task, where the LLM is adapted to speak like a specific character based on 200 texts of that character. This method is not compared to other baselines, so it is unclear how strong the results are.
Claim #3: the method is more compute efficient. Table 4 and 5 support this clearly.
Claim #4: TRACE can combine multiple attributes easily. This is somewhat supported by Table 3. However, again there is no comparison to a baseline so it is unclear how much of an achievement this is.
Methods And Evaluation Criteria: The proposed method is reasonable and the evaluation benchmarks are solid. Obviously, there could always be more evaluations, e.g. on other types of attributes, but the amount of support in this paper is satisfactory.
As noted previously, the details of the experiments are insufficient.
Theoretical Claims: I checked the derivation of the method and found no mistake.
Experimental Designs Or Analyses: The experimental design is probably correct, however, there are too little details given. See question section.
Supplementary Material: Yes. Appendix A, despite its simplicity, seems central to the success of the method and should be moved to the paper.
Appendix B is insufficient - see question section.
Relation To Broader Scientific Literature: The paper proposes a method for controlled generation that is simple, efficient, and effective. It is related relatively well to the existing literature on this topic. Baseline evaluations are missing on two tasks, however.
Essential References Not Discussed: None
Other Strengths And Weaknesses: The paper lacks in clarity regarding the experimental results. See questions and comments below.
Other Comments Or Suggestions: * line 250: detofixication instead of detoxification
* Is log-MSE a standard method in this case? Can you cite a reference?
* The caption of Table 1 contains no explanation of why some fluency measures are struck through. This is only in the text.
* Table 4: should probably say > 1 day for GeDi
* training time transformation seems so important it shouldn't be in Appendix A but rather be in the main body of the paper
Questions For Authors: * How are the baseline numbers obtained that are listed in Table 1? If you got them from the respective papers, please clarify so. If you trained them yourself, what did you do to ensure that the differences in performance are not due to differences in evaluation and/or details of the implementation that are not central to the method?
* How do the baselines perform on character role-playing and political and nontoxic text generation?
* How is character probability in Figure 3 measured? I cannot find any information in the text or the appendix.
* The impact of HMM quality is not well explained. What is the difference between TRACE and TRACE (HMM)? Don't all TRACE results use an HMM?
* Since this seems to be a crucial aspect of your method, what is the performance of the method without training-time transformation?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the detailed feedback.
```
Is log-MSE a standard method in this case? Can you cite a reference?
```
The concept of log-MSE is somewhat analogous to mean squared logarithmic error in the context of regression. It shares the intuition of penalizing one type of misprediction more heavily than the other. In our case, if the classifier predicts 0.5, then under the cross-entropy loss we have the same penalty whether the true probability is 0.01 or 0.99. On the other hand, under log-MSE the penalty is $|\ln(0.01) - \ln(0.5)|^2 \approx 15.30$ versus $|\ln(0.99) - \ln(0.5)|^2 \approx 0.4666$, meaning the model is incentivized to conservatively skew towards nontoxicity. We are not aware of prior work specifically in the context of controllable generation / attribute classification which uses log-MSE.
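A minimal sketch (ours, not the authors' training code) reproducing the asymmetry of the two losses for a prediction of 0.5:

```python
import math

def log_mse(p_true, p_pred):
    return (math.log(p_true) - math.log(p_pred)) ** 2

def cross_entropy(p_true, p_pred):
    # expected negative log-likelihood when the label is 1 with probability p_true
    return -(p_true * math.log(p_pred) + (1 - p_true) * math.log(1 - p_pred))

# Cross-entropy gives the same penalty whether the true probability is 0.01 or 0.99 ...
print(cross_entropy(0.01, 0.5), cross_entropy(0.99, 0.5))  # both ≈ 0.693
# ... while log-MSE penalizes the same miss much more when the true probability is small:
print(log_mse(0.01, 0.5), log_mse(0.99, 0.5))  # ≈ 15.30 vs ≈ 0.4666
```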
```
How are the baseline numbers obtained that are listed in Table 1? If you got them from the respective numbers, clarify so.
```
The baseline results for PPLM, DAPT, GeDi, and DExperts were obtained from the DExperts paper; the results for FUDGE and MuCoLa were obtained from the MuCoLa paper; and the results for PPO and Quark were taken from the Quark paper. All of these follow the same experimental setup of DExperts. Regarding the DPO baseline that we trained and evaluated for GPT2-large and Gemma (see [Table](https://anonymous.4open.science/r/TRACERebuttal-97A4/detoxification.pdf)), we followed the evaluation setup of DExperts (that we also use for TRACE) and used the reference implementation for training, replacing their GPT2-medium model with Gemma-2B.
```
How do the baselines perform on character role-playing and political and nontoxic text generation?
```
The main purpose of these experiments is to provide evidence for the flexibility and efficiency of TRACE, with the empirical observations that (i) TRACE requires minimal training/decoding overhead for new attributes compared to baselines (Figure 3, Table 4, Table 5); and (ii) TRACE handles composition seamlessly (e.g. Detox, Pol and Detox+Pol rows of Table 3) with no additional overhead. In particular, the baselines would have difficulty adapting quickly to tens of new attributes or handling compositional attributes, requiring significant additional computational overhead for training and decoding (e.g. GeDi would require training class-conditional language models for combinations of attributes, and all of the role-playing characters, which takes 5 hours for each new attribute). This precisely demonstrates the unique position of TRACE within the literature.
We added a comparison of TRACE and prompting the LLM with “You are a role-playing assistant trained to embody characters with accuracy and authenticity. In this instance, you will assume the persona of {role_name}. Answer the question: {question}”. (See [Fig](https://anonymous.4open.science/r/TRACERebuttal-97A4/role_quality.png)). TRACE outperforms prompting in role quality.
```
How is character probability in Figure 3 measured?
```
The character probability is measured by the linear classifier we trained on the characters; this is because we do not have a ground truth model/metric available.
```
The impact of HMM quality: What is the difference between TRACE and TRACE (\downarrow HMM)? Don't all TRACE results use an HMM?
```
The TRACE (\downarrow HMM) refers to the results of TRACE when using less data to train the HMM (500K vs 5M examples); the results show that a stronger HMM can provide stronger guidance with the same classifier to reduce toxicity further. Inspired by the reviewers’ comment, we ran a study analysing the performance of TRACE at different points in the training/distillation process to evaluate the effect of HMM quality on toxicity reduction ([Fig](https://anonymous.4open.science/r/TRACERebuttal-97A4/hmm_quality.pdf)); the results show that the toxicity reduction improves with HMM log-likelihood.
```
Since this seems to be a crucial aspect of your method, what is the performance of the method without training-time transformation?
```
Without the training-time transformation, TRACE achieves a max toxicity of 0.336, average toxicity of 0.2, fluency of 33.87, and diversity of 0.86. As the reviewer notes, the transformation is crucial to produce a more bimodal distribution of scores, which also leads to significantly improved performance. This is necessary because the scores from the Perspective API do not align well with the inductive bias needed for controlled generation. Without this transformation, the linear classifier predominantly defaults to labeling text as nontoxic.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. It made me somewhat more confident that this is a good paper, assuming that the responses will be reflected in the CR version of this paper. I raised my score to reflect that. | Summary: This paper proposes TRACE, a new algorithm for controllable generation that uses a hidden markov model and a small classifier to "look ahead" and reweight the language model's token probabilities. They compare to many other controllable generation methods and report promising results on two types of controllable generation: controlling for "toxic" text as well as "political" text.
Claims And Evidence: Within domain, the benchmarking results are very compelling, as well as the training and inference time measurements. However, I find the advertisement of this method as a general improvement for controllable generation to be a large overclaim, since the method relies on token-level classifier predictions to determine whether text satisfies the control task. This method seems specifically designed to work for sentiment-like tasks such as politicality and toxicality, for which bag-of-words classifiers are sufficient. They did not test on other common controllability tasks, for example poetry generation, because those tasks require sequence-level reasoning.
Methods And Evaluation Criteria: As noted several times, there are many other controllable text generation tests that are not shown here. A classic place to start would be with the three tasks from FUDGE (https://arxiv.org/abs/2104.05218): couplet completion, topic control, and formality.
Theoretical Claims: I checked the mathematical derivations in Section 3 and Section 4, which seem correct.
Experimental Designs Or Analyses: The experiments seem correct, and in particular the inference-time analysis is useful, as well as the tradeoff measurement between fluency and control satisfaction. As noted multiple times, this work seems to be missing important benchmarks from the wider controllable generation literature.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper contributes a new method to controllable text generation and compares to many other controllable generation methods. Notably absent is comparison across the same tasks.
Essential References Not Discussed: The list of benchmarks in Table 1 is nearly comprehensive, but seems to be missing COLD (https://arxiv.org/abs/2202.11705), which is mentioned in the related work. Is there a reason why this comparison was omitted?
Other Strengths And Weaknesses: Strengths:
The method relies on distilling a GPT-like transformer LM to a small hidden markov model, which is interesting and novel.
Weaknesses:
- The mathematical formulation relies on a token-level classifier p(s | x_{1 ...n}) that's simply the sum of token-level predictions (Section 4.3). This seems like a huge problem as almost all control conditions of interest can not be modeled at the token level and require some level of contextual dependence. While interesting, this method is almost specifically designed for toxicity or sentiment-like tasks which are famously solvable with bag-of-words approaches.
Other Comments Or Suggestions: - More details about the HMM would be useful
- Table 4 caption and Table 5 caption should explain more about the experiments – what datasets and context lengths were used, things like that.
Questions For Authors: Is there a reason why you only evaluate on these specific domains (toxicity + political text)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the constructive feedback.
```
Why evaluate on these specific domains (toxicity + political text)?
```
In the related literature, papers have evaluated on many different tasks rather than (a set of) fixed common benchmarks. We selected toxicity evaluation as it is a widely recognized benchmark in the literature (e.g., DExperts). Additional experiments, including personalized LLM and political compositionality, were included to showcase TRACE’s efficiency and flexibility relative to computationally intensive methods.
```
Additional controllable generation tasks (topic control, formality, couplet completion)
```
Following your suggestion, we performed additional topic control experiments, comparing TRACE to the base model and GeDi (https://anonymous.4open.science/r/TRACERebuttal-97A4/topic_control.pdf). TRACE significantly improves over the baseline and outperforms GeDi on two of four topics, demonstrating effectiveness even with mild contextual dependencies. Formality is primarily considered a translation task rather than a generation task and thus was not evaluated. Couplet completion, which heavily relies on contextual dependencies, is discussed below.
```
Limitations of token-level classifiers: …almost all control condition…require some level of contextual dependence. While interesting, this method is almost specifically designed for toxicity or sentiment-like tasks which are famously solvable with bag-of-words approaches.
```
The reviewer is correct in saying that linear/factorized classifiers cannot perfectly capture all attributes. However, this is not a fundamental problem for our approach, for three main reasons:
1. Even when an attribute is nonlinear, exactly conditioning a linear classifier at generation/decoding time is effective - which is what we show in the toxicity and topic control experiments in which we are competitive with or beat existing approaches;
2. Our method has virtually no training or inference-time overhead, making it the only option when flexibility and latency are critical.
3. For tasks with heavy contextual dependence, one can still trade compute for accuracy.
To validate point 1, we ran experiments investigating the difference in classification performance between our factorized classifiers and neural classifiers for these tasks (https://anonymous.4open.science/r/TRACERebuttal-97A4/nonfactorizable.pdf). On all tasks (including toxicity and politics), there is a gap in classification performance (as measured by cross-entropy with oracle scores), illustrating that *none of these attributes are completely factorizable*. Despite this, TRACE achieves superior performance on these mildly non-factorizable tasks. The reason is that conditional generation is a computationally hard problem, even for a factorized attribute. TRACE addresses this computational hardness of exact conditional generation by employing a lightweight, tractable approximation (HMM + linear classifier), and in doing so can outperform approaches that rely on neural classifiers but cannot effectively approximate expected attribute probability (EAP).
For point 3, for tasks/attributes with heavy contextual dependence, e.g. couplet completion, we can employ TRACE to condition on the “linear part” of the attribute, and filter the generated outputs post-hoc using a more powerful classifier. Alternatively, we believe there is scope to relax the factorized classifier assumption. The key requirement is for the computation of EAP to be tractable with respect to a HMM. One extension could be to use a (weighted) sum of factorized classifiers, and then use linearity of expectation to compute EAP, though this incurs an increase in computational cost. More generally, the literature on tractable probabilistic models [1] describes more complex functions whose expectation can be computed efficiently with respect to a tractable distribution such as an HMM. Given the strong empirical performance of TRACE, we believe that investigating compute-efficient methods of relaxing the factorization assumption offers highly promising avenues for future work.
```
COLD appears in Related Work, but not detoxification results table?
```
We chose to report results from papers testing on the toxicity reduction task for RealToxicityPrompts following the widely-used evaluation setup of DExperts; however COLD does not have any results on this task.
```
More details about the HMM
```
Thanks, we will add these to the paper. The HMM used in the experiments has 4096 hidden states with around 223M parameters. We will add details of the distillation procedure to the Appendix.
```
More experiment details on Table 4 and Table 5 captions – what datasets and context lengths were used.
```
We will add these details to the captions. The statistics for TRACE were gathered through the toxicity (RealToxicityPrompts) and personalization experiments.
[1] Khosravi et al. “On Tractable Computation of Expected Predictions” NeurIPS 2019 | Summary: Large language models (LLMs) are increasingly being deployed in real-world applications, and the need to control their outputs to align with human values is becoming more important. However, current autoregressive models struggle when attempting to control their generations. This paper proposes a technique called TRACE (Tractable Probabilistic Reasoning for Adaptable Controllable Generation) that can manage LLMs' outputs using a Hidden Markov Model (HMM). The authors' framework is tractable and lightweight compared to other proposed techniques in the literature. They evaluate their proposed method on three tasks: detoxification, personalized LLMs, and compositional attributes, and demonstrate that their approach outperforms baseline algorithms in the literature.
Claims And Evidence: The paper claims that RLHF is expensive and risks degrading the fluency or diversity of generated text. The authors conducted experiments to support their claims, but Table 1 contains several numbers struck out with asterisks, and it's unclear what that signifies. Additionally, the results for Gemma-2B with other algorithms are missing, which means there is insufficient evidence to support this claim. In particular, GPT-2 is known to be an inferior model compared to Gemma, Llama, and more recent LLMs, so experimenting with all algorithms under a modern LLM would provide better evidence. If the baseline algorithm numbers were taken from previous papers, then these results might be misleading, especially since the infrastructure supporting algorithms has changed tremendously.
Methods And Evaluation Criteria: The paper evaluated their proposed algorithm on three datasets: detoxification, personalized LLMs, and compositional attributes. Evaluating on three popular controllable generation tasks addresses the problem the paper is trying to tackle.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The authors' experimental design is generally sound; however, I am currently unclear about the meaning of the lines through the numbers in Table 1. Additionally, I am uncertain whether the authors conducted all the experiments themselves or whether the numbers were sourced from previous papers. The results also lack qualitative measures, such as human or AI model evaluations. The metrics presented capture only one aspect of the problem. Using perplexity to assess fluency is insufficient for evaluating the quality of the generated text. Furthermore, it is not clear how toxicity was scored; if an external model provides the scores, there is no prompt template provided in the appendix.
Supplementary Material: No
Relation To Broader Scientific Literature: The authors are addressing a challenging problem in the scientific literature. In particular, the ability to control generative models has been a difficult issue that researchers have been trying to solve for quite some time. The authors observe that, unlike current solutions, HMM can provide a more tractable and lightweight solution with strong practical performance.
Essential References Not Discussed: - Is reinforcement learning (not) for natural language processing?: Benchmarks, baselines, and building blocks for natural language policy optimization by Ramamurthy et al. 2023
- Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space by Nguyen et al 2016
- Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning by Qin et al 2020
- Technical Report: Auxiliary Tuning and its Application to Conditional Text Generation by Zeldes et al 2020
Other Strengths And Weaknesses: No
Other Comments Or Suggestions: None
Questions For Authors: a) How does the proposed approach compare to Zeldes et al., 2020?
b) Why is equation 2 generally intractable? If s represents text, then equation 2 essentially corresponds to the first term of equation 3, which is simply the language model's next token distribution.
c) How does the most naive baseline of simply prompting the model perform? Most personalization papers have this baseline incorporated in their results. (see Jang et al. 2023)
d) The text below Equation 5 says conditional independence no longer holds when a generic attribute s is introduced, as it depends on all of x. However, in Equation 6, a conditional independence assumption is made in order to factorize over all of x. I am confused about why this factorization is considered a fair assumption when conditional independence was not.
e) The equation on line 176, p(s|x_{1:n}), does not make sense because x_{1:n} is not passed into the function.
f) Did you train the baseline algorithms in the GPT2 row yourself? If not, what papers did you get the numbers from?
g) How do PPO, Quark, and DPO perform with Gemma-2B?
h) How does the model perform when conducting GPT-4 evaluations for text quality and toxicity qualitative analysis? Currently, you are using perplexity as a metric, but that measure is known to be quite misleading. Similarly, output perplexity has its own set of drawbacks as well.
- Technical Report: Auxiliary Tuning and its Application to Conditional Text Generation by Zeldes et al. 2020
- Personalized Soups: Personalized Large Language Model Alignment Via Post-Hoc Parameter Merging by Jang et al. 2023
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the detailed review.
```
Table 1 contains several numbers struck out with asterisks, and it's unclear what that signifies.
```
We explained this in Lines 261-274 (left) in the main text but forgot to add it to the Table caption - we will do this in the revision.
```
a) Comparison to Zeldes et al. (2020).
```
Zeldes et al. (2020) combines the logits of a base LM $p(x_t | x_{< t})$ with the logits of an auxiliary LM that models the conditional next token distribution $p(x_t|x_{<t}, \alpha)$. The distinguishing feature of TRACE (also compared to approaches such as GeDi) is that, instead of training an auxiliary neural model for each attribute, we simply use one HMM as an approximation to the base LM, and condition it on the linear classifier of any attribute to guide the base LM’s generation.
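Both families of methods share the same decoding-time pattern, which can be schematized as follows (our sketch, with made-up probabilities, not the authors' code; the methods differ in where the guidance term comes from: a per-attribute auxiliary LM for Zeldes et al./GeDi, versus a single HMM conditioned on a linear attribute classifier for TRACE):

```python
import numpy as np

# Base-LM next-token probabilities are reweighted by a per-token guidance
# term and renormalized to produce the guided next-token distribution.
def guided_next_token(base_probs, guidance):
    p = base_probs * guidance
    return p / p.sum()

base = np.array([0.5, 0.3, 0.2])   # hypothetical P(x_t | x_{<t}) over a 3-token vocab
guide = np.array([0.1, 0.9, 0.9])  # hypothetical per-token attribute guidance
print(guided_next_token(base, guide))  # ≈ [0.1, 0.54, 0.36]
```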
```
b) Why is equation 2 intractable? If s represents text, equation 2 corresponds to the first term of equation 3, which is simply the language model's next token distribution.
```
In Equation 2, $s$ represents an attribute, rather than a text prefix. This attribute is a function of the entire text $x_{1:n}$, and as illustrated in the following equation computing this requires an exponential sum over future continuations.
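A back-of-the-envelope count (our illustration, with an assumed vocabulary size) of why that sum is intractable:

```python
# Conditioning an attribute on all future continuations requires marginalizing
# over V**k sequences for vocabulary size V and k remaining tokens.
V = 50_000  # typical subword vocabulary size (an assumption for illustration)
for k in (1, 2, 4, 8):
    print(k, V ** k)
# Already at k = 4 there are 6.25e18 continuations, which is why the exact
# expectation must be approximated, e.g. with a tractable HMM surrogate.
```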
```
c) Naive prompting baseline?
```
We added a comparison of TRACE and prompting the LLM with “You are a role-playing assistant trained to embody characters with accuracy and authenticity. In this instance, you will assume the persona of {role_name}. Answer the question: {question}”. https://anonymous.4open.science/r/TRACERebuttal-97A4/role_quality.png
```
d) The text below Equation 5 says, conditional independence no longer holds… I am confused about why this factorization is considered a fair assumption when conditional independence was not.
```
We are not saying that conditional independence is an unreasonable assumption, but rather deriving a condition (factorized classifier) under which it holds, and noting that it does not hold in general.
```
e) The equation on line 176, p(s|x_{1:n}), does not make sense because x_{1:n} is not passed into the function.
```
By $x_{1:n}$ we mean the entire text, which is the concatenation of the prefix $x_{<t}$ and current token $x_t$ (which are passed in), and the future continuation $x_{>t}$ (which appears in the summation). We will revise to make this clearer.
```
f) Source of GPT2 baseline numbers.
```
The baseline results come from:
- PPLM, DAPT, GeDi, DExperts: from DExperts paper.
- FUDGE, MuCoLa: from MuCoLa paper.
- PPO, Quark: from Quark paper.
All of these follow the same experimental setup of DExperts. For fair comparison, we fine-tuned GPT2-large using DPO (official implementation of Lee et al. [1]) with the same DExperts setup, resulting in avg max toxicity 0.180, toxicity prob. 0.03, fluency 21.59, dist-2 diversity 0.76, and dist-3 diversity 0.78.
```
g) How do PPO, Quark, and DPO perform with Gemma-2B?
```
We trained DPO on Gemma-2B using [1]'s official code; results are at https://anonymous.4open.science/r/TRACERebuttal-97A4/detoxification.pdf. Consistent with our GPT-2 experiments, DPO improves fluency over the base model while diversity is significantly reduced. Meanwhile, TRACE achieves superior toxicity reduction while largely maintaining Gemma's original fluency and diversity. Unfortunately, we had trouble adapting PPO and Quark to Gemma during the rebuttal period due to GPT-2 specific codebases.
```
h) GPT-4 evaluation for text quality and toxicity.
```
We utilize perplexity to evaluate the fluency of text following the common practice among the baselines. While perplexity may not align perfectly with LLM-as-a-judge or human evaluations of quality, it carries important and distinct scientific value in measuring the deviation from the text distribution of the base model. For instance, the non-RL baselines use the same decoding strategies (top-$p$ sampling), enabling direct comparison of the different approaches. Meanwhile, the RL methods modify the base model which significantly impacts the distribution as becomes apparent from the fluency (perplexity) and diversity metrics, but may not be apparent from an LLM-as-a-judge or human evaluation. For toxicity evaluation, we use the Perspective API following existing practice in previous works.
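For concreteness, a minimal sketch (ours, not the evaluation code used in the paper) of per-sequence perplexity as a fluency metric; the per-token log-probabilities below are hypothetical:

```python
import math

# Perplexity is the exponentiated average negative log-probability that an
# evaluation LM assigns to the generated tokens.
def perplexity(token_logprobs):
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# A model that is uniform over 4 choices at every step has perplexity 4.
print(round(perplexity([math.log(0.25)] * 8), 6))  # 4.0
```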
We also conducted evaluations with GPT-4 as LLM-as-a-judge to evaluate the toxicity, fluency, and diversity of the generated continuations on a scale of 1-5 (where 1 is toxic/nonfluent/nondiverse and 5 is nontoxic/fluent/diverse), averaged over the dataset. The results (https://anonymous.4open.science/r/TRACERebuttal-97A4/gpt4_evaluations.pdf) are consistent with the evaluations given by the automated metrics. In particular, both TRACE and DPO effectively reduce toxicity, but TRACE maintains fluency and diversity similar to Gemma's generations, while DPO distorts the distribution.
[1] Lee et al. “A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity” ICML 2024 | null | null | null | null | null | null |
Deep Bayesian Filter for Bayes-Faithful Data Assimilation | Accept (poster) | Summary: In order to address the challenge of nonlinear filtering, the present work proposes _Deep Bayesian Filter (DBF)_, which leverages a learnable inverse observation operator (IOO) to transform the problem into a linear filtering problem, where one can apply standard Kalman methodologies. The cases of linear and nonlinear dynamical systems are treated differently: in the linear dynamics setting, having access to the IOO directly enables linear updates in physical space. In the nonlinear setting, an extra mapping to a high-dimensional latent space is required, where the hidden dynamics become approximately linear. Training is performed once offline in a manner similar to VAEs. The model is tested on various benchmark dynamical systems, both linear and nonlinear, to demonstrate the robustness and performance gain achieved by DBF over classical filters.
## Update after rebuttal
The authors have adequately addressed my main concern regarding the scalability of the algorithm during the rebuttal period. Therefore, I have increased my evaluation of the paper from a 3 to a 4.
Claims And Evidence: The claims in this paper are supported by clear and convincing evidence. In particular, they identify examples where the proposed model, DBF, significantly outperforms both classical DA methods and Dynamic VAEs. Results are presented clearly and demonstrate the benefits of using DBF.
Methods And Evaluation Criteria: The datasets used to evaluate the method are appropriate for the application at hand. The authors mainly evaluate the RMSE, which, however, can be limiting as the ability to quantify uncertainties is also crucial in data assimilation. Regarding uncertainty quantification, only the Jeffreys divergence of normalised errors with the standard Gaussian is presented in the example of the nonlinear pendulum. However, I believe that it is also worth evaluating other UQ metrics, such as the negative log-likelihood and the CRPS, which are commonly used metrics to compare the UQ ability of DA methods.
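For reference, a minimal sketch (our addition, not from the paper under review) of the standard closed-form CRPS for a Gaussian predictive distribution $\mathcal{N}(\mu, \sigma^2)$, one of the UQ metrics suggested above:

```python
import math

# CRPS(N(mu, sigma^2), x) = sigma * ( z*(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi) ),
# where z = (x - mu)/sigma and Phi, phi are the standard normal CDF and PDF.
def crps_gaussian(x, mu, sigma):
    z = (x - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return sigma * (z * (2.0 * Phi - 1.0) + 2.0 * phi - 1.0 / math.sqrt(math.pi))

# For fixed sigma, CRPS is smallest when the observation hits the mean:
print(crps_gaussian(0.0, 0.0, 1.0))  # ≈ 0.2337, i.e. (sqrt(2) - 1)/sqrt(pi)
```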
Theoretical Claims: The main contribution of the paper is in the model proposal and empirical evidence of it working in certain settings where classical DA methods tend to struggle. There are no proofs to check.
Experimental Designs Or Analyses: The experimental setup is sound, evaluating the methods and baselines on commonly used benchmarks. The analysis of the experiments is also reasonable, and the arguments made are convincing. However, I would again point out the lack of comparison in the ability of the proposed model to perform UQ. In addition, some ablation study on the various components of DBF would be useful to have as well. For example, an ablation on the number of hidden state dimensions (in the case of nonlinear dynamics), a comparison of the two training strategies, etc... I would also think that having a discussion on what happens when you observe only a fraction of the components of Lorenz 96 model would be valuable too, as this is a common setting in DA (since it is rare in practice to observe the full state).
Supplementary Material: I have consulted Appendices B and C for some details of the experiments, which I did not fully grasp after reading the main body. I found that the appendices cover sufficient details of the experiments, however, details of the baseline models are missing.
Relation To Broader Scientific Literature: The present work builds on a body of literature that aims to augment data assimilation/stochastic filtering with deep learning methodologies. However, in contrast to previous efforts like KalmanNet or methods based on dynamical VAEs, which use RNNs to model the latent dynamics, the present work uses explicit updates by assuming linearity of dynamics in the latent space. This helps prevent difficulties in training of RNN-based methods arising from problems such as vanishing gradient and Monte-Carlo approximation of the ELBO.
Essential References Not Discussed: The authors cover related literature in Section 2.6 fairly extensively. I would also point out that there is a line of works that has been coming out recently that use generative modelling techniques for data assimilation. For example,
- A Score-based Nonlinear Filter for Data Assimilation, Bao et al. (2023)
- DiffDA: A Diffusion Model for Weather-scale Data Assimilation, Huang et al. (2024)
- Score-based Data Assimilation, Rozet and Louppe (2024)
Other Strengths And Weaknesses: __Strengths:__
- The presentation and organization of the paper are well thought-out, making it easy to follow.
- The experiments are fairly extensive, covering a range of examples where DBF can be beneficial over classical filters.
- The "Bayes-faithful" property of DBF is appealing, which is missing in most deep learning-based approaches. However, I also wonder how "faithful" it is in terms of its ability to approximate the true non-Gaussian posterior.
__Weaknesses:__
I think the biggest flaw of the proposed method is in its scalability to high-dimensional settings, where computation of the filtered mean and covariance (1)--(2) becomes intractable due to the cubic/quadratic scaling (with respect to dimension) in computational cost/memory. This is also the problem with Kalman filters in high dimensions, which is why ensemble methods like EnKF and ETKF were proposed in the first place. Especially when considering SSMs with nonlinear dynamics, DBF requires lifting the dynamics to an even higher dimensional space, where applying (4)--(5) will become too expensive to run, or otherwise one has to settle for latent dynamics in a lower dimensional space, where results may not be very accurate. Hence, the applicability of the method lies in low-to-moderate dimensional systems, in which case other classical Kalman-based methods like the Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) can be applied. In particular, the UKF is known to perform well and robustly in such settings, hence, I believe it is only fair to compare with this, too.
Other Comments Or Suggestions: - I believe the $r_{true}(h_t | o_t)$ at the end of page 2 is supposed to be $r_{true}(z_t | o_t)$.
- I think the "rightmost table" at the end of page 5 is referring to Table 1, which is not "rightmost".
- It will be better to add standard deviations in the results in Table 3, as done in the other tables.
Questions For Authors: - Can the DBF framework deal with partial observations (i.e., only observing a few components of the L96) and observation operators that change with time (e.g., observing different components at each time step)? I believe these are important cases to consider as they occur commonly in practice. However, especially in the case where the observation operator changes with time, this does not seem straightforward, as one needs to amortize the inverse observation operator across various observation operators.
- In the setting of nonlinear dynamics, is there a general criterion for choosing the latent space dimension? i.e., how do we know a priori what latent space dimension should be chosen for the hidden dynamics to be approximately linear? It would be interesting to see an ablation study on the hidden dimension size.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We deeply appreciate Reviewer gEdD for their thoughtful and constructive feedback. Below, we specifically address the reviewer’s important concerns.
- Scalability in High-Dimensional Settings:
We have conducted experiments varying latent dimensions in both the double pendulum and Lorenz96 settings, as reported in Appendix E of the original manuscript. Although we have not explicitly tested DBF in higher-dimensional physical systems, we expect that the latent dimensions required to accurately model the system dynamics would be substantially lower than the actual physical dimensions due to significant redundancy in degrees of freedom in large simulations.
Moreover, even if higher dimensions were required, DBF remains resilient against explosive growth in computational demands. This aspect is detailed clearly in section C.1 of the Appendix, highlighting DBF’s efficient parameterization strategy that enables linear scaling with respect to latent dimension size, significantly alleviating the computational bottlenecks associated with traditional methods.
- Partial Observations and Time-Varying Observation Operators:
DBF has no difficulty handling partial observations. However, specific strategies are needed to apply DBF to observation operators that vary over time: amortizing a single inverse observation operator (IOO) across different observation operators can indeed be challenging. Future work could explore methods to learn conditional IOOs or multiple specialized IOOs to effectively handle scenarios with time-varying observation operators.
- Criteria for Choosing Latent Space Dimension:
The choice of latent space dimension primarily depends on the complexity of the underlying dynamics. In practice, we recommend selecting the smallest latent dimension that achieves good validation performance. Indeed, we have already conducted an extensive ablation study on latent dimension sizes in Appendix E of our manuscript. The results offer valuable insights and suggest practical heuristics for balancing model performance and computational efficiency.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the clarifications.
I understand that the authors have chosen a particular parameterization for the latent linear dynamics operator $A$ to make the prediction cost scalable in dimensions. However, the filtering step still involves taking matrix inversions (equations (4)-(5)) or determinants (should appear in the KL divergence computation in (6)), which is still going to be expensive. Could the authors please comment on this? The authors consider latent dimensions up to $O(10^3)$ in the appendix, which is still amenable to computation. However, if we require larger latent dimensions $> O(10^4)$ to adequately capture the dynamics (e.g. we are modelling a weather system), then how is computation managed?
---
Reply to Comment 1.1.1:
Comment: Thank you very much for the question.
Although equations (4) and (5) involve the inversion of matrices such as $(A\Sigma_t A^T + Q)$, $G_{\theta}(o_t)$, and $V$, these are all block diagonal matrices given that $A$, $Q$, and $G_{\theta}(o_t)$ are block diagonal. Specifically, $A$ and $G_{\theta}(o_t)$ are maintained as block diagonal matrices, and the process noise covariance $Q$ is expressed as a diagonal matrix. Since the initial covariance matrix $\Sigma_{t=1}$ is block diagonal, it follows by induction (see equation (5)) that subsequent covariance matrices $\Sigma_{t}$ remain block diagonal. Consequently, rather than inverting one large matrix, we invert many small matrices — for example, in our implementation, we invert $10^4$ small $2\times 2$ matrices (i.e., when the full latent dimension is $2\times 10^4$). This structure significantly reduces the computational complexity of the filtering step.
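As an illustrative sketch (our own example, not the paper's code): the block-diagonal structure described above can be exploited in NumPy by storing the covariance as a stack of $2 \times 2$ blocks and letting `np.linalg.inv` broadcast over the stack, so the cost grows linearly in the number of blocks rather than cubically in the full dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # number of 2x2 blocks, i.e. full latent dimension 2N

# Store the block-diagonal covariance as a stack of SPD 2x2 blocks.
A = rng.standard_normal((N, 2, 2))
blocks = A @ A.transpose(0, 2, 1) + 2.0 * np.eye(2)

# np.linalg.inv broadcasts over the leading axis: N independent 2x2
# inversions, linear in N, instead of one O((2N)^3) dense inversion.
block_invs = np.linalg.inv(blocks)

# Spot-check the first block against the closed-form 2x2 inverse.
(a, b), (c, d) = blocks[0]
manual = np.array([[d, -b], [-c, a]]) / (a * d - b * c)
err = np.abs(block_invs[0] - manual).max()
```

The stacked inverses can then be applied blockwise in the filtering updates without ever materializing the full $2N \times 2N$ matrix.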
A similar argument applies to the computation of the KL divergence. Consider two Gaussian distributions:
$q(h) = \mathcal{N}(\mu_1, \Sigma_1) \quad \text{and} \quad p(h) = \mathcal{N}(\mu_2, \Sigma_2),$
where $\Sigma_1$ and $\Sigma_2$ are block diagonal, with each block being a $2 \times 2$ matrix, and there are $N$ such blocks. The KL divergence is given by:
$\text{KL}[q\|p] = \frac{1}{2}\left[\log\frac{|\Sigma_2|}{|\Sigma_1|} - 2N + \operatorname{Tr}(\Sigma_2^{-1}\Sigma_1) + (\mu_2 - \mu_1)^T \Sigma_2^{-1}(\mu_2 - \mu_1)\right].$
Due to the block diagonal structure, this expression can be factorized into a sum over the blocks:
$\text{KL}[q\|p] = \sum_{i=1}^{N} \frac{1}{2}\left[\log\frac{|\Sigma_{2,i}|}{|\Sigma_{1,i}|} - 2 + \operatorname{Tr}(\Sigma_{2,i}^{-1}\Sigma_{1,i}) + (\mu_{2,i} - \mu_{1,i})^T \Sigma_{2,i}^{-1}(\mu_{2,i} - \mu_{1,i})\right].$
This factorization means that the KL divergence computation is reduced to summing over many small, $2 \times 2$ matrix computations, making the process computationally manageable even for large latent dimensions.
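A quick numerical check of this factorization (an illustrative sketch, not from the paper): the KL divergence between two block-diagonal Gaussians computed on the full matrices matches the sum of per-block KL terms.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_kl(mu1, S1, mu2, S2):
    """KL[N(mu1, S1) || N(mu2, S2)] for dense covariance matrices."""
    d = len(mu1)
    S2inv = np.linalg.inv(S2)
    diff = mu2 - mu1
    _, logdet1 = np.linalg.slogdet(S1)
    _, logdet2 = np.linalg.slogdet(S2)
    return 0.5 * (logdet2 - logdet1 - d
                  + np.trace(S2inv @ S1) + diff @ S2inv @ diff)

def random_spd(k):
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

N = 50  # number of 2x2 blocks
b1 = [random_spd(2) for _ in range(N)]
b2 = [random_spd(2) for _ in range(N)]
mu1 = rng.standard_normal(2 * N)
mu2 = rng.standard_normal(2 * N)

# Assemble the full block-diagonal covariances.
S1 = np.zeros((2 * N, 2 * N))
S2 = np.zeros_like(S1)
for i in range(N):
    S1[2*i:2*i+2, 2*i:2*i+2] = b1[i]
    S2[2*i:2*i+2, 2*i:2*i+2] = b2[i]

kl_full = gauss_kl(mu1, S1, mu2, S2)  # one dense 100-dim problem
kl_blocks = sum(gauss_kl(mu1[2*i:2*i+2], b1[i],
                         mu2[2*i:2*i+2], b2[i]) for i in range(N))
```

The two quantities agree to numerical precision, so the KL term in (6) can be evaluated block by block.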
In summary, the block diagonal structure ensures that both the matrix inversion in the filtering step and the determinant computations for the KL divergence remain efficient, even if the latent dimension scales to $O(10^4)$ or beyond. In fact, this good scaling is another characteristic of our methodology, making it very promising for applications on very high dimensional problems, as required in data assimilation problems in natural sciences. | Summary: This paper proposes a method for Bayesian filtering with nonlinear observations and dynamics, using a VAE specialized to Markov processes with linear latent dynamics. The result is a closed-form loss that can be optimized for both the encoder (latent state) and decoder (inverse observation operator, IOO).
Claims And Evidence: The DBF shows strong improvement over relevant baselines.
Methods And Evaluation Criteria: The experiments are a challenging set of nonlinear filtering tasks.
Theoretical Claims: I have convinced myself of the relevant derivations. The framework is very elegant.
In strategy 1, I think it would help to give more details on learning the map $z\mapsto A$ or $A\mapsto z$ from samples of $z_t$.
Experimental Designs Or Analyses: No concerns
Supplementary Material: I went through it briefly. Everything seems sound.
Relation To Broader Scientific Literature: Relevant literature is discussed and it’s clear how the present method advances over previous ones.
Essential References Not Discussed: None noted
Other Strengths And Weaknesses: Despite its flexibility the IOO still assumes the log-likelihood is quadratic in $z_t$ which is a major simplification for many applications.
Other Comments Or Suggestions: There are many settings where we don’t care about $z$ and just want to predict $o$. In that case the linear method of eq 6 can be used even if the mapping from $z$ to $o$ is nonlinear, because we only need $h$ while $z$ can be ignored.
Last line of page 2: I think you mean $h_t$ to be $z_t$ since $h$ has not been introduced yet.
Fig 1 caption: (c) should be (b)
Questions For Authors: I believe the Koopman operator works even for non-Markov processes: the sequence $h_t = g(z_t)$ can be made Markov even if $z_t$ is not (e.g., if $z$ is an AR-$k$ process for some $k>1$). Does this make DBF applicable in non-Markov settings?
Is the virtual prior $\rho$ necessary? Conceptually it’s strange because the prior is already present as $p(z_t|o_{1:t-1})$. Mathematically it’s redundant because it could be absorbed into $G$ as $(G_\theta(o_t)^{-1} - V^{-1})^{-1}$.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We gratefully acknowledge Reviewer VtfJ for the positive evaluation and insightful feedback.
- Direct prediction of $o_t$:
We agree and will explicitly clarify that Eq. (6) is particularly useful for cases where direct observation prediction is sufficient, even when the mapping from $z$ to $o$ is nonlinear.
- Applicability to non-Markov settings:
We confirm that even if $z_t$ is not Markov, DBF extends naturally if there exists a Koopman embedding satisfying $h_t = g(z_t)$, where $h_t$ is Markov. We will clarify this explicitly in our revision. We would like to thank VtfJ again for the insightful comment.
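To make the reviewer's AR-$k$ example concrete, a toy sketch (our own illustration, not from the paper): an AR(2) process is not Markov in $z_t$ alone, but the lag-stacked state $h_t = (z_t, z_{t-1})$ evolves under linear Markov dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(2): z_t = a1*z_{t-1} + a2*z_{t-2} + eps_t  (not Markov in z_t alone)
a1, a2 = 0.5, -0.3
T = 200
eps = 0.1 * rng.standard_normal(T)
z = np.zeros(T)
for t in range(2, T):
    z[t] = a1 * z[t-1] + a2 * z[t-2] + eps[t]

# Lag-stacked state h_t = (z_t, z_{t-1}) satisfies h_t = A h_{t-1} + (eps_t, 0),
# i.e. linear Markov dynamics in the lifted space.
A = np.array([[a1, a2],
              [1.0, 0.0]])
max_err = 0.0
for t in range(3, T):
    h_prev = np.array([z[t-1], z[t-2]])
    h_pred = A @ h_prev + np.array([eps[t], 0.0])
    max_err = max(max_err,
                  abs(h_pred[0] - z[t]),
                  abs(h_pred[1] - z[t-1]))
```

The reconstruction error is zero up to floating-point precision, illustrating how lifting restores the Markov property that DBF's latent linear dynamics assume.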
- Virtual prior $\rho$:
We recognize that our explanation was insufficient. We have now clarified this point in the main text and provided additional details in Appendix A.4.
We would like to thank VtfJ again for acknowledging the value of our methodology.
---
Rebuttal Comment 1.1:
Comment: Thanks for the replies. I think this will be a good paper. | Summary: The authors propose a novel variational method for data assimilation that constructs its variational family by replacing the non-linear observation model by a linear-Gaussian observation model whose mean and covariance are parametrized by a neural network.
If the dynamics of the prior are also linear-Gaussian, then the variational family is a linear-Gaussian state-space model in which inference is tractable using the Kalman filtering recursions. Inspired by Koopman operator theory, the authors extend the method to nonlinear dynamics by learning latent linear dynamics in a higher-dimensional space that map to the physical dynamics through a learned nonlinear transformation.
This transformation then makes the original method for linear dynamics with nonlinear observation models applicable.
The approach is shown to be competitive to classical DA methods (EnKF, ETKF) and multiple deep learning-based approaches on three benchmark problems covering both linear and nonlinear dynamics.
## Update after rebuttal
I am still very concerned with the introduction of the auxiliary prior in the derivation of the method, which I feel significantly impacts clarity (cf. review `VtfJ`). Given how pivotal this derivation is for the rest of the paper, I do not feel comfortable to raise my score without having seen the updated section of the paper.
Claims And Evidence: - page 2, lines 68-69: The way this is written makes it sound like the Kalman filter only supports time-invariant dynamics and observation models, which is clearly not the case
Methods And Evaluation Criteria: - page 6, lines 285-288, left: The definition of success seems arbitrary. What is the motivation for choosing exactly this threshold?
Theoretical Claims: - There are no theorems that require a proof.
- I did not check the derivations of the ELBOs.
- page 4, lines 177-180, left: It is unclear what is meant here. Is this an additional approximation introduced to make training tractable? Normally one is only at liberty to drop $z_{1:t}$ from $q(h_t \mid o_{1:t}, z_{1:t})$ in case $h_t$ and $z_{1:t}$ are conditionally independent given $o_{1:t}$, which does not seem to be the case here.
Experimental Designs Or Analyses: While the experimental design seems sound, some of the choices made appear somewhat contrived. For instance, this includes the nonlinear observation operator $o_{t, j} = \min(z_{t, j}^4, 10) + \epsilon$, whose physical significance is unclear, and the choice of the moving MNIST problem as the main benchmark problem for linear dynamics. The object tracking experiment in the appendix seems to be a better choice, as it is much more interpretable.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: The related work section is extensive and gives a detailed account of the commonalities and differences of the present work to the broader scientific literature.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: Section 2.3 has significant clarity issues as it seems that the likelihood $p(o_t \mid z_t)$ is replaced by a posterior distribution $r(z_t \mid o_t)$ over the unobserved quantity. While this is remedied by dividing by a prior $\rho(z_t)$ "virtually introduced for the IOO", it is unclear what the significance of this prior is in the context of the dynamics model, which already induces a prior $p(z_t)$ over the state. The fact that the covariance matrix of $\rho(z_t)$ is fixed at $V = 10^8 I$ suggests that it is in fact irrelevant for the algorithm (see also equations 4 and 5) and only needed for the internal logic of the exposition.
I would hence like to suggest an equivalent exposition of the algorithm that avoids the virtual $\rho(z_t)$ prior and seems somewhat easier to follow: The authors aim at deriving a tractable variational family for a state-space model with linear-Gaussian transition model and nonlinear observation model. As noted in the paper, the nonlinear observation model hinders tractability of the original model and should hence be replaced, e.g. by a linear-Gaussian one in the variational family. Hence, inspired by the idea of an inverse observation operator, the variational family approximates the observation model by $p(o_t \mid z_t) \approx q(o_t \mid z_t) \propto \mathcal{N}(f_\theta(o_t); z_t, G_\theta(o_t)),$ which is linear-Gaussian. The dynamics of the variational family are simply given by the dynamics of the prior, i.e. $q(z_1) \coloneqq p(z_1)$ and $q(z_{t + 1} \mid z_t) \coloneqq p(z_{t + 1} \mid z_t)$. Then the conditional distribution $q(z_t \mid o_{1:t})$ can be computed by the Kalman filter recursion as in the paper. And used to compute the ELBO.
Note how this "likelihood-approximation" perspective renders the unintuitive virtual prior $\rho(z_t)$ unnecessary.
I'm very open to a discussion about resolving the clarity issues in this section and will likely raise my score should this be addressed satisfactorily.
Other Comments Or Suggestions: - page 1, lines 14-15, right: Does the term "test distribution" refer to the approximate filtering/smoothing posteriors provided by the respective methods?
- page 2, line 109, right: The use of $h_t$ in the context of linear dynamics is somewhat confusing, should this maybe read $r_\text{true}(z_t \mid o_t)$? This also applies to the first paragraph of Section 2.5.
- page 3, line 127, left: Missing "model" in "linear-Gaussian state-space *model* (LGSS*M*)"
- page 3, line 122, left: "Panel (b)" instead of "Panel (c)"
- page 3, line 128, right: Why is the physical state referred to as a "teacher signal" here? This term is not previously defined.
Questions For Authors: - Strategies 1 and 2 don't seem mutually exclusive. Which one should be used in which case? An experimental analysis comparing the two strategies might grant a lot of insight into the differences.
- Did you try to apply the approach to time-invariant dynamics? At least for linear dynamics or when using Strategy 2 for nonlinear dynamics, the method seems to be applicable to time-varying dynamics.
- What is meant by "marginalizing over $h_t$ with this emission model"? Surely this marginalization can't be performed in closed form?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer zP6r for the thoughtful comments and constructive suggestions.
- Clarification on Section 2.3
We would like to respectfully clarify a key point regarding Section 2.3, which may have caused confusion due to insufficient clarity in our original exposition.
The reviewer pointed out:
“Section 2.3 has significant clarity issues as it seems that the likelihood $p(o_t \mid z_t)$ is replaced by a posterior distribution $r(z_t \mid o_t)$ over the unobserved quantity.”
We acknowledge that the current presentation may have inadvertently led to this misunderstanding, and we appreciate the opportunity to clarify our intention.
To be precise, our method does not replace the likelihood $p(o_t \mid z_t)$ with the posterior distribution $r(z_t \mid o_t)$. Rather, $r(z_t \mid o_t)$ is defined via Bayes' rule as being proportional to the product of the likelihood and a prior distribution $\rho(z_t)$:
$r(z_t \mid o_t) \propto p(o_t \mid z_t) \cdot \rho(z_t).$
No approximation is introduced in this definition itself. However, the challenge arises when we aim to perform recursive Bayesian inference, where we must evaluate:
$p(z_t \mid o_{1:t}) \propto p(o_t \mid z_t) \cdot p(z_t \mid o_{1:t-1}).$
Even if $p(o_t \mid z_t)$ is Gaussian, the dependence of its parameters on $z_t$ can make the posterior $p(z_t \mid o_{1:t})$ analytically intractable, particularly when the mean or covariance is a nonlinear function of $z_t$. To address this, we exploit the identity:
$p(z_t \mid o_{1:t}) \propto \frac{r(z_t \mid o_t)}{\rho(z_t)} \cdot p(z_t \mid o_{1:t-1}),$
and assume that $r(z_t \mid o_t)$, $\rho(z_t)$, and $p(z_t \mid o_{1:t-1})$ are all Gaussian distributions. This ensures that $p(z_t \mid o_{1:t})$ remains Gaussian and can be computed in closed form, thereby maintaining tractability of the filtering process.
The approximations in our method thus lie in the following modeling choices:
1. We model $r(z_t \mid o_t)$ as a Gaussian distribution whose mean and covariance are parameterized by neural networks, denoted as $f_\theta(o_t)$ and $G_\theta(o_t)$, respectively.
2. The auxiliary prior $\rho(z_t)$ is assumed to be a Gaussian with fixed mean and fixed (large) covariance.
In principle, both the parameters of $r(z_t \mid o_t)$ and those of $\rho(z_t)$ could be optimized during ELBO maximization. However, as shown in Equations (4) and (5), if the neural networks $f_\theta$ and $G_\theta$ are sufficiently expressive, fixing the parameters of $\rho(z_t)$ does not limit the representational capacity. The variability needed for inference is effectively captured by the learned functions $f_\theta(o_t)$ and $G_\theta(o_t)$. The large covariance of $\rho(z_t)$ ensures that its support covers the latent space broadly enough for effective approximation.
We acknowledge that this important design rationale was not sufficiently explained in the original manuscript. We will revise Section 2.3 to clearly state the role of $\rho(z_t)$, clarify that it is not a replacement for the dynamics-induced prior $p(z_t)$, and explain why it can be treated as a fixed auxiliary distribution without loss of generality.
We truly appreciate the reviewer’s insightful suggestion. We are confident that incorporating this clarification will significantly enhance the clarity and theoretical rigor of the manuscript.
- Clarification regarding the approximation in $q(h_t \mid o_{1:t}, z_{1:t})$
We appreciate the reviewer’s insightful question regarding the approximation introduced in the variational distribution.
As correctly noted, in general, $q(h_t \mid o_{1:t}, z_{1:t}) \neq q(h_t \mid o_{1:t})$. In our method, we do introduce an approximation by replacing the former with the latter—i.e., we approximate the richer distribution $q(h_t \mid o_{1:t}, z_{1:t})$ using a simplified form $q(h_t \mid o_{1:t})$ for tractability.
This choice restricts the expressiveness of the variational family, but it still provides a valid lower bound on the marginal log-likelihood, and therefore constitutes a legitimate ELBO formulation.
The equality $q(h_t \mid o_{1:t}, z_{1:t}) = q(h_t \mid o_{1:t})$ would only strictly hold if the mapping from $z_t$ to $o_t$ were deterministic and invertible—i.e., when $z_t$ is fully determined by $o_t$. Since this is not the case in our setting, we acknowledge that the replacement is indeed an approximation.
Nevertheless, this design allows for a computationally feasible training procedure, and we empirically find that it strikes a good balance between tractability and performance. We will clarify this point in the revised manuscript to avoid potential confusion.
---
Rebuttal Comment 1.1:
Comment: ### Clarification on Section 2.3
> We would like to respectfully clarify a key point regarding Section 2.3, which may have caused confusion due to insufficient clarity in our original exposition.
Thank you for the clarification, but I believe I already understood all of this from the paper. I have to admit that the formulation
> [...] it seems that the likelihood $p(o_t \mid z_t)$ is replaced by a posterior distribution $r(z_t \mid o_t)$ over the unobserved quantity [...]
was poorly chosen. However, I find the main point of my rebuttal unaddressed and hence would like to reiterate. My main concern is that the virtual prior makes the exposition very confusing (see also review VtfJ) and the method can be derived in a vastly simpler fashion without it.
From equations (4) and (5) we can see that $\rho(z_t)$ has virtually no impact on $\mu_t$ and $\Sigma_t$, as $V^{-1} m = 0$ and $V^{-1} = 10^{-8} I \approx 0$. Your rebuttal suggests that this is by design, i.e., a maximally uninformative prior with a large support is sought. The derivation of the algorithm that I outlined in my review arrives at this result without the need for $\rho$. I will expand upon this here: The authors seek tractable variational families $q(z_t \mid o_{1:t})$ and $q(z_t \mid o_{1:t-1})$ for use in variational inference. These can be constructed by directly approximating the non-linear-Gaussian likelihood $p(o_t \mid z_t)$ with the linear-Gaussian likelihood $q(o_t \mid z_t) \propto \mathcal{N}(f_\theta(o_t); z_t, G_\theta(o_t))$ induced by the IOO (here the normalization is irrelevant, since the authors are interested in the conditionals of the variational family). Now we define
$$q(z_{1:T}, o_{1:T}) = q(o_1 \mid z_1) p(z_1) \prod_{t = 2}^T q(o_t \mid z_t) p(z_t \mid z_{t - 1}),$$
and, since $q(o_t \mid z_t)$ is linear-Gaussian, we can compute $q(z_t \mid o_{1:t})$ and $q(z_t \mid o_{1:t-1})$ with the Kalman filter. For instance, this yields $q(z_t \mid o_{1:t}) = \mathcal{N}(z_t; \mu_t, \Sigma_t)$ with
$$\mu_t = \Sigma_t \left[ (A \Sigma_{t - 1} A^\top + Q)^{-1} A \mu_{t - 1} + G_\theta(o_t)^{-1} f_\theta(o_t) \right]$$
and
$$\Sigma_t^{-1} = (A \Sigma_{t - 1} A^\top + Q)^{-1} + G_\theta(o_t)^{-1}.$$
Since reviewer VtfJ also had doubts about the virtual prior, I would strongly encourage the authors to use this alternative derivation in the main paper, as it massively clarifies the exposition without any impact on the method.
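The update the reviewer writes down can be sanity-checked numerically. The sketch below (our own illustration under the reviewer's assumptions, with the IOO outputs $f_\theta(o_t)$ and $G_\theta(o_t)$ replaced by plain arrays) implements the information-form step $\Sigma_t^{-1} = (A \Sigma_{t-1} A^\top + Q)^{-1} + G^{-1}$ with the matching mean update, and checks it against the standard Kalman measurement update with identity observation matrix $H = I$ and noise covariance $R = G$.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

def ioo_kalman_step(mu_prev, Sigma_prev, A, Q, f_obs, G_obs):
    """Information-form filtering step: treat f_theta(o_t) as a direct
    observation of z_t with covariance G_theta(o_t)."""
    P = A @ Sigma_prev @ A.T + Q          # predictive covariance
    P_inv = np.linalg.inv(P)
    G_inv = np.linalg.inv(G_obs)
    Sigma = np.linalg.inv(P_inv + G_inv)  # posterior covariance
    mu = Sigma @ (P_inv @ (A @ mu_prev) + G_inv @ f_obs)
    return mu, Sigma

def random_spd(k):
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

A = 0.5 * rng.standard_normal((d, d))
Q, G, Sigma_prev = random_spd(d), random_spd(d), random_spd(d)
mu_prev = rng.standard_normal(d)
f_obs = rng.standard_normal(d)

mu, Sigma = ioo_kalman_step(mu_prev, Sigma_prev, A, Q, f_obs, G)

# Standard Kalman update (H = I, R = G) for comparison.
P = A @ Sigma_prev @ A.T + Q
m = A @ mu_prev
K = P @ np.linalg.inv(P + G)
mu_std = m + K @ (f_obs - m)
Sigma_std = (np.eye(d) - K) @ P
```

By the Woodbury identity the two forms coincide, which also confirms that the posterior mean requires $\Sigma_t$ to multiply both the dynamics term and the IOO term.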
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the continued engagement and thoughtful follow-up. We are glad to hear that the clarification helped, and we appreciate your suggestion of an alternative derivation. We especially value your effort to propose a more concise formulation that may improve the clarity of exposition.
However, we would like to respectfully point out a technical issue with the expression:
$q(o_t \mid z_t) \propto \mathcal{N}(f_\theta(o_t); z_t, G_\theta(o_t)).$
At first glance, this form may seem justified by the relationship:
$p(o_t \mid z_t) \propto \frac{r(z_t \mid o_t)}{\rho(z_t)},$
where $r(z_t \mid o_t) \propto p(o_t \mid z_t) \cdot \rho(z_t)$. Since $\rho(z_t)$ is independent of $o_t$, when interpreting $p(o_t \mid z_t)$ as a function of $o_t$, it is valid to write:
$p(o_t \mid z_t) \propto r(z_t \mid o_t).$
However, even if $r(z_t \mid o_t)$ is Gaussian in $z_t$, this does $\textbf{not}$ imply that $p(o_t \mid z_t)$ is Gaussian in $o_t$. More precisely, for any Gaussian emission model $p(o_t \mid z_t)$—whether linear or nonlinear—it must take the form:
$p(o_t \mid z_t) = \mathcal{N}(o_t; \mu(z_t), \Sigma(z_t)) \propto \exp\left(-\frac{1}{2}(o_t - \mu(z_t))^\top \Sigma(z_t)^{-1}(o_t - \mu(z_t))\right),$
which defines a density over $o_t$ with parameters depending on $z_t$. In contrast, the expression
$q(o_t \mid z_t) \propto \mathcal{N}(f_\theta(o_t); z_t, G_\theta(o_t))$
defines a density over $f_\theta(o_t)$, not over $o_t$, and is not generally interpretable as a valid probability density function over $o_t$ unless $f_\theta$ is invertible and appropriately constrained—which we do not assume.
Therefore, while we appreciate your proposal and understand its motivation, we believe your formulation overlooks a key point: the tractability of our methodology does not arise from defining a Gaussian emission model over $o_t$, but from expressing $p(z_t \mid o_{1:t})$ as a Gaussian in $z_t$ via the expression:
$p(z_t \mid o_{1:t}) \propto \frac{r(z_t \mid o_t)}{\rho(z_t)} \cdot p(z_t \mid o_{1:t-1})$
with all terms on the right-hand side being Gaussians in $z_t$.
In our approach, this is made possible by approximating $r(z_t \mid o_t)$, $\rho(z_t)$, and $p(z_t \mid o_{1:t-1})$ all as Gaussian distributions in $z_t$, which results in recursive update rules that are not only analytically tractable, but also generalize the classical Kalman filtering updates.
More specifically, when the functions $f_\theta(o_t)$ and $G_\theta(o_t)$ are chosen appropriately, our update rules can recover the standard Kalman filter equations as a special case. Therefore, our formulation can be seen as a principled and flexible extension of Kalman filtering that enables inference in settings with nonlinear and learned observation models, while preserving closed-form computation at each time step.
We acknowledge that the role of the auxiliary prior $\rho(z_t)$ may initially appear redundant, and we are grateful that Reviewer zP6r pointed out this source of confusion. We will revise the manuscript to clearly explain that $\rho(z_t)$ serves a functional purpose in the derivation—namely, enabling us to treat the ratio $r(z_t \mid o_t)/\rho(z_t)$ as a valid unnormalized likelihood over $z_t$—and why it cannot be skipped without undermining the theoretical grounding of the recursive inference.
Once again, we thank the reviewer for raising this important point and for prompting us to improve the clarity of our exposition. | null | null | null | null | null | null | null | null |
A Unified Framework for Entropy Search and Expected Improvement in Bayesian Optimization | Accept (oral) | Summary: The paper derives a novel variational inference scheme for optimizing the Max-value Entropy Search acquisition function in Bayesian optimization. Interestingly, for a particular choice of variational distribution, the classic Expected Improvement acquisition function shows up as a special case, thus showing it to be an approximation of MES, something not previously known. An additional MES approximation, called VES-Gamma, is developed and shows promising performance as an acquisition function.
## update after rebuttal
No change, score already Accept.
Claims And Evidence: The paper does a thorough and rigorous job of supporting its claims that (1) EI is a variational approximation to MES, and (2) we can use the framework to develop other novel acquisition functions that perform well.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes, that EI is a variational approximation for MES using an exponential distribution as the variational distribution.
Experimental Designs Or Analyses: The empirical evaluation of the methods is satisfactory.
Supplementary Material: Yes, A2 deriving the equivalence between EI and MES.
Relation To Broader Scientific Literature: The paper provides a satisfactory review of acquisition functions and the connections with information-based approaches.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: The main strength of the paper is the finding that EI is equivalent to a particular variational approximation to MES. This was not obvious to me and I found it to be very insightful and interesting. I expect other people in the BO community will also be interested to see this.
The further use of the variational approximation framework to derive VES-Gamma was interesting, and I think the fact that it performs only moderately better than MES but at significantly higher computational cost is fine because it was the theoretical insight that is most valuable in the paper.
Other Comments Or Suggestions: There is a paragraph in 4.2 describing discrepancies between VES-Exp and EI. Is the EI being used here the LogEI? Earlier in the paper it said LogEI was being used, and used interchangeably with EI. If so, that would be another discrepancy potentially worth noting. Would VES-Gamma benefit from a similar strategy to improve optimizability?
If there is any commentary on the seemingly larger variance on the Hartmann6 problem that would be interesting to provide.
Questions For Authors: What are the options for improving the runtime?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your support.
> There is a paragraph in 4.2 describing discrepancies between VES-Exp and EI. Is the EI being used here the LogEI? Earlier in the paper it said LogEI was being used, and used interchangeably with EI. If so that would be another discrepancy potentially worth noting. Would VES-Gamma benefit from a similar strategy to improve optimize-ability?
Thank you for bringing this up. For the comparative analysis with VES-Exp, we used log-EI instead of EI, which could impact their empirical performance differences. We will clarify this in the final draft.
Applying a similar strategy from log-EI to VES-Gamma might be possible, but the exact approach remains unclear. We plan to explore this further in future work.
> If there is any commentary on the seemingly larger variance on the Hartmann6 problem that would be interesting to provide.
We believe this is due to a local optimum of the Hartmann6 function. This was discussed recently in [1].
[1] Battiti, Roberto, and Mauro Brunato. "Pushing the Limits of the Reactive Affine Shaker Algorithm to Higher Dimensions." arXiv preprint arXiv:2502.12877 (2025).
> What are the options for improving the runtime?
To address computational efficiency, we are investigating a *Variable Projection* (VarPro) method from the numerical optimization literature [2,3], which allows us to efficiently optimize ESLBO without compromising on performance. VarPro enables this problem reformulation:
$$\max_{\boldsymbol{x}, k, \beta}\text{ESLBO}(\boldsymbol{x}; k, \beta) = \max_{\boldsymbol{x}}\underbrace{\max_{k, \beta}\text{ESLBO}(\boldsymbol{x}; k, \beta)}_{\phi(\boldsymbol{x})}.$$
Under the condition that the optimal $k^\ast$ and $\beta^\ast$ exist uniquely, the function $\phi(\boldsymbol{x})$ remains differentiable with
$$\nabla_{\boldsymbol{x}}\phi(\boldsymbol{x}) = \frac{\partial \text{ESLBO}(\boldsymbol{x}; k^\ast, \beta^\ast)}{\partial \boldsymbol{x}}.$$
VarPro requires $k^\ast$ and $\beta^\ast$ to be unique. While this is true for the Gamma distribution, it may no longer hold when extended to other distributions.
[2] Golub, G. H. and Pereyra, V. The differentiation of pseudoinverses and nonlinear least squares problems whose variables separate. SIAM Journal on Numerical Analysis, 10(2):413–432, 1973
[3] Poon, C. and Peyré, G. Smooth over-parameterized solvers for non-smooth structured optimization. Mathematical Programming, 201(1):897–952, 2023.
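To make the reformulation concrete, here is a minimal sketch of the VarPro idea on a toy stand-in objective (the quadratic `eslbo` below is illustrative only, not the paper's actual ESLBO): the inner maximization over $(k, \beta)$ is solved first, and by the envelope theorem the outer gradient reduces to the partial derivative with respect to $\boldsymbol{x}$ evaluated at the inner optimum.

```python
# Toy stand-in for ESLBO(x; k, beta): concave in (k, beta) for each x,
# with unique inner maximizers k* = x, beta* = 2x.  Placeholder objective,
# not the paper's ESLBO.
def eslbo(x, k, beta):
    return -(x - 1.0) ** 2 - (k - x) ** 2 - (beta - 2.0 * x) ** 2

def inner_argmax(x):
    # Closed-form inner solution for this toy objective.
    return x, 2.0 * x

def grad_phi(x):
    # Envelope theorem: at (k*, beta*) the inner partial derivatives
    # vanish, so d(phi)/dx equals the partial of ESLBO w.r.t. x there.
    k, beta = inner_argmax(x)
    return -2.0 * (x - 1.0) + 2.0 * (k - x) + 4.0 * (beta - 2.0 * x)

# Gradient ascent on phi(x) = max_{k, beta} ESLBO(x; k, beta).
x = 5.0
for _ in range(200):
    x += 0.1 * grad_phi(x)
# x converges to 1, the maximizer of phi(x) = -(x - 1)^2
```

In the paper's setting the inner problem would be a small numerical optimization over the Gamma parameters rather than a closed-form solve, but the outer gradient takes the same envelope-theorem form.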
---
Rebuttal Comment 1.1:
Comment: Appreciate the answers to my questions. | Summary: The paper looks at Bayesian Optimization (BO) and tries to connect two types of acquisition functions that people have always thought of as different approaches. On one side, we have Expected Improvement (EI), which mostly focuses on exploitation by picking points that are likely to be better than what we've already found. On the other side, there are information-theoretic methods like Entropy Search and Max value Entropy Search (MES) that focus more on exploration by reducing uncertainty about where the optimum might be.
What the authors do here is come up with a method called Variational Entropy Search (VES) that shows EI and MES are actually related. They prove that EI is basically a special case of MES when viewed through this variational inference lens. The math involves using the Barber-Agakov bound for mutual information and showing that with a particular choice of variational distribution, you get exactly the EI formula.
They also introduce VES-Gamma, which uses a Gamma distribution as the variational family. This new acquisition function can behave like EI or MES depending on the situation, so it's more flexible.
They tested VES-Gamma on a bunch of problems - toy functions, GP samples, and some real-world optimization tasks. The results show VES-Gamma generally works as well as or better than both EI and MES. It did particularly well on a 388-dimensional SVM hyperparameter tuning problem, and was among the top performers on a 124-dimensional vehicle design problem and a 180-dimensional Lasso feature selection task. So it seems to handle both low and high-dimensional problems pretty well.
Claims And Evidence: The authors claim EI is actually just a special case of MES, viewed through their variational framework. They back this up with math derivations and a theorem that shows when you use an exponential variational distribution, their proposed bound optimization gives the same query point as EI. The math checks out, though it only works for noiseless observations.
They introduce VES-Gamma as a new acquisition function that tries to get the best of both EI and MES. It uses a Gamma distribution variational family (which includes exponential as a special case, so it can recover EI) but adds more flexibility to better match the true maximum value distribution. Their experiments show VES-Gamma does well - often better than both EI and MES across different test problems.
For empirical results, they've got performance curves and statistical tests showing VES-Gamma is at least competitive with, and frequently better than, both EI and MES across their benchmarks. The experimental data supports their claims pretty well. They're honest about limitations though - they note that for the Rover problem, VES-Gamma only performs about as well as EI, not beating MES.
Methods And Evaluation Criteria: Proposed Method:
I think the variational inference approach to maximize the lower bound of information gain is pretty novel and makes good sense. They managed to transform that complicated nested expectation in MES into something more tractable by alternating between optimizing the variational distribution and query point. Using the Gamma family for the variational distribution seems reasonable given its flexibility in capturing the true distribution of the maximum value.
Evaluation Benchmarks:
The experiments look good - they cover a nice range of benchmarks from low-dimensional synthetic functions to GP samples and some challenging high-dimensional real-world problems. This gives me confidence that they've tested their method under different conditions.
Metrics and Procedure:
They're using standard metrics (simple regret/best-found value) and their experimental procedures seem appropriate - multiple trials, consistent initialization, and scaling iteration counts with dimensionality. The comparison with EI and MES baselines strengthens their findings.
Comparative Baselines:
While they focus mainly on EI and MES (which makes sense given their paper's goals), it might have been nice to see comparisons with other methods like PES or UCB for more context.
Theoretical Claims: I think the derivation of the variational lower bound on the MES objective looks solid - they used the Barber-Agakov bound which makes sense here. This gives them a good theoretical foundation for connecting information-based acquisition functions to variational inference approaches.
Their main theoretical result (Theorem 3.2) shows that when you use an exponential variational distribution, their method gives you exactly the same thing as EI. The math checks out, and they even verified this empirically with a Kolmogorov-Smirnov test, which is nice. Note that this only works in the noiseless setting though.
For the VES-Gamma part, they don't give us a formal theorem, but the intuition makes sense - moving from exponential to Gamma distribution lets the method capture more complex behaviors in the true distribution. The experiments back this up, so I'm convinced even without a formal proof here.
Experimental Designs Or Analyses: The experiments are quite thorough, with tests on lots of different benchmarks. This shows the method works well in both simple and complex high-dimensional problems.
As for reproducibility, they've done a good job describing their setup - they include details about GP model settings, how they initialized everything, how many iterations they ran, and they averaged results over multiple runs. All this makes it much easier for others to reproduce their work.
The way they analyze their results seems fair to me. They're upfront about when VES-Gamma does better than other methods and when it just matches them. They also talk about runtime, showing they understand there's a computational tradeoff with their approach.
I like that they used proper statistical tests (like the Kolmogorov-Smirnov test) to compare VES-Exp and EI. This gives more weight to their theoretical claims.
The main downsides I see are the high computational cost of their method and that they only tested in noiseless settings. That said, they do justify these limitations by pointing out that in typical BO applications, function evaluations are usually so expensive that the extra computational overhead isn't a big deal.
Supplementary Material: The appendix has good detailed proofs for their main theorems, which definitely helps verify their theoretical claims are solid.
They've also included some extra experiments on more synthetic benchmarks (even some really high-dimensional ones) that back up what they found in the main paper. They talk about this VES-Gamma-Sequential variant too, which shows the tradeoffs between computation time and how well it performs.
The supplement gives more implementation details and explains how they did the Kolmogorov-Smirnov test, which is helpful if someone wants to reproduce their work.
Relation To Broader Scientific Literature: The paper does a good job connecting two approaches that have been separate in BO for a while - the EI stuff and the information theory stuff. This unifying view is pretty novel and hasn't really been laid out clearly before.
The authors aren't working in a vacuum here - they're clearly building on previous research like Entropy Search, Predictive Entropy Search, and MES, and they bring in ideas from variational inference. Their work fits well within the field and they cite all the important papers in BO you'd expect.
They mostly focus on EI and MES in their comparisons, but they do mention other methods like GP-UCB and Knowledge Gradient in the related work section. What's nice is that instead of just heuristically combining different acquisition functions like some other work, their framework gives a more principled way to unify these approaches.
Essential References Not Discussed: The paper covers most of the important papers on EI, MES, and variational methods. They've cited the key works in these areas. But I think they could have mentioned the original EGO paper and maybe Snoek et al. (2012) which talks about practical BO implementation. Adding these wouldn't be essential but would help place their work better in the historical development of Bayesian optimization methods.
Other Strengths And Weaknesses: Strengths:
I think the paper has several strong points. First, it's quite original - connecting EI and information-theoretic acquisition functions through this variational framework is a fresh take that nobody's really done before. This is actually a pretty significant conceptual contribution to the field.
The theoretical unification work they did and their VES-Gamma algorithm looks promising for future BO research, especially for those high-dimensional problems that are so challenging. I can see this influencing both theoretical work and practical applications going forward.
The writing is solid - well-structured with clear explanations and they backed everything up with detailed experiments. It's easy to follow their reasoning throughout.
Finally, seeing how well their method performs on those tough high-dimensional tasks makes me think this could be really useful for real-world optimization problems where function evaluations are expensive.
Weaknesses:
There are some limitations worth noting. The computational complexity is a big one - VES-Gamma takes more compute than EI and MES, which might make people hesitant to use it unless they can optimize it further.
I found it limiting that they only tested in noiseless settings. They should extend this to noisy scenarios to make their case stronger.
They claim VES-Gamma dynamically balances exploration and exploitation, but don't really show us how the variational parameters adapt during the optimization process. More insight into this mechanism would help understand what's happening under the hood.
While focusing on EI and MES makes sense, they could have included more baseline comparisons like PES, GP-UCB, or Knowledge Gradient for a more comprehensive evaluation. That would give us a better sense of where this method sits in the broader landscape.
Other Comments Or Suggestions: More intuition about why Gamma vs. Exponential:
The paper should explain better why you chose the Gamma distribution and how its parameters actually balance exploration vs exploitation in practice.
What about VES-Gamma-Sequential?:
You should add a short explanation of the sequential variant in the main paper - it's only in the supplement now, which isn't ideal.
Handling noise?:
The paper only deals with noiseless settings. You should at least discuss how you might extend this to noisy observations, which is a limitation right now.
Future directions:
Have you thought about applying your variational approach to other acquisition functions in BO? Or maybe extending to batch or multi-fidelity settings?
Code release:
Will you release your implementation for reproducibility?
Sensitivity analysis:
I'd like to see some analysis of how sensitive VES-Gamma is to its hyperparameters. This would help practitioners know what to expect.
Questions For Authors: I'm curious about how you'd handle noisy settings with VES. Would your EI equivalence still work, or would you need a different approach?
Did you look at how VES-Gamma compares to PES? Do you think your variational approach could work with PES too?
Why Gamma distribution? Did you try other variational families, and how much does performance change if you use something else?
Could you show us more about how the variational parameters actually change during optimization? It would help understand how it balances exploration and exploitation in practice.
For the Rover problem, why did VES-Gamma only match EI when MES did better? Any insights on that?
VES-Gamma seems computationally heavy. Any ideas on making that inner optimization loop faster?
What were the actual numbers from your KS tests comparing VES-Exp and EI? Some p-values or pass rates would be helpful.
Do you think this variational approach could work for other acquisition functions like Knowledge Gradient?
Given the computational cost, what real-world problems would benefit most from VES-Gamma, especially in high dimensions?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for the detailed comments and support on this work. Due to word limit constraints we will give brief answers:
> High computational cost
We are investigating the VarPro method, as detailed in our response to reviewer VbQ5.
> EGO and Snoek et al.
We will include them in the Related Work section.
> How the variational parameters adapt
We agree that visualizing the evolution of $k$ and $\beta$ would provide valuable insight, but it's challenging since their values depend on the query point $x$, which changes at each iteration. Simply showing their values at each step wouldn't effectively capture the trends.
We have put considerable effort into understanding VES-Gamma's mechanics and tried to formalize our findings into a theorem. However, we acknowledge the difficulty of doing so rigorously and will continue exploring this in future work.
> More baseline
Due to rebuttal time constraints, we focused on generating results on synthetic functions for UCB-0.1 and KG: https://ibb.co/4n0Nxpnv. We encountered numerical issues when running PES and are working on fixing them.
> Why Gamma distribution
We chose the Gamma distribution because it generalizes the exponential distribution while having a limited number of degrees of freedom. We also tested the generalized Gamma distribution, but its extra flexibility prevents solving for $k$ and $\beta$ in closed form.
Regarding the impact of $k$ and $\beta$ on exploration vs. exploitation, we hypothesize that larger $k$ promotes exploration by weakening the EI term. However, we lack strong numerical or theoretical evidence, suggesting that there is no simple relationship.
> VES-Gamma-Sequential
Apologies for the confusion. We initially included VES-Gamma-Sequential as a potential extension of VES-Gamma but later decided to remove it due to unresolved issues. Some references were mistakenly left in the appendix, which we will remove in the final version.
> Handling noise
The noiseless assumption is critical to our theoretical analysis, as detailed in our response to reviewer VbQ5.
> Future direction of variational approach
Extending the current work to include other acquisition functions and exploring the noisy setting, batch, multi-fidelity, and multi-objective optimization are promising future research directions.
> Code release
Yes
> Sensitivity analysis
We conducted ablation studies on two key parameters: the clamping threshold for $z_{\boldsymbol{x}}^\ast$ and the number of samples used to estimate the expectation. The results can be found at https://ibb.co/RpY4MXbQ and https://ibb.co/v4dsvNh4, respectively. Each experiment was repeated 10 times to compute the uncertainty bars. The findings indicate that VES-Gamma is relatively robust to variations in these parameters.
> EI and VES-Exp equivalence with noise
Unfortunately, in noisy settings, the current theory breaks down. Alternative approaches will be necessary to address this issue.
> VES to PES
The challenge is selecting a flexible variational family that captures the characteristics of $x^\ast$ while remaining computationally feasible. Due to the multi-modality of $x^\ast$, a Gaussian or Beta mixture may be needed, but handling the ESLBO with a mixture density is complex. This poses a major challenge in extending VES to PES, though we remain optimistic about future research on this!
> Rover problem
While it is difficult to draw a definitive conclusion, we observe that all three methods fall within the uncertainty bounds. We will conduct additional repetitions to assess whether MES's superiority is statistically significant.
> KS tests p-values
More p-value details of benchmarks are available: https://ibb.co/WpkPJqWq. This information will also be included in the appendix of the final version.
> VES for KG
It is possible. The main challenge lies in designing a suitable variational family for the Knowledge Gradient acquisition function.
> What problems for VES
We observe that VES-Gamma performs better in high-dimensional problems, and we believe it is well-suited for solving such problems with expensive function evaluations. | Summary: This paper introduces "Variational Entropy Search" (VES), a unified theoretical framework that reveals a previously unrecognized connection between Expected Improvement (EI) and information-theoretic acquisition functions in Bayesian optimization. The authors demonstrate that EI, traditionally considered distinct from information-theoretic methods, can be interpreted as a variational lower bound on the Max-value Entropy Search (MES) acquisition function when using an exponential distribution as the variational distribution.
Building on this theoretical insight, the authors propose VES-Gamma, a novel acquisition function that employs a Gamma distribution (which generalizes the exponential distribution) to approximate the maximum value distribution. This approach creates an intermediary between EI and MES that balances their respective strengths. The ESLBO (Entropy Search Lower Bound) objective in VES-Gamma includes EI as one of its components, with additional terms that dynamically adjust the exploration-exploitation trade-off.
Through empirical evaluations across low and high-dimensional synthetic benchmarks, GP samples, and real-world problems, the authors claim that VES-Gamma performs competitively with and can even outperform both EI and MES.
Claims And Evidence: The paper's claims appear to be supported by both theoretical derivations and empirical evidence.
Methods And Evaluation Criteria: The methods and evaluation criteria in this paper are well-aligned with its theoretical contributions and practical goals. The VES framework employs mathematically sound variational methods to establish the connection between EI and MES, while VES-Gamma represents a natural theoretical extension. The evaluation approach is appropriate, using simple regret as the performance metric and directly comparing against the most relevant baselines (EI and MES). The benchmark selection is reasonably comprehensive, spanning synthetic functions (Branin, Levy, Hartmann, Griewank), GP samples with varying length scales, and real-world problems (Rover, Mopta08, Lasso-DNA, SVM) across dimensions from 2D to 388D. This diverse test suite provides sound evidence for both the theoretical claims and practical performance benefits of the proposed approach.
Theoretical Claims: I examined the two primary theoretical claims in the paper: Theorem 3.1 (ESLB Proof) and Theorem 3.2 (VES-Exp and EI Equivalence). Both proofs appear mathematically sound, though I note that this theoretical equivalence depends on the noiseless observation assumption, which the authors acknowledge as a limitation. The proof legitimately establishes the connection, though its practical applicability has the constraints noted by the authors. The mathematical development of both theorems appears correct and follows established techniques from variational inference.
Experimental Designs Or Analyses: The paper employs a sound experimental methodology, evaluating algorithms across diverse benchmarks that span synthetic functions (Branin, Levy, Hartmann, Griewank), GP samples with varying dimensionality (2D to 100D), and complex real-world problems (Rover, Mopta08, Lasso-DNA, SVM) with dimensions up to 388D. The experimental protocol follows good practices including: consistent GP hyperparameter settings across all methods, proper warm-starting with 20 random samples, repeated trials (10 per experiment) with statistical reporting, and appropriate metrics (simple regret). The authors also carried out a Kolmogorov-Smirnov test validation of the theoretical equivalence between VES-Exp and EI that showed high passing rates (>90%) across all benchmarks.
A few concerns worth highlighting: the substantial computational cost disparity between VES (53.17s per iteration) versus EI/MES (~1-1.6s) limits practical applicability; the timeout issues on very high-dimensional problems (e.g., 1000D Lasso-Hard); and the restriction to noiseless settings. However, these limitations are transparently acknowledged by the authors, and the overall experimental design remains sound and supports the paper's claims.
Supplementary Material: I skimmed the supplementary material, focusing primarily on the mathematical proofs in Appendix A and the additional experimental results in Appendix B.
Relation To Broader Scientific Literature: The paper bridges a conceptual gap in Bayesian optimization by revealing that Expected Improvement and information-theoretic approaches like Max-value Entropy Search are variations of the same underlying framework rather than distinct methodologies. This theoretical unification challenges conventional wisdom in the field, where these approaches have been treated as separate paradigms, while also delivering a practical payoff through VES-Gamma, which leverages this insight to balance exploration and exploitation more effectively than either approach alone.
Essential References Not Discussed: No essential references appear to be missing
Other Strengths And Weaknesses: A notable weakness is the significant computational overhead of VES-Gamma (53.17s per iteration versus 1.63s for EI), which could limit its practical adoption despite performance gains. Additionally, the framework currently only addresses noiseless settings, an important limitation that the authors acknowledge needs addressing in future work.
Other Comments Or Suggestions: -
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your support for this work.
> the substantial computational cost disparity between VES (53.17s per iteration) versus EI/MES (~1-1.6s) limits practical applicability; the timeout issues on very high-dimensional problems (e.g., 1000D Lasso-Hard);
To address computational efficiency, we are investigating a *Variable Projection* (VarPro) method from the numerical optimization literature [1,2], which allows us to efficiently optimize ESLBO without compromising on performance. VarPro enables this problem reformulation:
$$\max_{\boldsymbol{x}, k, \beta}\text{ESLBO}(\boldsymbol{x}; k, \beta) = \max_{\boldsymbol{x}}\underbrace{\max_{k, \beta}\text{ESLBO}(\boldsymbol{x}; k, \beta)}_{\phi(\boldsymbol{x})}.$$
Under the condition that the optimal $k^\ast$ and $\beta^\ast$ exist uniquely, the function $\phi(\boldsymbol{x})$ remains differentiable with
$$\nabla_{\boldsymbol{x}}\phi(\boldsymbol{x}) = \frac{\partial \text{ESLBO}(\boldsymbol{x}; k^\ast, \beta^\ast)}{\partial \boldsymbol{x}}.$$
VarPro requires $k^\ast$ and $\beta^\ast$ to be unique. While this is true for the Gamma distribution, it may no longer hold when extended to other distributions.
[1] Golub, G. H. and Pereyra, V. The differentiation of pseudoinverses and nonlinear least squares problems whose variables separate. SIAM Journal on Numerical Analysis, 10(2):413–432, 1973
[2] Poon, C. and Peyré, G. Smooth over-parameterized solvers for non-smooth structured optimization. Mathematical Programming, 201(1):897–952, 2023.
> and the restriction to noiseless settings
We recognize that observation noise plays a crucial role in Bayesian optimization. The primary reason for assuming noiseless observations in this work is that our theoretical framework relies on this assumption. Specifically, we assume that the support of $p(y^\ast \mid D_t, y_x)$ is $[\max\{y_x, y^\ast_t\}, +\infty)$, which may no longer hold when $y_x$ is noisy.
We also note that EI and MES, the acquisition functions most closely related to VES, were derived under a noise-free assumption. Furthermore, many real-world problems really are noiseless, including the benchmarks considered in our paper. Adapting VES to handle observation noise is a promising avenue for future research, and we plan to explore this direction in subsequent work. | null | null | null | null | null | null | null | null |
KV Shifting Attention Enhances Language Modeling | Accept (poster) | Summary: - This paper proposes a new attention mechanism named KV shifting attention to enhance the in context learning ability of transformer.
- The paper aims at enhance inductive head bias of transformer by shifting the key&value vectors.
- This paper provides extensive analysis to show the effectiveness of the KV shifting attention in enhancing learning ability and reduce model width requirements,.
Claims And Evidence: - The paper is clearly written and well motivated.
- The proposed method is simple and elegant, by only introducing few parameters and computation.
- Yes, extensive experiments are provided to support the claims.
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes.
The authors formalize the behavior of induction heads in one- or two-layer transformers.
Experimental Designs Or Analyses: Yes, Extensive experiments have been conducted to validate the effectiveness of the method.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: The proposed method is simple and seems effective; it could be adopted in future work and in larger-scale models to enhance language modeling ability.
Essential References Not Discussed: None
Other Strengths And Weaknesses: weakness:
- As claimed in line 398, the key to the method's effectiveness is achieving better induction heads by decoupling key-value pairs along the time dimension. I wonder how a 1D causal convolution performs compared with KV shifting. In addition, the novelty of the proposed method might be limited, as it is a simplified version of a short convolution.
- A training and inference speed comparison could be included.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your review. I hope the following response can address your concerns.
**Causal Conv**
Thank you very much for pointing this out. As demonstrated in our discussion, the shifting operation can be seen as a short convolution with kernel size 2.
(a) From an operational perspective, this is not a very new operation, but where to apply it is crucial. As shown in Figure 11, using short convolutions on keys and values to decouple them in time is beneficial for the emergence of capabilities on benchmarks such as MMLU. However, if a similar operation is also applied to the query, performance interestingly deteriorates.
(b) On the other hand, we believe we have clarified to some extent the true role of short convolutions in attention: decoupling the key and value sequences to achieve better in-context learning. Some previous works assumed this convolution was intended to enhance local information exchange.
**Training and inference speed**
The computational cost of the KV shift operation is $O(nd)$. Compared to $O(nd^2)$ for the matrix projections and $O(n^2d)$ for the attention computation, this is much smaller, and under grouped-query attention the cost of this part is smaller still. Therefore, little additional computational cost is added during training or inference. At inference time, the additional storage required is $O(d)$ to hold the previous step's keys and values, a small overhead relative to the original $O(nd)$ KV cache, so KV shifting does not incur significant memory-access overhead. The actual end-to-end training and inference speed depends on the framework used; in our own framework, the additional time cost is less than 2% for a 3B model with KV shifting.
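For reference, the shift itself is just a two-tap causal mix of each key/value with its predecessor, which makes the $O(nd)$ cost evident. A minimal NumPy sketch follows; the mixing weights `a1, a2, b1, b2` are illustrative placeholder scalars (the actual method learns these parameters).

```python
import numpy as np

def kv_shift(K, V, a1=0.7, a2=0.3, b1=0.6, b2=0.4):
    """Mix each key/value row with the previous timestep's row
    (a causal convolution with kernel size 2); cost is O(n*d)."""
    K_prev = np.vstack([np.zeros((1, K.shape[1])), K[:-1]])  # K_{t-1}, zero at t = 0
    V_prev = np.vstack([np.zeros((1, V.shape[1])), V[:-1]])  # V_{t-1}, zero at t = 0
    return a1 * K + a2 * K_prev, b1 * V + b2 * V_prev
```

With `a1 = b1 = 1` and `a2 = b2 = 0`, this reduces to the vanilla keys and values, which is consistent with the view that KV shifting only decouples key-value pairs in time rather than changing the attention computation itself.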
The authors provide theoretical analysis showing that KV shifting attention can reduce the width and depth requirements for forming induction heads compared to vanilla transformers. While traditional transformers need at least two layers to implement induction heads effectively, the authors prove that KV shifting attention can implement this mechanism with a single layer.
The authors evaluate their approach on both small-scale experiments designed to test induction capabilities and large-scale language model pre-training (models ranging from 1.5B to 19B parameters). Their experiments demonstrate that models with KV shifting attention consistently outperform vanilla attention baselines across model scales, with improved performance on standard benchmarks and faster convergence during training.
Claims And Evidence: There are four main claims in the paper, each of which is well supported.
- Improved theoretical foundation for induction heads: The authors provide formal proofs (Theorems 3.2 and 3.3) that KV shifting attention requires less model complexity to form induction heads.
- Better induction capabilities: Experimental results demonstrate that KV shifting attention models can learn induction patterns more efficiently than vanilla attention models of equivalent or even larger size.
- Enhanced language modeling performance: The comprehensive evaluations on different model sizes show consistent improvements in both convergence speed and final performance across datasets and benchmarks.
- Robustness to hyperparameters: The experiments with different random seeds, learning rates, and model sizes provide evidence that the improvements are consistent.
Methods And Evaluation Criteria: - The authors evaluate on a suite of standard language modeling benchmarks.
- The scaling experiments compare performance across model sizes and training compute.
- Experiments with different hyperparameters and random seeds verify the robustness of the improvements.
- The suggested toy tasks measure the specific capability the method aims to improve.
Theoretical Claims: The theoretical claims provided in Section 3 are supported by formal proofs.
- Theorem 3.2 states that a two-layer vanilla transformer with specific parameter settings can approximate the induction heads mechanism with a certain error bound.
- Theorem 3.3 states that a one-layer KV shifting attention model can exactly implement the induction heads mechanism.
- Theorem 3.4 provides an analysis of the learning dynamics for KV shifting attention.
The proofs appear sound, with detailed derivations provided in the appendices.
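The content of Theorem 3.3 — that a single attention layer with shifted keys suffices to implement induction — can be illustrated numerically. The sketch below is a hand-built illustration, not the paper's construction: one-hot embeddings, hand-set weights, and a sharp softmax standing in for hard matching are all our assumptions.

```python
import numpy as np

vocab = 8
E = np.eye(vocab)                 # one-hot token embeddings
seq = [0, 1, 2, 3, 0]             # "A B C D A" -> induction should predict B
X = E[seq]                        # (T, vocab)

# shifted keys: K_t = x_{t-1}; values stay in place: V_t = x_t
K = np.vstack([np.zeros(vocab), X[:-1]])
V = X
q = X[-1]                         # query from the second "A"

beta = 20.0                       # sharp softmax approximates hard matching
w = np.exp(beta * (K @ q))
w /= w.sum()
out = w @ V                       # attends to the position right after the first "A"
print(int(out.argmax()))          # -> 1, i.e. token "B"
```

Because the key at position t carries the token from position t-1, the query "A" matches exactly the position that follows an earlier "A", and the unshifted value there delivers its successor — the induction pattern in one layer, with no second layer needed to move information between positions.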
Experimental Designs Or Analyses: - The toy model experiments isolate the induction capabilities being studied, allowing clear observation of the learning dynamics.
- The scaling experiments with models from 1.5B to 19B parameters follow best practices in the field and allow for meaningful comparisons.
- The ablation studies (Section 4.5) help identify which components of the proposed method contribute to the performance improvements.
- The robustness experiments with different random seeds and learning rates strengthen confidence in the results.
Supplementary Material: I have briefly reviewed the supplementary material. It includes:
- Detailed proofs of the theoretical claims (Appendices A, B, C)
- Additional experiments on multi-hop reasoning, mathematics, and n-gram learning (Appendix E)
- Analysis of the learned parameters (Appendix F)
- Variants of the proposed approach (Appendices G, H)
- Implementation details and code snippets (Appendices K, L, M)
- Experimental setup details (Appendix N)
Relation To Broader Scientific Literature: - It builds on the mechanistic interpretability work by Elhage et al. (2021) and Olsson et al. (2022) who identified and characterized induction heads.
- The theoretical analysis extends recent work on the theoretical capabilities of transformers, such as Wang et al. (2024) and Sanford et al. (2024a, 2024b).
- The approach relates to other architectural modifications in language modeling, such as RWKV (Peng et al., 2023), Mamba (Gu & Dao, 2023), and RetNet (Sun et al., 2023), but focuses specifically on enhancing the standard transformer architecture.
- The paper connects to broader work on in-context learning mechanisms in large language models, an active area of research.
Essential References Not Discussed: The following paper might be highly relevant: PermuteFormer: Efficient Relative Position Encoding for Long Sequences.
Other Strengths And Weaknesses: Strengths:
- The paper presents a simple yet effective modification to the attention mechanism with minimal computational overhead.
- The theoretical analysis provides clear insights into why the method works.
- The comprehensive evaluation across model scales demonstrates practical value for real-world language modeling.
- The method is compatible with existing transformer optimization techniques (as shown in Appendix J).
Weaknesses:
- The paper could provide more analysis of what types of tasks or text patterns benefit most from the improved induction capabilities.
- While the authors show that KV shifting attention helps with math reasoning (Appendix E.2), a more detailed analysis of reasoning capabilities would strengthen the paper.
- The paper mentions but does not extensively explore potential limitations of the approach, such as whether there are specific scenarios where vanilla attention might be preferable.
Other Comments Or Suggestions: It would be very interesting to see how this method would interact with more recently suggested transformer architectures, or with other techniques such as prompting.
Questions For Authors: - Would this method also apply for encoder-decoder structures? Would there need to be some modifications in the mechanism?
- Would this method also apply when training with multimodal models, or have benefit when adapting the two modalities?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your meticulous review. We hope our response can be helpful to you.
**More analysis**
(a) **What types of tasks or text patterns benefit most from the improved induction capabilities?** To be honest, it is difficult to exhaustively characterize which patterns are more suitable, but we can give examples of patterns that do and do not gain. As we have stated in our paper, hop-k tasks achieve better results, and math tasks also achieve better results. However, n-gram tasks do not show very good results. We later conducted experiments on context-free grammars (CFG) [1]. We found that KV shifting attention **does not** improve the learning of CFGs; the convergence speed and final performance are almost the same as the vanilla model. Therefore, we speculate that the enhancement of language modeling by kv shifting comes from semantics rather than context-free syntax.
(b) **A more detailed analysis of reasoning capabilities.**
The analysis in this section might also serve as part of the answer to the previous question. Based on our test of mathematical ability, we find that KV shifting attention can effectively package the information of adjacent tokens and store it separately in key and value for ease of subsequent operations. For example, in "3+5=", the information of "3" is used as the value and shifted to "+", and the value of "5" is shifted to "=". Then "=" uses two attention heads: the first obtains the information of "3+" by attending to "+", and the other obtains the information of "5" by attending to "="; an MLP then produces the correct answer "8".
On the other hand, induction heads are also more conducive to establishing relationships between things. For example, if a sentence contains "Beijing roast duck" or "Beijing, China", then kv shifting can easily create key-value pairs in one layer of attention, such as <key: Beijing, value: roast duck> and <key: Beijing, value: China>. In the following text, it is then easy to associate Beijing with China or roast duck, which helps the model answer questions such as which country Beijing is in or what its cuisine is.
(c) **In terms of potential limitations.** We speculate that KV shifting may strengthen repetition patterns in the model, due to the enhancement of induction heads. Although there is no conclusive evidence, I would like to share an interesting observation. We later trained a model with 14B parameters on 15T tokens. On the math500 benchmark, we find that the optimal generation configuration sets the repetition penalty to 1.1, whereas for qwen2.5-14b a repetition penalty of 1.05 suffices.
**More extensive applications**
(a) **For encoder-decoder structures.** Due to the absence of a causal mask, the direction of shift in the encoder can be either forward or backward. I believe that KV shift can also enhance the encoder decoder model, as it can benefit from easier to implement induction mechanisms.
(b) **Multimodal model.** In visual models, there is a similar approach to using token shift [2], which they believe is beneficial for "back and forth for motion captioning". Therefore, I believe that similar operations can contribute to multimodal models. If it can really work, then the real reason for its effectiveness will be even more interesting. Does it enhance local information exchange, or did it enhance something similar to the induction heads mechanism?
[1] Physics of language models: Part 1, learning hierarchical language structures.
[2] Token Shift Transformer for Video Classification.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal. I have enjoyed reading the paper and the rebuttal. While I think that this line of work is an important one, I think the reviewer fHVj raises a valid concern. Hence I retain my score | Summary: Based on the analysis of 2-layer attention, the authors found that 2-layer attention cannot effectively represent information flow from i+1 -> k -> j (for j>=i, k<i). Therefore, the authors proposed KV shifting attention to address this issue. The authors also conducted large-scale pre-training experiments to validate its effect.
## update after rebuttal
Regarding my main question, "whether the limitation proposed in this paper still exists when attention layers >=3", the authors mainly use Figure 8 to respond. Figure 8 does provide some explanation, but I believe it's insufficient and indirect. Therefore, I maintain my score.
Claims And Evidence: 1. I'm not entirely sure if my understanding is correct. According to Property 1, for j to indirectly attend to i+1 through token k, k needs to be k>= i+1. This is indeed a limitation of attention. However, this property only considers the 2-layer case. If LLMs have more than 2 layers, would this problem be solved? I think this is similar to MLP's expressiveness: although 2-layer MLPs cannot represent non-linearly separable functions, 3-layer MLPs can represent all functions. Therefore, if the limitations proposed in Property 1 can be solved by 3-layer attentions, is this problem still serious considering that current LLMs generally have dozens of attention layers?
2. Following point 1, the impact of model depth on Property 1 and experimental results has not been verified.
3. Following point 1, according to figure 3, the impact of KV shifting on loss is minor, especially in the 19B model. Does this confirm the limitation of this paper's motivation: for larger and deeper LLMs, the limitations of 2-layer attention revealed by property 1 disappear or are no longer a key issue?
Methods And Evaluation Criteria: Yes. However, according to Figure 3, its effect is not significant.
Theoretical Claims: I tried to understand property 1. Please see Claims And Evidence regarding my concerns.
Experimental Designs Or Analyses: Please see the third point in Claims And Evidence.
Supplementary Material: Not provided.
Relation To Broader Scientific Literature: I think this is similar to MLP's expressiveness: although 2-layer MLPs cannot represent non-linearly separable functions, 3-layer MLPs can represent all functions.
Essential References Not Discussed: No
Other Strengths And Weaknesses: No
Other Comments Or Suggestions: Typos: line 84, Property 3.2 -> Property 1
Citation: Press, O., lacks year information.
Questions For Authors: Please refer to Claims And Evidence.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you very much for your valuable review comments. I hope the following response can answer your concerns.
**Regarding expressive power**
Firstly, if I understand correctly, the "three" in the three-layer MLP you mentioned refers to layers of neuron nodes, which actually corresponds to an MLP with two layers of parameters. I think using more layers can also implement the function of induction heads; as shown in Figure 1(a), four layers of attention can also implement it. But on the simple induction heads task, 4 layers do not improve over 2 layers, so more difficult tasks should be considered. For example, see the hop-k task in Appendix E1, where KV shifting attention achieves better performance; there we compared L = 2, 3, 4, 5, 6 layers. In terms of motivation, the two-layer case inspired this work, and rigorous theoretical analysis of multiple layers is difficult, which we leave for the future.
**The impact of model depth**
You can refer to the hop-k task in Appendix E1, where KV shifting attention achieves better performance; there we compared L = 2, 3, 4, 5, 6 layers. From the experimental results, we speculate that adding layers does not alleviate this situation. If it did, multi-layer attention could better achieve multi-hop reasoning through some shortcut, thereby closing the gap between Figure 8(b) and Figure 8(c). In fact, this gap did not decrease as the number of layers increased. We think the causal mask imposes these restrictions on information flow.
**Loss and Benchmark**
(a) Firstly, the learning rate of the 19B model is 2e-4, which is relatively small. As shown in our Figure 4 at the 1.5B scale, the gap between the two narrows under a small learning rate; if a larger learning rate is adopted, we speculate that the loss gap between the two would increase. (b) Secondly, I would like to point out that loss can be misleading during the pre-training of large language models, because a large number of tokens can be easily predicted using n-gram information without truly utilizing contextual information [1]. Therefore, we compared several benchmarks, among which benchmarks like MMLU are relatively reliable [2]; as shown in Figure 11, KV shifting attention achieved good results on MMLU. (c) The results of the toy tasks in the Appendix make us believe that the model can also achieve better in-context learning ability at larger scales.
[1] This is particularly evident in the training of long texts, where models with similar losses have significantly different abilities for long-distance information retrieval. "Can Perplexity Reflect Large Language Model's Ability in Long Text Understanding?"
[2] For example, we can observe that the performance on MMLU can clearly separate transformer models from mamba model in Table 4 of "An Empirical Study of Mamba-based Language Models". | null | null | null | null | null | null | null | null |
The Underlying Universal Statistical Structure of Natural Datasets | Accept (poster) | Summary: A model of natural datasets (at least natural images) is built based on a random matrix analysis and extensive data analysis. An interpretation is proposed for the power-law spectrum of the covariance matrix of input features, which generically shows up on natural data. In addition, a further characterization of the statistical ensemble is provided in terms of a random matrix ensemble (the Gaussian orthogonal ensemble, GOE), based on an empirical study of the statistics of level spacings.
Claims And Evidence: The main claim is that the bulk spectral statistics of the feature-feature covariance matrix arise from long-range correlations following a power law and lie in the GOE universality class. This claim is backed by a precise data analysis comparing the RMT model obtained with population matrices given by a full-band Toeplitz matrix, corresponding to a 1d model of homogeneous interactions growing with distance raised to some exponent.
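As an illustration of the mechanism under discussion — a toy sketch, not the paper's Toeplitz construction: here we directly prescribe a power-law population spectrum (as one could after diagonalizing such a model), and `d`, `M`, and `alpha` are arbitrary illustrative choices — the sample Gram spectrum of correlated Gaussian data inherits the population power law:

```python
import numpy as np

rng = np.random.default_rng(1)
d, M, alpha = 512, 4096, 0.3
pop = np.arange(1, d + 1) ** (-1.0 - alpha)     # power-law population spectrum
X = rng.standard_normal((M, d)) * np.sqrt(pop)  # correlated Gaussian data (CGD)
emp = np.sort(np.linalg.eigvalsh(X.T @ X / M))[::-1]

# log-log fit over the bulk, skipping the top outliers and the tail
i = np.arange(1, d + 1)
lo, hi = 10, d // 2
slope = np.polyfit(np.log(i[lo:hi]), np.log(emp[lo:hi]), 1)[0]
# slope comes out near -(1 + alpha) = -1.3 when M/d is comfortably large
```

With M/d = 8 the sample covariance is close to the population one in the bulk, so the fitted exponent tracks the prescribed value; at smaller M the finite-sampling (Marchenko-Pastur-type) broadening distorts the naive fit, which is the finite-M estimation issue raised below.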
Methods And Evaluation Criteria: On one side, the RMT model for the considered 1d model of long-range interactions is solved in order to get the corresponding spectral density in the proportional scaling regime (fixed ratio N_data/N_features) using the Marchenko-Pastur equations. It is then used to fit very well the spectral density of a large selection of (image) datasets. Then the level-spacing statistics corresponding to these datasets are determined empirically using methods from quantum chaos, namely the unfolding method used to normalize the level spacings w.r.t. the spectral density, the distribution of the level-spacing statistical indicator called the r-statistics, and the spectral two-point form factor, which have well-defined signatures for different RM ensembles. These are then compared with the expected result for GOE, yielding very good agreement. An estimation of the number of samples needed to be in this RMT regime is also provided, based on various indicators having well-defined values in the matrix ensemble once the exponent is known. I don't see how this can be used in practice, since the exponent, which is in principle unknown, is estimated itself at finite M. This looks more like a qualitative heuristic estimation.
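The r-statistics mentioned here are straightforward to reproduce; a minimal sketch on a pure GOE matrix follows (size and seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
A = rng.standard_normal((n, n))
H = (A + A.T) / np.sqrt(2 * n)            # GOE matrix
ev = np.sort(np.linalg.eigvalsh(H))

s = np.diff(ev)                            # consecutive level spacings
r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
print(round(r.mean(), 2))                  # ~0.53 for GOE; Poisson levels give ~0.39
```

A convenient property of the gap ratio is that, unlike the raw spacing distribution, it requires no unfolding: the local spectral density cancels between numerator and denominator, so the mean lands near the known GOE value of about 0.53 versus about 0.39 for uncorrelated (Poisson) levels.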
Theoretical Claims: No specific theoretical claims
Experimental Designs Or Analyses: As discussed already in the Methods and evaluation criteria section, the experimental data analysis is perfectly sound.
Supplementary Material: All the necessary background is provided in the supplementary material for RMT and the various statistical estimation.
Relation To Broader Scientific Literature: I think the bibliography is fine.
Essential References Not Discussed: No essential reference is missing in my opinion.
Other Strengths And Weaknesses: - strength: the statements on the statistical properties of natural data are interesting and instructive, I found the paper clear and the methodology rather convincing.
- weakness: the scope is limited to natural image and their texture properties. This might have limited impact in ML theory (see my questions below).
Other Comments Or Suggestions: equation (9) to check, should it not be continuous at tau=1?
Questions For Authors: I find the paper quite interesting but I wonder about the real scope of this work regarding ML, in particular the modelling of data for theoretical analysis. Among the claims, the fact that the real datasets fall into the GOE class is well expected (Wishart matrices obtained from large data embedded in large dimensional spaces), but the empirical analysis of the level spacing has the merit of providing a clean validation of this hypothesis. Still, the impact on the theory of ML is not clear to me, since the power-law hypothesis (with possibly some additional spike model of outliers) for the population matrix is already commonly used, while the level-spacing statistics of the Gram matrices seem not to play (to the best of my knowledge) any role in theoretical analyses of ML. Then remains the 1-dimensional long-range correlated model of the population matrix, in the form of Toeplitz matrices, which looks quite artificial to me. I guess any d-dimensional model with spatial power-law decay would lead to the same result, but the 2-point function would decay with distance for d>1 instead of increasing in 1d. I believe this bulk modelling is a simplified model of texture in this context of image datasets, and wonder whether a 2d model would not be more appropriate to interpret the data?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer ZEdS,
Thank you for your careful review and positive appraisal of our work, tending towards acceptance. We’re glad you found the work interesting and well supported by empirical evidence.
Below, we address the remaining issues you raised, as well as questions and comments.
**Methods and Evaluation**
*” I don't see how this can be used in practice, since the exponent which is in principle unknown, is estimated itself at finite M. This looks more like an qualitative heuristic estimation.”* - We agree that defining the asymptotic exponent itself at finite $M$ is suboptimal, and therefore its value may not be entirely trustable, but this is truly the best one can do on any real-world finite dataset. The important part is the universal convergence towards this exponent, and the interesting fact that this convergence occurs in two sharply separated phases, as seen in Fig 5 (middle).
**Weaknesses and Questions**
- *”…level-spacing statistics of the Gram matrices seems not to play (to the best of my knowledge) any role for theoretical analysis of ML.”* - This is inaccurate, please see Appendix F for more details. Briefly, the role of the level spacing metric is to confirm the spectral density distribution, which is critical for analysis of even linear regression in the empirical limit (the resolvent is required to find the optimal network solution, which depends directly on the data covariance matrix’s eigenvalue distribution).
- *”…whether a 2d model would not be more appropriate to interpret the data?”* - In order to capture the eigenvalue statistics, a 1-d model is sufficient, and any higher-dimensional model can work, provided that there is a component that decays as a power law. For instance, if we were to take a Fourier-type model, as long as the spectrum had a component that decayed with the magnitude $|\vec{k}|$ as a power law, the same type of results would hold. Our goal was simply to provide the simplest possible model.
- “*equation (9) to check, should it not be continuous at tau=1?*” Yes it should, thank you for bringing this to our attention, we will correct the formula in the revised version.
We hope that our replies are sufficient to raise your confidence in our work and possibly raise your scoring of our paper. We welcome any further questions and comments. | Summary: The paper examines the empirical eigenvalues of Gram matrices of natural image datasets, showing that they follow a power law. It offers a simple model that reproduces this power law behavior using Gaussian data with long-range correlations. These results suggest that natural image Gram matrices can be approximated by Wishart random matrices with a simple covariance structure, enabling rigorous analysis of neural network behavior.
Claims And Evidence: As far as I can see, all claims are clearly stated and well-supported by theoretical or empirical evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem or application at hand.
Theoretical Claims: I checked the theoretical claims and found no major issues with their correctness.
Experimental Designs Or Analyses: I found the experiments to be generally sound and valid.
Supplementary Material: I reviewed most of it to some extent and had a closer look at Apps A, B, E, F.
Relation To Broader Scientific Literature: I found the subsection “Neural scaling laws” under Background and Related Work somewhat non-satisfactory. It leaves an impression that the relation between the spectral properties of the data and the learning curve is unclear.
Essential References Not Discussed: • Ba et al. 2022 - High-dimensional asymptotics of feature learning: how one gradient step improves the representation.
• Wang et al. 2023 - Spectral evolution and invariance in linear-width neural networks.
Other Strengths And Weaknesses: Strengths:
1. The paper is well structured, clearly written, and rather easy to follow, given the heavy math.
2. Tackles a very timely subject, both interesting and relevant.
Weaknesses:
1. I think that the main weakness is establishing the claim in the last sentence of the abstract: “image Gram matrices can be approximated by Wishart random matrices with simple covariance structure, enabling rigorous analysis of neural network behavior.” The paper could be much improved if this point was made more central, beyond what is provided in App. F, e.g. by some experiments of training a DNN on real-world data vs on the corresponding CGD.
Other Comments Or Suggestions: Typos and other small issues:
1. Line 153 right column, above eq 2: 'with' is misplaced.
2. I find the index notation confusing, e.g. $X_{ia}\in\mathbb{R}^{d\times M}$; usually $X_{ia}\in\mathbb{R}$ as in the matrix element.
Questions For Authors: 1. Lines 434-436: “accomplishing learning tasks for which the spectrum is insufficient and eigenvector properties are needed, requires a different scale of data samples.” - would it not be fair to say that for most typical learning tasks eigenvector properties are needed?
2. What information is NOT captured by the CGD?
3. Can you speculate why the scaling law behavior begins around i=10?
4. Lines 189-191: “For real-world data, we consistently find that \alpha>0” - is there a simple reason for this?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer kWZr,
Thank you for your careful review and positive appraisal of our work, deeming it acceptable at ICML. We’re glad you found the work interesting and well supported by empirical evidence.
Below, we address the remaining issues you raised, as well as questions and comments.
**Related work**
1) We apologize for not bringing the connection between neural scaling laws, learning curves and the spectral properties of the data more to the forefront in this section. We provide a concrete example for this connection in Appendix F, but we will clarify these connections in the related work section as well in the revised version.
2) Thank you for pointing out these references, we will update the revised manuscript to include them both.
**Weaknesses**
1) *Highlighting the connection to real world DNN training on CGD* - We understand the reviewer’s suggestion and agree it has considerable merit, however, our intent in this paper was to bridge a long line of existing semi-empirical works (including ones that train networks on gaussian versions of real world data, see Ref [1,2] and similar works), and purely theoretical works that make the assumption of gaussian data. We will include Ref [1,2] and several other works that perform such experiments to better connect our results to the already large existing literature.
**Comments**
1. *Line 153 right column, above eq 2: 'with' is misplaced.* - Thank you for this comment, the typo will be fixed.
2. *I find the index notation confusing, e.g. $X_{ia}\in \mathbb{R}^{d×M}$; usually $X_{ia}\in \mathbb{R}$ as in the matrix element.* - We accept your comment, and will augment the final version accordingly.
**Questions**
1. *Lines 434-436: “accomplishing learning tasks for which the spectrum is insufficient and eigenvector properties are needed, requires a different scale of data samples.” - would it not be fair to say that for most typical learning tasks eigenvector properties are needed?* - We agree that there are many cases, such as classification tasks, the eigenvectors are crucial, and we study to what extent in a future work. However, for teacher-student settings, many other regression based settings, the eigenvectors are not necessarily important, and the spectrum is sufficient to predict performance and dynamics.
2) & 3. *What information is NOT captured by the CGD? why does the scaling law behavior begin around i=10?* - First, the CGD does not capture the top ~10 eigenvalues, since these eigenvalues contain the least “universal” properties of the data, and constitute the most particular aspects of every different dataset, also corresponding to the most informative eigenvectors. Therefore, the CGD, which describes the universal part of the spectrum, is shared between datasets and is only valid for the bulk of eigenvalues. Second, as stated in the main text, the CGD does not capture the eigenvector structure, as we do not assume a special basis for the CGD model. It would be interesting to explore which combinations of bases and spectra would render the CGD model a more complete description of particular datasets.
4. *Lines 189-191: “For real-world data, we consistently find that $\alpha>0$” - is there a simple reason for this?* - This is a very good question, which is currently still open. One possibility is that these values of power law spectra can emerge from hierarchical data structures with varying levels of rare features [3]. Answering this question could also shed light on certain aspects of neuroscience, see [4] for the emergence of power laws in human visual cortex activity data. We would also like to note that $\alpha>0$ seems to be prevalent for natural datasets, but not necessarily for other types of data. For instance, simulations of physical systems can give rise to negative exponents (see for instance [5] for the case of turbulence).
References:
[1] Refinetti et al., Neural networks trained with SGD learn distributions of increasing complexity (2023)
[2] Székely et al., Learning from higher-order correlations, efficiently: hypothesis tests, random features, and neural networks (2024)
[3] Cagnetta et al., How Deep Neural Networks Learn Compositional Data: The Random Hierarchy Model (2024)
[4] Gauthaman et al., Universal scale-free representations in human visual cortex (2024)
[5] Levi et al., The Universal Statistical Structure and Scaling Laws of Chaos and Turbulence (2023) | Summary: This paper aims to exploit Gaussian universality to generate synthetic data that accurately captures the statistics of natural data distributions. In particular, the paper proposes a synthetic model that reproduces the eigenvalue statistics of the covariance matrices of natural data. Since the synthetic data is described by RMT, the authors focus on demonstrating that the RMT predictions match natural datasets with increasing accuracy as the sample covariance approaches the population covariance.
Claims And Evidence: Claim 1: Powerlaw spectrum of population covariance matrices. This claim is well supported by the experiments and is in agreement with much previous work.
Claim 2: The powerlaw spectrum originates from underlying correlational structure. This claim may be true but it is not supported by the given evidence. See Theoretical Claims section.
Claim 3: The correlated Gaussian datasets are a correct proxy for natural data. This claim is well supported by the experiments, but it follows from the Gaussian universality principle, which has been extensively studied in previous work. See Essential References Not Discussed section.
Claim 4: Convergence of sample covariance to population covariance corresponds to convergence of sample statistics to RMT statistics. I believe this claim already follows from Gaussian universality and law of large numbers.
Claim 5: Shannon entropy is correlated with the local RMT structure, is smaller in correlated datasets, and converges to the population entropy in much fewer samples. The meaning of this claim is unclear. The proposed definition of Shannon entropy does not appear to match the commonly accepted definition for Gaussian distributions. See Methods And Evaluation Criteria section.
To me, the main claim of this paper seems to be that Gaussian universality holds. This has been confirmed in many previous works, so it's not clear what the novel contribution is.
Methods And Evaluation Criteria: The definition of dataset entropy is confusing to me. I don't see the connection between the entropy as defined and the entropy of the data distribution, which seems the more natural measure of entropy. What motivates the proposed definition?
Theoretical Claims: I don't see novel proofs or derivations in this paper. There are some conceptual issues I will point out here.
It's not clear to me why the setup is called "correlated." The data vectors are drawn from a Gaussian with diagonal covariance. It seems that the uncorrelated/correlated distinction being drawn is in fact a distinction between isotropic/anisotropic. Relatedly, it's not clear to me what the Toeplitz matrix brings to the analysis. The exponent in the Toeplitz matrix must be measured from the dataset to be modeled anyways; why not simply use that measurement to directly set the diagonal of the covariance according to eq. 3? With this simplification, I believe the main claim boils down to the usual Gaussian universality assumption.
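To make the suggestion concrete, here is a minimal numpy sketch of sampling anisotropic Gaussian data from a directly specified Toeplitz covariance; the AR(1)-style form $C_{ij} = \rho^{|i-j|}$ and the value $\rho = 0.8$ are illustrative assumptions (chosen so the matrix is positive definite), not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 5000
rho = 0.8  # illustrative decay rate, not measured from any dataset

# Toeplitz covariance C_ij = rho^|i-j| (positive definite by construction),
# specified directly rather than derived from a separate correlation model.
C = rho ** np.abs(np.subtract.outer(np.arange(d), np.arange(d)))
X = rng.standard_normal((n, d)) @ np.linalg.cholesky(C).T  # rows ~ N(0, C)

pop_eigs = np.sort(np.linalg.eigvalsh(C))[::-1]
sample_eigs = np.sort(np.linalg.eigvalsh(X.T @ X / n))[::-1]
# The anisotropic (non-flat) population spectrum is recovered by the
# sample covariance once n is large enough relative to d.
```

With enough samples the sample spectrum tracks the population one, which is the sense in which positing the spectrum directly and deriving it from a Toeplitz construction make the same predictions.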
Experimental Designs Or Analyses: It is known to be notoriously difficult to directly measure powerlaw exponents (see https://arxiv.org/pdf/cond-mat/0402322). Appendices B and C in https://arxiv.org/pdf/2311.14646 also point out that it is easy to make drastic measurement errors trying to directly fit the sample covariance eigenvalues. Due to the ease of measurement errors, I do not know whether to trust the measurements in this paper. The previously mentioned paper provides a more reliable procedure, based in RMT, for exponent estimation.
Supplementary Material: Appendix F: the inability of GCD to well-capture the eigenvector structure indicates to me that it is no more powerful than the usual Gaussian universality assumption.
Relation To Broader Scientific Literature: Line 78 "and many advances have been made by appealing to the RMT framework." Give examples?
See Other Strengths And Weaknesses section.
Essential References Not Discussed: The following papers give convincing evidence for Gaussian universality in natural datasets. Many of these concern kernel ridge regression, where the random matrix in question is the kernel matrix \phi(X)^T \phi(X) rather than the covariance XX^T, but the result is analogous (since the spectra of covariance and gram matrices are the same).
https://arxiv.org/pdf/2302.08923
https://arxiv.org/pdf/2102.08127
https://arxiv.org/pdf/2203.06176
https://arxiv.org/pdf/1905.10843
https://arxiv.org/pdf/2311.14646
Other Strengths And Weaknesses: My primary concern with this paper is that the main claim seems to be that natural data matrices behave like Gaussian random matrix ensembles when the data dimension is large enough. This is already a well-established claim; there is a broad literature on Gaussian universality in high-dimensional statistical inference problems (see Essential References Not Discussed). This paper suggests some physics-inspired diagnostics for further verifying this claim and find agreement; this is useful but, on its own, I don't think it is a sufficient contribution for this venue.
The strengths include proposing to port over some empirical measures of RMT behavior from systems exhibiting quantum chaos. Another novel idea was to suggest a data generation process via Toeplitz matrices, which produces the anisotropic Gaussian features, but I didn't find it convincing.
Other Comments Or Suggestions: Eq 1: I believe it should be X_{ja} or X^T_{aj} instead of X_{aj}
Questions For Authors: 1. How does your analysis and measurements extend the empirically known Gaussian universality principle?
2. Do the specific measurements you proposed shine light on any specific training phenomena? E.g., learning problems like linear regression or neural networks?
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1:
Rebuttal: Dear Reviewer FARz,
We appreciate your thoughtful reading of our manuscript. Your main concern is the statement that natural data matrices behaving as Gaussian at sufficiently large dimensions is already known in the literature. We argue that this is **not the case** and we will attempt to explain our estimation of the distinction between what was known and our work below.
**Weakness and Questions**
We believe there is a subtle misunderstanding between us and the reviewer, which may come from the focus of our work. The large body of literature on Gaussian universality concerns the **learning curves of neural networks when taking Gaussian data as a surrogate for real data**, or the **equivalence between neural networks in certain limits and linear noisy networks** applied to **Gaussian data** [1]. None of the works cited by the reviewer deal directly with the statistical structure of the **data itself**. This is a key difference in our opinion. We agree that the studies of generalized linear regression rely on the properties of the covariance matrix which we study, but they are an indirect measurement. Our work then serves several purposes: (1) it is an agnostic, and much stronger, validation of the conjectures used in the literature to model the data; (2) the $r$ statistics (as well as the other local statistical measures) can offer a systematic path towards explaining the discrepancies between predictions for Gaussian and real-world data, since $r^{(n)}$ statistics explicitly measure deviations from Gaussianity; (3) the local statistics inform you of the number of samples required to reach the RMT regime, showing a non-trivial universal behavior which is not discussed in the previous works; (4) the fact that we find level repulsion could have implications for the localization of eigenvectors, which certainly does have an effect on learning tasks, particularly classification tasks. Note that some of the works you cited are already discussed in our text, particularly in App. F, but we should highlight these in the main text.
**Regarding specific questions:**
1. *”How does your analysis and measurements extend the empirically known Gaussian universality principle?*” – as we replied in the above paragraph, please see our points (1,2) and (3).
2. *”Do the specific measurements you proposed shine light on any specific training phenomena? E.g., learning problems like linear regression or neural networks?”* - We have not studied the direct effects of our results on models beyond App. F, but it is clear that this is the key question that will be studied in future work. The intuition is that by studying the higher level spacing statistics on real data we can systematically describe the deviations from Gaussianity. By focusing on the effects of these deviations for learning curves, we should be able to make better predictions for learning curves as described in (2) above. Additionally, see point (4) in the previous paragraph.
**Methods**
This definition of entropy follows common practice in quantum chaotic systems – we wish to interpret the data matrix as the equivalent of a physical Hamiltonian whose complexity can be analyzed using its spectrum. The Rényi entropy definition then corresponds to a version of entanglement entropy, measuring the structural complexity of the data.
**Theoretical claims**
We use the “correlated” term simply due to the Toeplitz matrix assumption. The reason for this model is that it gives a simple “real-space” interpretation of the known spectrum. In that sense the source for anisotropy comes from correlations, offering some direction towards understanding why the power law appears at all.
**Experimental design**
We thank the reviewer for bringing this work to our attention. We will incorporate the proposed methods into our work. Still, we would argue that since the errors reported in Simon et al. with direct methods are at most ~5%, our results can be trusted: we do not focus on the precise estimation of $\alpha$, but rather on showing that it is generically between 0 and 1 for natural data. We believe our observations regarding convergence trends and spacings are also reasonable within this 5% error margin.
**Relation to broader literature**
*”Line 78 "and many advances have been made by appealing to the RMT framework." Give examples?”* - We relegated some of the relevant citations to the appendices due to space limitations, we will revise our text to better reflect the contributions in the main text.
**Comments**
Thank you for the comment regarding Eq.1, it will be fixed.
We hope that our replies are sufficient for you to re-evaluate the merit of our work and change your score. We welcome any further questions and comments.
---
Rebuttal Comment 1.1:
Comment: I'm afraid I still don't understand. The previous works on Gaussian universality and ridge regression (e.g. the one I mentioned earlier, https://arxiv.org/pdf/2311.14646) make claims akin to "some learning algorithms trained on real data behave as if the data were Gaussian with powerlaw covariances." Your work says "real data behave as if they are Gaussian with powerlaw covariances." (E.g., 324L "we show that correlated Gaussian datasets capture [...] the spectral density and the distribution of eigenvalues for ImageNet, CIFAR10, and FMNIST.") In this sense, your results appear to be strictly weaker than what was known before. If there is some subtlety I'm missing, please let me know.
Points (2) and (4) would be much more convincing if this analysis (or a preliminary version of it) appeared in the manuscript. (3) is interesting to me, but I don't feel that it is a sufficient contribution for acceptance.
"We wish to interpret the data matrix as the equivalent of a physical hamiltonian whose complexity can be analyzed using its spectrum." Why? I'm not understanding what one learns about the data by defining and measuring this quantity. I'm sure one can write down a long list of functionals of the spectrum which converge to RMT predictions w.r.t. dataset size, and it's not clear what's special about this one. I think it adds to confusion, since it contradicts the established information-theoretic definition of entropy that is common in ML.
"In that sense the source for anisotropy comes from correlations, offering some direction towards understanding why the power law appears at all." I don't see how. It looks like (reading Eq. 4) the data covariance powerlaw is inherited directly from the Toeplitz powerlaw. From a modeling standpoint, it appears that the questions of "why is the data covariance powerlaw" has not been resolved, only been pushed back to the question "why are the Toeplitz correlations powerlaw." In the absence of more evidence that the Toeplitz picture is the right one, this feels like adding extra complications without any new insight. I'm also uncomfortable with the fact that the exponent for natural data is positive -- if the Toeplitz picture is correct, this implies that the correlations increase without bound, which seems unphysical.
**EDIT: In response to reply below.**
Thank you for explaining the interpretation of the Gaussian universality claim. I see what you are saying, though I argue that in the end what we (or I) care about is the behavior of some algorithm. Your point is taken, though.
Regarding entropy, I agree that the measure you are referring to is interesting for other reasons. Perhaps it would be salient to connect your notion to established notions of effective rank (https://www.eurasip.org/Proceedings/Eusipco/Eusipco2007/Papers/a5p-h05.pdf). (In fact, this paper refers to your quantity as "spectral entropy," which seems a far more apt term than "distribution entropy," which carries a distinct connotation.)
I'm still not convinced by the argument regarding the Toeplitz matrices. My initial qualm was with the term "correlated Gaussian data" which seems a misnomer since mean-zero Gaussian data is definitionally uncorrelated. I acknowledge that you derive the spectrum from a correlation argument, but my point is that one can arrive at this spectrum in a variety of other ways that are all in the same "Occam's razor equivalence class" (I made up that term but hopefully it gets the point across). In fact, it seems to me that simply positing that the spectrum is powerlaw is in a *lower* Occam's razor class (it makes the same predictions with fewer steps). This "posit powerlaw spectrum" approach is taken in other works on Gaussian universality. (This is what I meant in my original review by "The exponent in the Toeplitz matrix must be measured from the dataset to be modeled anyways; why not simply use that measurement to directly set the diagonal of the covariance according to eq. 3?") I didn't know about the increasing correlation of velocity differences within the eddy-size spatial scale, thanks for pointing that out.
I've increased my score to 2 due to these discussions.
---
Reply to Comment 1.1.1:
Comment: Thank you for your continued engagement and willingness to discuss your concerns. We hope the following addresses your concerns:
The first statements you made accurately describe the observations, but your conclusion from these observations is the opposite of ours. Perhaps a simple way to see why we claim our statement is stronger than just looking at the learning curves is the following observable-counting argument: the loss/learning curves constitute an averaged, scalar observable over the data (in the case of ridge regression). This averaging, as well as the fact that it is a single number, makes it quite insensitive. Concretely, any deviation of the data from GOE will manifest as a small correction to a single number (the loss). On the other hand, measuring the level spacing distribution, r-statistics, SFF, etc., constitutes a much larger number of **direct** measurements of the data. All of the local statistics distributions have moments, and all of them can be computed and compared with the data. By constraining all of these observables we offer a new, much more quantitatively controlled approach to understanding natural datasets, agnostic of any neural network training. Additionally, we can distinguish between GOE and other closely related universality classes, such as the short-range plasma model [1], which could fall below the sensitivity of a single scalar observable (learning curves). We sincerely hope that this clarifies why we believe directly measuring the data is a much stronger statement than any implicit measurement (such as measuring the loss of networks).
**Further analysis:** Thank you for acknowledging that points (2) and (4) are interesting. We have done a preliminary analysis of the $r^{(n)}$ statistics for $n=1,2,4,6$, for CIFAR 10 and for samples from a GOE data, which can be viewed in this link: https://imgur.com/a/PGb7Dfh.
We see that the nearest neighbor statistics agree very well with the GOE, but as the value of $n$ gets larger, we probe farther neighbors and we start seeing discrepancies. We will perform a more complete analysis and add it to the appendix, highlighting this as a possible new avenue to study GOE in datasets. Regarding point (4), we refer the reviewer to App. E, where we have begun an analysis of the eigenvector/localization structure. We have future work along this direction, but it seemed to us that it warrants its own work rather than being a part of this one.
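For readers unfamiliar with the spacing-ratio statistics discussed here, the nearest-neighbour ($n = 1$) measurement can be sketched in a few lines of numpy on a sampled GOE matrix (a generic illustration, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)  # one GOE sample
eigs = np.sort(np.linalg.eigvalsh(H))

s = np.diff(eigs)  # nearest-neighbour level spacings
r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
r_mean = r.mean()  # ≈ 0.53 for GOE; ≈ 0.39 for uncorrelated (Poisson) levels
```

Replacing `H` with a data Gram matrix gives the analogous measurement on a dataset, which is what the comparison above probes.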
**Regarding entropy:** The eigenvalues of a matrix tell you how its action is distributed across different orthogonal directions, the eigenvectors. An entropy built from those eigenvalues quantifies whether the matrix is dominated by one or a few large eigenvalues, or whether its eigenvalues are more evenly spread out. We don’t mean to conflate this with the standard information theory definition of entropy, and will clarify this in the text.
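The quantity described here (termed "spectral entropy" in the effective-rank literature the reviewer points to) is the Shannon entropy of the normalized eigenvalue distribution; a minimal sketch of that definition:

```python
import numpy as np

def spectral_entropy(eigs, eps=1e-12):
    """Shannon entropy of the normalized spectrum p_i = lam_i / sum(lam):
    high when the matrix action is spread evenly across eigendirections,
    low when a few large eigenvalues dominate."""
    p = np.clip(np.asarray(eigs, dtype=float), eps, None)
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

n = 100
flat = spectral_entropy(np.ones(n))               # evenly spread: log(n)
spiked = spectral_entropy([1.0] + [0.0] * (n - 1))  # one dominant direction: ~0
```

This is distinct from the information-theoretic entropy of the data distribution itself, which is the distinction being drawn in the discussion above.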
**Regarding our interpretation of the Toeplitz matrix correlation structure:** We do not aim to interpret the correlation structure in the 2-dimensional space of the pixels, since obviously these correlations should not be growing ones. Instead, these are correlations (that do grow) in a 1-dimensional space that should somehow be related to the features of the data. Understanding why this particular structure forms is indeed a different way of asking why power laws are generated, but we think additional ways of phrasing the question could lead to interesting results. Additionally, we note that non-unitary theories such as turbulence, where the Kolmogorov scaling exponents show that correlations of velocity differences grow with the separation distance in the inertial range of scales, have negative $\alpha$, while real-world datasets have positive $\alpha$. That suggests that we should interpret the correlations differently, and that real data correspond to classes of unitary physical models.
Thank you again for the discussion and useful insights. We would gladly have continued discussing if not for the constraints of the review process. We hope that we have answered your questions, and if so, we would greatly appreciate raising our score towards acceptance.
References:
[1] Rao, W., Critical Level Statistics at the Many-Body Localization Transition Region (2021)
Summary: The paper investigates the universal statistical structure underlying the covariance matrices (Gram matrices) of natural datasets. Leveraging Random Matrix Theory (RMT), it demonstrates that real-world datasets and correlated Gaussian datasets (CGDs) share universal spectral properties, including a characteristic power-law decay of eigenvalues and chaotic statistical behavior consistent with Gaussian Orthogonal Ensemble (GOE) universality. This universal behavior suggests that natural data covariance structures can be effectively modeled by simpler correlated Wishart matrices, enabling analytical tractability and better understanding of neural network learning dynamics.
Claims And Evidence: Extensive numerical experiments across multiple datasets, rigorous statistical tests, and clear comparisons between empirical data and theoretical models from RMT support the claims.
Methods And Evaluation Criteria: NA
Theoretical Claims: Yes, the theoretical claims appear sound. The authors effectively apply well-established RMT concepts and derive theoretical predictions, which closely match the empirical data.
Experimental Designs Or Analyses: Experiments are thorough, involving various real-world datasets (MNIST, FMNIST, CIFAR-10, ImageNet), uncorrelated Gaussian data, and correlated Gaussian datasets. The methodology is clearly explained and robustly implemented. However, the experiments are limited to the image domain only, and it would be interesting to see how the findings extend to other modalities.
Supplementary Material: No
Relation To Broader Scientific Literature: The paper effectively situates its findings within the broader context of neural scaling laws, chaos theory, and Random Matrix Theory. It provides a novel connection between empirical observations of natural datasets and theoretical models traditionally used in quantum chaos and statistical physics.
Essential References Not Discussed: Key related works essential to understanding the paper include studies on neural scaling laws (Kaplan et al., 2020; Maloney et al., 2022), foundational papers in RMT (Mehta, 1991; Couillet and Liao, 2022), and previous analyses of data spectral structure (Ruderman, 1997; Pennington & Worah, 2017).
Other Strengths And Weaknesses: Clear identification of a universal statistical structure in real-world datasets.
Strong theoretical underpinning via RMT.
Rigorous and comprehensive experiments.
Other Comments Or Suggestions: NA
Questions For Authors: How robust is the observed universality across modalities other than vision (e.g., natural language or audio)?
Could different neural architectures affect the emergence of universal spectral structures?
What are the practical implications or potential limitations when applying these universal properties to realistic neural network training scenarios?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Dear Reviewer LgA8,
Thank you for your positive reading of our manuscript, deeming our paper acceptable.
Below, we address your comments/questions:
**Essential References Not Discussed:**
We believe the list offered is likely a mistake, since all of these references are explicitly cited in our related work section under “Random Matrix Theory”. Please let us know if we have missed something.
**Questions**
1) *How robust is the observed universality across modalities other than vision (e.g., natural language or audio)?* - The results appear to be quite universal, also across different modalities. In particular, we studied the RMT properties of language data, and found similar results. These results will be postponed to future work.
2) *Could different neural architectures affect the emergence of universal spectral structures?* - Since the essence of our work deals directly with the data itself, the notion of a neural architecture doesn't explicitly enter into our analysis. However, it is clear (and known) that performing different nonlinear transformations on data can dramatically change its RMT behavior, since it essentially changes the moments of the underlying data distribution, leading to different sample complexities and training dynamics.
3) *What are the practical implications or potential limitations when applying these universal properties to realistic neural network training scenarios?* - The practical implications could range from different sampling techniques based on PCA (and eigenvector sampling) and obtaining scaling laws for sampling required to reach ergodicity, to a more detailed study of the structure of data required for successful learning, moment by moment (second, third etc.).
We hope that our replies are sufficient to raise your confidence in our work and accept the paper. We welcome any further questions and comments. | null | null | null | null | null | null |
KAN-AD: Time Series Anomaly Detection with Kolmogorov–Arnold Networks | Accept (poster)
Summary: The paper introduces KAN-AD, a novel approach to time series anomaly detection (TSAD) based on Kolmogorov–Arnold Networks (KANs). The motivation for this work stems from the limitations of existing TSAD methods, particularly those relying on forecasting models, which often overfit to local fluctuations and fail to generalize well in the presence of noise. The authors argue that effective TSAD should prioritize modeling "normal" behavior using smooth local patterns rather than attempting to capture every minor variation in the data.
To address this issue, the authors reformulate time series modeling by approximating the series with smooth univariate functions. They observe that KAN, in its original form, is susceptible to localized disturbances due to its reliance on B-spline functions. To overcome this limitation, they propose KAN-AD, which replaces B-splines with truncated Fourier expansions. This modification enhances robustness against local fluctuations while preserving the ability to capture global patterns.
The core methodological contributions include the reformulation of time series modeling using Fourier series for improved smoothness and the development of a novel coefficient-learning mechanism based on one-dimensional convolutional neural networks. The authors validate their approach through empirical analysis, demonstrating its effectiveness across various datasets, including those with noisy training data.
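The truncated-Fourier idea described above can be illustrated with a minimal least-squares fit (plain numpy, purely illustrative; KAN-AD itself learns the coefficients with a 1D CNN rather than by least squares):

```python
import numpy as np

def fourier_features(x, K):
    """Truncated Fourier basis [sin(kx), cos(kx)] for k = 1..K."""
    k = np.arange(1, K + 1)
    return np.concatenate([np.sin(np.outer(x, k)), np.cos(np.outer(x, k))], axis=1)

x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + 0.3 * np.cos(3 * x)            # smooth periodic "normal" pattern
Phi = fourier_features(x, K=5)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # fit the 2K coefficients
y_hat = Phi @ coef                              # recovers y (it lies in the basis)
```

Because the basis contains only K low frequencies, a spiky local disturbance cannot be reproduced by the fit, which is the smoothness/robustness argument the summary attributes to replacing B-splines.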
## after rebuttal
I changed my score from reject to weak accept. The authors did address my questions; however, I'm still not certain about their choice of evaluation metric. As can be seen from their tables in the second round, the real performances of the detectors are not as high as they claim with $F1_e$.
Claims And Evidence: The authors make claims that raise some questions. Here, I'm reporting some of them:
1. In the introduction the authors state that normal sequences exhibit greater local smoothness than abnormal ones. Isn't this claim just a corollary of reconstruction-based approaches? In other words, in reconstruction-based detectors, the anomalies have a higher reconstruction error than normal ones, since the detectors (usually autoencoder-based) are trained on normal sequences only. How do the authors justify this?
Methods And Evaluation Criteria: The benchmarks are fine; however, the evaluation metrics don't seem to make sense (see the next section for my doubts).
Also, the authors fail to show whether they do any sort of cross-validation after splitting the series into windows, or whether they use the time component to define the train, validation, and test sets. In other words, are you using time order, say the first 10 windows for training, the 11th and 12th windows for validation, and from the 13th window onwards for testing? Or are you mixing these windows, say 80% of all windows go to training, 2% to validation, and the rest to test? If the former is the case, then how many folds of cross-validation are you doing?
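For clarity, the first (chronological) option this question describes would look like the following sketch; the window size 96 matches the value in the released config, while the split fractions are made up for illustration:

```python
import numpy as np

def temporal_window_split(series, window=96, train_frac=0.8, val_frac=0.1):
    """Slice a univariate series into overlapping windows, then split them
    chronologically (no shuffling across time)."""
    W = np.lib.stride_tricks.sliding_window_view(series, window)
    n_tr = int(len(W) * train_frac)
    n_val = int(len(W) * val_frac)
    return W[:n_tr], W[n_tr:n_tr + n_val], W[n_tr + n_val:]

train, val, test = temporal_window_split(np.arange(1000.0))
# Every training window starts strictly before every validation/test window.
```

The alternative (shuffling windows across splits) leaks future information into training, which is why the distinction matters for TSAD evaluation.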
Theoretical Claims: The authors don't make any theoretical claims that haven't been shown/proven before: i.e., using the Fourier transform instead of B-splines when wanting to capture periodic patterns or to obtain more local smoothness.
Experimental Designs Or Analyses: I don't know why the authors are proposing different evaluation strategies. All of the F1 variants adopted seem to misrepresent the performances of KAN-AD. Sure, the delay penalizes KAN-AD, but why do you need to penalize it? Here's an example.
| | $t_1$ | $t_2$ | $t_3$ | $t_4$ | $t_5$ | $t_6$ | $t_7$ | $t_8$ | $t_9$ | $t_{10}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| GT | | | | $\times$ | $\times$ | $\times$ | $\times$ | | | $\times$ |
| Detection | | | | | | | T | T | | T |
| Timestep-based | TN | TN | TN | FN | FN | FN | TP | FP | TN | TP |
| 3-Delay PA | TN | TN | TN | FN | FN | FN | FN | FP | TN | TP |
| Point-wise PA | TN | TN | TN | TP | TP | TP | TP | FP | TN | TP |
$\times$ is true anomaly; T = detected anomaly; TP = True Positive; FN = False Negative; TN = True Negative; FP = False Positive.
So, my doubt is on the evaluation itself. Since the authors treat univariate series only, this is even clearer. Each $x_i$ at any timestamp $t_i$ can be either normal or anomalous. Therefore, the predictor can be evaluated for each $t_i$ as shown above: i.e., timestep-based. In these conditions, the authors report $F1_d = 0.2857, F1_e = 0.5714$ – since it doesn't consider the length of the detected anomalies nor false positives. Nevertheless, the real F1 score should be the one computed per timestep: i.e., $F1_t = 0.5$. The authors claim that they use the event-based calculation – i.e., $F1_e$ – which smears false negatives into true positives (see the point-wise PA and Figure 4 in the paper). The authors were self-critical about the delay in the beginning and then decided to use the event-based one. It's very confusing, and the reason the authors provide is *"For the sake of convenience [...] use Event F1 [...] as it is more alignment (**typo**) with the need for real-time anomaly detection in real-world situations."* Who says that event F1 is more aligned with real-time anomaly detection? Is there any other work that argues this? If not, this statement is unfounded, and actually harmful, since, again, event F1 obfuscates false negatives, a critical aspect in critical domains, especially healthcare: e.g., if you miss an anomaly in neurodegenerative patients, they might die.
If we take the example in Figure 4 of the paper, we have $F1_d = 0.1429$, and $F1_e = 0.5714$, which is an overestimate of the true performance according to the timestep-based approach proposed above $F1_t = 0.2667$. Therefore, $F1_t$ emulates quite closely the "self-harming" $F1_d$. Again, $F1_e$ is an overestimate of the true performances.
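The timestep-based score used in these examples can be reproduced directly (a quick sketch over the arrays from the table above):

```python
import numpy as np

gt   = np.array([0, 0, 0, 1, 1, 1, 1, 0, 0, 1])  # x marks in the GT row
pred = np.array([0, 0, 0, 0, 0, 0, 1, 1, 0, 1])  # T marks in the Detection row

tp = int(((pred == 1) & (gt == 1)).sum())  # t7, t10
fp = int(((pred == 1) & (gt == 0)).sum())  # t8
fn = int(((pred == 0) & (gt == 1)).sum())  # t4, t5, t6
precision, recall = tp / (tp + fp), tp / (tp + fn)
f1_t = 2 * precision * recall / (precision + recall)  # = 0.5
```

Changing how TP/FN are counted (delay-limited or point-adjusted) is all that separates $F1_d$, $F1_t$, and $F1_e$ on the same detections.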
The only sound metric here is AUPRC, which doesn't actually show that KAN-AD is way better than the second-best (i.e., KAN in KPI and WSD). So, in KPI you're comparable to KAN (a difference of .029, which is statistically insignificant) and in WSD you underperform by .013 (again statistically insignificant). Let's agree that in 2/4 datasets you're comparable to SoTA, and you win in the 2 others. Therefore, across the board KAN-AD is not the best. Moreover, as per my comment in Sec. **Methods And Evaluation Criteria**, we can't even be sure that these metrics make sense, since we don't know the number of folds you tested on or what train-validation-test split you used.
Moreover, the average $F1_e$ reported in Table 2 isn't indicative of KAN-AD's overall performance. There are statistical tests that showcase how much better one method is compared to others. For example, the Friedman test with a post-hoc Bonferroni-Dunn correction could actually show whether KAN-AD is better than KAN in the previous two edge cases, which I doubt it would. However, these tests work only if you have multiple runs/folds for each of the detectors compared.
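A sketch of the suggested protocol with hypothetical per-fold scores (the numbers and the three-detector setup are made up; a Bonferroni-Dunn post-hoc against a control would follow only if the omnibus test rejects):

```python
from scipy.stats import friedmanchisquare

# Hypothetical AUPRC scores of three detectors over five folds (illustrative).
kan_ad = [0.71, 0.69, 0.73, 0.70, 0.72]
kan    = [0.70, 0.68, 0.74, 0.69, 0.71]
sand   = [0.62, 0.60, 0.65, 0.61, 0.63]

stat, p = friedmanchisquare(kan_ad, kan, sand)
# A small p-value means at least one detector's rank distribution differs;
# pairwise Bonferroni-Dunn comparisons against the control would then follow.
```

The test ranks detectors within each fold, so it needs the per-fold scores the review asks for, not just the averages in Table 2.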
**In Table 3, why isn't SAND's resource consumption reported? It has the second-best $F1_e$ score according to Table 2!** Even though you can execute it on a CPU, the reader should still have these values reported. What does the justification you provided, *"SAND's CPU-only execution requirement and SubLOF's limited multi-core utilization capabilities preclude fair comparison in modern hardware acceleration contexts"*, mean? You're running most of the other methods on GPU, which actually accelerates them. What fair comparison are you doing here by excluding SAND or SubLOF? Also, how come KAN-AD has a lower execution time on CPU than on GPU? Are you doing a lot of I/O transfer operations? Actually, this happens for all methods with 1k or fewer parameters.
Supplementary Material: I checked all of it. It could've been omitted as far as I'm concerned. It doesn't provide anything interesting to support the main paper. The code is there and I checked it. I tried to run it as described in the README file. It doesn't seem to work when building the environment. Here's the error:
Could not solve for environment specs
The following packages are incompatible
├─ _libgcc_mutex ==0.1 conda_forge does not exist (perhaps a typo or a missing channel);
├─ _openmp_mutex ==4.5 2_kmp_llvm does not exist (perhaps a typo or a missing channel);
├─ aiohttp ==3.9.5 py310h2372a71_0 does not exist (perhaps a typo or a missing channel);
├─ alsa-lib ==1.2.10 hd590300_0 does not exist (perhaps a typo or a missing channel);
├─ argon2-cffi-bindings ==21.2.0 py310h2372a71_4 does not exist (perhaps a typo or a missing channel);
├─ attr ==2.5.1 h166bdaf_1 does not exist (perhaps a typo or a missing channel);
├─ aws-c-auth ==0.7.16 h70caa3e_0 does not exist (perhaps a typo or a missing channel);
├─ aws-c-cal ==0.6.9 h14ec70c_3 does not exist (perhaps a typo or a missing channel);
├─ aws-c-common ==0.9.12 hd590300_0 does not exist (perhaps a typo or a missing channel);
├─ aws-c-compression ==0.2.17 h572eabf_8 does not exist (perhaps a typo or a missing channel);
├─ aws-c-event-stream ==0.4.2 h17cd1f3_0 does not exist (perhaps a typo or a missing channel);
├─ aws-c-http ==0.8.0 hc6da83f_5 does not exist (perhaps a typo or a missing channel);
├─ aws-c-io ==0.14.3 h3c8c088_1 does not exist (perhaps a typo or a missing channel);
├─ aws-c-mqtt ==0.10.2 h0ef3971_0 does not exist (perhaps a typo or a missing channel);
├─ aws-c-s3 ==0.5.1 h2910485_1 does not exist (perhaps a typo or a missing channel);
├─ aws-c-sdkutils ==0.1.14 h572eabf_0 does not exist (perhaps a typo or a missing channel);
├─ aws-checksums ==0.1.17 h572eabf_7 does not exist (perhaps a typo or a missing channel);
├─ aws-crt-cpp ==0.26.2 ha623a59_3 does not exist (perhaps a typo or a missing channel);
├─ aws-sdk-cpp ==1.11.267 h0bb408c_0 does not exist (perhaps a typo or a missing channel);
├─ binutils_impl_linux-64 ==2.40 hf600244_0 does not exist (perhaps a typo or a missing channel);
├─ binutils_linux-64 ==2.40 hbdbef99_2 does not exist (perhaps a typo or a missing channel);
├─ blas-devel ==3.9.0 20_linux64_openblas does not exist (perhaps a typo or a missing channel);
├─ blas ==2.120 openblas is requested and can be installed;
├─ blessed ==1.19.1 pyhe4f9e05_2 is not installable because it requires
│ └─ __unix, which is missing on the system;
The stacktrace is huge; the above is only a snippet. **Other reviewers should try to build the environment and see if this occurs for them as well**.
Also, I've noticed in the *run.py* file that there are more datasets the authors could have tested, e.g., Yahoo, NAB, and AIOPS. Why did the authors decide not to?
In *method/kanad/config.toml* there is a window size of 96 that defines a sliding/slicing window over the time series. Why isn't this discussed among the hyperparameters (column 1, lines 270-272)?
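For concreteness, the sliding/slicing windowing I'm referring to would look roughly like this (a hypothetical stride-1 sketch; `sliding_windows` is an illustrative name, not KAN-AD code):

```python
def sliding_windows(series, window=96, stride=1):
    """Slice a 1-D series into overlapping windows of fixed length
    (a sketch of the window_size=96 setting referred to above)."""
    return [series[i:i + window]
            for i in range(0, len(series) - window + 1, stride)]

windows = sliding_windows(list(range(200)), window=96)
# a series of length 200 yields 200 - 96 + 1 = 105 windows of length 96
```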
Relation To Broader Scientific Literature: The scope of the paper seems shortsighted. Modern approaches, especially foundation models [1], treat multivariate time series [2], whereas KAN-AD treats only univariate ones. Real-world scenarios usually involve multiple variables, which makes detecting anomalies more complicated, especially if one wants to produce an explanation/interpretation of why the anomaly happened. I'm sceptical about KAN-AD's usage in broader TSAD. The overall experiments (especially the evaluation) raise a lot of questions about KAN-AD's utility (see **Experimental Designs Or Analyses**).
[1] Gao et al. Units: A unified multi-task time series model. NeurIPS'25
[2] Flaborea et al. Are we certain it's anomalous?. Workshops CVPR'23.
Essential References Not Discussed: **Why are the authors only considering forecasting approaches?** AD also has reconstruction-based approaches. This omission hinders assessing the generalizability of KAN-AD. Therefore, the authors have missed a lot of important competitors in TSAD. Here are a few that should be discussed and, perhaps, compared against. Most importantly, the comparison against UniTS [11] is paramount since it is SoTA across time-series problems. I'm aware that some of these are >5 years old; however, [1,9,11] should definitely be considered in the experiments.
[1] Flaborea et al. Are we certain it's anomalous?. Workshops CVPR'23.
[2] Audibert et al. Usad: Unsupervised anomaly detection on multivariate time series. KDD'20
[3] Bieber et al. Low sampling rate for physical activity recognition. PETRA'14
[4] Geiger et al. Tadgan: Time series anomaly detection using generative adversarial networks. Big Data'20
[5] Hundman et al. Detecting spacecraft anomalies using lstms and nonparametric dynamic thresholding. KDD'18
[6] Li et al: Mad-gan: Multivariate anomaly detection for time series data with generative adversarial networks. ICANN'19
[7] Su et al. Robust anomaly detection for multivariate time series through stochastic recurrent neural network. KDD'19
[8] Zhang et al. A deep neural network for unsupervised anomaly detection and diagnosis in multivariate time series data. AAAI'19
[9] Zhang et al. Unsupervised deep anomaly detection for multi-sensor time-series signals. IEEE TKDE'21
[10] Zhao et al. Multivariate time-series anomaly detection via graph attention network. ICDM'20
[11] Gao et al. Units: A unified multi-task time series model. NeurIPS'25 (**however, the arXiv version has been public since February 2024, so this cannot be considered a simultaneous and concurrent paper, and the authors should have included it in their discussion, especially since it can also be used in forecasting modality to detect anomalies**). I'm mostly interested to see KAN-AD compared against UniTS, the latter being a foundation model for time-series problems. I suggest the authors look at [12] to see how to adapt it for anomaly detection.
[12] Gabrielli et al. Seamless Monitoring of Stress Levels Leveraging a Universal Model for Time Sequences. arXiv preprint arXiv:2407.03821. 2024 Jul 4.
Other Strengths And Weaknesses: ### Weakness
I can't seem to find how the anomaly is detected. After the projection component of KAN-AD, you have a forecast of the time series. Then you have a ground-truth (GT) series corresponding to the same window. How do you compare the forecast and the GT? Nowhere in the paper is this written. Or are you predicting the classes $c_i$ for each input value at timestamp $t_i$? This is confusing.
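For reference, the standard forecasting-based recipe I would expect scores each timestamp by the residual between the forecast and the GT, then thresholds the score; a minimal hypothetical sketch (function names are illustrative, not taken from KAN-AD):

```python
def residual_scores(forecast, actual):
    """Generic forecasting-based anomaly score: the absolute residual
    between forecast and ground truth at each timestamp (a sketch,
    not KAN-AD's documented procedure)."""
    return [abs(f - a) for f, a in zip(forecast, actual)]

def detect(scores, threshold):
    """Flag timestamps whose score exceeds the threshold."""
    return [int(s > threshold) for s in scores]

scores = residual_scores([1.0, 1.1, 1.0], [1.0, 5.0, 1.1])
labels = detect(scores, threshold=1.0)  # flags only the middle point
```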
### Strength (but not so much)
The authors rely on the EasyTSAD framework (they probably forgot to cite the GitHub project) to implement their KAN-AD class. This framework provides a straightforward evaluation pipeline which also contains the $F1_d$ metric the authors show in the paper, among other things. However, although using an already-built framework makes it easier to prototype and evaluate things, the authors refrain from implementing the works I listed in **Essential References Not Discussed**. All the reported SoTA methods were already implemented in EasyTSAD, which is fine, but this doesn't justify the authors' choice of not discussing the rest, because "probably it's a hassle to port them to EasyTSAD's design patterns".
Other Comments Or Suggestions: 1. Why is the first figure in the paper labelled as Figure 2? I can't seem to find Figure 1.
2. Figure 7 should rather be a 2d plot where the AUPRC axis can be transformed into a heatmap/colorbar. This improves readability of the plot. 3d-plots are often discouraged, but this is a minor suggestion.
3. Why is Table 2 the first table in the paper? Please check your labelling system in LaTeX.
4. Figure 5 isn't very useful. I notice that you provide anomaly scores for each method, but the reader would much appreciate seeing which anomalies are detected. Here you might want to change the GT anomaly color to green, and the detected ones to red. Only in this way can one appreciate the detection capabilities of KAN-AD vs. the rest. As the figure currently stands, it doesn't make much sense; anomaly errors alone don't tell me anything.
Questions For Authors: I've put them in the other sections to stay consistent with my thought process. This layout of reviewing doesn't allow for cross-referencing. I'm sorry if I scrambled the questions above. Alas, this is my way of reviewing papers, as I feel it's more organic to put questions on each of the sections above.
Here are some more due to character limitation in Sec. **Experimental Designs Or Analyses**.
1. Figure 9 doesn't actually tell me, most of the time, that Fourier is the best. The most indicative example is UCR, but the other datasets don't show much of a difference between the variants. So why do you opt for Fourier? Also, what are the $F1_d$ and the AUPRC for these cases? I'd like to see AUPRC in the rebuttal, please.
2. Why aren't the trends in Figure 10 non-increasing? This doesn't make sense, especially for SoTA forecasting methods: the more the training set is tainted, the lower the performance should be. It's very weird, especially for the lower portion of the figure.
Ethical Review Flag: Flag this paper for an ethics review.
Ethics Expertise Needed: ['Other expertise']
Ethical Review Concerns: The authors missed the impact statement which is mandatory according to the submission guidelines at https://icml.cc/Conferences/2025/CallForPapers.
Is this paper a desk reject?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to reviewer UWPv
We thank the reviewer for the constructive suggestions and will further revise the manuscript accordingly. For **KAN-AD's Performance on MTS**, please refer to our response to reviewer HB8y.
**Clarification on the Claim of Local Smoothness in Normal Curves**: The smoothness of normal patterns is a fact, independent of the type of model or training method used. The Anomaly Transformer [1] also leverages this observation to design its algorithm. Moreover, TSAD methods are not trained solely on data containing normal patterns but are trained on data that mixes both normal and anomalous data. In Table 1 (220-226, Column 2), we show that the training sets for KPI, TODS, and WSD datasets contain a certain proportion of anomalous data.
**Data Splitting**: We have detailed data splitting in 4.1.2 (270-274 col 1).
**Evaluation Metrics**: We are glad you mentioned the healthcare example; it illustrates that each evaluation metric has its appropriate use case. For the healthcare example, we should use F1 delay, as it allows us to select methods that detect anomalies as early as possible. However, for internet service systems, some anomalies last longer (as shown in Figure 6, where the anomaly duration can exceed 300 points). Using only point-wise PA or AUPRC in such cases may inflate the F1 score, since an algorithm that detects only a single point within a long anomaly segment will be considered to have successfully detected the entire segment. Event F1 is designed to overcome this issue by measuring whether the algorithm can detect more anomaly **events**, which is more appropriate and aligns with the findings in [2].
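As a sketch of this inflation effect (assuming the usual segment-wise point-adjustment protocol; the helper names are illustrative):

```python
def segments(labels):
    """Return (start, end) index pairs of contiguous anomaly segments."""
    segs, start = [], None
    for i, v in enumerate(labels):
        if v and start is None:
            start = i
        elif not v and start is not None:
            segs.append((start, i))
            start = None
    if start is not None:
        segs.append((start, len(labels)))
    return segs

def point_adjust(gt, pred):
    """If any point of a ground-truth segment is flagged, mark the whole
    segment as detected -- the protocol that inflates point-wise scores."""
    adj = list(pred)
    for s, e in segments(gt):
        if any(pred[s:e]):
            adj[s:e] = [1] * (e - s)
    return adj

gt = [0] * 10 + [1] * 300 + [0] * 10    # one 300-point anomaly segment
pred = [0] * len(gt)
pred[15] = 1                            # detector fires on a single point
adj = point_adjust(gt, pred)
pointwise_recall = sum(p & g for p, g in zip(pred, gt)) / sum(gt)  # 1/300
adjusted_recall = sum(p & g for p, g in zip(adj, gt)) / sum(gt)    # 1.0
```

Detecting a single point out of 300 yields a near-zero point-wise recall but a perfect point-adjusted recall, which is the inflation Event F1 is designed to avoid.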
**Inference Time of SAND and SubLOF**: We tested the inference time under the same experimental conditions as in the paper.
||CPU Time|UCR F1e|
|-|-|-|
|SAND|5637s|0.5108|
|SubLOF|299s|0.4772|
|KAN-AD|36s|0.5335|
**Faster CPU Inference Time**: Your observation is correct. CPU inference time can be faster because the GPUs cannot fully leverage parallel processing for models with fewer than 1k parameters.
**Reproducibility Environment**: As indicated in the first line of the list of incompatible dependencies you provided, the issue arose because the conda-forge channel was not configured on your local machine. We understand that no channel can guarantee 100% coverage, so to further facilitate reproducibility, we have provided a Dockerfile in the original repo to assist with this.
**Yahoo and NAB Datasets**: These datasets were considered flawed in [3], and we have adopted their suggestions by not using these datasets in the main experiments. In fact, our algorithm performs even better on these two datasets, especially Yahoo. The AUPRC results for these datasets are shown below. Additionally, the KPI dataset is also known as the AIOps dataset, and we refer to it as AIOps in the code because the KPI dataset was originally used in the AIOps challenge.
||NAB|Yahoo|
|-|-|-|
|SRCNN|0.8561|0.1459|
|SAND|0.6595|0.5412|
|Anomaly Transformer|0.9624|0.1109|
|TranAD|0.9965|0.568|
|SubLOF|0.9582|0.5222|
|TimesNet|0.9842|0.4736|
|FITS|0.9916|0.7803|
|OFA|0.9947|0.7777|
|FCVAE|0.9861|0.7049|
|LSTMAD|0.9932|0.5655|
|KAN|0.9976|0.7142|
|KAN-AD|0.9918|0.9547|
**Discussion on Hyperparameters**: We have discussed hyperparameter settings in 4.3 (342-344, col 2).
**Reconstruction-based Baselines**: As outlined in Sec. 5 col 1, forecasting methods can be divided into reconstruction and prediction categories. For the reviewer’s interest in reconstruction methods, we have selected popular SOTA methods: Anomaly Transformer (ICLR 2022), TranAD (VLDB 2022), TimesNet (ICLR 2023), FITS (ICLR 2024), OFA (NeurIPS 2023), and FCVAE (WWW 2024) as our baseline.
**AUPRC of Figure 9**: Due to space constraints, we will only show the AUPRC, and if necessary we will show the F1d in round 2.
||KPI|TODS|WSD|UCR|
|-|-|-|-|-|
|Taylor|0.9411|0.9572|0.9757|0.6904|
|CI|0.9585|0.9714|0.9832|0.7911|
|CII|0.9616|0.9584|0.9819|0.7835|
|Fourier|0.9693|0.9716|0.9868|0.8188|
**Explanation of Figure 10**: The x-axis represents the anomaly ratio in the training set. Since real-world anomaly detection always involves some proportion of anomalies in the training data, this figure aims to evaluate algorithm robustness across different anomaly ratios. Performance degrades as the anomaly ratio rises, since models are increasingly influenced by anomalous data. This reduces normal-pattern accuracy, causing more false positives at detection time. The methods in the lower portion of the figure are more sensitive to training anomalies: even at low ratios, they fail to maintain consistent normal-pattern accuracy, resulting in unpredictable performance curves.
[1] Anomaly Transformer: Time Series Anomaly Detection with Association Discrepancy. ICLR 2022.
[2] An Evaluation of Anomaly Detection and Diagnosis in Multivariate Time Series. TNNLS 2021.
[3] Current Time Series Anomaly Detection Benchmarks Are Flawed and Are Creating the Illusion of Progress. TKDE 2021.
---
Rebuttal Comment 1.1:
Comment: **Train-test split**: Thanks for the pointer. Do you do any cross-validation over the split, or are the splits time-wise? For example, when employing a 4:1:5 split for train-val-test, do you reserve the first 40% of the series for training, the next 10% for validation, and so on? Or is it mixed?
**Eval metrics**: In the table I provided, I wanted to propose a metric that just considers the point detected as anomaly and not the entire segment. I believe the F1 should be point-based and not segment nor event based. I think you should still report the timestep-based version. I think putting this in the appendix is a good compromise between us.
**SAND CPU runtime**: Can you include this table at camera ready, please? You can maybe merge it with the other where you show the runtime in GPU.
**UniTS as baseline**: I still think that you should include UniTS as a comparison for anomaly detection. You can use it either in reconstruction or forecasting mode. Its inclusion would definitely make your SoTA claims stronger. Again, its arXiv version has been public since February 2024. It's now officially a NeurIPS 2025 paper.
**NAB and YAHOO**: This is cool. I missed [3] during my experiments. I'm going to refer it in my next paper. Thanks for the pointer :-)
**Reconstruction-based Baselines**: Do you mean here that anomaly detection methods can be divided into reconstruction and prediction approaches?
**Hyperparams**: Thanks, I missed the window_size=96 in the main paper. So you fine-tune KAN-AD in UCR and then use these hyperparams for the other datasets?
**Fig. 9 Fourier**: Besides UCR, I feel like the other AUPRCs are statistically insignificant. Can you do a one-way ANOVA test with a post-hoc Tukey test, with maybe p=.05?
**Fig. 10 expected non-increasing trends**: Let's take SRCNN. Why does it have a jump in performance when passing from 25% to 30% training-set taint? How come the lower portion of the plot has similar F1$_e$ at 10% and at 40%? Can you measure the timestep-based F1 here, and maybe give standard deviations, say for 5-fold cross-validation?
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We will address your concerns point by point.
**Train-test split**: The KPI, TODS, and WSD datasets are split time-wise. Following your suggestion, we conducted experiments with reversed splits for KPI/TODS/WSD (original test→train/val, original train/val→test). UCR was left unmodified (it contains anomalies only in the test set). The resulting F1e scores are as follows:
||KPI|TODS|WSD|Avg|
|-|-|-|-|-|
|SubLOF|0.29|0.44|0.65|0.46|
|SRCNN|0.08|0.21|0.22|0.17|
|OFA|0.61|0.57|0.80|0.66|
|FITS|0.64|0.52|0.79|0.65|
|SAND|0.02|0.19|0.08|0.10|
|AnomalyTransformer|0.24|0.18|0.24|0.22|
|TimesNet|0.61|0.32|0.83|0.59|
|LSTMAD|0.81|0.60|0.84|0.75|
|FCVAE|0.75|0.70|0.80|0.75|
|KAN|0.78|0.69|0.83|0.77|
|KAN-AD|0.81|0.90|0.84|**0.85**|
**Eval metrics**: We performed timestep-based F1 evaluation (results in table below) and will include full results in the appendix. However, to avoid misleading readers, we will clearly indicate that these metrics may not fully align with practical application scenarios. For example, a method specialized in detecting one anomaly type may outperform a generalist approach if that anomaly persists for long periods. As discussed in [1] (WWW 2018, 1068 citations):
> In real applications, the human operators generally do not care about the point-wise metrics. It is acceptable for an algorithm to trigger an alert for any point in a contiguous anomaly segment, if the delay is not too long.
||KPI|TODS|WSD|UCR|
|-|-|-|-|-|
|SubLOF|0.09|0.07|0.29|0.13|
|FITS|0.06|0.10|0.21|0.04|
|OFA|0.06|0.11|0.15|0.03|
|LSTMAD|0.10|0.22|0.27|0.06|
|FCVAE|0.09|0.19|0.20|0.08|
|KAN|0.09|0.16|0.22|0.06|
|KAN-AD|0.09|0.18|0.22|0.10|
**UniTS as baseline**: We appreciate your mention of UniTS. UniTS presents an innovative approach to unifying various TS tasks, which we find particularly insightful. We have extended evaluation to UTS and MTS anomaly detection (results in tables below). We will add a discussion and citation of UniTS in our paper and include it as baseline method.
|UTS|KPI|TODS|WSD|UCR|
|-|-|-|-|-|
|UniTS|0.61|0.49|0.39|0.33|
|KAN-AD|0.94|0.94|0.99|0.86|
|MTS|SMD|MSL|SMAP|SWaT|PSM|Avg|Params@MSL|
|-|-|-|-|-|-|-|-|
|UniTS|0.88|0.84|0.84|0.93|0.97|0.89|8,066,376|
|KAN-AD|0.84|0.85|0.95|0.94|0.97|0.91|**4,491**|
**Recon-based Baselines**: As noted in Related Work, TSAD methods include forecasting-based approaches (split into recon- and pred-based) and pattern change detection.
**Hyperparams**: For a fair comparison, we optimized hyperparameters based on overall dataset performance and kept them fixed throughout the experiments. Due to space limitations, a detailed sensitivity analysis is provided only for the UCR dataset. Other datasets showed similar sensitivity trends, which are omitted for brevity.
**Fig 9**: We examined the assumptions for one-way ANOVA [2], including independence, normality, and homogeneity of variance. Levene's test [3] confirmed homogeneity (p=0.96>0.05), but Shapiro-Wilk tests [4] indicated non-normality (Taylor: 0.02, CI: 0.03, CII: 0.03, Fourier: 0.02, all p<0.05). Since normality was violated, the ANOVA results would be unreliable, and consequently the Tukey test cannot be used. We agree that significance experiments are important. Therefore, we used the Friedman test [5] (p=0.01<0.05), showing significant differences. Cliff's Delta analysis [6] revealed that Taylor has a moderate negative effect vs. Fourier, while the Chebyshev variants show smaller negative effects, supporting Fourier (KAN-AD) as the optimal choice.
```
Pairwise Cliff's Delta:
T vs {CI,CII,F} = -0.375
CI vs CII = 0.000
{CI,CII} vs F = -0.250
```
**Fig 10**: Timestamp-based F1 mean and variance results:
|Ano% in train|10|15|20|25|30|35|40|
|-|-|-|-|-|-|-|-|
|SRCNN|0.017±0.018|0.036±0.021|0.028±0.013|0.002±0.001|0.025±0.012|0.002±0.001|0.018±0.008|
SRCNN exhibits high anomaly sensitivity but lower precision and large performance fluctuations. This instability arises from its difficulty handling high anomaly proportions in training data, leading to near-random classification and irregular trends.
The above is our experimental findings and the response to your second round of comments. We hope this addresses your concerns. We will incorporate the supplementary experiments from both rounds into the main text or appendix of the paper, along with citations to outstanding methods such as UniTS. If our response has resolved your questions, we would sincerely appreciate it if you could reconsider your evaluation score. Once again, thank you for your valuable feedback and support for our work.
[1]Unsupervised anomaly detection via variational auto-encoder for seasonal kpis in web applications.
[2]Statistical methods for psychology.
[3]Robust tests for equality of variances.
[4]An analysis of variance test for normality.
[5]The use of ranks to avoid the assumption of normality implicit in the analysis of variance.
[6]Cliff's Delta Calculator: A non-parametric effect size program for two groups of observations. | Summary: The paper discusses about the problem that most TSAD methods using forecasting models tend to overfit to local fluctuations, and reformulates time series modeling while approximating to smooth univariate functions. The paper adopted KAN backbone for TSAD, while replacing B-spine function with Fourier series for local smoothness. The proposed method KAN-AD is composed of mapping phase where raw time series is converted into multiple univariate functions, reducing phase where the coefficients of univariate functions are obtained, and projection phase where the coefficients are aggregated into the normal pattern. Experimental results show that KAN-AD achieves state-of-the-art performance on univariate time series datasets, showcasing its effectiveness.
Claims And Evidence: The claim that existing time series forecasting methods tend to overfit to local fluctuations is well illustrated with qualitative results.
Methods And Evaluation Criteria: The proposed method is designed only for univariate time series and is experimented on only with univariate datasets, making it inapplicable to multivariate time series, which arise in many real-life settings. I assume that the proposed method can be extended to multivariate time series, and I wonder whether there is any specific reason KAN-AD was not applied to multivariate time series in the paper.
Theoretical Claims: The paper does not contain any theoretical claims.
Experimental Designs Or Analyses: The paper does not clarify how TODS benchmark is synthesized, which can affect the performance highly. The authors should clarify detailed information about the benchmark for its validity.
Supplementary Material: I reviewed the supplementary material covering the information about the datasets and the baselines.
Relation To Broader Scientific Literature: The key contribution of the paper that it adopts KAN backbone for efficient TSAD is related to recent advent of KAN network. The paper adequately adopted Kolmogorov-Arnold representation theorem that it decomposes multivariate continuous function into a finite sum of univariate functions to time series data.
Essential References Not Discussed: The paper cited the related works appropriately.
Other Strengths And Weaknesses: The attempt to integrate Kolmogorov-Arnold Networks for TSAD is original, demonstrating the further applicability of KANs to time series analysis in the future. In addition, the design of KAN-AD is described clearly enough to understand its key components. However, it was only employed on univariate time series, covering a very limited scope of time series data.
Other Comments Or Suggestions: There is no other comment.
Questions For Authors: 1. I wonder if KAN-AD works with multivariate dataset, since the absence of results on multivariate dataset is a critical limitation of the paper. I am willing to change my evaluation of the paper if KAN-AD proves it to be effective in multivariate datasets.
2. Why is using N=2 the best? I think using a bigger N enables more accurate modeling of the time window as a Fourier series and is more helpful for the task. It would be helpful if the authors provided an analysis of this.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Response to reviewer HB8y
**Synthetic Method of the TODS Dataset**: We used the synthetic TODS dataset from [1], which includes all five anomaly types with diverse durations and the non-trivial characteristics introduced by TODS. This dataset is publicly available in their repository.
**KAN-AD's Performance on MTS Data**: We can reshape the original multivariate input `(batch, window, n_multivariate)` into `(batch * n_multivariate, window)`, which effectively supports multivariate time-series data. We implemented KAN-AD in a popular third-party Times-Series-Library [2] and trained it using the same seed (seed=2021) as the SOTA methods. We compared KAN-AD trained in this way with SOTA multivariate anomaly detection algorithms across several datasets, with the following results:
| Methods | SMD | MSL | SMAP | SWaT | PSM | Avg | Parameters@MSL |
| ------------------- | --------- | --------- | --------- | --------- | --------- | --------- | -------------: |
| Informer | 81.65 | 84.06 | 69.92 | 81.43 | 77.10 | 78.83 | 504,174 |
| Anomaly Transformer | *85.49* | 83.31 | 71.18 | 83.10 | 79.40 | 80.50 | 4,863,055 |
| DLinear | 77.10 | *84.88* | 69.26 | 87.52 | 93.55 | 82.46 | 20,200 |
| Autoformer | 85.11 | 79.05 | 71.12 | 92.74 | 93.29 | 84.26 | 325,431 |
| FEDformer | 85.08 | 78.57 | 70.76 | 93.19 | 97.23 | 84.97 | 1,119,982 |
| TimesNet | 84.62 | 81.80 | 69.50 | 93.00 | *97.38* | 85.26 | 75,223 |
| UniTS | **88.09** | 83.46 | *83.80* | *93.26* | **97.43** | *89.21* | 8,066,376 |
| KAN-AD | 84.29 | **85.01** | **94.50** | **93.50** | 96.50 | **90.76** | $\mathbf{4,491}$ |
As seen from the results, KAN-AD outperforms the SOTA methods in average, while using only 0.05% of the parameters compared to UniTS. We have made the MTS version of KAN-AD available in an anonymous repository for further review. https://anonymous.4open.science/r/TSL-C6AC
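A minimal numpy sketch of the reshaping described above (illustrative only; our actual implementation is in the anonymous repository). Note that the channel axis must be moved next to the batch axis before flattening, so that each row holds one channel's full window:

```python
import numpy as np

batch, window, n_vars = 4, 96, 3
rng = np.random.default_rng(0)
x = rng.standard_normal((batch, window, n_vars))  # (batch, window, n_multivariate)

# Move the channel axis before the time axis, then flatten batch and channels,
# so that each row is a single channel's full window.
x_uni = x.transpose(0, 2, 1).reshape(batch * n_vars, window)

# Row k corresponds to sample k // n_vars, channel k % n_vars:
assert np.allclose(x_uni[5], x[1, :, 2])
```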
**Impact of Parameter N on Model Performance**: Your suggestion is a good one: a larger N allows for more precise modeling of the time-series data, but it also comes with the risk of "overfitting" to the anomalous patterns within the time window. When N becomes too large, the model produces an accurate prediction of the anomalous pattern, which leads to very small anomaly scores when the anomaly occurs. As a result, the model might fail to detect anomalies, leading to performance degradation.
[1] Si H, et al. Timeseriesbench: An industrial-grade benchmark for time series anomaly detection models. ISSRE 2024.
[2] https://github.com/thuml/Time-Series-Library Machine Learning Group of Tsinghua University.
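The N trade-off described above can be illustrated by least-squares fitting a truncated Fourier series to a window containing a point anomaly (a simplified sketch under the assumption that the basis behaves like a plain truncated Fourier series; this is not our exact training procedure):

```python
import numpy as np

def fourier_fit(y, n_harmonics):
    """Least-squares fit of a truncated Fourier series; returns the reconstruction."""
    t = np.arange(len(y)) * 2 * np.pi / len(y)
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

t = np.arange(128) * 2 * np.pi / 128
y = np.sin(t)
y[64] += 5.0                                   # injected point anomaly

# Residual (= anomaly score) at the injected spike for small vs. large N:
resid_small = abs(y[64] - fourier_fit(y, 2)[64])
resid_large = abs(y[64] - fourier_fit(y, 20)[64])
assert resid_small > resid_large  # a larger N partly "explains" the anomaly
```

With N=2 the spike is left mostly unexplained (large anomaly score); with N=20 the extra harmonics absorb part of it, shrinking the score, which is the overfitting risk noted above.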
---
Rebuttal Comment 1.1:
Comment: Thank you for the informative rebuttal. The authors' responses resolve most of my concerns, so I raise the final score. | Summary: This paper focuses on time series anomaly detection (TSAD), and proposes KAN-AD, which models "normal" behavior of time series using smooth functions. The paper addresses the shortcoming of existing methods that tend to overfit local variances in time series data. Proposed KAN-AD is a clever and novel approach -- introduces Fourier expansion within the KAN network -- which shows its effectiveness on benchmark time series datasets.
## update after rebuttal
I appreciate authors for addressing my comments. I think including the rebuttal responses makes the paper stronger. Achieving competitive/superior performance on the MTS task with only a fraction of parameters points to the superior architecture KAN-AD offers for the task.
My other comment was about testing the model's performance on real use-case datasets outside of the benchmarks, which often contain synthetic anomalies. Nevertheless, I appreciate your efforts in providing detailed responses to my comments. I already liked this work and will keep my score as is.
Claims And Evidence: I think the claims made — better detection, faster inference, robustness to noise — are supported through empirical results.
Methods And Evaluation Criteria: KAN-AD definitely makes sense. With respect to evaluation, the metrics Event F1 and Delay F1 align with real world use cases of detecting sustained as well as varied length anomalus segments in the time series.
Theoretical Claims: The paper doesn’t introduce a new theory/theorems, instead relies on the Kolmogorov-Arnold representation theorem as the theoretical foundation. Though I haven’t verified it, I think the correctness of the Kolmogorov-Arnold theorem is well established in the literature.
Experimental Designs Or Analyses: I’d answer yes. Experimental setup is clearly described, with details on datasets, training and testing setup, baselines and evaluation metrics as well as ablation studies (Section 4 as well as appendix C). I do not find issues with designs as such.
Supplementary Material: Yes, I reviewed Appendix A – datasets, B – baselines, and C – ablation, they are pretty compact.
Relation To Broader Scientific Literature: The paper draws on KAN networks, and clearly points to overfitting phenomena in deep anomaly detection models. I actually like the paper, and find that it provides a new perspective on TSAD approach, overcomes the overfitting issues of existing methods, and cleverly uses Fourier transform to improve current KANs for time series anomaly detection, which is an extremely relevant problem to study.
Essential References Not Discussed: I think the paper references related literature.
Other Strengths And Weaknesses: ++ The paper presents an original approach to time series anomaly detection by reformulating the problem and leveraging KANs. The modifications made to the original KAN architecture, including the use of Fourier series and the lightweight learning mechanism, are also original and well-motivated.
++ The paper studies an TSAD which has significant practical applications. Importantly, the proposed model is nimble with only about 300 parameters making it suitable for real-world environment with improvements in detection accuracy and inference speed.
++ The paper is well-written, organized, and is easy to read.
– Not really a weakness, but something that I find worth mentioning. Most of the real world time series datasets, are naturally multivariate, while the paper focuses on univariate series. How hard would it be to extend the framework to multivariate setup? How will it affect the design choices in the proposed KAN?
– The experiments cover a wide range of time series dataset, however, these are mostly well curated datasets, with many containing synthetic anomalies. I think the empirical study could be stronger if the method is applied on a real world dataset. It would be particularly interesting to see how well does it detect anomalies around 2008 recession, covid 19 etc.
– A discussion of why KAN-AD is merely comparable to other methods on some datasets while clearly outperforming them on others (UCR vs. WSD) is missing. What are the failure cases or scenarios of KAN-AD? What dataset properties support or oppose KAN-AD?
Other Comments Or Suggestions: Please double-check the references; I believe there are duplicates.
Questions For Authors: In the discussion of future work, the paper mentions exploring whether normal patterns in time series can be represented more efficiently by leveraging additional data. What types of additional data could be considered and could it be integrated into the KAN-AD framework?
As stated earlier, are there specific types of time series data or anomaly patterns where KAN-AD might be expected to struggle?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Response to reviewer HZbR
We appreciate the constructive suggestions provided by the reviewer and will incorporate improvements in the subsequent version of the paper.
**Performance of KAN-AD on MTS**: We can reshape the original multivariate input `(batch, window, n_multivariate)` into `(batch * n_multivariate, window)`, which effectively supports multivariate time-series data. We implemented KAN-AD in a popular third-party Times-Series-Library [2] and trained it using the same seed (seed=2021) as the SOTA methods. We compared KAN-AD trained in this way with SOTA multivariate anomaly detection algorithms across several datasets, with the following results:
| Methods | SMD | MSL | SMAP | SWaT | PSM | Avg | Parameters@MSL |
| ------------------- | --------- | --------- | --------- | --------- | --------- | --------- | -------------: |
| Informer | 81.65 | 84.06 | 69.92 | 81.43 | 77.10 | 78.83 | 504,174 |
| Anomaly Transformer | *85.49* | 83.31 | 71.18 | 83.10 | 79.40 | 80.50 | 4,863,055 |
| DLinear | 77.10 | *84.88* | 69.26 | 87.52 | 93.55 | 82.46 | 20,200 |
| Autoformer | 85.11 | 79.05 | 71.12 | 92.74 | 93.29 | 84.26 | 325,431 |
| FEDformer | 85.08 | 78.57 | 70.76 | 93.19 | 97.23 | 84.97 | 1,119,982 |
| TimesNet | 84.62 | 81.80 | 69.50 | 93.00 | *97.38* | 85.26 | 75,223 |
| UniTS | **88.09** | 83.46 | *83.80* | *93.26* | **97.43** | *89.21* | 8,066,376 |
| KAN-AD | 84.29 | **85.01** | **94.50** | **93.50** | 96.50 | **90.76** | $\mathbf{4,491}$ |
As seen from the results, KAN-AD outperforms the SOTA methods in average, while using only 0.05% of the parameters compared to UniTS. We have made the MTS version of KAN-AD available in an anonymous repository for further review. https://anonymous.4open.science/r/TSL-C6AC
**Impact of data characteristics on performance**: In our paper (Column 2, Lines 292-295), we briefly discuss how different types of data across datasets influence the final performance. We will further elaborate on this in the next version.
**Additional data**: Thanks to KAN-AD's parameter efficiency, we believe incorporating textual information (such as metric name) could enhance its performance. Furthermore, leveraging a Large Language Model to route multiple downstream KAN-AD models could further boost overall performance. Since KAN-AD is a very lightweight approach, this integration will be much easier.
**Challenges with rapidly oscillating data**: For EPG-type time-series data in the UCR dataset, the detection accuracy of both KAN-AD and other baselines remains relatively low due to the highly volatile nature of these signals, even within very small time windows. | Summary: This paper proposes a method for univariate time series anomaly detection. In particular, they aim to approximate the time series using smooth univariate functions. They build upon a method that uses Kolmogorov-Arnold Networks to approximate the time series, by replacing B-splines functions with Fourier series. They demonstrate that the new model is more robust to anomalies in the training set and therefore tends to perform better on the test set, especially, not surprisingly, where the training set has a larger percentage of anomalies. The models also are smaller compared to state-of-the-art anomaly detection algorithms, in terms of the number of parameters, and also have lower running times.
## update after rebuttal
I had minor questions, which the authors answered. For this reason, I will keep my review as it is.
Claims And Evidence: The algorithmic claims made by the paper are clearly true. The algorithm is clearly explained, and performance metrics (F1-based metrics) and running times and model sizes are given. The latter two are particularly notable, since Machine Learning papers rarely state their running times and model sizes, even when these are presented as advantages of the proposed methods. I'm glad the authors are bucking this disturbing tendency.
The number of datasets and types of datasets on which the algorithm is tested is insufficient. The percentage of anomalies in the training and test sets ranges from zero to 7%, which is low, but I think this is okay since the proposed method is aimed at addressing robustness to anomalies, so a relatively low fraction of anomalies is actually more difficult for the method. However, the range of types of anomalies and the range of normal behavior seem to need broadening---amplitude, frequency, and combinations of these.
Methods And Evaluation Criteria: While the range of benchmark datasets needs broadening, as I mentioned in the claims and evidence section, the metrics that they use for evaluation are quite appropriate.
Theoretical Claims: There are no proofs provided, which is reasonable given the nature of this paper, which is more algorithmic and experimental.
Experimental Designs Or Analyses: I did check the soundness of the experimental designs and analysis. The analysis provided is fairly thorough and appropriate. I have a few questions/comments on this:
1. In figure 4, it appears that the 5-delay PA and Point-wise PA rows are swapped.
2. What are the differences in the types of anomalies detected by the different methods?
3. In 4.4.2, the authors state "In contrast, Taylor series exhibited persistent bias due to non-zero function values in most cases, hindering optimal model performance." Is this true specifically in the context of this ablation study? I can't see this being an issue in general, since Taylor series, like the other methods, allows for constant term elimination.
Supplementary Material: I reviewed the appendices, which were helpful in understanding the paper.
Relation To Broader Scientific Literature: This paper yields an anomaly detection method that has greater robustness to anomalies in the training set, is smaller in terms of the number of parameters, and runs faster, relative to recent methods that have a large number of parameters.
Essential References Not Discussed: I cannot think of any essential references that were not discussed in this paper.
Other Strengths And Weaknesses: I have no other notable strengths and weaknesses to bring up beyond what I wrote in other sections.
Other Comments Or Suggestions: 1. In figure 2, the anomalies in the bottom row appear to be more difficult to detect than the anomalies in the top row, which seems to better explain the differences in performance than the claimed difference, which is the level of noise in the training sample.
Questions For Authors: 1. The main question that I have is about differences in the types of anomalies detected by the different methods, going beyond simply performance measures. This likely will not shift my decision by a full level given that there are five choices, but this is nevertheless an important question to answer in general, and one that is only occasionally answered in Machine Learning papers.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Response to reviewer GaXC
We appreciate the constructive comments provided by the reviewer and will incorporate improvements in the subsequent version of the paper.
**Comparative analysis of algorithm strengths**: Indeed, different methods are good at detecting specific types of anomalies. For example, frequency-domain-based methods (FCVAE [1], FITS [2], etc.) excel at detecting periodic anomalies, methods with differential modules [3] are more adept at handling spike-type anomalies, and methods with shape clustering modules (e.g., SAND [4]) are more effective at detecting shapelet anomalies.
**Bias in the Taylor series**: The Taylor series can keep the value to be fitted around 0 through a constant term elimination mechanism. However, based on the mathematical properties of the Taylor series, when using a finite number of terms to fit curves with large fluctuations in absolute values, there will always be substantial residual errors (this is what we refer to as "bias" in the original paper) [5].
[1] Z Wang, et al. Revisiting VAE for Unsupervised Time Series Anomaly Detection: A Frequency Perspective. WWW 2024.
[2] Z Xu, et al. FITS: Modeling Time Series with 10k Parameters. ICLR 2024.
[3] Wu R, et al. Current time series anomaly detection benchmarks are flawed and are creating the illusion of progress. TKDE 2021.
[4] P Boniol, et al. Sand: streaming subsequence anomaly detection. VLDB 2021.
[5] Rudin W. Principles of Mathematical Analysis. 2021.
---
Rebuttal Comment 1.1:
Comment: In response to your comment under "comparative analysis of algorithm strengths," thank you for this discussion. However, my question was on how well your proposed method detects anomalies of the types that you mentioned and others.
---
Reply to Comment 1.1.1:
Comment: # Response to Reviewer GaXC
**Comparative Analysis of Algorithm Strengths**: Thank you for your kind reminder. In our first-round response, we mentioned three types of anomalies, and KAN-AD demonstrates strong performance in detecting all of them. Specifically, for periodicity-related anomalies, KAN-AD benefits from the introduced Fourier univariate functions, enabling precise identification of low-frequency variations, as illustrated in Figure 2 of the paper. Regarding spike-type anomalies, the constant term elimination module in KAN-AD enhances the significance of spikes, making them easier to detect. Finally, for shape-related anomalies, KAN-AD combines univariate functions in both the time and frequency domains, effectively capturing amplitude and periodic characteristics, leading to improved detection performance, as shown in Figure 5 of the paper.
From RAG to Memory: Non-Parametric Continual Learning for Large Language Models | Accept (poster) | Summary: The paper introduces *ContinualRAG*, a novel retrieval-augmented generation (RAG) framework designed to enhance large language models (LLMs) with a human-like long-term memory system for non-parametric continual learning. Building on the HippoRAG framework, ContinualRAG aims to address limitations in standard RAG by improving performance across three key memory dimensions: *factual memory* (simple QA tasks), *sense-making* (interpreting complex contexts), and *associativity* (multi-hop reasoning). Its main algorithmic contributions include:
1. Dense-Sparse Integration: Incorporating passage nodes into the knowledge graph (KG) alongside phrase nodes to better capture context, inspired by dense and sparse coding in human memory.
2. Deeper Contextualization: Using a query-to-triple matching approach instead of entity-centric methods to improve query-KG alignment.
3. Recognition Memory: Adding an LLM-based triple filtering step to refine retrieved triples for graph search.
The authors evaluate ContinualRAG on benchmarks like NaturalQuestions (NQ), PopQA, MuSiQue, 2WikiMultiHopQA, HotpotQA, LV-Eval, and NarrativeQA.
Claims And Evidence: The paper’s primary claims are well-supported by empirical evidence, though some areas could benefit from deeper analysis:
- **Claim: ContinualRAG outperforms standard RAG across all memory tasks.**
- **Evidence**: Tables 2 and 3 provide F1 scores and recall@5 metrics across seven benchmarks, showing consistent superiority over baselines (e.g., 63.3 vs. 61.9 on NQ, 48.6 vs. 45.7 on MuSiQue). Ablation studies (Table 4) further validate the contributions of each component (e.g., query-to-triple improves recall@5 by 12.5% over NER-to-node).
- **Weakness**: The claim of "comprehensive" outperformance lacks statistical significance tests to confirm robustness across runs or datasets.
Overall, the evidence is convincing, but transparency in metric aggregation and statistical validation could strengthen the claims.
Methods And Evaluation Criteria: Yes
Theoretical Claims: The paper does not present formal proofs but draws theoretical inspiration from neurobiology (e.g., dense-sparse coding, recognition memory). It remains conceptual rather than rigorously proven.
Experimental Designs Or Analyses: - **QA Performance (Table 2)**: Sound design comparing ContinualRAG against diverse baselines (BM25, NV-Embed-v2, HippoRAG) using Llama-3.3-70B-Instruct. The use of F1 scores across seven datasets is valid, but the lack of error bars or multiple runs limits reliability assessment.
- **Retrieval Performance (Table 3)**: Recall@5 is a standard metric, and reproducing HippoRAG with the same setup ensures fairness. However, the original HippoRAG paper reports higher R@5 (89.1 vs. 79.4 on 2Wiki), suggesting a potential issue in reproduction fidelity.
- **Ablation Study (Table 4)**: Well-designed to isolate contributions (e.g., query-to-triple vs. NER-to-node), with clear recall@5 improvements. The omission of filtering in NER-to-node baselines is justified but could be explored further.
**Issue**: No statistical significance tests are reported, weakening confidence in the results’ robustness.
Supplementary Material: Appendices A, B, C, F, and G.
Relation To Broader Scientific Literature: ContinualRAG builds on:
- *HippoRAG*: Extends its PPR and OpenIE approach, adding passage integration and recognition memory to address context loss.
- *RAG Evolution*: Cites standard RAG (Zhong et al., 2023) and structure-augmented methods (RAPTOR, GraphRAG), positioning itself as a comprehensive solution.
- *Neurobiology*: Links to hippocampal indexing (Klein et al., 2006; Suzuki, 2005) mirror HippoRAG’s inspiration, grounding it in memory theory.
Essential References Not Discussed: No
Other Strengths And Weaknesses: - **Strengths**:
- Integration of passage nodes and recognition memory builds on HippoRAG.
- Well-structured, with figures (e.g., Figure 2) and tables enhancing readability.
- Broad applicability across memory types.
- **Weaknesses**:
- Heavy reliance on HippoRAG reduces novelty; in my opinion, the only major difference from the former is the passage integration part.
Other Comments Or Suggestions: No.
Questions For Authors: 1. Why were no statistical tests reported? Adding them could confirm result reliability—would significant p-values change your confidence in ContinualRAG’s superiority?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are very thankful for the reviewer’s kind acknowledgment of our work as convincing, well-structured and broadly applicable across memory types. We will address the reviewer’s careful suggestions in the sections below.
## Statistical Significance Testing
> Why were no statistical tests reported? Adding them could confirm result reliability—would significant p-values change your confidence in ContinualRAG’s superiority?
We thank the reviewer for pointing out the need for statistical significance testing to ensure the reliability of our results. To ascertain the significance of ContinualRAG’s performance over NV-Embed-V2 (the best performing baseline), we ran a simple bootstrap statistical test. More specifically, we created 10,000 different datasets by sampling from each set of answers with replacement and thus obtained a distribution over the differences in their QA performance.
Through this method, we find that ContinualRAG’s performance is significantly larger than that of NV-Embed-V2 (p-value < 0.05) in 4 out of 7 datasets. Additionally, NV-Embed-V2 does not significantly outperform ContinualRAG in the other 3 datasets, demonstrating that ContinualRAG is robustly superior to our strongest baseline. We will include these significance testing results in the camera-ready version of our paper.
| Dataset | p-value |
|-------------------|----------|
| 2WikiMultiHopQA | 0.0000 |
| MuSiQue | 0.0013 |
| NQ | 0.0209 |
| LVEval | 0.0484 |
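The bootstrap procedure described above can be sketched as follows; the per-question scores here are synthetic placeholders, not the paper's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-question F1 scores for two systems (synthetic stand-ins).
scores_a = rng.uniform(0.4, 0.9, size=500)             # e.g. system A
scores_b = scores_a - rng.uniform(0.0, 0.1, size=500)  # e.g. baseline B

n_boot, n = 10_000, len(scores_a)
diffs = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, n, size=n)  # resample questions with replacement
    diffs[i] = scores_a[idx].mean() - scores_b[idx].mean()

# One-sided p-value: fraction of bootstrap samples where A does not beat B.
p_value = (diffs <= 0).mean()
print(p_value)
```

Resampling questions (rather than runs) tests whether the observed mean difference is robust to which questions happened to be in the benchmark.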
## Novelty Concerns
> Heavy reliance on HippoRAG reduces novelty; In my opinion the only major difference with the former is the passage integration part.
We appreciate the reviewer’s perspective, however, we would like to clarify our view on our work’s originality. While it is true that we build upon the existing HippoRAG framework, our approach systematically explores and enhances key components of the system in ways that are both principled and non-trivial. The space of possible design choices in structure-augmented RAG systems is vast, and identifying which modules to modify and how to do so effectively is itself a meaningful research contribution.
Our modifications are not superficial; rather, they are informed by clear hypotheses about the limitations of the original modules and are validated through rigorous ablation studies (Table 4), which demonstrate substantial and consistent improvements in performance. We believe that such targeted, evidence-based improvements can be as impactful as proposing entirely new frameworks, especially when they advance the capabilities of an already influential baseline.
Moreover, our work provides actionable insights into the design of RAG systems that others in the community can build upon—offering a path forward for both incremental and architectural innovations.
## Reproduction Fidelity
> Retrieval Performance (Table 3): Recall@5 is a standard metric, and reproducing HippoRAG with the same setup ensures fairness. However, the original HippoRAG paper reports higher R@5 (89.1 vs. 79.4 on 2Wiki), suggesting a potential issue in reproduction fidelity.
We would like to note that our reproduced HippoRAG results, reported in Table 3, are very close to (or slightly better than) the ones reported in the original paper due to the use of a stronger embedding model and LLM for OpenIE. We changed both models in order to compare fairly with all other baselines.
More specifically, the R@5 scores reported in the original HippoRAG paper are 51.9, 89.1 and 77.7 respectively for MuSiQue, 2Wiki and HotpotQA. Our reproduced HippoRAG R@5 scores are 53.2, 90.4 and 77.3 for the same three datasets.
We ask the reviewer to let us know if they have any other questions concerning our reproduced HippoRAG results.
## Filtering & Query-to-Node
> Ablation Study (Table 4): Well-designed to isolate contributions (e.g., query-to-triple vs. NER-to-node), with clear recall@5 improvements. The omission of filtering in NER-to-node baselines is justified but could be explored further.
Although the reviewer makes an interesting point there are a few reasons why we omit a filter for NER-to-node:
- NER-to-node is already using an LLM to extract named entities before retrieving nodes.
- Though a post-extraction filter could be added to NER-to-node, it would require designing a different filtering methodology specifically for this module.
- Given that the performance of ContinualRAG w/o filter (avg. 86.4) is already much better than with NER-to-node (avg. 74.6), as shown in Table 4, we believe that designing such a filter was not the most worthwhile direction to explore. | Summary: This paper proposes ContinualRAG that improves the performance of RAG on natural question answering and multi-hop reasoning benchmarks.
The method builds on the prior work, HippoRAG, which performs reasoning on a knowledge graph constructed at an offline phase. In the offline indexing phase, HippoRAG extracts knowledge triples of named-entities and detects synonyms to create additional edges in the knowledge graph. In the online retrieval phase, a query is deconstructed to its named-entities, the knowledge graph is queried for the named entities and their synonyms, then Personalized PageRank is used to retrieve the information from the graph as the response to the query.
This work argues HippoRAG is limited because it is entity-centric and loses information both during indexing and inference. As such, ContinualRAG makes the following modifications to HippoRAG:
1) Adds passage nodes to the knowledge graph that connect to named-entity nodes in the passage with context edges. The goal is to have more contextual information in the KG.
2) Improves the linking of queries to the KG from HippoRAG’s named-entity-recognition to a Query-to-triple approach which matches the entire query to triples in the graph using text embeddings.
3) Improve the retrieval step to two 1) retrieve top-k triples from the graph using an embedding model and 2) filter triples using an LLM.
Given these improvements, for QA evaluations the online retrieval of ContinualRAG involves assigning scores to retrieved passage and seed nodes and then executing Personalized PageRank (PPR) to retrieve the top-ranked passages as the answer to the question.
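The embedding-based triple retrieval in step 1 of the modification above can be sketched as a cosine-similarity top-k search; the toy embeddings below are illustrative, not the actual model's:

```python
import numpy as np

def top_k_triples(query_emb, triple_embs, k=5):
    """Rank pre-embedded triples by cosine similarity to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    t = triple_embs / np.linalg.norm(triple_embs, axis=1, keepdims=True)
    scores = t @ q
    return np.argsort(-scores)[:k], scores

# Toy setup: 10 orthogonal "triple" embeddings; the query is nearest to triple 3.
triples = np.eye(10)
query = triples[3] + 0.1 * np.ones(10)
idx, scores = top_k_triples(query, triples, k=3)
print(idx[0])  # 3
```

In the full pipeline the retrieved candidates would then be passed to the LLM filter (step 2) before seeding graph search.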
The evaluations are divided into QA and retrieval evaluations. On QA evaluations, they demonstrate on average 6.7% improvement on HippoRAG. Their improvement brings them above the performance of large embedding models utilizing 7B LLMs by 2.8%. On retrieval evaluations they achieve 15% improvement on HippoRAG and nearly 5% better than large embedding models.
The ablations in Table 4 demonstrate the importance of all 3 proposed modifications. Particularly, the query-triple approach for linking the queries to the KG accounts for the majority of the improvement on most evaluations (15% on average) except for 2Wiki. The other two modifications of adding passage nodes (6% on average) and LLM filtering in retrieval (0.7% on average) are also effective.
## Update after rebuttal
I thank authors for their response and recommend incorporating the clarifications into a revision. I maintain my rating of weak accept conditioned on applying needed improvements to the text to improve clarity.
Claims And Evidence: The paper claims their 3 modifications to HippoRAG improve the contextual-awareness of the method. Substantial improvements across a diverse set of evaluations, along with ablation studies, substantiate their claim.
Methods And Evaluation Criteria: The proposed method involves three well-justified modifications to HippoRAG. The evaluations are similar to prior work and evaluate the reasoning and question-answering capability of the model.
Theoretical Claims: N/A. The paper makes no theoretical claims.
Experimental Designs Or Analyses: Lines 370-373 state that the method utilizes Llama-3.3-70B Instruct for extraction and triple filtering. However, the tables refer to models that utilize 7B parameter LLMs as large embedding models. The relation to such methods and why they are referred to as large embedding models while the method utilizes 70B models albeit at a limited capacity is not clear.
It is also not clear what the impact of the model size is in the proposed method. Table 9 in the appendix provides some ablations with GPT-4o-mini but does not ablate on different model sizes of the Llama-3.3 family. Prior work HippoRAG provided some ablations between Llama-3.1 8B/70B models but the importance of these models in ContinualRAG might be different.
Supplementary Material: I reviewed Appendix A/B to understand the method and Table 9 when looking for important ablations.
Relation To Broader Scientific Literature: This paper advances the question-answering capabilities that require a knowledge graph to answer factual questions.
Essential References Not Discussed: If the paper is positioning itself as a continual method, as the title may suggest, it should consider expanding the related works section to discuss more continual pretraining methods.
- Roth, Karsten, et al. "A Practitioner's Guide to Continual Multimodal Pretraining." arXiv preprint arXiv:2408.14471 (2024).
- Li, Jeffrey, et al. "Tic-lm: A multi-year benchmark for continual pretraining of language models."
Other Strengths And Weaknesses: - The paper is sparse in details about prior works such as HippoRAG. In particular, understanding section 3.1 is crucial for the understanding of the contributions, however, definitions and details on the following terms are missing: OpenIE, Personalized PageRank, PHR, reset probability, etc.
- The paper is missing descriptions for baselines in section 4.1. At least a description of groups of methods and a comparison between the bottlenecks and capacities of methods to the proposed method is required.
- A clear description of the evaluations and execution difference between QA and retrieval setups is missing.
- The paper is named “ContinualRAG”, however, the method does not seem to have any “continual” aspect.
Other Comments Or Suggestions: - Line 355: N otably ->Notably
- Line 427: It combining -> It combines
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for thoroughly reviewing our work and noting that our modifications are well-justified and bring strong improvements over all our baselines.
## ContinualRAG is a Continual Learning Method
> The paper is named “ContinualRAG”, however, the method does not seem to have any “continual” aspect.
As we argued in our work, RAG has become the de facto solution for continual learning in LLMs due to its simplicity and strong performance when compared to parametric alternatives. Our methodology builds on these already robust continual learning systems, enabling them to handle tasks that require more in-depth learning of new knowledge, such as associativity and sense-making. Given its strong performance on these demanding tasks, we argue that the ContinualRAG system not only qualifies as a continual learning system but also elevates the standard of what it means to continually learn.
To further show this, we provide an experiment that directly evaluates our method’s performance as more knowledge is aggregated. We refer the reviewer to the *Continual Knowledge Injection Experiments* section in our response to **reviewer gi53** above for experimental details, results and discussion.
For a more detailed explanation of the motivation behind our experiments, we refer the reviewer to the *Non-Parametric Continual Learning for LLMs: Ideal Experiments* section of our response to **reviewer K3hz**.
## Why are 7B embedding models “large”?
> The method utilizes Llama-3.3-70B Instruct for extraction and triple filtering. However, the tables refer to models that utilize 7B parameter LLMs as large embedding models. The relation to such methods and why they are referred to as large embedding models while the method utilizes 70B models albeit at a limited capacity is not clear.
We refer to 7B embedding models as ''large'' because they are much larger than classic ones like GTR (335M) and Contriever (110M). Although smaller than LLMs, they are the largest and strongest models available on the MTEB benchmark.
## How does LLM size impact ContinualRAG?
> It is also not clear what the impact of the model size is in the proposed method. Table 9 in the appendix provides some ablations with GPT-4o-mini but does not ablate on different model sizes of the Llama-3.3 family.
We report results using Llama-3.1-8B for ContinualRAG (Llama 3.3 family only has 70B models). It shows the 8B model is not sufficiently capable of supporting our system on both types of tasks (MuSiQue & NQ).
| Model | MuSiQue | NQ |
|---|---|---|
| NV-Embed-v2 (7B) | 45.70 | 61.90 |
| ContinualRAG (Llama-3.1-8B-Instruct) | 37.93 | 55.28 |
| ContinualRAG (Llama-3.3-70B-Instruct) | 48.60 | 63.30 |
## Clarifications
> The paper is sparse in details about prior works such as HippoRAG. In particular, understanding section 3.1 is crucial for the understanding of the contributions, however, definitions and details on the following terms are missing: OpenIE, Personalized PageRank, PHR, reset probability, etc.
**OpenIE:** OpenIE extracts entity–relation–entity triples from text without predefined relation types, in contrast to standard IE.
**Personalized PageRank (PPR):** PPR is a variation of PageRank that measures the importance of nodes in a graph relative to a source node (or source nodes).
**Reset Probabilities:** This vector quantifies the importance of source nodes for PPR.
**Parahippocampal regions (PHR):** This terminology is borrowed from the HippoRAG paper, which is an analogy between their retrieval encoder and the parahippocampal regions of the human brain.
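A minimal power-iteration sketch of Personalized PageRank as defined above; the toy graph and damping factor are illustrative, not the system's actual configuration:

```python
import numpy as np

# Toy 4-node path graph (0-1-2-3) as an adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=0)  # column-stochastic transition matrix

reset = np.array([1.0, 0.0, 0.0, 0.0])  # all reset probability on the seed node
damping = 0.85

rank = np.full(4, 0.25)
for _ in range(100):
    rank = damping * P @ rank + (1 - damping) * reset

print(rank)  # scores decay with distance from the seed node
```

The `reset` vector is what makes the ranking "personalized": importance mass repeatedly flows back to the seed nodes, so scores measure relevance to them rather than global centrality.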
> The paper is missing descriptions for baselines in section 4.1. At least a description of groups of methods and a comparison is required.
We include several RAG baselines like BM25, popular retrievers (Contriever, GTR) and 3 SoTA models from MTEB. RAPTOR, GraphRAG, and LightRAG use summarization to enable sense-making capacity. HippoRAG performs well on associative tasks.
> A clear description of the evaluations and execution difference between QA and retrieval setups is missing.
Our QA module uses the top-5 retrieved passages as context for an LLM (GPT-4o-mini or Llama-3.3-70B-Instruct) to generate the final answer. The QA result is evaluated by token-based EM/F1 scores, aligning with MuSiQue/2Wiki/HotpotQA official metrics.
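Token-based EM/F1 of this kind is typically computed as below; this is a common formulation, and normalization details (articles, punctuation) vary across benchmark scripts:

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """F1 over whitespace tokens, as in standard QA evaluation scripts."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def exact_match(prediction: str, gold: str) -> bool:
    return prediction.lower().strip() == gold.lower().strip()

print(token_f1("barack obama", "president barack obama"))  # 0.8
```

Scores are usually taken as the maximum over all gold answer aliases, then averaged over questions.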
## Continual Pretraining Related Work
> If the paper is position itself as continual method as the title may suggest, it should consider expanding the related works section to discuss more continual pretraining methods.
> - Roth, Karsten, et al. "A Practitioner's Guide to Continual Multimodal Pretraining." arXiv preprint arXiv:2408.14471 (2024).
> - Li, Jeffrey, et al. "Tic-lm: A multi-year benchmark for continual pretraining of language models."
We thank the reviewer for highlighting these important works, we will incorporate them into Section 2.1 of our camera-ready version.
---
Rebuttal Comment 1.1:
Comment: I thank authors for their response and recommend incorporating the clarifications into a revision. I maintain my rating of weak accept conditioned on applying needed improvements to the text to improve clarity. | Summary: This paper presents a method to enhance traditional RAG models for large language models. The proposed approach is based on HippoRAG and introduces a combination of phrase nodes and passage nodes, inspired by how human memory represents and processes information at different granularities. Additionally, the method incorporates query-to-triple contextualization, tightly associating user queries with knowledge graph nodes. Experimental results demonstrate that the proposed approach achieves a 7% performance gain over state-of-the-art embedding models in associative memory while maintaining superior factual recall and discourse understanding.
Claims And Evidence: The dense-sparse integration, inspired by human memory processing at different granularities, is well-motivated. However, it seems questionable to connect all edges to phrases linked to passages. While this approach may work well for small documents, it poses a scalability issue, when dealing with numerous documents, the KG can grow excessively large. Moreover, the method does not account for temporal changes in knowledge, which could limit its practical applicability.
Methods And Evaluation Criteria: - The evaluation methods and datasets used for comparison with existing RAG systems are meaningful. However, they do not adequately address experiments related to continual learning.
- If the proposed method is intended for continual learning, it is crucial to evaluate how the knowledge graph updates over time such that new documents are continuously introduced. This includes its impact on offline indexing, search, and question-answering.
- Additionally, the proposed method modifies only certain modules of the existing HippoRAG framework. As a result, it does not qualify as a novel framework, as suggested in the conclusion.
Theoretical Claims: There are no theoretical claims in this paper.
Experimental Designs Or Analyses: - If the proposed method is intended for RAG-based question answering systems, the experimental design and ablation studies appear valid and sound.
- However, since this approach stores more information in the KG compared to the existing structure-augmented RAG, an analysis is required to determine how much additional storage and computational resources are needed.
Supplementary Material: - Yes, I reviewed some of the prompts used in this paper as well as implementation details from the experiments.
Relation To Broader Scientific Literature: - The proposed method improves entity-centric retrieval in HippoRAG and enhances alignment in knowledge graphs. Experimental results indicate that the QA performance improves by approximately 3 percentage points compared to SOTA models.
Essential References Not Discussed: - Most of the relevant studies are appropriately cited.
Other Strengths And Weaknesses: - Strength: The study effectively integrates cognitive-inspired memory representation into the RAG model, reflecting how the brain processes information at different levels of granularity.
- Weakness: The originality of this work is somewhat limited, as it primarily modifies existing modules in HippoRAG rather than introducing an entirely new framework.
Other Comments Or Suggestions: - If this method is designed for continual learning, the experimental setup should be revised to incorporate a temporal dimension.
- If the paper focuses on traditional RAG systems, the study should emphasize document storage, retrieval, and utilization rather than continual learning.
Questions For Authors: - How much additional time and computational resources does this approach require compared to the existing models used for comparison?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s detailed comments and questions, they will surely enhance the quality of our work. We are happy to know that the reviewer found our human memory inspired methodology well-motivated and our experimental settings meaningful, valid and sound.
## Non-Parametric Continual Learning for LLMs: Ideal Experiments
> If this method is designed for continual learning, the experimental setup should be revised to incorporate a temporal dimension.
We appreciate the reviewer’s suggestion to add a temporal dimension to our experiments; however, we believe that our current setup appropriately evaluates the continual learning abilities of non-parametric methods given their unique strengths and limitations.
As explained in our paper, non-parametric methods have become the de facto continual learning solution for LLM due to their simplicity and strong performance. In standard continual learning benchmarks, which measure catastrophic forgetting and simple factual learning, standard RAG outperforms parametric alternatives like model editing and continual pretraining by substantial margins (MQuAKE, EvolvingQA).
In contrast to their strong performance in these simpler settings, non-parametric methods struggle with richer forms of continual learning. More specifically, given that standard RAG acquires isolated knowledge, it is limited in its capacity to enable complex tasks over new knowledge, such as associativity and sense-making. Our experimental setup, which consists of a set of such tasks, is thus designed to explore this limitation in non-parametric continual learning methods.
That said, we agree that assessing performance as more knowledge is incrementally introduced would be a valuable addition to our paper. We refer the reviewer to the *Continual Knowledge Injection Experiments* section in our response to **reviewer gi53** above for experimental details, results and discussion.
We will add this discussion and experiment to the camera-ready version.
## ContinualRAG’s Computational Overhead
> How much additional time and computational resources does this approach require compared to the existing models used for comparison?
We appreciate the reviewer’s question concerning ContinualRAG’s efficiency compared to our baselines. To address this, we report the time and memory resources required for offline indexing and online retrieval; we will add them to Appendix F alongside the token-level costs reported in Table 12. For indexing, we indexed 11k documents with a Llama-3.3-70B model served via vLLM on 4 H100s. For the memory requirements, we ignore all memory for model weights since it is shared across all systems.
| Model | Indexing Time (min) | QA Time per Query (s) | QA Memory (GB) |
|---|---|---|---|
| NV-Embed-V2 | 12.12 | 0.33 | 1.7 |
| RAPTOR | 100.50 | 0.61 | 1.4 |
| GraphRAG | 276.95 | 10.70 | 3.7 |
| LightRAG | 234.95 | 13.31 | 4.5 |
| HippoRAG | 57.50 | 0.90 | 6.0 |
| ContinualRAG | 99.50 | 1.15 | 9.9 |
As we can see, in terms of time, ContinualRAG is much more efficient than GraphRAG and LightRAG and only slightly less efficient than both RAPTOR and HippoRAG. For memory usage, ContinualRAG’s use of fact embeddings does increase its requirements, however, we believe this is acceptable given its performance benefits. Additionally, while all approaches lag behind standard RAG in terms of time and memory efficiency, ContinualRAG is the only one that outperforms this strong baseline substantially.
## Novelty Concerns
> The originality of this work is somewhat limited, as it primarily modifies existing modules in HippoRAG rather than introducing an entirely new framework.
We appreciate the reviewer’s perspective, however, we would like to clarify our view on our work’s originality. While it is true that we build upon the existing HippoRAG framework, our approach systematically explores and enhances key components of the system in ways that are both principled and non-trivial. The space of possible design choices in structure-augmented RAG systems is vast, and identifying which modules to modify and how to do so effectively is itself a meaningful research contribution.
Our modifications are not superficial; rather, they are informed by clear hypotheses about the limitations of the original modules and are validated through rigorous ablation studies (Table 4), which demonstrate substantial and consistent improvements in performance. We believe that such targeted, evidence-based improvements can be as impactful as proposing entirely new frameworks, especially when they advance the capabilities of an already influential baseline.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response and recommend incorporating the clarifications into a revised version. Thank you, in particular, for providing the results of additional experiments on knowledge injection within a short time frame. However, I am curious whether one of the experiments follows the same setting as the existing continual learning-based QA, specifically dividing the full corpus into four parts.
That said, I disagree with the authors' statement that "non-parametric methods have become the de facto continual learning solution for LLMs." The authors appear to formulate the task based on the assumption that LLMs cannot be updated. However, essential continual learning for LLMs is fundamentally different, and this distinction needs to be clarified.
Furthermore, I still perceive the added module as primarily an incremental novelty rather than a fundamentally new combination of structure-aware and dense-based RAG systems (such as a combined method based on HippoRAG). Therefore, I find it difficult to agree that the research contribution is as meaningful as claimed by the authors.
For these reasons, I maintain my original rating.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer’s thoughtful comment, as it allows us to further clarify the conceptual motivation behind our work.
## The Role of Continual Learning in LLMs
Continual learning has long been a foundational pursuit of AI research, aiming to allow machine learning models to **learn from new data** without **forgetting** what they previously learned. In recent years, LLMs have been shown to attain remarkable zero-shot capabilities across a wide variety of tasks and domains simultaneously. This impressive adaptability has made traditional domain or task-specific continual learning methods less relevant for LLMs, however, their **inability to continuously absorb new factual knowledge** has opened a crucial new line of research in continual learning within these models.
Many parametric continual learning methods, such as **model editing** and **continual training**, have thus been developed to update factual knowledge within LLMs. However, these methods have faced **severe practical constraints**. First, while model editing appears to be a promising solution for updating a small number of facts, these updates are not reflected outside of a narrow set of contexts (Zhong et al. 2023, Wang et al. 2024) and lead to catastrophic forgetting as the process is repeated for more facts (Gupta et al. 2024). Meanwhile, continual training of LLMs suffers from similar ineffectiveness while being prohibitively expensive as well.
Given these challenges, **retrieval-augmented generation (RAG)** has emerged as an effective and efficient solution for **continual learning in LLMs**. RAG sidesteps concerns of parametric updates, allowing systems to add new facts **without risking catastrophic forgetting** by changing the LLM. Moreover, most deployed LLMs (e.g., ChatGPT, Perplexity) retrieve web knowledge to support generation, reinforcing non-parametric methods as the de facto continual learning solution in practice.
In summary, our claim is not that parametric continual learning lacks merit or is impossible—but rather that in the context of LLMs, RAG has become the most viable and widely adopted means of maintaining LLM systems factually up-to-date.
## Continual Learning in LLMs Beyond Factual Recall
Nevertheless, while *RAG* excels at simple factual tasks, it has **major limitations** in more complex continual learning abilities—such as **associative reasoning** and **sense-making**. This is the gap our work addresses. Our focus is not to further demonstrate that RAG can continuously integrate new factual knowledge but to provide a non-parametric solution that addresses these deficiencies in LLM continual learning. Thus, our experimental setting is designed to evaluate each method’s ability to **use new knowledge in associative, discourse-level** tasks while retaining RAG’s performance in factual QA.
Using our comprehensive experimental design, we demonstrate that popular solutions like RAPTOR, GraphRAG and HippoRAG are still unable to endow LLMs with these abilities. **Only our method, ContinualRAG, is able to leverage the right set of technical innovations** —dense-sparse integration, deeper contextualization, and recognition memory—to achieve improved performance over standard RAG across the board.
## Synthetic Temporal Experiments
As described above, our experiments are designed to test whether models can apply new knowledge in factual, associative, and sense-making tasks. However, we acknowledge that we initially did not assess catastrophic forgetting—a valuable point raised by reviewers.
Existing continual learning QA benchmarks for LLMs (Liska et al. 2022, Kim et al. 2024) create different sets of documents based on their creation time and define three subsets: *unchanged*, *updated* and *new*. As the corpus evolves, performance on the *unchanged* subset reflects resistance to forgetting, while scores on the *updated* and *new* subsets measure learning of new information.
Given that **no existing continual learning datasets test for associativity or sense-making**, we created synthetic temporal datasets from NQ and MuSiQue to evaluate these capabilities. Specifically, we randomly split each dataset into four sets and measure performance on one (our *unchanged* subset) as the other three *new* subsets were added. This allowed us to measure our method’s ability to **avoid catastrophic forgetting in both factual and associative scenarios**.
## References
- Liska et al., StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in QA Models. PMLR 2022
- Kim et al., Carpe diem: On the Evaluation of World Knowledge in Lifelong Language Models. NAACL 2024
- Gupta et al., Model Editing at Scale leads to Gradual and Catastrophic Forgetting. Findings of ACL 2024
- Zhong et al., MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions. ACL 2023
- Wang et al., DeepEdit: Knowledge Editing as Decoding with Constraints. 2024 | Summary: **Main findings**:
The ability to continuously acquire, organize, and leverage knowledge is a fundamental aspect of human intelligence. To empower LLMs with this capability, retrieval-augmented generation (RAG) has emerged as a critical approach. Recent methods enhance vector embeddings by integrating structures such as knowledge graphs, aiming to improve sense-making and associativity. However, these advanced approaches often suffer from significant performance degradation on basic factual memory tasks compared to standard RAG.
**Main algorithmic/conceptual ideas**:
To address this issue, we propose **ContinualRAG**, which extends HippoRAG by introducing three key enhancements: **Dense-Sparse Integration**, **Deeper Contextualization**, and **Recognition Memory**.
**Main results**:
Experiments conducted on diverse benchmarks demonstrate that ContinualRAG outperforms previous state-of-the-art methods such as HippoRAG, GraphRAG, and NV-Embed-v2.
Claims And Evidence: Yes, the claims are clear and convincing.
Methods And Evaluation Criteria: Yes, the proposed methods make sense for the target problems.
Theoretical Claims: Not applicable. No theoretical claims.
Experimental Designs Or Analyses: Yes, I have checked the experimental designs.
Supplementary Material: Yes, all supplementary materials are reviewed.
Relation To Broader Scientific Literature: Introducing **ContinualRAG** can empower LLMs to continuously acquire, organize, and leverage knowledge in a human-like manner.
Essential References Not Discussed: No additional related works should be included.
Other Strengths And Weaknesses: **Strengths**:
1. The proposed ContinualRAG is well-motivated and clearly explained.
2. ContinualRAG introduces solid technical improvements over HippoRAG.
3. ContinualRAG achieves impressively strong performance on various RAG benchmarks.
**Weakness**:
1. Given that ContinualRAG aims to empower LLMs to continuously acquire, organize, and leverage knowledge, additional experiments could be explored to continually update and expand the knowledge graph, which will make this work more solid.
Other Comments Or Suggestions: No additional comments.
Questions For Authors: 1. My suggestion is to consider conducting experiments that continuously introduce additional knowledge into the KG.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for the time and effort dedicated to reviewing our work. We are delighted that they found our work clear, convincing, well-motivated, technically solid and empirically validated by impressive performance improvements. We address their suggestions and comments below.
## Continual Knowledge Injection Experiments
> Given that ContinualRAG aims to empower LLMs to continuously acquire, organize, and leverage knowledge, additional experiments could be explored to continually update and expand the knowledge graph, which will make this work more solid.
We appreciate the reviewer’s thoughtful suggestion and agree with their assessment.
To address this, we conduct a new experiment on both NQ and MuSiQue. We partition our full corpus into four equal segments (approximately 250 questions and their distractors). We then select one segment for evaluation and incrementally add the remaining segments, measuring how performance evolves as new knowledge is added—simulating a temporal continual learning setting. We report the performance of ContinualRAG and NV-Embed-V2, our strongest baseline, in the tables below using F1 scores.
**NQ**
| # of Documents | ContinualRAG| NV-Embed-V2 |
|----------------|---------------|-----------------|
| 5,171 | 60.83 | 60.26 |
| 6,624 | 61.14 | 60.87 |
| 8,098 | 61.35 | 60.66 |
| 9,633 | 61.67 | 60.66 |
**MuSiQue**
| # of Documents | ContinualRAG | NV-Embed-V2 |
|----------------|---------------|-----------------|
| 3,316 | 52.83 | 47.44 |
| 5,496 | 52.02 | 45.52 |
| 8,127 | 49.93 | 44.13 |
| 11,656 | 44.95 | 40.42 |
As we can see, ContinualRAG’s improvements over NV-Embed-V2 remain remarkably consistent in both simple and associative continual learning settings. We note that, while both methods show steady performance on simple QA as more knowledge is introduced, their performance drops almost equivalently in the more complex task as more information is introduced. This behavior shows the strength of RAG in simple temporal tasks, highlights the value of our current experimental setup and points to the need for more complex temporal continual learning benchmarks for LLMs.
We will add this experiment to the camera-ready version of our paper.
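The incremental injection protocol described above can be sketched as follows. This is only an illustration of the evaluation loop: `build_index`, `answer_query`, and `f1_score` are hypothetical stand-ins, not the actual ContinualRAG components.

```python
# Sketch of the incremental knowledge-injection evaluation protocol:
# corpus segments are added one at a time, and a fixed query set is
# re-evaluated after each addition.

def build_index(docs):
    # placeholder: a real system would embed documents / build a KG here
    return set(docs)

def answer_query(index, query):
    # placeholder retrieval: answer correctly iff the gold document is indexed
    return query["gold_doc"] if query["gold_doc"] in index else None

def f1_score(pred, gold):
    # placeholder metric: exact match stands in for token-level F1
    return 1.0 if pred == gold else 0.0

def incremental_eval(segments, eval_queries):
    """Add corpus segments incrementally and re-evaluate the fixed query set."""
    indexed, scores = [], []
    for segment in segments:
        indexed.extend(segment)
        index = build_index(indexed)
        avg = sum(
            f1_score(answer_query(index, q), q["gold_doc"]) for q in eval_queries
        ) / len(eval_queries)
        scores.append((len(indexed), avg))
    return scores
```

In this sketch, a flat or slowly rising curve on the fixed query set as segments are added corresponds to resistance to the forgetting behavior discussed above.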
---
Rebuttal Comment 1.1:
Comment: The rebuttal solves my concerns very well. I keep my rating of accept. Great work! | null | null | null | null | null | null |
Generalized Smooth Bilevel Optimization with Nonconvex Lower-Level | Accept (poster) | Summary: The paper investigates bilevel optimization problems where the inner function is nonconvex, and both inner and outer functions satisfy a generalized smoothness assumption. To address this problem, the single-level constrained formulation of bilevel optimization is adopted, replacing the inner problem with the Moreau envelope of the inner function. A novel algorithm, PNGBiO, is introduced, which involves performing normalized gradient steps on the Lagrangian function of this constrained problem. The algorithm demonstrates a convergence rate of $O(T^{-1/4})$. Additionally, the paper presents experimental results for two bilevel problems: hyperparameter tuning and data hypercleaning.
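For readers unfamiliar with this construction, a sketch of the Moreau-envelope-based single-level reformulation, following Liu et al. (2024); the notation here is illustrative and not necessarily the paper's:

```latex
v_\gamma(x, y) \;=\; \min_{\theta \in \mathbb{R}^p}
\Big\{ g(x, \theta) + \tfrac{1}{2\gamma} \lVert \theta - y \rVert^2 \Big\},
\qquad
\min_{x,\, y}\; f(x, y)
\quad \text{s.t.} \quad
g(x, y) - v_\gamma(x, y) \le 0.
```

Since $v_\gamma(x, y) \le g(x, y)$ always holds (take $\theta = y$), the constraint forces equality, which characterizes $y$ as a proximal fixed point of $g(x, \cdot)$; PNGBiO then applies normalized gradient steps to the Lagrangian of this constrained problem.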
Claims And Evidence: I review the theoretical claims in the section **Theoretical Claims** and the experimental evidences in the section **Experimental Designs or analyses**.
Methods And Evaluation Criteria: See the section **Experimental Designs or analyses** for the evaluation criteria.
Theoretical Claims: See the supplementary material section for a review of the proof of the theoretical results.
Besides the proofs, I have the following remark:
* **Stationnary point of problem (1)**: Several times (lines 130, 322, 324), the paper refers to stationary points of problem (1). What is the meaning of *stationary point* in this context since the lower level problem is not assumed to be strongly convex?
Experimental Designs Or Analyses: ### Reproducibility
The numerical part presents several reproducibility issues:
* **Code**: The code of the experiments is not provided, which does not foster the reproducibility of the results. This is an important concern.
* **Step size choice**: How are the step sizes set? Is it a grid search? In that case, the parameters of the grid should be specified. Why do some algorithms use decreasing step sizes while others do not?
* **Constraint**: For the constraint on $\lambda$ in the data hyper-cleaning task, how is it handled in the different algorithms, which are originally designed for unconstrained optimization? How is $R$ chosen?
* **Inner loop**: Some algorithms use inner loops (F2SA, BOME), what is the size of the inner loop?
* **Batch size for stochastic algorithms**: F2SA and SLIP are stochastic algorithms. What are the batch sizes used?
* **Number of runs**: How many runs are performed for each algorithm? Since some algorithms are stochastic, a single run is not enough to draw conclusions and error bars should be provided.
### Presentation/clarity
* **x-axis of the figures**: In figures 2 to 5, the x-axis is labeled as *"Epoch"*. What does the word "Epoch" refer to? In the context of bilevel optimization, the notion of an "epoch" is fuzzy since there are two datasets (training and validation). Moreover, some algorithms use an inner loop and some use second-order information of the lower-level problem while others do not; thus, to be perfectly fair, curves in terms of wall-clock time would be more appropriate.
* **Data hypercleaning**: It would be convenient to be more precise when talking about *"contaminated training data"*. Readers unfamiliar with data hyper-cleaning should understand from reading the paper that *"contaminated training data"* means some labels of the training data are wrong. Moreover, the bilevel problem considered should be written explicitly, as is done for the hyperparameter tuning problem.
* **Hyperparameter learning**: What is the regularizer $\mathcal{R}_{\omega, \lambda}$?
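For reference, data hyper-cleaning is commonly formulated as the following bilevel problem. This is the standard setup from the literature, assumed here for illustration; the paper should state its exact variant (in particular the weighting function $\sigma$ and regularization):

```latex
\min_{\lambda}\;
\sum_{i \in S_{\mathrm{val}}} \ell\big(\omega^*(\lambda);\, x_i, y_i\big)
\quad \text{s.t.} \quad
\omega^*(\lambda) \in \arg\min_{\omega}\;
\sum_{i \in S_{\mathrm{tr}}} \sigma(\lambda_i)\, \ell\big(\omega;\, x_i, \tilde{y}_i\big)
+ C \lVert \omega \rVert^2,
```

where $\tilde{y}_i$ are the possibly corrupted training labels and $\sigma(\lambda_i) \in (0, 1)$ is a learned per-sample weight that down-weights samples suspected of being mislabeled.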
### Design
* **No nonconvex lower-level**. If I understand correctly, in both cases the lower-level problem is a multinomial logistic regression problem. But this problem is convex, so it is a concern that there are no experiments with a nonconvex lower-level problem, while the title of the paper suggests treating this case.
Supplementary Material: ### Proof of proposition 5.2
There are some confusions between $\mu$ and $\mu'$ in places:
* **Line 616**: in the l.h.s. of $\mu'u_\mu + (1-\mu)u = \mu'\mu u' + (1-\mu'\mu)u$, should it be $\mu'$ instead of $\mu$ in factor of $u$? Otherwise, the equality does not hold.
* **Equation 30**: in the second line, is it $\|u_\mu-u\|$ instead of $\|u_{\mu'} - u\|$? And thus $\mu$ instead of $\mu'$ in the third line.
* **Line 642**: $H'(u')$ -> $H(\mu')$
* **Line 650**: $H(\mu')$ -> $H(\mu)$
### Proof of lemma 5.6
* **Line 811**: a quantity $s$ appears without being introduced before.
Relation To Broader Scientific Literature: On the one hand, this paper extends the algorithm proposed in [1] to scenarios where both the inner and outer functions satisfy a generalized smoothness assumption. On the other hand, [2, 3] consider bilevel algorithm where only the outer function meets this assumption, while the inner function is strongly convex. These algorithms are based on approximate implicit differentiation. Thus, the method proposed in this paper complement this line of work by considering the generalized smoothness assumption on both functions and providing a fully first-order algorithm for this case.
[1] Liu, R., Liu, Z., Yao, W., Zeng, S., and Zhang, J. *Moreau envelope for nonconvex bi-level optimization: A single- loop and hessian-free solution strategy*. arXiv preprint arXiv:2405.09927, 2024.
[2] Hao, J., Gong, X., and Liu, M. *Bilevel optimization under unbounded smoothness: A new algorithm and convergence analysis*. ICLR 2024.
[3] Gong, X., Hao, J., and Liu, M. *A nearly optimal single-loop algorithm for stochastic bilevel optimization under unbounded smoothness*. ICML, 2024.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: ### Strengths
* To my knowledge, this is the first time that a fully first-order algorithm is proposed for bilevel optimization under the generalized smoothness assumption.
* The paper comes with non-asymptotic convergence rates.
### Weaknesses
* **Presentation/writing**: The presentation and the writing of the paper need to be significantly improved. There are many typos, grammar issues and sentences that need to be rephrased. I provide a non-exhaustive list of these issues in the section "Other Comments Or Suggestions".
* **Section 5, pages 4-5**: the presentation of the theoretical results is really hard to parse. There are no transitions between the different propositions/lemmas, and they are not discussed or explained. Moreover, leaving all the raw constants in the main text blurs the message. I think this section requires a profound clean-up: moving some lemmas to the appendix, keeping only the most important quantities in the main text, and providing some high-level explanations of the results.
* The algorithm comes with many hyperparameters (penalty parameter $c_t$, step sizes $\alpha_t$, $\beta_t$, and $\eta_t$, proximal parameter $\gamma$), which can make it difficult to use in practice.
Other Comments Or Suggestions: ### Typos/Unclear sentences
* **Abstract**: *"we stydy the convergence analysis"* -> *"we study the convergence property"*
* **Line 032**: *"When the lower-level objective function $g(x, y)$ on the variable y is strongly convex and twice differential"* -> *"When the lower-level objective $g$ is strongly convex and twice differentiable with respect to $y$"*
* **Line 043**: *"by using the following approximated gradient"* -> *"by using the following approximate gradient"*
* **Line 049**: *"where $\hat y$ is an approximate of the solution $y^*(x)$"* -> *"where $\hat y$ is an approximation of the inner solution $y^*(x)$"* or *"where $\hat y$ is an approximate solution of the inner problem $y^*(x)$"*
* **Line 82**: *"can not"* -> *"cannot"*
* **Lines 95-98**: *"More recently, (Liu et al., 2024) solve a variant of the single-level constrained optimization problem (5), where uses its Moreau envelope instead of the problem $\min_{y\in\mathbb{R}^p} g(x, y)$ in the problem (5), defined as"* -> *"More recently, Liu et al. (2024) solve a variant of the single-level constrained optimization problem (5), where the Moreau envelope of the inner function is used as a surrogate of the inner problem $\min_{y\in\mathbb{R}^p} g(x, y)$ in (5). This yields the following problem formulation"*
* **Lines 109, 66-67**: *"More recently, thus, some methods (Hao et al., 2024; Gong et al., 2024) have begun to study generalized smooth bilevel optimization."* -> *"More recently, some authors (Hao et al., 2024; Gong et al., 2024) started to study bilevel optimization under generalized smoothness assumption."*
* **Lines 106**: *"The generalized smoothness condition firstly was studied"* -> *"The generalized smoothness condition was first studied"*
* **Lines 115-120**: *"In the problem (1), since lower-level objective $g(x, y)$ is nonconvex on variable y, we can not easily get the minimum of lower-level problem $\min_{y\in\mathbb{R}^p} g(x, y)$. Thus, we reformulate the lower-level problem by using a value function defined as: $v(x) = \min_{y\in\mathbb{R}^p} g(x, y)$."* The logical sequence of these sentences is unclear. Taken literally, it says that, when the inner function is non-convex, solving $\min_{y\in\mathbb{R}^p} g(x, y)$ is too difficult, and thus it is replaced by $v(x)$, which is actually the same thing.
* **Assumption 5.2 and 5.3**: *"there exists"* -> *"there exist"*
* **Assumption 5.2 and 5.3**: comma missing before *"and"*: *"$L_{fy,0}$ and $L_{fy,1}$"* -> *"$L_{fy,0}$, and $L_{fy,1}$"*; *"$L_{gy,0}$ and $L_{gy,1}$"* -> *"$L_{gy,0}$, and $L_{gy,1}$"*
* **Line 320**: *"the above three terms shows"* -> *"the above three terms show"*
* **Lines 417 to 425**: there are several issues in the definition of the bilevel problem:
* $\sum_{x_i \in S_{val}}$ -> $\sum_{i \in S_{val}}$
* $\omega^*$ -> $\omega^*(\lambda)$
* $\omega^* = \mathcal{L}_{S_{tr}}(\omega, \lambda)$ -> $\omega^* = \mathcal{L}_{S_{tr}}(\omega^*, \lambda^*)$ -> $\omega^*(\lambda) = \underset{\omega}{\mathrm{arg\,\min}}\mathcal{L}_{S_{tr}}(\omega, \lambda)$
* the third line of the equation should also be corrected.
### Notation consistency
* **Equations (2) and (3)**: The partial gradients are denoted with number in subscript while the partial Hessians are denoted with letter in subscript. It would be more consistent to use the same notation for both.
* **Numerical experiments**: In the data hypercleaning problem, the individual losses are denoted $\ell$, while in the hyperparameter tuning problem they are denoted $\mathcal{L}$. Moreover, the full loss function is denoted $L$ in the data hypercleaning problem and $\mathcal{L}$ in the hyperparameter tuning problem. Finally, in the bilevel problem between lines 417 and 425, the individual train losses depend on $\lambda$ while the individual validation losses do not.
### Miscellanous
* **Line 024**: *"and distributionally robust optimization"*: there should be a citation for this task.
* **Line 157**: *"Based on Theorem A.4 of (Liu et al., 2024), it is known that any limit point $(\bar x, \bar y)$ of sequence $(x^t, y^t)$ is a solution to the problem (9)."* The sequence $(x^t, y^t)$ is not defined.
* **Correct citation formatting**: when the authors of a paper are part of the sentence, their names should not be in parentheses (in LaTeX, this consists in using `\cite` or `\citet` instead of `\citep`, I think). For instance:
* line 50: *"(Ghadimi & Wang, 2018)
proposed a class of approximated gradient methods based
on approximate implicit differentiation. (Ji et al., 2021) proposed
an effective approximated gradient methods based on the iterative differentiation. More recently, (Huang, 2023; 2024) studied the nonconvex bilevel optimization"* -> *"Ghadimi & Wang (2018) proposed a class of approximated gradient methods based on approximate implicit differentiation. Ji et al. (2021) proposed an effective approximated gradient methods based on the iterative differentiation. More recently, Huang (2023; 2024) studied the nonconvex bilevel optimization"*
* Line 156: *"Based on Theorem A.4 of (Liu et al., 2024)"* -> *"Based on Theorem A.4 of Liu et al. (2024)"*
* **Precision on AID-based methods**: Equation (3) suggests that the linear system appearing in the hypergradient is solved exactly. This is not the case: since the solution of the inner problem is approximated, the solution of the linear system is also only approximate.
* **Page header**: page header should be changed. Now it is still *"Submission and Formatting Instructions for ICML 2025"*.
* **Notation definition**: In lemma 5.6, $\theta^*_\gamma$ is used without being defined. Actually, it is defined in the appendix. It should be defined before being used.
### Mathematical writing
* **Assumptions 5.2 and 5.3**: Are the constants $L_{fx,0}$, $L_{fx,1}$, $L_{fy,0}$, $L_{fy,1}$, $L_{gx,0}$, $L_{gx,1}$, $L_{gy,0}$, and $L_{gy,1}$ independent of $\rho$? In that case, it is necessary to reverse the order of the quantifiers by writing *"There exist constants $L_{fx,0}$,..., and $L_{fy,1}$ such that for any $\rho>0$"*. Otherwise, it means that these constants depend on $\rho$, and in that case this dependency should be made explicit.
Questions For Authors: * In many machine learning problems, having stochastic algorithms is crucial to handle large datasets. Is it easy to extend PNGBiO to the stochastic setting?
* **Lines 205-208**: The paper claims that the generalized smoothness assumption used in Assumptions 5.1 and 5.2 is milder than the one usually used. But I don't really see why, since the proposed smoothness condition is supposed to hold for any $\rho\in[0,1]$ and not only at $\rho = 1$. Could the authors clarify this point? Moreover, is there any example of such a function in machine learning?
* Can the convergence result provided by the paper be translated in terms of convergence of the norm the hypergradient (equation (2) in the paper) when the inner function is strongly convex?
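For concreteness regarding the last question: under strong convexity of the lower level, the hypergradient is the standard implicit-differentiation expression below (assuming equation (2) of the paper takes this usual form):

```latex
\nabla F(x)
= \nabla_x f\big(x, y^*(x)\big)
- \nabla^2_{xy} g\big(x, y^*(x)\big)
\big[\nabla^2_{yy} g\big(x, y^*(x)\big)\big]^{-1}
\nabla_y f\big(x, y^*(x)\big),
```

where $y^*(x) = \arg\min_y g(x, y)$, so that convergence results could be stated in terms of $\lVert \nabla F(x^t) \rVert$.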
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks so much for your comments and suggestions. We respond to your questions one by one as follows:
**Q1:** What is the meaning of stationary point in this context since the lower level problem is not assumed to be strongly convex?
**A1:** In our paper, we use the same definition of stationary point as in [1]. From [1], the stationary point of problem (1) is equivalent to that of the relaxed problem $\min_{(x,y)\in \mathbb{R}^d \times \mathbb{R}^q} f(x,y) \ \text{s.t.}\ \nabla_y g(x,y)=0$. When the lower level of problem (1) is strongly convex, this stationary point is equivalent to the stationary point used in [2,3].
[1] Liu et al., Moreau envelope for nonconvex bi-level optimization: A single-loop and hessian-free solution strategy. arXiv preprint arXiv:2405.09927, 2024.
[2] Ghadimi, S. and Wang, M. Approximation methods for bilevel programming. arXiv preprint arXiv:1802.02246, 2018.
[3] Ji, et al., Bilevel optimization: Convergence analysis and enhanced design. ICML, 4882–4892, 2021.
**Q2:** ... Is it easy to extend PNGBiO to the stochastic setting?
**A2:** Our PNGBiO can be generalized to the stochastic setting. We have initially proved that our stochastic PNGBiO has a gradient complexity of $O(\epsilon^{-6})$. In the final version of our paper, we will add this conclusion.
Meanwhile, we add some experimental results on our stochastic PNGBiO (**S-PNGBiO**) method. Specifically, we compare our S-PNGBiO algorithm with other baselines including BO-REF, F$^2$SA, SLIP, and BOME. In this experiment, we conduct a meta-learning task on the CIFAR-10 dataset, using ResNet-18 as the task-shared model for the upper-level (UL) problem and a 2-layer neural network as the task-specific model for the lower-level (LL) problem. Clearly, both the UL and LL problems are non-convex. We run every method for 500 seconds (CPU time) with minibatch size 64 for both the training and validation sets. We set the learning rate to 0.01 for the F$^2$SA and BOME algorithms, and $\alpha=0.0001$, $\beta=0.01$ for SLIP and BO-REF. Our algorithm uses a vanishing step size $\alpha_t=\beta_t=0.3/(t+1)^{0.5}$ with $\eta=0.01$; the rest of the parameter settings are the same as in the paper. The experimental results (recorded loss values) are shown in the following table:
| | S-PNGBiO | F$^2$SA | BOME | BO-REF | SLIP |
|-------|-------|-------|-------|-------|-------|
|100s|1.591| 1.822| 1.831|1.691|1.948|
|200s|1.494 |1.655|1.692|1.585|1.839|
|300s|1.441|1.568|1.615|1.464|1.775|
|400s|1.392|1.485|1.542|1.402|1.723|
|500s|1.371|1.425|1.467|1.382|1.601|
In the final version of our paper, we will add these experimental results.
**Q3:** Lines 205-208: The papers claims that the generalized smoothness assumption…, is there any example of such function in machine learning?
**A3:** 1) When $\rho=0$, our generalized smoothness condition degenerates to the standard Lipschitz smoothness condition. 2) When $\rho\in(0,1]$, this smoothness condition depends on the current gradient. For example, the polynomial function $f(x)=|x|^\frac{2-\rho}{1-\rho}$, where $x\in\mathbb{R}$ and $\rho\in(0,1)$, satisfies this generalized smoothness condition; note that this example does not apply when $\rho=1$.
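As a quick sanity check (our illustration, not taken from the paper): for the polynomial example above, the ratio of the curvature $|f''(x)|$ to the $\rho$-th power of the gradient norm $|f'(x)|^\rho$ is constant in $x$, which is exactly the gradient-dependent smoothness behavior described:

```python
# For f(x) = |x|**p with p = (2 - rho) / (1 - rho) and rho in (0, 1):
#   |f'(x)|  = p * |x|**(p - 1)
#   |f''(x)| = p * (p - 1) * |x|**(p - 2)
# Since (p - 1) * rho == p - 2, the ratio |f''(x)| / |f'(x)|**rho
# is a constant, so the local smoothness grows like the rho-th
# power of the gradient norm (not uniformly bounded).

def curvature_to_gradient_ratio(rho, x):
    """Return |f''(x)| / |f'(x)|**rho for f(x) = |x|**p, valid for x != 0."""
    p = (2 - rho) / (1 - rho)
    grad = p * abs(x) ** (p - 1)            # |f'(x)|
    curv = p * (p - 1) * abs(x) ** (p - 2)  # |f''(x)|
    return curv / grad ** rho

# The ratio is independent of x; e.g., for rho = 0.5 (so p = 3),
# it equals 6 / sqrt(3) at every x:
ratios = [curvature_to_gradient_ratio(0.5, x) for x in (0.5, 1.0, 2.0, 10.0)]
```

This also illustrates why a standard (gradient-independent) Lipschitz-smoothness constant cannot exist for such functions on all of $\mathbb{R}$.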
**Q4:** Can the convergence result provided by the paper be translated in terms of convergence of the norm the hyper-gradient (equation (2) in the paper) when the inner function is strongly convex?
**A4:** When inner function is strongly convex, we have proved that our method has the same convergence rate $O(\frac{1}{T^{1/4}})$ based on the hyper-gradient metric as in [1], which shows our algorithm has the same rate of convergence under the generalized smooth condition as the standard smooth condition. In the final version, we will add these results.
[1] Liu et al., Moreau envelope for nonconvex bi-level optimization: A single-loop and hessian-free solution strategy. arXiv preprint arXiv:2405.09927, 2024.
**Q5:** Some questions of experimental designs.
**A5:** 1. In the experiments, the tuning parameters of all algorithms are selected to be optimal via grid search.
2. In the data cleaning experiment, the problem is unconstrained, so $R$ should be infinite; this is a typo.
3. For algorithms that require inner loops, we use 10 inner iterations.
**Q6:** The partial gradients are denoted with number in subscript while the partial Hessians are denoted with letter in subscript...
**A6:** In the paper, the subscripts 1 and 2 denote partial derivatives with respect to the first and second arguments, respectively. We will clarify this in the final version of our paper.
**Q7:** Questions about essay structure, mathematical writing and typos.
**A7:** Thanks so much for your suggestions. In the final version of our paper, we will correct these typos, improve the mathematical writing, and reorganize Section 5 of our paper, explaining the main results to make them clearer. Meanwhile, some theoretical results will be added.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
I thank you for answering my review. I have the following remark:
**A3**: I am not sure I understand. According to the generalized smoothness condition provided in the paper (*e.g.*, in Assumption 5.2), a function $f$ verifies the generalized smoothness assumption if it verifies for any $u, u'$
$$
\lVert\nabla f(u) - \nabla f(u')\rVert \leq (L_{f, 0} + L_{f,1}\max_{\mu\in[0, 1]}\lVert \nabla f(u_{\mu})\rVert^{\rho})\lVert u - u'\rVert
$$
for **any** $\rho\in[0,1]$ and **not for a particular** $\rho\in[0,1]$. Thus in Assumptions 5.2 and 5.3, either the expression *"For any $\rho\in[0,1]$"* should be replaced by *"There exists $\rho\in[0, 1]$"*, or these assumptions are more restrictive than the usual smoothness.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer RaRE,
Thanks so much for your suggestion. It is indeed more accurate to state the condition in Assumptions 5.2 and 5.3 as "there exists $\rho \in [0,1]$." Our intention was to highlight the applicability of the algorithm across the entire range of $\rho \in [0,1]$. Since the current phrasing may mislead readers, we will make the necessary corrections in the final version.
Best wishes,
Authors | Summary: This paper proposes a gradient-based first-order algorithm called PNGBiO for generalized-smooth nonconvex-weakly-concave bilevel optimization. The authors provide the convergence analysis and claim that this algorithm achieve a convergence rate of O(ϵ^(-4)) for finding an approximation stationary point. The authors conduct some experiments to show the effectiveness of the proposed algorithm.
Claims And Evidence: This article is technically sound, and the theory appears to be correct.
Methods And Evaluation Criteria: I think the main strength is that the proposed algorithm obtains an ϵ-stationary solution with pure gradient calls and extends the smoothness condition to generalized smoothness for both the upper and lower level in nonconvex-weakly-concave bilevel optimization.
The overall presentation is not clear. Specifically, in Section 5 the authors state several propositions and lemmas without further description. What are the roles of these lemmas in the convergence analysis? Also, which lemmas are the main novel ones? The description is very dense and difficult to parse.
Theoretical Claims: The authors claim that the proposed algorithm solve the generalized-smooth bilevel optimization with nonconvex condition for both upper and lower level but only give an assumption that the lower-level objective function is weakly-convex. Did I miss something?
About Theorem 5.10: it seems that the authors give some unusual settings for the stepsizes α, β and η, with a complex form involving the gradient norm at iteration t and a variety of parameters, instead of the commonly used constant stepsize in other works. Could the authors further explain what these parameters stand for and how these settings make the gradient update stable and efficient, as described at the end of Section 4?
Experimental Designs Or Analyses: Regarding the experiments, the authors did not conduct the comparison with BO-REP because it consumes a significant amount of memory. But in [1] a similar hyper-representation task shows that BO-REP has good performance compared to other baseline algorithms. Also, how do the authors choose the parameters of these algorithms? I'm not sure whether these parameter settings are fair or not. Moreover, the computational gain compared to other algorithms seems limited, as shown in the experimental results.
[1] Gong X, Hao J, Liu M. A nearly optimal single loop algorithm for stochastic bilevel optimization under unbounded smoothness[C]//Proceedings of the 41st International Conference on Machine Learning. 2024: 15854-15892.
Supplementary Material: The supporting materials appear to be correct, but I did not carefully review them
Relation To Broader Scientific Literature: No
Essential References Not Discussed: no
Other Strengths And Weaknesses: no
Other Comments Or Suggestions: no
Questions For Authors: The authors claim that the proposed algorithm solve the generalized-smooth bilevel optimization with nonconvex condition for both upper and lower level but only give an assumption that the lower-level objective function is weakly-convex. Did I miss something?
About Theorem 5.10: it seems that the authors give some unusual settings for the stepsizes α, β and η, with a complex form involving the gradient norm at iteration t and a variety of parameters, instead of the commonly used constant stepsize in other works. Could the authors further explain what these parameters stand for and how these settings make the gradient update stable and efficient, as described at the end of Section 4?
Regarding the experiments, the authors did not conduct the comparison with BO-REP because it consumes a significant amount of memory. But in [1] a similar hyper-representation task shows that BO-REP has good performance compared to other baseline algorithms. Also, how do the authors choose the parameters of these algorithms? I'm not sure whether these parameter settings are fair or not. Moreover, the computational gain compared to other algorithms seems limited, as shown in the experimental results.
[1] Gong X, Hao J, Liu M. A nearly optimal single loop algorithm for stochastic bilevel optimization under unbounded smoothness[C]//Proceedings of the 41st International Conference on Machine Learning. 2024: 15854-15892.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1**: The authors claim that the proposed algorithm solve the generalized-smooth bilevel optimization with nonconvex condition for both upper and lower level but only give an assumption that the lower-level objective function is weakly-convex. Did I miss something?
**A1**: Thanks for your comment. In **line 19 (in the introduction)** of our paper, we point out that the upper-level problem belongs to the category of general non-convex problems.
**Q2**: Regarding the experiments, the authors did not conduct the comparison with BO-REP because it consumes a significant amount of memory. But in [1] a similar Hyper-representation task shows that BO-REP have a good ...
**A2**: Thanks for your comment. Our PNGBiO can be generalized to the stochastic setting. We have initially proved that our stochastic PNGBiO has a gradient complexity of $O(\epsilon^{-6})$. In the final version of our paper, we will add this conclusion.
Meanwhile, we have added some experimental results on the stochastic version of our PNGBiO method. Specifically, we compare our stochastic PNGBiO algorithm with other baselines including BO-REF, F^2SA, SLIP, and BOME. In the experiment, we conduct a meta-learning task on the CIFAR-10 dataset, using ResNet-18 as the task-shared model in the upper-level (UL) problem and a 2-layer neural network as the task-specific model in the lower-level (LL) problem. Clearly, both the UL and LL problems are non-convex. We run each method for 500 seconds (CPU time) with minibatch size 64 for both the training and validation sets. We set the learning rates to 0.01 for the F^2SA and BOME algorithms, and $\alpha=0.0001$, $\beta=0.01$ for SLIP and BO-REF. Our algorithm uses a vanishing step size, namely $\alpha_t=\beta_t=0.3/(k+1)^{0.5}$, $\eta=0.01$; the rest of the parameter settings are the same as in the paper. The experimental results are shown in the following table (we report the loss):
| | S-PNGBiO| F$^2$SA |BOME|BO-REF|SLIP|
|-------|-------|-------|-------|-------|-------|
|100s|1.591| 1.822| 1.831|1.691|1.948|
|200s|1.494 |1.655|1.692|1.585|1.839|
|300s|1.441|1.568|1.615|1.464|1.775|
|400s|1.392|1.485|1.542|1.402|1.723|
|500s|1.371|1.425|1.467|1.382|1.601|
In the final version of our paper, we will add these experimental results.
**Q3**: About Theorem 5.10. Seems that the authors give some unusual settings for the stepsize α, β and η, with some complex form of the gradient norm in the iteration t and a variety of parameters instead of the commonly used constant stepsize in other works. Could the authors further explain that what do these parameters stand for and how can these settings make the gradient update stable and efficient as descripted in the end of Section 4?
**A3**: Thanks for your comment. In our paper, we set the step size as follows:
$\alpha_t\in\left(0,\ \frac{\|\frac{1}{c_t}\nabla_1 f(x^t,y^t)\|+\|\nabla_1 g(x^t,y^t)\|}{4\left(K_1+2K_2+2K_3+\kappa\right)+C}\right)$,

$\beta_t\in\left(0,\ \frac{\|\frac{1}{c_t}\nabla_2 f(x^t,y^t)\|+\|\nabla_2 g(x^t,y^t)\|}{4\left(K_4+2K_4+2K_6+\kappa\right)+C}\right)$,

and

$\eta_t\in\left(0,\ \frac{\min\left\{\|\frac{1}{c_t}\nabla_1 f(x^t,y^t)\|+\|\nabla_1 g(x^t,y^t)\|,\ \|\frac{1}{c_t}\nabla_2 f(x^t,y^t)\|+\|\nabla_2 g(x^t,y^t)\|\right\}}{\kappa_{g_2}\max\left\{4\left(K_1+2K_2+2K_3+\kappa\right),\ 4\left(K_4+2K_4+2K_6+\kappa\right)\right\}+C}\right)$,

where $C=8\left(1+\frac{1}{\eta_t\kappa_{g_2}}\right)L_\theta^2\left(L_{gx,0}+L_{gx,1}M^\rho+\frac{2}{\gamma^2}\right)$.
These step sizes are the bounds derived in the theoretical analysis. In practice, the choice of step size depends not only on the generalized smoothness parameters of the objective function but also, more importantly, on the gradient at the current iterate. Due to the special structure of the algorithm, the update can be written in the following form:
$x^{t+1}=x^t-\alpha'_t\,\frac{\|\frac{1}{c_t}\nabla_1 f(x^t,y^t)\|+\|\nabla_1 g(x^t,y^t)\|}{\|d_x^t\|}\,d_x^t$,

$y^{t+1}=y^t-\beta'_t\,\frac{\|\frac{1}{c_t}\nabla_2 f(x^t,y^t)\|+\|\nabla_2 g(x^t,y^t)\|}{\|d_y^t\|}\,d_y^t$,

where $\alpha'_t\in\left(0,\frac{1}{4(K_1+2K_2+2K_3+\kappa)+C}\right)$ and $\beta'_t\in\left(0,\frac{1}{4(K_4+2K_4+2K_6+\kappa)+C}\right)$. When $C$ is a constant, this can be regarded as a constant step size, but this depends on the boundedness of $M$.
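As a toy sketch of this normalized update form (our own illustration with made-up inputs, not the paper's implementation), the key point is that the raw direction $d$ is rescaled so that the effective step length is governed by the current gradient magnitudes rather than by $\|d\|$ itself:

```python
import numpy as np

def normalized_step(x, d, grad_f_x, grad_g_x, c_t, alpha_prime):
    # scale mirrors ||(1/c_t) grad_f|| + ||grad_g|| from the update above
    scale = np.linalg.norm(grad_f_x) / c_t + np.linalg.norm(grad_g_x)
    return x - alpha_prime * scale * d / np.linalg.norm(d)

# Toy check: with f(x) = ||x||^2 / 2 and g(x) = ||x||^2, stepping along d = x
# moves the iterate toward the origin.
x = np.array([3.0, 4.0])
x_new = normalized_step(x, d=x, grad_f_x=x, grad_g_x=2 * x, c_t=10.0, alpha_prime=0.05)
print(np.linalg.norm(x_new) < np.linalg.norm(x))  # True: the iterate shrinks
```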
**Q4**: Specifically, in section 5 ... these lemmas in the convergence analysis? Also, which lemmas are main novel ones? The description is .....
**A4**: Thanks for your suggestion. These new lemmas are crucial to our theoretical contributions, and we believe they will provide new insights for research in related fields. In the final version, we will provide a more detailed description of these lemmas. We admit that the current description may be too dense. We will reorganize this section, adding more explanations to make it easier for readers to comprehend our arguments and conclusions.
It proposes an effective Hessian/Jacobian-free penalty normalized gradient (PNGBiO) method to solve these bilevel optimization problems.
It also provides a convergence analysis of the proposed PNGBiO method. Some experimental results verify the efficiency of the proposed PNGBiO algorithm.
Claims And Evidence: Yes
Methods And Evaluation Criteria: This paper studies generalized smooth bilevel optimization with a nonconvex lower level, while existing methods mainly focus on generalized smooth bilevel optimization relying on a strongly convex lower level.
It presents an effective Hessian/Jacobian-free penalty normalized gradient (PNGBiO) method to solve these bilevel optimization problems; the comparison methods are designed for generalized smooth bilevel optimization and need to compute Hessian and Jacobian matrices.
Theoretical Claims: I have read the main proof of the paper.
The primary proof presented in the paper is well-constructed, exhibiting no apparent errors, and follows a coherent and smooth logical progression that aligns with established techniques in the field.
This paper provides a convergence analysis of the proposed PNGBiO method, and proves that the PNGBiO method has a low gradient complexity of $O(\epsilon^{-4})$ for finding an $\epsilon$-stationary point.
Experimental Designs Or Analyses: The experimental design is robust, free of glaring weaknesses, and effectively supports its conclusions.
Supplementary Material: Review the main proof of the main theory.
Relation To Broader Scientific Literature: This paper studies generalized smooth bilevel optimization, relying on the generalized smoothness introduced in [1].
[1] Chen, Z., Zhou, Y., Liang, Y., and Lu, Z. Generalized-smooth nonconvex optimization is as efficient as smooth nonconvex optimization. In International Conference on Machine Learning, pp. 5396–5427. PMLR, 2023.
Essential References Not Discussed: No
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: 1. This paper studies generalized smooth bilevel optimization relying on the generalized smoothness introduced in [1]. Could the proposed PNGBiO method be applied to generalized smooth bilevel optimization with the generalized smoothness introduced in [2]?
[1] Chen, Z., Zhou, Y., Liang, Y., and Lu, Z. Generalized-smooth nonconvex optimization is as efficient as smooth nonconvex optimization. In International Conference on Machine Learning, pp. 5396–5427. PMLR, 2023.
[2] Li, H., Qian, J., Tian, Y., Rakhlin, A., and Jadbabaie, A. Convex and non-convex optimization under generalized smoothness. Advances in Neural Information Processing Systems, 36, 2023.
2. In the experiments, how to choose the proximal parameter $\gamma$ and penalty parameter $c_t$ in the proposed PNGBiO algorithm?
3. From Figure 1 in the paper, the proposed PNGBiO method solves the approximated problem (1). In the convergence analysis, is the convergence measure used in the paper reasonable?
4. In the convergence analysis, is the term $M=\|\nabla_1 g(x,\theta^*_{\gamma}(x,y))\|$ bounded?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: **Q1**: This paper studied the generalized smooth bilevel optimization relying on the generalized smoothness introduced in [1]. The proposed PNGBiO method could apply the generalized smooth bilevel optimization with generalized smoothness introduced in [2] ?
**A1**: Thanks for your comment. We study generalized smooth bilevel optimization based on generalized smoothness introduced in [1] based on the following points:
1) The framework of [1] is more compatible with our theoretical foundations and methodology, and facilitates the integration and extension of existing results.
2) The scope of our paper is limited, and focusing on the framework of [1] helps to explore a specific problem in depth and avoid an overly broad scope.
3) We recognize the importance of exploring different broad definitions of smoothness including [2], which will be of interest for future research.
**Q2**: In the experiments, how to choose the proximal parameter $\gamma$ and penalty parameter $c_t$ in the proposed PNGBiO algorithm?
**A2:** Thanks for your comment. In the experiments, we set $c_t=\frac{10}{(k+1)^{0.25}}$ and $\gamma=0.1$.
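As a minimal sketch of these settings (variable and function names are ours), the fixed proximal parameter and the vanishing penalty schedule can be written as:

```python
gamma = 0.1  # proximal parameter (fixed, as stated above)

def penalty(k):
    # vanishing penalty parameter c_t = 10 / (k + 1)^{0.25}
    return 10.0 / (k + 1) ** 0.25

print(penalty(0), penalty(9999))  # starts at 10.0 and decays slowly toward 0
```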
**Q3**: From Figure 1 in the paper, the proposed PNGBiO method solves the approximated problem (1). In the convergence analysis, is the convergence measure used in the paper reasonable?
**A3**: Thanks for your comment. The metric in the paper is reasonable. Since our problem uses two approximations to the original problem, our metric effectively characterizes the deviations introduced by both approximations relative to the original problem. Since the lower-level problem is not (strongly) convex, we could not use the traditional hyper-gradient for this characterization; instead, we used the residual function as the metric.
**Q4**: In the convergence analysis, is the term $M=\max_{\mu\in[0,1]}\|\nabla_1 g(x,\theta_{\mu})\|$ bounded?
**A4**: Thanks for your comment. As you said, in the convergence analysis, $M$ is a bounded gradient norm for every fixed $x$. Since our iteration terminates after a finite number of steps, $M$ can be taken as the maximum over the $t$ iterates. Moreover, $M$ is bounded when the lower-level function satisfies the Lipschitz smoothness condition with respect to $x$.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors’ responses. These responses have dealt with my concerns.
This paper thoroughly studies bilevel optimization with a nonconvex lower level under the complex generalized smooth setting, which takes bilevel optimization a step further in machine learning applications and research.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 4qTz,
Thanks so much for your recognition and affirmation of our paper.
Best wishes,
Authors | Summary: This paper introduces an efficient Hessian/Jacobian-free Penalty Normalized Gradient (PNGBiO) method for solving bilevel optimization problems, where the upper-level objective is generalized smooth and nonconvex, while the lower-level objective is generalized smooth and weakly convex. Furthermore, the authors provide a comprehensive convergence analysis of the proposed PNGBiO method. Some experimental results demonstrate the effectiveness of PNGBiO.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes, but using the same parameter $M$ to denote two different terms in the convergence analysis (at lines 240 and 266) could indeed be a typo or an oversight in notation.
Experimental Designs Or Analyses: Yes, experimental results demonstrate the effectiveness of the proposed method.
Supplementary Material: Yes, from the conclusion of the proof, it appears that the proof of Theorem A.10 relies on the condition that $m c_t$ is uniformly bounded, as in (Liu et al., 2024).
Relation To Broader Scientific Literature: This paper proposed an effective Hessian/Jacobian-free penalty normalized gradient (PNGBiO) method to solve the generalized-smooth bilevel optimization with weakly-convex lower-level objective, while the existing methods only studied the generalized smooth bilevel optimization with strongly convex lower-level objective.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths: In addition to the strengths mentioned in the Summary, in the theoretical analysis the paper proves that the proposed PNGBiO has a low gradient complexity of $O(\epsilon^{-4})$ for finding an $\epsilon$-stationary point under milder assumptions such as the generalized smoothness condition.
Weakness: Since the normalized gradient plays a crucial role, could you provide a more detailed explanation of its purpose and necessity?
Other Comments Or Suggestions: Typo: In Table 1, the first $g(\cdot, \cdot)$ should be $f(\cdot, \cdot)$.
Questions For Authors: Q1: In the PNGBiO algorithm, how to choose the parameters $\gamma$ and $c_t$ in the experiments?
Q2: This paper studied the nonconvex bilevel optimization with the generalized smoothness introduced in (Chen et al. 2023). Why not consider the nonconvex bilevel optimization with the generalized smoothness in (Li et al., 2023)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1**: Using the same parameter $M$ to denote two different terms in the convergence analysis (at lines 240 and 266) could indeed be a typo or an oversight in notation.
**A1**: Thanks for your comment. There is a typo. We will correct it in the final version of our paper.
**Q2**: Since the normalized gradient plays a crucial role, could you provide a more detailed explanation of its purpose and necessity?
**A2**: Thanks for your insightful comment. Inspired by the paper [1], the generalized smoothness condition implies that the smoothness is unbounded, i.e., it depends on the gradient at the current point. During the iterations, overly large changes may produce excessively large or small gradients, making the algorithm unstable and inefficient. Normalization effectively addresses these problems, making the algorithm more robust and yielding faster convergence. Thus, our algorithm uses normalized gradient descent iterations.
[1] Chen, Z., Zhou, Y., Liang, Y., and Lu, Z. Generalized-smooth nonconvex optimization is as efficient as smooth nonconvex optimization. In International Conference on Machine Learning, pp. 5396–5427. PMLR, 2023.
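A toy illustration of this point (our own example, not taken from the paper or [1]): for $f(x)=x^4$, whose gradient $4x^3$ grows with the iterate, plain gradient descent with a fixed step diverges from a large starting point, while the normalized step, whose length is bounded, stays stable:

```python
import numpy as np

def f_grad(x):
    # gradient of f(x) = x**4, which grows cubically with |x|
    return 4.0 * x ** 3

step = 0.1
x_gd = np.float64(3.0)   # plain gradient descent iterate
x_ngd = np.float64(3.0)  # normalized gradient descent iterate
with np.errstate(over="ignore", invalid="ignore"):
    for _ in range(60):
        x_gd = x_gd - step * f_grad(x_gd)            # step scales with the huge gradient
        g = f_grad(x_ngd)
        x_ngd = x_ngd - step * g / (abs(g) + 1e-12)  # step length is capped at `step`
print(np.isfinite(x_gd), abs(x_ngd) < 0.5)  # plain GD blows up; normalized GD stays near 0
```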
**Q3**: In the PNGBiO algorithm, how to choose the parameters $\gamma$ and $c_t$ in the experiments?
**A3**: Thanks for your comment. In the experiments, we set $c_t=\frac{10}{(k+1)^{0.25}}$ and $\gamma=0.1$.
**Q4**: This paper studied the nonconvex bilevel optimization with the generalized smoothness introduced in (Chen et al. 2023 ). Why not consider the nonconvex bilevel optimization with the generalized smoothness in (Li et al., 2023) ?
**A4**: Thanks for your comment. We chose to study the generalized-smooth nonconvex bilevel optimization proposed in (Chen et al. 2023) based on the following points:
1) The framework of (Chen et al. 2023) is more compatible with our theoretical foundations and methodology, and facilitates the integration and extension of existing results.
2) The scope of our paper is limited, and focusing on the framework of (Chen et al. 2023) helps to explore a specific problem in depth and avoid an overly broad scope.
3) We recognize the importance of exploring different broad definitions of smoothness including (Li et al., 2023), which will be of interest for future research.
**Q5**: In Table1, the first $g(\cdot,\cdot)$ should be $f(\cdot,\cdot)$.
**A5**: You are right, we will correct this typo in the final version of our paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. I am satisfied with the answers to my concerns. I agree that exploring generalized smoothness in bilevel optimization settings is both important and challenging.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer LknT,
Thanks so much for your recognition and affirmation of our paper.
Best wishes,
Authors | null | null | null | null | null | null |
xLSTM 7B: A Recurrent LLM for Fast and Efficient Inference | Accept (poster) | Summary: This work builds on the work by Beck et al. on language modeling with xLSTMs. It introduces several architecture modifications and tricks that support efficiency and training stability, and it introduces an (open sourced) pre-trained 7B parameter language model based on mLSTM. The paper contains a detailed description and evaluation of these modifications with ablations.
Claims And Evidence: Yes, claims are supported with detailed empirical evidence (mostly in the form of ablation studies).
Methods And Evaluation Criteria: Yes, models are evaluated on standard language modeling and long context benchmark tasks.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Experimental results are thorough and detailed.
Supplementary Material: Yes
Relation To Broader Scientific Literature: This is an empirical study, and prior work is discussed (as it should be) primarily through quantitative comparisons to existing models.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper introduces a strong new open source pre-trained model, which is as valuable contribution.
As stated in the paper, the performance of the model ranks somewhere in the mid-range among other similar-size models. The paper also states that with a better training set mix (more data, better curation, more emphasis on math and code in the early training phase) performance could match stronger models. This is a bit of a strange claim to make, given that the main aim of this paper is to provide an empirical study and improvement of the existing architecture (from Beck et al). Why not attempt and report on those suggested improvements?
Other Comments Or Suggestions: There should be citations for RMSNorm and LayerNorm the first time they occur (line 187 or before).
Figure 5: to clarify: these results all include the time it takes to consume the prefill tokens?
How sensitive is model performance with respect to choice of soft-capping parameters? Also, why was it set to 15 and 30, for gates and logits, respectively? How sensitive is that choice?
It is interesting that learnable input gates do not matter much, except for long-context evaluations. Any insights into why that is?
Pre- and post-up projection should be described in the paper to make it more self-contained. Currently, the paper relies on definitions in Beck et al. and Gu & Dao.
What about non-linear models? RNNs hold the promise to beat transformers in tasks that rely heavily on state-tracking (as opposed to natural language modeling, where this does not seem to be the case as much), but they require non-linearity (or at least special considerations regarding the eigenvalues of the transition matrix). Have the authors considered exploring sLSTM blocks over just mLSTM in this study? Related to this point, as well as the authors' point on the dependence of performance on training set amount and mix: I am wondering how much the introduction of non-linearities at the cost of parallelism during training would really reduce efficiency, and how much this would really matter with respect to downstream metrics - in particular, in tasks that are heavy on state tracking requirements (such as code and math)?
Questions For Authors: See previous section.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback. We appreciate that the reviewer finds that our claims are supported by evidence, that our experimental results are thorough and detailed and that our strong model is a valuable contribution to the open-source community.
### Attempt on Suggested Improvements
Starting from the models in the xLSTM paper, this study already improves the pre-training dataset by switching from SlimPajama to the DCLM dataset.
At the time of training xLSTM 7B, this was, to the best of our knowledge, the best open-source pre-training dataset available.
Other competitor models used custom internal pre-training datasets (Codestral-Mamba, Falcon-Mamba), which makes re-training infeasible.
While the main focus of this study is architecture optimizations, improving the pre-training data is an exciting future opportunity to train even better xLSTM models, and we will continue to work towards this goal.
### Figure 5: Time to first token includes the prefill time
Yes, for the time to first token metric in Figure 5 the prefill time is included.
Figure 5 (left) measures the time to the first token (or latency), i.e. the time it takes until the model answers with the first token.
Figure 5 (right) measures the time until the model has answered with 100 tokens. This corresponds to a mix between prefill and generate time.
### Soft-capping parameters
This is an interesting question that we have not yet investigated in our work. The logit soft-capping values were taken from the Gemma 2 technical report. For the gate soft-capping, the range is intended to be well outside the interesting initialization range (see attached additional paper).
### Input gate & long context evaluations
Many of the long context tasks in RULER contain only small parts of highly relevant text among a lot of unnecessary "filling" text. The exponential input gate can increase the magnitude of the important parts within the linear matrix memory, and this seems to help improve the task performance. Still, it might be interesting to look at some examples qualitatively to support this mechanistically.
### Non-linear RNNs
While non-linear models have the theoretical advantage of state tracking capabilities, in our preliminary experiments, we did not see any benefit of including those for human language modeling tasks. Our xLSTM-7B architecture tries to maximize speed in both training and inference at high performance in language modeling, which is why sLSTM was not included here. In recent work, researchers have shown how to maximize speed for sLSTM and other non-linear RNN variants, but the training speeds reported there are still far behind what can be achieved with mLSTM (as a linear RNN) and Transformers. [1]
We agree with the reviewer, it is an interesting question whether state-tracking capable architectures such as non-linear RNNs show benefits on math and code downstream tasks, which should be investigated further in future work.
We thank the reviewer again for their valuable comments that help to improve our paper.
[1] Pöppel, Korbinian, Maximilian Beck, and Sepp Hochreiter. "FlashRNN: I/O-Aware Optimization of Traditional RNNs on modern hardware." The Thirteenth International Conference on Learning Representations. 2025. | Summary: In this paper, the authors introduce a new 7B LLM xLSTM 7B. The model is built upon optimized xLSTM architecture to achieve better training stability and efficiency. Extensive experiments show that xLSTM 7B is memory and computation efficient compared to attention-based models and Mamba-based models, and achieves comparable performance to recurrent models of a similar size.
## update after rebuttal
I thank the effort the authors made in the rebuttal about comparison with optimized transformer inference. I keep my positive score.
Claims And Evidence: - The authors claim the xLSTM is efficient in inference GPU memory consumption and computation.
- This claim is supported by experiments in Figure 4, 5, 6.
- The authors claim XLSTM 7B is comparable to existing LLMs.
- This is evidenced by results in Table 1 and Table 6, showing close or superior performance to SOTA recurrent LLMs while surpassing some attention-based LLMs.
- The optimized xLSTM architecture is claimed to improve training stability.
- This is evidenced in Appendix C.2. For example, with the softcap, the training gradient norm is smaller.
Methods And Evaluation Criteria: The methods and evaluation criteria make sense. For both the performance and the efficiency, the adopted metrics are widely used. And the compared baseline modes are also common ones.
Theoretical Claims: The paper does not provide theoretical claims.
Experimental Designs Or Analyses: I checked the validity of all experimental designs, and they make sense to me.
- The benchmarks in Table 1 and the models chosen are widely used ones, which leads to a valid comparison.
- The speed benchmarks consider long context token generation speed, memory consumption, and time to first token. All these metrics are reasonable for evaluation.
Supplementary Material: I checked all supplementary material.
Relation To Broader Scientific Literature: The work is related to all LLM architectures, especially recurrent LLMs.
Essential References Not Discussed: ###
I am familiar with general LLM architectures. For example,
> [1] Grattafiori, Aaron, et al. "The llama 3 herd of models." *arXiv preprint arXiv:2407.21783* (2024). \
[2] Gu, Albert, and Tri Dao. "Mamba: Linear-time sequence modeling with selective state spaces." *arXiv preprint arXiv:2312.00752* (2023).
>
Other Strengths And Weaknesses: Strength:
- The paper is well-written and easy to follow.
Weakness:
- The Figure 1 and Figure 7 are confusing. While I can guess the meaning of most blocks without their names, it is better to add names.
Other Comments Or Suggestions: None.
Questions For Authors: In the paper, you run the speed experiments with Huggingface transformers, so I wonder what the inference speed of xLSTM would look like compared to attention-based LLMs under vLLM or other inference engines that provide optimized inference speed.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback.
We appreciate that the reviewer finds our paper well-written and easy to follow, that our claims are supported by evidence, and that our experimental design is valid and reasonable.
### Architecture Illustrations
We agree with the reviewer, our illustrations of the architecture could be improved by adding names to the respective blocks.
This is a good suggestion and we will update our figures in the final version of the paper.
### Metrics with Optimized Inference Engines (like vLLM)
We expect that the speed of xLSTM in such engines would only get faster than the numbers currently reported in the paper, which already use Huggingface models optimized with torch.compile and CUDA graphs.
We acknowledge that our current benchmark setup does not use any production grade inference serving frameworks (like for e.g. vLLM) and we are open about this in the paper.
We did our best to optimize the pure PyTorch Huggingface implementations as much as we can.
For a fair comparison, we compare baselines from Huggingface with the same optimizations as for xLSTM (torch.compile and CUDA graphs).
However, we agree with the reviewer it would be interesting to measure the speed also in these optimized inference environments. Currently, xLSTM is not yet integrated in these inference frameworks, but we intend to do so in the future.
Since some of our baselines are already integrated in vLLM, we added additional benchmark results for FalconMamba, CodestralMamba, Llama2 and Llama3 in vLLM.
We observe a speed-up of Llama 3 for longer context lengths, also small speed-ups of Mamba-based models compared to our optimized HuggingFace versions. However, xLSTM 7B is still the fastest model (even when compared to baselines from optimized frameworks like vLLM), both in generation and prefill, with larger margin towards longer contexts, due to the quadratic scaling of Transformers. See the results at: [https://i.postimg.cc/1XcMCyQV/Rebuttal-Plots.jpg](https://i.postimg.cc/1XcMCyQV/Rebuttal-Plots.jpg)
We thank the reviewer again for their valuable comments that help to improve our paper. | Summary: This paper mainly scales XLSTM to 7B using some optimization technique
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: not many theoretical claims.
Experimental Designs Or Analyses: yes
Supplementary Material: yes, experimental parts
Relation To Broader Scientific Literature: another LLM models
Essential References Not Discussed: yes
Other Strengths And Weaknesses: Strengths:
1. The contribution of this paper is clear: scaling the xLSTM model to 7B using optimization techniques.
2. The experimental results show the trade-off: its performance is not quite as good as Mamba's, but its latency is good.
3. This framework may be interesting for LLM community.
Weakness:
1. Lacks discussion of the core design, mLSTMCell; any deep insight or theoretical analysis would help.
2. It would be better to open-source the project.
Other Comments Or Suggestions: no
Questions For Authors: 1. Lacks discussion of the core design, mLSTMCell; is there any deep insight or theoretical analysis?
2. It would be better to open-source the project.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for highlighting our optimizations as a clear contribution and for seeing strengths in our results on the trade-offs we make, our good latency, and the interesting xLSTM framework.
### Discussion on the mLSTM cell
We agree with the reviewer that there is only a brief discussion on the mLSTM cell in Section 2 of our paper.
The reason is that our goal in Section 2 is only to review the fundamentals of the mLSTM. For a more in-depth discussion, we would like to point the reviewer to the original xLSTM paper (https://arxiv.org/abs/2405.04517), where the mLSTM was introduced.
The main motivation for using the mLSTM in the xLSTM 7B is its high efficiency due to its full parallelizability. Moreover, in the xLSTM 7B we still rely on two main features of the mLSTM: The improved gating with exponential gating and the enhanced memory capacity with its matrix memory, which we found to be beneficial in language modeling.
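The matrix memory and gating mentioned above can be illustrated with a minimal numpy sketch of the mLSTM recurrence as published in the xLSTM paper; this is a simplified assumption-laden toy (scalar gates passed in directly, no gate stabilization state), not the authors' optimized kernel.

```python
import numpy as np

def mlstm_step(C, n, q, k, v, i_gate, f_gate):
    """One simplified mLSTM recurrence step (no stabilizer state).

    C: (d, d) matrix memory, n: (d,) normalizer,
    q, k, v: (d,) query/key/value, i_gate/f_gate: scalar gates.
    """
    C = f_gate * C + i_gate * np.outer(v, k)   # matrix memory update
    n = f_gate * n + i_gate * k                # normalizer update
    h = C @ q / max(abs(n @ q), 1.0)           # normalized readout
    return C, n, h

d = 4
rng = np.random.default_rng(0)
C, n = np.zeros((d, d)), np.zeros(d)
for _ in range(3):
    q, k, v = rng.normal(size=(3, d))
    C, n, h = mlstm_step(C, n, q, k, v, i_gate=1.0, f_gate=0.9)
print(h.shape)  # (4,)
```

Because each step is a fixed-size linear update of `C` and `n`, the per-token cost and state size are constant in sequence length, which is the efficiency property the rebuttal refers to.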
### Open-Sourcing and Code Release
Unfortunately, we did not upload the code as part of the submission, but we can assure you that it will be open sourced.
We thank the reviewer again for the helpful feedback and hope they find their main concerns addressed. | null | null | null | null | null | null | null | null |
Oscillation-Reduced MXFP4 Training for Vision Transformers | Accept (poster) | Summary: This paper proposes two methods to train vision transformers with MXFP4-accelerated GEMMs. In the backward pass, the authors add stochastic rounding and scaling to achieve unbiased gradient estimates. In the forward pass, the authors add various EMA-based methods to avoid weight oscillation during quantization. The authors test their method on training smaller vision transformers and find that the proposed methods are sufficient to achieving near-lossless performance.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: This is a mostly empirical paper.
Experimental Designs Or Analyses: The experiments seem reasonable.
Supplementary Material: Yes
Relation To Broader Scientific Literature: FP4 training is a relatively unsolved problem. The paper makes a number of references to oscillation reduction in low precision training that seem relevant.
Essential References Not Discussed: A number of FP4 LLM training papers have come out since this paper was submitted. The authors may find these works useful and interesting but since they were released around the time of submission or after, it would be unreasonable to have the authors compare against them.
Other Strengths And Weaknesses: The paper seems to have two separate methods: unbiased gradient estimation in the backward pass and oscillation reduction in the forward pass. It is not clear to me how orthogonal these two methods are. Are unbiased gradients necessary to get oscillation reduction to work, or is oscillation reduction necessary to get unbiased gradients to have an effect on the training curve?
Other Comments Or Suggestions: Is double quantization actually necessary? If you use stochastic rounding to compute unbiased gradients and stochastic rounding is implemented with iid noise, the output should be unbiased regardless of which ways the MX groups go. Perhaps I am misunderstanding what the core issue is that necessitates double quantization.
Is there a reason you chose to evaluate on small vision transformers? This method does not seem specific to vision transformers and could be applied to language models as well.
What is the cost of storing the EMA components for oscillation reduction?
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for valuable comments and the acknowledgment of our contributions. Below we respond to the questions.
> Question 1:
> … Are unbiased gradients necessary to get oscillation reduction to work, or is oscillation reduction necessary to get unbiased gradients to have an effect on the training curve?
Thanks for the valuable questions. In short, the methods for forward (oscillation-reduction) and for backward (unbiased gradients) are **orthogonal**. We provide empirical evidence as follows.
For the first question, we conduct an additional experiment to show that oscillation reduction is not limited to a certain method of gradient calculation (Table 1). We can see the Q-EMA also works on the baseline Microscaling (MS), which does not unbiasedly estimate gradients.
For the second question, we had already run experiments in our work without oscillation reduction techniques. In Table 2 (part of Table 2 in our paper), our method TetraJet (TJ), which improves quantization via unbiased estimation but adds no oscillation reduction technique, outperforms the baseline Microscaling (MS).
*Table 1. Oscillation Reduction on DeiT-S 60-epoch pre-training.*
| Methods | Accuracy |
| --- | --- |
| MS (Baseline) | 63.73 |
| MS+Q-EMA | 64.19 |
*Table 2. Accuracy improvement without additional oscillation reduction techniques (90-epoch pre-train)*
|Methods|DeiT-T|DeiT-S|DeiT-B|Swin-T|Swin-S|
|-|-|-|-|-|-|
|MS(Baseline)|58.56|70.10|74.54|76.87|79.45|
|TJ(Ours)|59.75|71.03|74.91|77.12|79.51|
> Question 2:
> Is double quantization actually necessary?
Thanks for the valuable question. There are mainly two core issues that necessitate double quantization:
- **(1) The correctness of optimization goal**. The forward pass of our optimized network is $Y= Q_D^{(1)}(X) \times Q_D^{(2)}(W^\top)$. So the gradient for weight is $\nabla_{W}\mathcal{L}\overset{\text{STE}}{\approx} \nabla_{Q_D^{(2)}(W)} \mathcal L = (\nabla_Y\mathcal L)^\top \times Q_D^{(1)}(X)$. To unbiasedly calculate this gradient using MXFP4, we need to estimate $Q_D^{(1)}(X)$ rather than $X$, to align with the forward pass.
- **(2) The non-square group shape**. To enable efficient MXFP4 Matrix Multiplication, the group shapes should be (1x32) x (32x1) for each multiplication. Because $Q_D^{(1)}(X)$ is in group shape 1x32 in forward, we need another unbiased quantizer for $Q_D^{(1)}(X)$ to achieve group shape 32x1 during backward, so we get the double-quantized $Q_S^{(6)}\left(Q_D^{(1)}(X)\right)$ (see Eq.5 of our paper).
In conclusion, the correctness of optimization goal & the non-square group shape (1x32 / 32x1) make double quantization necessary in our MXFP4 training.
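The unbiasedness property that these quantizers rely on can be demonstrated with generic stochastic rounding to a uniform grid; this toy (the grid spacing `delta` is an arbitrary assumption, not the actual MXFP4 levels) shows why the expected quantized value matches the input, unlike round-to-nearest.

```python
import numpy as np

def stochastic_round(x, delta, rng):
    """Round x onto the grid {k * delta} so that E[SR(x)] = x."""
    lo = np.floor(x / delta) * delta
    frac = (x - lo) / delta                    # probability of rounding up
    return lo + delta * (rng.random(x.shape) < frac)

rng = np.random.default_rng(0)
x = np.full(200_000, 0.3)                      # lies between grid points 0.25 and 0.5
q = stochastic_round(x, delta=0.25, rng=rng)
print(q.mean())                                # ≈ 0.3; round-to-nearest would give 0.25
```

Applying such an unbiased quantizer twice, as in the double quantization $Q_S^{(6)}(Q_D^{(1)}(X))$, keeps the composition unbiased with respect to the forward-pass quantity $Q_D^{(1)}(X)$, which is the point of issue (1) above.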
> Question 3:
> Is there a reason you chose to evaluate on small vision transformers? This method does not seem specific to vision transformers and could be applied to language models as well.
Thanks for the valuable comment. We chose to begin our exploration of FP4 training with Vision Transformers (ViTs) primarily due to computational resource constraints, as pre-training large language models (LLMs) in FP4 is significantly more resource-intensive. Importantly, developing efficient FP4 training algorithms for vision tasks is itself a meaningful and underexplored direction, with many prior works also focusing on ViTs (e.g., [1,2,3]).
Our ultimate goal is to enable FP4 training for LLMs, and we actually observe that our proposed method, TetraJet, generalizes well to LLMs and outperforms FP4 baselines (see Table 3). We also find that the oscillation problem still exists in the LLM pretraining task. However, fully resolving convergence issues and matching BF16 performance in large-scale LLMs still requires further systematic investigation, particularly due to their large scale and complex dynamics. We believe that our work on ViT training with FP4 formats will be inspiring and valuable for further exploration of LLM training.
*Table 3. OLMo-2 150M pre-training with 20 Billion tokens*
|Method|Perplexity|
|-|-|
|BF16 | 23.18 |
|Microscaling(Baseline)|25.88|
|TetraJet(Ours)|23.82|
> Question 4:
> What is the cost of storing the EMA components for oscillation reduction?
During training, we only need to store an extra EMA weight for each linear layer in transformer blocks. For DeiT-B with 85M linear parameters in Transformer blocks, this adds about 340MB of storage, which is relatively small compared to the overall training memory footprint (around 20GB per GPU in a 4-GPU setup).
If we use FP4-trained models for inference, we can calculate FP4 values for each parameter in advance, so we don’t need the EMA component anymore during inference, which means no additional cost.
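The 340MB figure above is a straightforward back-of-envelope computation, assuming one full-precision (FP32, 4-byte) EMA copy per linear parameter:

```python
params = 85e6            # linear-layer parameters in DeiT-B (from the rebuttal)
bytes_per_param = 4      # assuming the EMA copy is stored in FP32
mb = params * bytes_per_param / 1e6
print(mb)  # 340.0
```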
[1] Y. Li et al.,"Q-vit: Accurate and fully quantized low-bit vision transformer," NeurIPS, 2022.
[2] Y. Liu et al.,"NoisyQuant: Noisy bias-enhanced post-training activation quantization for vision transformers," CVPR, 2023.
[3] Z. Wang et al.,"Quantformer: Learning extremely low-precision vision transformers," IEEE TPAMI,2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and additional experiments. I will keep my score. | Summary: The authors propose a MXF4 training scheme for Vision Transformers. Training at extremely low-bit such as 4-bit formats is challenging and prone to high accuracy loss, mainly due to weight oscillations in the forward pass as identified by the authors. The paper outlines two methods EMA Quantizer (Q-EMA) and Adaptive Ramping Optimizer (Q-Ramping) to resolve this issue.
Claims And Evidence: The authors provide clear evidence through detailed experiments on popular Vision Transformer architectures (DeiT and Swin). They systematically identify the accuracy degradation problem in MXFP4 training due to weight oscillations. The evidence from the ablation studies is thorough, clearly indicating that forward-pass quantizers for activations and weights cause the most degradation. Both proposed solutions, Q-EMA and Q-Ramping, effectively address this oscillation issue, demonstrated through quantitative metrics like rate of change and quantization confidence.
The experiments and analyses are solid, with clear ablation studies and comparisons against baseline methods and competitive state-of-the-art methods. I reviewed the supplementary materials
Methods And Evaluation Criteria: The proposed method sound reasonable with proper explanation.
Theoretical Claims: I did not rigorously verify any theoretical proofs, as the primary contributions are experimental and methodological.
Experimental Designs Or Analyses: Results and analysis sound reasonable.
Supplementary Material: I reviewed Appendix A and B to reference the details mentioned in the main paper.
Relation To Broader Scientific Literature: The paper tackles the complex problem of training with MXFP4 format. There are several recent attempts at low-bit training specifically targeting FP8, and INT4 methods. The discussion of prior techniques (e.g., Microscaling, Jetfire, LSQ quantization) is comprehensive and clear. However, the authors might enhance their context by explicitly citing some recent low-precision training surveys, if available, to provide broader context.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
- Clearly identifies and addresses a practical issue (weight oscillation) with thoughtful solutions.
- Extensive empirical validation across multiple ViT architectures demonstrates significant performance improvements.
- Methods introduced (Q-EMA, Q-Ramping) are innovative yet practical.
Weaknesses:
- The analysis primarily focuses on ViT architectures; evaluating more diverse architectures (e.g., CNNs, LLMs) could further validate general applicability.
- Discussion of potential overhead or computational costs associated with the oscillation tracking in Q-Ramping could be expanded.
Other Comments Or Suggestions: I think the sentence "Therefore, quantizer (1)(3)(5) should use 1 × 32 group shape, and quantizer (2)(4)(6) should use 32 × 1 group shape." can be better phrased. It's hard to follow the sentence and to tell what quantizer (*) refers to.
Questions For Authors: - Could you clarify the computational overhead introduced by Q-EMA and Q-Ramping methods in actual training?
- Are the proposed methods sensitive to network architecture or hyperparameters beyond those tested? Specifically, how robust are they for larger models beyond the evaluated ViTs?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for valuable comments and the acknowledgment of our contributions. Below we respond to the questions.
> Broader Scientific Literature:
> The authors might enhance their context by explicitly citing some recent low-precision training surveys...
Thanks for the valuable advice. We have found some surveys on low-precision training [1,2,3]. We will include them in our revised version.
> Other Comments Or Suggestions:
> I think the sentence "Therefore, quantizer …. group shape." can be better phrased. It's hard to follow the sentence and what quantizer(*) refer to.
Thanks for the valuable advice. We will clarify this by pointing out that they are Quantizers $Q^{(*)}$ in Eq. 3-5.
> Weakness 1:
> The analysis primarily focuses on ViT architectures; evaluating more diverse architectures (e.g., CNNs, LLMs) could further validate general applicability.
Thanks for the valuable comments. According to NVIDIA's design[4], block scaling formats (e.g., MXFP4) are mostly designed for matrix multiplications rather than convolutions. Therefore, our work focuses on designing methods tailored to Transformers rather than CNNs.
Due to space limitations, we left the detailed response to this question in the rebuttal to “Question 3” of Reviewer TZFB. In short, we include results demonstrating that our method generalizes to LLMs, suggesting that our approach can be a promising direction for FP4 training of LLMs as well.
> Weakness 2:
> Discussion of potential overhead or computational costs associated with the oscillation tracking in Q-Ramping could be expanded.
Thanks for the constructive comment. In Q-Ramping method, we do not track oscillations all the time. For example, in ImageNet-1K pre-training, an epoch consists of ~2500 iterations. We only track the first 30 iterations to find the oscillating weights. As a result, this only adds 1.64% to the total training wall time compared to w/o Q-Ramping in our DeiT-B training on RTX4090.
> Question 1:
> Could you clarify the computational overhead introduced by Q-EMA and Q-Ramping methods in actual training?
Thanks for the valuable questions.
In conclusion, both methods only introduce little overhead into the training time. For Q-EMA method, we only need to use EMA for weight quantization during the forward pass (other quantizers don’t have additional cost), and we update the EMA-weight for quantized layers when parameters are updated. In our DeiT-B training on RTX4090, we found Q-EMA only adds 0.35% to the total training wall time compared to w/o Q-EMA. For Q-Ramping method, we have discussed above in response to Weakness 2.
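The Q-EMA mechanism described above (quantize a smoothed EMA copy of the weights in the forward pass) can be sketched as follows; this is an illustrative toy in which a round-to-nearest scalar quantizer stands in for the actual MXFP4 quantizer, and the oscillating gradient, learning rate, and β are assumptions chosen to place the raw weight on a quantization boundary.

```python
import numpy as np

def quantize(w, delta=0.25):
    """Stand-in round-to-nearest quantizer (not the real MXFP4 grid)."""
    return np.round(w / delta) * delta

w, w_ema, beta, lr = 0.1255, 0.1255, 0.998, 0.1
raw_flips = ema_flips = 0
prev_raw, prev_ema = quantize(w), quantize(w_ema)
# The gradient flips sign each step, so the raw weight straddles the
# quantization boundary at 0.125 and its quantized value oscillates.
for step in range(200):
    g = 0.02 if step % 2 == 0 else -0.02
    w -= lr * g                                    # optimizer step
    w_ema = beta * w_ema + (1 - beta) * w          # EMA update per step
    q_raw = quantize(w)
    raw_flips += q_raw != prev_raw
    prev_raw = q_raw
    q_ema = quantize(w_ema)                        # what Q-EMA would use in forward
    ema_flips += q_ema != prev_ema
    prev_ema = q_ema
print(raw_flips, ema_flips)  # many raw flips, (almost) no EMA flips
```

The EMA update is a single fused multiply-add per parameter per optimizer step, which is consistent with the small (0.35%) wall-time overhead reported above.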
> Question 2:
> Are the proposed methods sensitive to network architecture or hyperparameters beyond those tested? Specifically, how robust are they for larger models beyond the evaluated ViTs?
Thanks for the valuable questions. We have evaluated the robustness of hyperparameters for DeiT-B. As Tables 1&2 show, Q-EMA’s decay factor β and Q-Ramping’s update frequency factors $k_1$, $k_2$ are not sensitive within certain intervals.
We conducted additional fine-tuning experiments for ViT-base model [6] using the same default hyperparameters from our paper, and found our methods are not sensitive to network architectures or specific tasks (Table 3). Furthermore, our methods significantly outperform the baseline in LLMs pre-training using the larger OLMo-2-150M (Table 4), showing generalization beyond ViT models. These results demonstrate that our method can be adopted to different kinds of architectures and tasks.
*Table 1: Insensitivity to hyperparameters (Q-EMA) on DeiT-B.*
| $\beta$ | 0.993 | 0.995 | 0.997 | 0.998 | 0.999 | 0.9995 | w/o Q-EMA |
| ------- | ----- | ----- | ----- | ----- | ----- | ------ | --------- |
|Acc%|75.39|76.37|77.23|77.18|77.32|77.30|74.91|
*Table 2: Insensitivity to hyperparameters (Q-Ramping) on DeiT-B.*
|$k_1$|16|16|16|16|16|16|8|12|16|20|w/o Q-Ramping|
|-|-|-|-|-|-|-|-|-|-|-|-|
|$k_2$|3|4|5|6|7|8|5|5|5|5||
|Acc%|75.35|75.33|75.62|74.96|75.29|75.13|75.19|75.60|75.62|74.85|74.91|
*Table 3. Results of 50epoch MAE ViT-base Fine-tuning (MS: Microscaling; TJ: TetraJet)*
|Methods|Acc%(mean±std)|
|---|---|
|BF16|81.49±0.08|
|MS(Baseline)|80.04±0.04|
|TJ(Ours)|80.17±0.03|
|TJ+Q-EMA(Ours)|80.57±0.09|
|TJ+Q-Ramping(Ours)|80.25±0.04|
*Table 4. OLMo-2 [6] 150M Pre-training with 20 Billion tokens*
| Methods | Perplexity |
| --- | --- |
| BF16 | 23.18 |
| MS (Baseline) | 25.88 |
| TJ (Ours)| 23.82 |
[1] Wei L et al., "Advances in the Neural Network Quantization: A Comprehensive Review". Applied Sciences. 2024.
[2] Chitsaz K et al., "Exploring Quantization for Efficient Pre-Training of Transformer Language Models," arXiv:2407.11722, 2024.
[3] Kumar T et al., "Scaling Laws for Precision," arXiv:2411.04330, 2024.
[4] https://docs.nvidia.com/cuda/pdf/ptx_isa_8.7.pdf
[5] OLMo Team et al., "2 OLMo 2 Furious," arXiv:2501.00656, 2024.
[6] https://github.com/facebookresearch/mae | Summary: The paper introduces TetraJet, a novel training method for Vision Transformers using the MXFP4 low-precision format, which is supported by Nvidia's Blackwell GPUs and offers significant speed improvements. The authors identify weight oscillation as a key issue causing accuracy degradation in MXFP4 training. To address this, they propose two techniques: EMA Quantizer (Q-EMA), which smooths weight quantization using an exponential moving average, and Adaptive Ramping Optimizer (Q-Ramping), which dynamically reduces update frequency for oscillating weights. Their approach achieves over 50% reduction in accuracy degradation compared to the baseline Microscaling method and brings MXFP4 training close to full-precision performance, demonstrating its effectiveness in stabilizing low-precision training.
Claims And Evidence: The paper provides strong empirical evidence to support its claims, including extensive experiments on Vision Transformers that demonstrate TetraJet’s superiority over existing MXFP4 training methods. The identification of the weight oscillation problem is backed by quantitative analysis of weight changes, rate of change metrics, and oscillation ratio measurements. The effectiveness of Q-EMA and Q-Ramping is validated through comparative accuracy results, stability improvements, and confidence metrics, showing clear advantages over the baseline. However, while the paper convincingly argues that oscillation is the main source of degradation, it does not theoretically prove why these methods generalize across different architectures or tasks beyond Vision Transformers, leaving room for further validation.
Methods And Evaluation Criteria: The methods and evaluation criteria are well-aligned with the problem of low-precision training for Vision Transformers. The authors conduct experiments on ImageNet-1K using established Vision Transformer architectures (DeiT and Swin Transformers), which are widely used benchmarks for image classification. Their evaluation focuses on accuracy degradation, oscillation metrics, and training stability, which are appropriate for assessing the impact of quantization techniques. The use of quantization confidence and oscillation ratio provides insightful analysis beyond standard accuracy metrics. However, the paper primarily evaluates MXFP4 on pre-training tasks, and it would be useful to see results on fine-tuning or downstream applications to confirm generalizability.
Theoretical Claims: The paper primarily focuses on empirical findings, but it includes some theoretical justifications for its quantization techniques, such as the unbiased gradient estimation from double quantization and truncation-free scaling. The derivations in Section 3.4 align with prior work on Straight-Through Estimators (STE) and unbiased gradient estimation. The quantization confidence metric and the oscillation ratio definition are intuitively reasonable, though they lack formal theoretical validation. While the explanations are convincing, rigorous mathematical proofs on why Q-EMA and Q-Ramping reduce oscillation across different scenarios are not provided, leaving some theoretical gaps.
Experimental Designs Or Analyses: The experimental design is robust and well-structured, leveraging ImageNet-1K as a benchmark and conducting comprehensive ablation studies to isolate the effects of different quantization components. The impact analysis of six quantizers (Table 1) effectively identifies the activation and weight quantizers in the forward pass as the primary sources of degradation. The oscillation reduction experiments (Figures 2–6) provide strong empirical support for Q-EMA and Q-Ramping. However, the study lacks statistical significance testing (e.g., confidence intervals or variance analysis), which would further strengthen the validity of the reported improvements. Additionally, while Q-Ramping's dynamic adaptation is intuitive, its hyperparameter sensitivity is only briefly discussed, and a more detailed analysis of its tuning could enhance confidence in its robustness.
Supplementary Material: No
Relation To Broader Scientific Literature: The paper builds on prior work in low-precision training, QAT, and MX quantization (Rouhani et al., 2023) by addressing oscillation issues in MXFP4 training from scratch. It extends findings on weight oscillation in QAT (Nagel et al., 2022; Liu et al., 2023) but uniquely applies EMA-based smoothing and adaptive update strategies to stabilize MXFP4. While related techniques exist in optimization, their use for 4-bit Vision Transformer training is novel. However, its applicability beyond vision models remains unexplored.
Essential References Not Discussed: None
Other Strengths And Weaknesses: - The study focuses only on Vision Transformers and does not evaluate other architectures (e.g., CNNs, NLP models like LLMs), limiting its broader applicability.
Other Comments Or Suggestions: None
Questions For Authors: - Have you tested MXFP4 fine-tuning on pre-trained models? Does the oscillation issue persist in fine-tuning settings?
- How sensitive are Q-EMA’s decay factor (β) and Q-Ramping’s update frequency adjustments to different datasets and architectures? Did you find any optimal ranges?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for valuable comments. Below, we respond to the questions.
> Claims And Evidence & Theoretical Claims:
> Lack formal theoretical validation & Leaving some theoretical gaps
Thank you for the insightful comment. We agree that theoretical analysis is important. While our current work primarily focuses on empirical evaluation, we acknowledge the need for a stronger theoretical foundation to support our findings on oscillation reduction. However, theoretically analyzing the convergence properties of low-precision Transformer training remains an open and challenging problem, primarily due to the non-differentiability of quantization functions. We view this as a valuable direction for future research and are actively exploring ways to address it.
> Other Weaknesses:
> The study focuses only on Vision Transformers and does not evaluate other architectures (e.g., CNNs, NLP models like LLMs).
Thanks for the valuable comment. According to NVIDIA's design[1], block scaling formats (e.g., MXFP4) are mostly designed for matrix multiplications rather than convolutions. Therefore, our work focuses on designing methods tailored to Transformers rather than CNNs.
Due to space limitations, we left the detailed response to this question to the rebuttal to “Question 3” of Reviewer TZFB. In short, we include results demonstrating that our method generalizes to LLMs, suggesting that our approach can be a promising direction for FP4 training of LLMs as well.
> Experimental Designs Or Analyses:
> the study lacks statistical significance testing (e.g., confidence intervals or variance analysis), which would further strengthen the validity of the reported improvements.
Thanks for the valuable advice and questions. We report results with standard deviation based on three runs using different random seeds in our additional fine-tuning experiments (see Table 1). Results demonstrating that our method performs consistently better than baselines.
> Question 1:
>Have you tested MXFP4 fine-tuning on pre-trained models? Does the oscillation issue persist in fine-tuning settings?
Our research mainly focuses on ViT pre-training, but it also generalizes to fine-tuning. Our additional experiments show that the oscillation problem still exists, which aligns with previous literature on the oscillation problem in low-precision fine-tuning [2].
We finetune MAE-ViT-base [3] for 50 epochs based on the pre-trained model, and we report the average accuracy and standard deviation over 3 seeds. Our method TetraJet still outperforms the baseline, and Q-EMA/Q-Ramping provides additional enhancement by alleviating the oscillation problem.
*Table 1. Results of ViT-base Fine-tuning (MS: Microscaling; TJ: TetraJet)*
| Methods | Acc% (mean±std) |
| --- | ---|
| BF16 | 81.49±0.08 |
| MS (Baseline) | 80.04±0.04|
| TJ (Ours) | 80.17±0.03 |
| TJ + Q-EMA (Ours) | 80.57±0.09 |
| TJ + Q-Ramping (Ours) | 80.25±0.04 |
> Question 2:
> How sensitive are Q-EMA’s decay factor (β) and Q-Ramping’s update frequency adjustments to different datasets and architectures? Did you find any optimal ranges?
Thanks for the valuable questions. The choices of these hyperparameters are **not sensitive** across our experiments. As Tables 2&3 show, Q-EMA's decay factor β and Q-Ramping's update frequency factors $k_1$, $k_2$ are not sensitive within certain intervals.
We find [0.997, 0.999] is an optimal range for Q-EMA’s decay factor (β); a good choice for oscillation detection factor $k_1$ is 16, and good weight update frequency factor $k_2$ can be from 3 to 5.
We used the default settings ($\beta=0.998, k_1=16,k_2=5$) for our reported pre-training and fine-tuning experiments, and found additional enhancement from Q-EMA and Q-Ramping across different architectures and tasks (DeiT/Swin pre-training & ViT fine-tuning). These results further confirm the robustness and generalization ability of our oscillation-reduction methods.
*Table 2: Insensitivity to hyperparameters (TetraJet + Q-EMA) on DeiT-B.*
| $\beta$ | 0.993 | 0.995 | 0.997 | 0.998 | 0.999 | 0.9995 | w/o Q-EMA |
| ------- | ----- | ----- | ----- | ----- | ----- | ------ | --------- |
| Acc% | 75.39 | 76.37 | 77.23 | 77.18 | 77.32 | 77.30 | 74.91 |
*Table 3: Insensitivity to hyperparameters (TetraJet + Q-Ramping) on DeiT-B.*
| $k_1$ | 16 | 16 | 16 | 16 | 16 | 16 | 8 | 12 | 16 | 20 | w/o Q-Ramping |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $k_2$ | 3 | 4 | 5 | 6 | 7 | 8 | 5 | 5 | 5 | 5 | |
| Acc | 75.35 | 75.33 | 75.62 | 74.96 | 75.29 | 75.13 | 75.19 | 75.60 | 75.62 | 74.85 | 74.91 |
[1] https://docs.nvidia.com/cuda/pdf/ptx_isa_8.7.pdf
[2] S.-Y. Liu, Z. Liu, and K.-T. Cheng, "Oscillation-free quantization for low-bit vision transformers," ICML, PMLR, 2023.
[3] https://github.com/facebookresearch/mae
---
Rebuttal Comment 1.1:
Comment: Thank you for your prompt reply and for the additional experiments. I will maintain my original score. | null | null | null | null | null | null | null | null |
Enhancing Treatment Effect Estimation via Active Learning: A Counterfactual Covering Perspective | Accept (poster) | Summary: This paper proposes a data-efficient method to construct a regression model for estimating the mean difference between the treatment and control responses. Specifically, the author upper-bounds the evaluation error and proposes two radius-based approaches to minimize the upper bound with a limited label budget.
Claims And Evidence: My primary concern regarding the paper is that the proposed approaches bear a stronger resemblance to coreset learning than to active learning. In active learning, users interact with the real world, acquire responses (i.e., labels), and utilize these responses to refine the learning process. However, as seen in Algorithm 1 and Eq. (8), the core of the proposed approach focuses on finding a coreset that covers the samples in the treatment and control groups. In other words, there is no interactive learning in which the treatment/control responses are contributory.
Methods And Evaluation Criteria: The authors evaluate the performance of the proposed algorithm across various label budgets and compare them with multiple baseline. This makes sense.
Theoretical Claims: I did not check the correctness of proofs. However, I have a couple of concerns regarding the theory.\
(1) **Theorem 3.3**: What is $\mathcal{R}$? What is the hypothesis class $H$ comprised of? Does $H$ contain the regression functions used to model the relationship among $x$, $t$, and $y^t$?\
(2) Corollary 3.4: Shouldn't this be a $(1-\gamma)$ probability-type result? Since the response $y$ is noisy, there is a chance that even when $\gamma$ is small, $\Delta$ could still be large.
Experimental Designs Or Analyses: I did a bare-minimum check on the experimental results, as the methodology part raises significant concerns.
Supplementary Material: I reviewed Algorithm 1 in Appendix to try to figure out the data query scheme.
Relation To Broader Scientific Literature: The paper states that the proposed method can be applied to evaluate the difference between treatment and control responses in a label-efficient manner.
Essential References Not Discussed: I am not aware of.
Other Strengths And Weaknesses: I think that lines 6 and 7 in Algorithm 1, which involve searching for points, are critical. However, the authors have included these parts in the appendix, and it seems they did not provide the intuition behind these queries in the main paper. It would be helpful to include an explanation of what data should be selected to help reduce the maximum radius.
Other Comments Or Suggestions: In Assumption 2.3, $T$ should be used to indicate a random variable and $t$ to indicate its realization, e.g., $p(Y=1\mid x)$.
Questions For Authors: Please see the Theoretical Claims.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Active Learning vs. Coreset Learning:** The problem setting we described in lines 114-127 (left column) is an active learning loop that consists of 5 steps, where in Step 4 we identify a subset from the completely unlabeled pool set $\mathcal{D}=\{(\mathbf{x}_ {i},t_ {i})\}^{N}_ {i=1}$ for the oracle to obtain the labels (i.e., treatment outcome $y_i$). The newly labeled samples are then used to refine the estimation model in Step 5. As such, our setting differs from coreset learning that operates with all samples' labels $y_i$. As our labels are actively acquired, Algorithm 1 corresponds to Step 4 (unlabeled subset identification from $\mathcal{D}$) of our problem setting. In Eq.(8), though it is formulated using the labeled training sets $\mathcal{S}_ {0}$ and $\mathcal{S}_ {1}$, the calculation of Eq.(8) only requires covariate $\mathbf{x}_i$ and its corresponding treatment assignment $t_i$ (lines 202-203 right column), hence are applicable to all samples in the unlabeled pool set $\mathcal{D}$. In both Eq.(8) and Algorithm 1, we slightly abused the notation $\mathcal{S}$ to make the notations less cluttered, as their calculations do not actually rely on the access to treatment outcome labels $y_i$. To avoid potential confusion, we will update the description and notation in corresponding parts to highlight the active learning setting.
**$\mathcal{R}$, and Class $\mathcal{H}$:** Thanks for pointing out the typo "$\mathcal{R}$'', we will correct it to the feature space "$\mathcal{X}$''. The family of functions $\mathcal{H}$={$h \mid h: \mathcal{X} \rightarrow \mathbb{R}$} does contain the regression model $f$ as part of the function $h$. As per Theorem 3.3, we have $h_{f}(\mathbf{x},t):=\frac{1}{\kappa}l_{f}(\mathbf{x},t)\in \mathcal{H}$ ($t$ is a constant for any given $\mathbf{x}$), which contains the loss function $l_{f}$ to evaluate regression model $f$. **The rationale for defining $\mathcal{H}$ is that** the constant $\kappa_{\mathcal{H}}$ in our derived bound is dependent on $\mathcal{H}$. As per line 966, the term in Eq.(32c) is further upper-bounded by Integral Probability Metric (IPM) under the premise that $h_{f}(\mathbf{x},t)$ is in $\mathcal{H}$, which is a useful bounding technique also seen in (Shalit et al., 2017).
**Corollary 3.4 and Noisy Response $y$:** Thanks for improving the completeness of the Corollary, which should be claimed with probability at least $1-\gamma$ since it is a further simplification of Theorem 3.3. Its main idea is to reveal which optimizable terms are closely related to risk convergence. For the gap $\Delta$, we follow the common practice in (Sener \& Savarese, 2018) by assuming zero training loss and noiseless observation for theoretical analysis purposes. Under the active learning setting, it is reasonable that the labels obtained from the oracle are sufficiently reliable, i.e., $y_i$ has minimal noise.
**How We Query in Algorithm 1:** As stated in lines 215-219 (right column), the intuition behind the query process is to greedily prioritize the reduction of the largest radius, which is a commonly used intuition also seen in (Tsang et al., 2005; Sener \& Savarese, 2018) for the $k$-Center problem. As per Eq.(8), each of the four radii is defined in lines 202-204 (right column), thus, we just need to query the point (no label) that constitutes the radius $\delta$, and the query is explicitly defined in line 6 of the full Algorithm 1 (Appendix C) if we need to reduce $\delta_{(1,1)}$. The visualizations in Figure 1 are a graphical expression of Algorithm 1's intuition, i.e., the point at the other end of the radius is the next acquisition goal. Notably, Algorithm 1 is a naive instantiation of the covering radius-based active learning and comes with a strong (yet impractical) assumption on fully overlapping data distributions (lines 232-234 right column). Thus, we made its description relatively brief to pave the way for our proposed main algorithm FCCM (Algorithm 2), which relaxes the full coverage constraint to achieve stronger radius reduction under the same labeling budget. We will add this explanation when introducing Algorithm 1 to provide more intuitions.
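As context for the query rule described above, the greedy step that repeatedly queries the point realizing the current largest radius can be sketched as a k-Center-style (farthest-first) loop. This is an illustrative re-implementation, not the authors' code; the Euclidean metric and function names are assumptions:

```python
import numpy as np

def greedy_radius_reduction(X, labeled_idx, budget):
    """Greedily query the point at the far end of the current largest
    covering radius, in the style of k-Center core-set selection
    (cf. Sener & Savarese, 2018).

    X           : (n, d) array of covariates in the pool.
    labeled_idx : indices already acquired (the covering centers).
    budget      : number of additional points to query.
    """
    labeled = list(labeled_idx)
    for _ in range(budget):
        # distance of every pool point to its nearest labeled center
        d = np.min(
            np.linalg.norm(X[:, None, :] - X[labeled][None, :, :], axis=-1),
            axis=1,
        )
        # the point realizing the largest radius is queried next
        labeled.append(int(np.argmax(d)))
    return labeled
```

No outcome labels $y_i$ are needed in this loop, consistent with the rebuttal's point that the acquisition uses only covariates (and, in the treatment-aware variants, treatment assignments).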
**Random Variable $T$:** Thanks for pointing out the typo, we will correct $t$ to $T$ where needed.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I will withhold my judgment on the score for now, primarily because the labels $y$ have not been incorporated into the acquisition function in Eq. (8). While selecting informative points based solely on covariates and treatment assignment may provide an improvement over random querying, it is far less effective than actively involving the labels $y$ in the decision process. In fact, a key advantage of active learning lies in its ability to avoid querying points that are unlikely to provide additional value based on the observed labels. I think an important consideration is missing from the proposed approach.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate reviewer CkNS's responsive reply and valuable time during the discussion phase. We address the raised concerns as follows:
**Unlabeled data selection with existing labels $y_{i}$ in the training set:** We would first address this concern by distinguishing between **Uncertainty-based Querying** and **Diversity-based Querying**. In uncertainty-based querying, e.g., BALD (Gal et al., 2017), BAIT (Ash et al., 2021), Causal-BALD (Jesson et al., 2021b), the label $y_{i}$ is used for querying further unlabeled samples because the proposed methods are **model-dependent**: they require training an uncertainty-aware model on the existing labeled training set, so further querying of unlabeled samples involves the uncertainty output of the trained model. Moreover, model-dependent methods require accurate uncertainty estimation; please refer to **Q1** for reviewer **hAfw** for elaboration on the drawbacks of model-dependent methods. **However**, diversity-based querying is **model-independent**, e.g., Core-Set (Sener \& Savarese, 2018), ProbCover (Yehuda et al., 2022), MACAL (Wen et al., 2024), which queries unlabeled samples from diverse regions of the data distribution purely based on the covariates (and treatment, if for causal inference), and Eq. (8) is a classical form of core-set querying without labels. **Note** that our task focuses on regression, as stated in line 025 (right column), where labels are distributed in a continuous space such that each unique sample $\mathbf{x}_{i}$ is assigned a unique label $y_i$, so diversity-based querying in the feature space naturally guarantees diversity in the label space.
**Why Diversity-based querying in our paper?** We would like further to address the method of choice in our paper. **It is noted that our paper is theory-based**. Our main theorem -- Theorem 3.3 suggests that the incalculable estimation risk PEHE is upper-bounded by four computable radii, **which unveils the independence from the label information under our theoretical framework and the proposed theorem deeply motivates our algorithm designed to reduce the covering radii**. Thus, to reduce the key terms in the risk upper bound, we first propose a straightforward radius reduction method in Algorithm 1. By considering the limitation of Algorithm 1, we then further propose Algorithm 2 to accelerate the radius reduction specifically tailored for the treatment effect estimation setting. In summary, incorporating the label information for querying depends on the choice of the method, and most importantly, our proposed interesting and intuitive theorem for the treatment effect estimation problem with active learning unveils the independence of the label information in the risk upper bound.
We will incorporate the discussion to update our paper regarding the use of the label information and we kindly welcome any further concerns from reviewer CkNS. | Summary: This paper tackles causal effect estimation in active learning, where only features and treatments are known, but outcome labeling is costly and incomplete. To ensure balanced label expansion, it bounds generalization error using factual and counterfactual covering radius. Starting with a greedy approach, it develops the FCCM algorithm, optimizing coverage in real-world data. Experiments on benchmarks show state-of-the-art performance in causal effect estimation.
Claims And Evidence: The claims are supported by the evidence.
Methods And Evaluation Criteria: The method is developed by extending the covering radius concept to the counterfactual sample setting, which in my view is a neat and intuitive solution to the problem. The evaluation adapts major benchmark datasets in causal effect estimation to the active learning setting, and covers a variety of strong baselines. Thus, the methodology and experiment sections are rigorous.
Theoretical Claims: (1) In the proof of Theorem 3.3, it would be useful to explicitly describe the magnitude of $\lambda$, such that the probability of $1-\lambda$ is sufficiently large to be practical.
(2) In the proof of Lemma A.3, the acronym SUTVA should be expanded as it is not introduced elsewhere.
Experimental Designs Or Analyses: The experimental results show a promising level of performance gain from the proposed FCCM algorithm. The settings are rational and the radius hyperparameters are provided, where the algorithm is made available for reproducibility. On top of the quantitative results, I find the visualizations in Figure 6 also helpful for showing FCCM’s advantages.
Supplementary Material: All materials are checked and correct.
Relation To Broader Scientific Literature: This paper resolves the issue of limited labeled data in the causal inference space, which aligns with real-world challenges when building data-driven models under data scarcity due to cost and/or privacy concerns. Hence, the relevance to the broader of causal inference is high.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths
1 This paper clearly differentiates the problem it studies with another line of work in data-efficient causal inference, which is active experiment design. The practical scenarios where only the treatment effect (but not treatment) is unknown is well motivated and presents some important challenges unique to the problem.
2 The solution approaches treatment-aware data selection/labeling from an inter- and intra-group covering radius perspective, which is a technically sound and innovative solution.
Weakness
1 In terms of the experimental setting, as each dataset has 50 acquisition steps, it might help demonstrate the performance trend of FCCM if results at more budget levels were shown. For example, a figure similar to Figure 5 could be added to the Appendix to showcase the prediction performance with a finer-grained increment (e.g., 10% or even 2%).
2 The presentation of the ablation study results in Table 2 is somewhat untraditional. It would be more intuitive to directly report the PEHE values of both variants, with the calculated percentage difference as an additional reference.
3 The active learning strategy needs to maximize a synergic objective with four covering radii (two inter-group and two intra-group) combined, where the authors state that all four radii share the same setting. It is worth discussing whether or not using a uniform hyperparameter setting for all four radii is the best possible practice.
4 In Figure 5, the reported results for both TOY and CMNIST datasets start from 0% budget, while IBM starts from 20%. Is it because the error at zero budget is too large for IBM? Since the causal estimator is the same across all active learning methods, the results at 0% are in fact less informative and can be removed for TOY and CMNIST.
Other Comments Or Suggestions: Typos/presentation issues:
1 Line 380, “a fine granularity results” should be “fine-granularity results”
2 Line 845 in Appendix, “distributions” is mis-spelled
3 Line 957 in Appendix, the meaning of “by change of variable reverting y to x” is unclear
Questions For Authors: 1 The Introduction mentions a real-world application for “collecting customer preferences from those who have already received different services”. The applicability of the studied problem to this example is not very clear to me; can you explain with more details?
2 How does FCCM’s performance look like when using a finer-grained increment of the labeling budget?
3 Any insights into sharing the same hyperparameter value among all four radii? In other words, in what scenarios will different radii values help?
4 Is there a reason that results for IBM with 0% budget are omitted?
5 Can the authors provide the PEHE results of FCCM and FCCM- from the ablation study?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1:** For instance, when performing A/B tests to compare two software versions, the service provider needs to assign two distinct user groups to different versions (i.e., treatments). To understand which software version offers a better experience for a specific user profile (i.e., features), it is necessary to collect users' feedback via customer surveys. Though the features and treatments associated with each user are known to the service provider, collecting all treatment outcomes -- users' satisfaction levels is time-consuming and economically unrealistic (e.g., paid surveys). In this case, active learning can be used to select a small subset of the most informative users, from whom the treatment outcomes can be obtained at a much lower cost.
**Q2+W1:** We provide the figures under the 2\% increment evaluation scheme, which is the most fine-grained setting based on the 50 query steps in total in the anonymous link https://anonymous.4open.science/r/To-Reviewer-FYYH (click into folder ``To-Reviewer-FYYH''). Note that on the IBM dataset, all methods yield an impractically large prediction error at the initial acquisition steps (<20\% budget), thus we show the results from 20\% for better clarity.
**Q3+W3:** Our main rationale is to avoid making the search space prohibitively large -- up to $\mathcal{O}(n^4)$ if each radius has $n$ candidate settings. The key insight to reduce the search complexity is that, because $P(\mathcal{A})=\frac{1}{4}(P(\mathcal{A}^{t=1}_ {F})+P(\mathcal{A}^{t=0}_ {F})+P(\mathcal{A}^{t=1}_ {CF})+P(\mathcal{A}^{t=0}_ {CF}))$, each radius is **independent** and each of the sub-terms increases monotonically with the radius, then so does the mean coverage $P(\mathcal{A})$. Thus, by the independence and monotonicity, the search space greatly reduces to $\mathcal{O}(n)$ while the search of four radii stays in the same direction, thus we let the four radii share the same setting. **When will different radii help?** Though the shared value among four radii allows for effective and efficient hyperparameter tuning, it could potentially fail if the distribution discrepancy between the two treatment groups is very large. This is because the counterfactual covering radius can be $\delta_{(t,1-t)}\gg\delta_{(t,t)}$ for the counterfactual coverage $P(\mathcal{A}^{t}_{CF})$ to get close to full, lowering the utility of the uniform radii value. As such, if the distribution discrepancy between treatment groups is large, we can then identify a different value for each covering radius to maintain the high coverage under the linear complexity due to the independence and monotonicity. We will add this discussion to an updated version of our paper.
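The independence-and-monotonicity argument above can be illustrated with a small sketch: each of the four coverage terms is the fraction of pool points within radius $\delta$ of a labeled center of the relevant treatment group, so each term (and hence their mean $P(\mathcal{A})$) is non-decreasing in $\delta$, allowing a linear scan over a shared radius grid. Function names and the Euclidean metric are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def coverage(X, t, labeled_idx, delta):
    """Mean coverage P(A) as the average of four per-group terms.

    A pool point is covered factually if it lies within `delta` of a
    labeled point with the same treatment, and counterfactually if it
    lies within `delta` of a labeled point with the opposite treatment.
    """
    terms = []
    for grp in (0, 1):               # treatment group of the pool points
        pool = X[t == grp]
        for src in (grp, 1 - grp):   # factual vs. counterfactual centers
            centers = X[[i for i in labeled_idx if t[i] == src]]
            if len(centers) == 0 or len(pool) == 0:
                terms.append(0.0)
                continue
            d = np.linalg.norm(pool[:, None] - centers[None, :], axis=-1)
            terms.append(float((d.min(axis=1) <= delta).mean()))
    return sum(terms) / 4.0
```

Because each term only grows as `delta` grows, tuning a shared radius value reduces to a one-dimensional search, matching the $\mathcal{O}(n)$ complexity argument above.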
**Q4+W4:** The reason for omitting 0\% in IBM is that the estimation risk at the first few steps is too large to be practical for all methods. Thus, the reported results start from 20\% for clarity. We will append a more explicit explanation on the figure caption to eliminate potential confusion.
**Q5+W2:** Thanks for the suggestion, we will update the table to show the PEHE results for both FCCM and FCCM$^-$ in our paper. Please check the anonymous link https://anonymous.4open.science/r/To-Reviewer-FYYH (click into folder ``To-Reviewer-FYYH'') for the detailed results.
**Probability Threshold $\lambda$ ($\gamma$ in Our Paper):** We will add a discussion of the magnitude of the probability threshold $\gamma$. It is noted that higher confidence gives a larger upper bound due to the increase in the third term of the bound, but it remains a constant during our optimization.
**SUTVA:** We will spell it out in full, namely the Stable Unit Treatment Value Assumption (SUTVA).
**Typos:** Thanks for pointing them out, and we will correct them correspondingly. For ``by change of variable reverting y to x'', we intended to express that the integral over the domain $\mathcal{Y}$ is changed to the domain $\mathcal{X}$ by the change of variable. We will modify the description to clear the confusion. | Summary: This paper proposes an active learning (AL) approach tailored to enhance the treatment effect estimation from observational data, when labeling outcomes is costly. The authors introduce a theoretical formulation using the concepts of factual/counterfactual covering radii, to upper bound the (fundamentally incalculable) population-level risk --- expected precision in estimation of heterogeneous effect (PEHE), defined in Eq. (3) --- effectively by the sum of four factual/counterfactual covering radii, up to a multiplicative constant and lower-order terms, cf. Eq. (4) and Corollary 3.4.
Two distinct greedy algorithms to minimize the covering radii are introduced: (1) A direct radius-reduction algorithm (Algorithm 1 in Section 4.1) motivated by traditional core-set techniques, and (2) the factual and counterfactual coverage maximization (FCCM) method (Algorithm 2 in Section 4.2), which seeks a more balanced and effective radius reduction through maximizing coverage under given radii constraints. Empirical experiments conducted on synthetic, semi-synthetic, and image-based datasets demonstrate that the FCCM approach notably outperforms established baselines by systematically prioritizing samples from regions of high density and significant overlap between treatment groups.
Claims And Evidence: The main claims---(1) a counterfactual covering perspective provides a tractable risk upper bound, and (2) the proposed FCCM enhances estimation performance and improves theoretical risk upper bounds---are well-supported by empirical evidence provided through experiments. The choice of datasets (TOY, IBM, and CMNIST) effectively demonstrates the general applicability and performance advantages of the FCCM approach. Nonetheless, a deeper discussion linking theoretical guarantees to empirical outcomes, especially considering variations in dataset properties such as the degree of overlap between treatment groups, would enhance the validity of these claims. While the empirical evidence is strong, a more explicit acknowledgment of potential failure modes or limitations of the proposed methods could strengthen the authors’ analysis and presentation.
Methods And Evaluation Criteria: The chosen methods and evaluation criteria---e.g., the PEHE metric---are appropriate and commonly accepted within the causal inference community. The diversity of datasets (synthetic, semi-synthetic, and image-based) effectively demonstrates methodological versatility. However, more explicit explanations and motivations for the chosen datasets in the main text would improve clarity. While Appendix C.2 provides some context, essential details such as dataset characteristics (e.g., types and ranges of covariates and response variables) and preprocessing steps remain insufficiently clear.
Theoretical Claims: I reviewed the main theoretical claim (Theorem 3.3) and associated lemmas (A.2, A.3, and A.5). While I did not verify every line in the proofs, the theoretical derivations appear generally sound, utilizing standard techniques. However, assumptions such as strong ignorability and Lipschitz continuity, though commonly used in the literature, deserve more explicit justification regarding their practicality. Moreover, clarifying and simplifying the presentation of assumptions and key theorem statements would significantly improve readability and comprehension.
Experimental Designs Or Analyses: The experimental designs are sound, leveraging suitable baselines including QHTE, Causal-BALD, MACAL, LCMD, BADGE, and BAIT, effectively situating the proposed method within existing literature. Nevertheless, the robustness of the FCCM algorithm could be more comprehensively assessed through sensitivity analyses, particularly concerning hyperparameters such as the radii parameter $\delta_{t,t'}$ (for $t, t' \in \{0,1\}$) and the weight parameter $\alpha$.
Supplementary Material: I reviewed the supplementary materials briefly, particularly the theoretical proofs in Section A. The provided proofs appear thorough and rigorous, but clearer linkage to the main text, as well as additional intuitive explanations, would enhance accessibility. The literature review in Section B looks fine.
Relation To Broader Scientific Literature: The paper situates its contributions within the broader literature, linking clearly to existing AL methods such as BALD and core-set approaches, as well as contemporary causal inference methods like QHTE and MACAL.
Essential References Not Discussed: The discussion of related literature provided in Appendix B seems mostly sufficient.
Other Strengths And Weaknesses: * Strength: The theoretical framework based on factual and counterfactual covering radii is innovative and potentially valuable for future methodological development.
* Weakness: The paper's presentation could be significantly improved to enhance clarity and readability, particularly in defining key concepts and motivating algorithmic choices.
Other Comments Or Suggestions: The paper presents an interesting idea and a decent contribution; however, I believe its readability can be significantly enhanced. Below I list some suggestions.
* Definitions could be streamlined to improve clarity and reduce redundancy. For example, Definitions 3.1 and 3.2 could be merged to define the covering radius $\delta_{t,t'}$ generically, which can then be specialized to define the factual/counterfactual radii. The same comment also applies to Definitions 4.2 and 4.3.
* The statement of Theorem 3.3 is currently dense and challenging to follow. Explicitly separating and clearly stating assumptions outside the theorem (e.g., by defining "Assumption" environment) would significantly aid comprehension. I would also suggest the authors further discuss the practical implications and limitations of the theoretical assumptions explicitly.
* The description of Section 4.2 is unclear and confusing. I would suggest the authors revise description more precisely and compactly, e.g., by defining quantities and sets, e.g., $\delta_{\mathrm{sum}}$, $\mathcal{A}$, $P(\mathcal{A})$ explicitly using mathematical expressions instead of vaguely defining them in English.
* Sentences throughout the manuscript often tend to be overly long and verbose. Breaking these into shorter, clearer sentences would enhance readability and improve interpretability.
* Additional Minor comments
- Please consider revising the captions more informatively. For example: (1) please add "by Algorithm 1" in Figure 2; (2) I think the caption of Figure 3 should contain "Algorithm 2" instead of "Algorithm 1"; and (3) please add "by Algorithm 2" in Figure 4.
- I think Algorithm 2 is missing an input parameter $\alpha$.
Questions For Authors: 1. Could you elaborate on the following sentence you wrote in Section 5.1: "For example, although $\mu\rho$BALD bias the data acquisition toward the overlapping region, it does not further embed the property to query from the high-density region, and thus accounts for less risk" and explain the mechanism how FCCM specifically addresses distributional discrepancies compared to existing methods like $\mu\rho$BALD, particularly in scenarios with significant non-overlap? I am asking this question because (1) Figure 6-(c) seems to suggest that $\mu\rho$BALD indeed places some points outside the overlapping region, instead of "biasing the data acquisition toward the overlapping region," and (2) I want to gain intuitive understanding on how FCCM (Algorithm 2) addresses the challenge.
2. How sensitive is FCCM to the choice of hyperparameters, particularly the radii parameter $\delta_{t,t'}$'s and the weight parameter $\alpha$? More details or experiments here would significantly enhance confidence in the robustness of your method.
3. Can you discuss the implications on your theoretical guarantees and empirical results if key assumptions, such as strong ignorability and Lipschitz continuity, do not hold in practice?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Potential Failure Modes:** FCCM is designed to better handle the partially overlapped data for a quicker bound reduction while maintaining high coverage (Figure 3). As such, in scenarios where the two treatment groups have no overlapping regions (e.g., biased treatment assignment), the data acquisition of FCCM will be challenged as there are no overlapping, counterfactual pairs to be identified. One possible remedy is to operate FCCM in the latent space where the inter-group distributions are aligned by methods like (Zhang et al., 2020; Wang et al., 2024), but it also offsets the model-independent advantage of FCCM. We will add this discussion in an updated version.
**Practicality of Strong Ignorability (SI):** It is noted that the focus of this paper is active label acquisition; it can then be integrated with a selection bias-aware method to make predictions. The validity of SI can be approximated by carefully selecting sufficient relevant covariates and constructing a more balanced dataset.
**Practicality of Lipschitz Continuity:** Theorem 3.3 assumes the Lipschitzness of the $p^{t}(y|\mathbf{x})$, this can be a practical assumption if the regression model $f$, e.g., a well-regularized neural network (NN), learns a smooth mapping from $\mathbf{x}$ to $y$, which implies the Lipschitzness for $p^{t}(y|\mathbf{x})$. Also, the NN $f$ is differentiable w.r.t. $\mathbf{x}$, thus the squared loss $l_{f}$ is also bounded and sufficiently differentiable w.r.t. $\mathbf{x}$, further implying the Lipschitzness of $l_{f}$.
**Q1:** **Elaboration on The Sentence:** Despite the fact that $\mu\rho$-BALD ($\mu\rho$) highlights searching for samples within the non-overlapping region, its acquisition criterion naturally prefers samples that lead to higher estimated uncertainty. **However**, the criterion design of $\mu\rho$ is not density-aware. For example, in early acquisition steps, two unseen data points $a$ and $b$ can be of the same uncertainty, while $a$ is an outlier and $b$ belongs to a dense (not yet seen) cluster. Then, prioritizing the acquisition of $b$ over $a$ can help generalize the estimator to more samples, which means training on $b$ can eliminate more estimation risk when data is limited. However, $\mu\rho$ does not differentiate the importance of $a$ and $b$, and thus "accounts for less risk'' in such cases. **How FCCM addresses this challenge:** Pertaining to the previous example, the factual covering from FCCM can prioritize the acquisition of $b$ over $a$, as $b$ has more neighbors (i.e., higher out-degree) than $a$ in the graph constructed in Algorithm 2. To enhance the distribution alignment, FCCM makes use of the weight $\alpha$ imposed on the counterfactual edges (line 308 left column). For example, assume $c$ is in the same treatment group as $b$, while $b$ and $c$ have similar numbers of neighbors in the graph. Then, if $c$ has more neighbors from the counterfactual group than $b$ does, the priority of $c$ is higher than that of $b$ owing to the additional bonus (with $\alpha>1$) on counterfactual edges when calculating the out-degree. For cases with significant non-overlap (almost no overlap), please refer to our response on potential failure modes. **Why $\mu\rho$ queries non-overlapping samples in Figure 6(c):** As stated in line 85, $\mu\rho$ relies heavily on the accuracy of the estimated uncertainty (model-dependent), which can be erroneously high for certain samples outside the overlapping region, boosting the acquisition metric score. Thus, a few such samples are selected, as per Figure 6(c). In contrast, FCCM is model-independent, and the acquisition score is robustly computed via the out-degree of nodes (Algorithm 2).
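The $\alpha$-weighted out-degree scoring described in this reply can be sketched as below. This is a hypothetical re-implementation of the scoring idea only; the paper's Algorithm 2 additionally updates the graph (e.g., removing covered nodes) between acquisitions:

```python
import numpy as np

def out_degree_scores(X, t, delta, alpha=2.5):
    """Acquisition score = number of pool neighbors within `delta`,
    with counterfactual-group neighbors up-weighted by `alpha` (> 1).

    X : (n, d) array of covariates; t : (n,) array of treatments.
    """
    X, t = np.asarray(X), np.asarray(t)
    # pairwise distances and the delta-neighborhood graph (no self-loops)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    nbr = (d <= delta) & ~np.eye(len(X), dtype=bool)
    same = t[:, None] == t[None, :]
    # factual edges count once; counterfactual edges get the alpha bonus
    return (nbr & same).sum(1) + alpha * (nbr & ~same).sum(1)
```

In this sketch, a node with many counterfactual neighbors (like $c$ in the example above) outranks an otherwise comparable node whose neighbors are mostly factual, which is the mechanism biasing acquisition toward dense, overlapping regions.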
**Q2:** Please see the anonymous link: https://anonymous.4open.science/r/To-Reviewer-hAfw for sensitivity analysis results on $\delta$ and $\alpha$ (click into folder "To-Reviewer-hAfw''). Note that the acquisition on treatment sample $t=1$ is insensitive on $\delta_{(0,0)}$ and $\delta_{(0,1)}$ in our setting, as all control samples ($t=0$) are seen. For $\alpha$, our setting of $\alpha=2.5$ has an overall lower error across different acquisition budgets. We will include all results in an updated version.
**Q3:** **Theorem 3.3 (Lipschitz continuity):** If the Lipschitzness does not hold, the multiplicative constant will be unbounded, and thus the reduction of the radii may not help control the risk upper bound. **Theorem 4.1 (SI):** SI provides an ideal scenario for acquiring the counterfactual samples, where a quick bound reduction by Algorithm 1 is seen in Figure 2(a). If it cannot be guaranteed in real-world data, e.g., CMNIST, the reduction of the bound will be significantly slower and less effective for Algorithm 1, as shown in Figure 2(b). Thus, it motivates us to propose Algorithm 2 which can handle compromised data distributions more effectively via a slight trade-off in coverage.
**We will adopt all the constructive suggestions to update the paper.**
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. With the proviso that the authors will incorporate appropriate revisions to clarify the discussed aspects, I have increased my evaluation rating from 2 to 3.
---
Reply to Comment 1.1.1:
Comment: We again deeply appreciate Reviewer hAfw’s valuable time and constructive suggestions, which have been instrumental in helping us further improve the paper. We will update the paper accordingly. | Summary: This paper tackles the problem of active learning for treatment effect estimation, where the task is to label a treated or untreated unit given a fixed budget. The authors take the core set approach and extend it by introducing a counterfactual covering radius along with the factual covering radius. An algorithm FCCM is then introduced that chooses to label data points to minimises the expected factual and counterfactual coverings. Through quantitive experiments, the authors show that their method reduces the PEHE metric faster that other methods. Qualitatively, the authors also show that their method is better at choosing data points from the overlap of treatment groups, which is more informative of the treatment effect.
## update after rebuttal
I thank the authors for their response, I will keep my already positive score.
Claims And Evidence: The claims seem to be supported as their method does outperform previous active learning schemes. It is also clear why it is performing better (better acquisition in the overlap region).
Methods And Evaluation Criteria: The benchmarks are standard for these tasks and PEHE is a standard metric for ITE.
Theoretical Claims: The proofs seem correct.
Experimental Designs Or Analyses: The experiments seem sound.
Supplementary Material: The proofs and the experimental details.
Relation To Broader Scientific Literature: Although the idea of using covering sets for active learning is not new, using it for the counterfactual covering seems to be novel. It is also demonstrated that this leads to clear performance improvements.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: - What is Gamma in L228? I don't believe this is defined anywhere.
- I don't really understand what Section 4.2 is trying to solve. It is shown that the covering depends on the data distribution in Section 4.4, but statements such as "approximating the full coverage given the relatively smaller fixed covering radius" (L238 RHS) do not make sense to me.
- This problem is compounded as I don't believe P(A^t_F) (and other variants) are properly defined; are these the normalised means over the corresponding covering radii?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q1:** $\Gamma$ is a pseudo-operator defined in the "Input" section of Algorithm 1, i.e., $\Gamma=\arg\max\min d(\cdot,\cdot)$. We use $\Gamma$ as a shorthand for the distance-based radius defined underneath Eq. (8) (line 203, right column) to keep Algorithm 1 concise. Taking line 6 of Algorithm 1 as an example: to reduce the radius $\delta_{(1,1)}$, applying $\Gamma$ as the selection criterion, the sample selected from $\mathcal{D}_{1}\setminus\mathcal{S}_{1}$ is $a=\arg\max_{a'\in\mathcal{D}_{1}\setminus\mathcal{S}_{1}}\min_{j\in \mathcal{S}_{1}} d(\mathbf{x}^{t=1}_{a'},\mathbf{x}^{t=1}_{j})$, where $\mathcal{D}_{1}$ and $\mathcal{S}_{1}$ are, respectively, the unlabeled and labeled sets for $t=1$.
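The $\Gamma$ rule described above is the classic greedy max-min (coreset) selection step; a minimal sketch, with the helper name `gamma_select` and Euclidean distance assumed for illustration:

```python
import numpy as np

def gamma_select(unlabeled, labeled):
    """One greedy max-min step: Gamma = argmax_a min_j d(x_a, x_j),
    i.e., the unlabeled point farthest from its nearest labeled point."""
    dists = np.linalg.norm(unlabeled[:, None, :] - labeled[None, :, :], axis=-1)
    nearest = dists.min(axis=1)   # min_j d(x_a, x_j) for each candidate a
    return int(nearest.argmax())  # argmax over candidates

labeled = np.zeros((3, 2))                                  # labeled set S_1
unlabeled = np.array([[0.1, 0.0], [5.0, 5.0], [0.2, 0.1]])  # pool D_1 \ S_1
picked = gamma_select(unlabeled, labeled)                   # farthest candidate
```

In this toy example the point at `[5.0, 5.0]` is selected, since its distance to the nearest labeled point is largest.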
**Q2:** As per line 273 (left column), the informative bound in Theorem 3.3, consisting of four covering radii, is derived under full coverage, i.e., all actively acquired samples should together cover 100\% of the full data space (we define full coverage mathematically in Q3). Thus, the straightforward solution in Algorithm 1 strictly maintains the 100\% coverage requirement using the currently smallest radii (in Definitions 3.1 and 3.2) at every acquisition step, while progressively minimizing the four radii via newly acquired samples. However, due to this full-coverage requirement, Algorithm 1 cannot work effectively on real-world datasets where inter-group distributions do not align well (Figure 2 and lines 255-269, left column). Thus, in Section 4.2, we build Algorithm 2 (FCCM) upon Algorithm 1 as a solution. **Clarifying the statement:** Instead of computing the minimal radii as in Algorithm 1, Algorithm 2 fixes the radii at a small constant when performing data acquisition. Though the 100\% coverage requirement cannot be fully satisfied in early acquisition steps, as the acquisition expands we effectively increase the coverage of the data space. As shown in Figure 3(b), FCCM achieves up to a 25\% bound reduction compared to Algorithm 1 with only a 1\% coverage gap (99\% coverage). This is what we mean in line 238 (right column): given the relatively smaller fixed covering radius, we can approximate (approach) the full coverage. We will add more explanation to these parts to better motivate Section 4.2 in an updated version.
**Q3:** For each treatment group $t$, $P(\mathcal{A}^{t}_{F})$ is the proportion of data points from the unlabeled pool set $\mathcal{D}_{t}$ that have been covered by the training set $\mathcal{S}_{t}$ in the covering ball $\mathcal{A}^{t}_{F}$, i.e., $P(\mathcal{A}^{t}_{F})=\frac{|\mathcal{A}^{t}_{F}|}{|\mathcal{D}_{t}|}$. Analogously, $P(\mathcal{A}^{t}_{CF})=\frac{|\mathcal{A}^{t}_{CF}|}{|\mathcal{D}_{1-t}|}$; the mean coverage rate is then $P(\mathcal{A})=\frac{1}{4}(P(\mathcal{A}^{t=1}_{F})+P(\mathcal{A}^{t=0}_{F})+P(\mathcal{A}^{t=1}_{CF})+P(\mathcal{A}^{t=0}_{CF}))$. We will add this formal definition to complement our textual descriptions in lines 266-271 (right column) to make them easier to follow. A further note: for Algorithm 1, full coverage, i.e., $P(\mathcal{A})=1$, is strictly satisfied at every acquisition step, while Algorithm 2 (FCCM) maximizes the mean coverage rate $P(\mathcal{A})$ to approximate full coverage ($P(\mathcal{A})=1$) given the four small fixed covering radii.
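The coverage quantities above can be sketched directly; this illustrative snippet assumes Euclidean distance and, for simplicity, a single shared radius `delta` in place of the four separate radii:

```python
import numpy as np

def coverage_rate(pool, train, delta):
    """Fraction of pool points within distance delta of some training point."""
    d = np.linalg.norm(pool[:, None, :] - train[None, :, :], axis=-1)
    return (d.min(axis=1) <= delta).mean()

def mean_coverage(D1, D0, S1, S0, delta):
    """P(A) as the average of the two factual and two counterfactual rates."""
    return 0.25 * (coverage_rate(D1, S1, delta)    # P(A_F^{t=1})
                   + coverage_rate(D0, S0, delta)  # P(A_F^{t=0})
                   + coverage_rate(D0, S1, delta)  # P(A_CF^{t=1})
                   + coverage_rate(D1, S0, delta)) # P(A_CF^{t=0})

D1 = np.array([[0.0], [10.0]]); S1 = np.array([[0.0]])  # t=1 pool / labeled
D0 = np.array([[0.0]]);         S0 = np.array([[0.0]])  # t=0 pool / labeled
p = mean_coverage(D1, D0, S1, S0, delta=1.0)
```

In this toy case the far point in `D1` is uncovered both factually and counterfactually, so the mean coverage rate is 0.75 rather than 1.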
Context-Informed Neural ODEs Unexpectedly Identify Broken Symmetries: Insights from the Poincaré–Hopf Theorem | Accept (poster) | Summary: The paper introduces Context-Informed Neural ODEs (CI-NODEs), a framework designed to learn dynamical systems exhibiting bifurcation behaviors, particularly symmetry-breaking bifurcations. The authors claim that CI-NODEs, despite being trained solely on pre-bifurcation, symmetric data, can predict post-bifurcation symmetry-breaking behaviors without explicit physics-based priors. They attribute this phenomenon to the implicit use of topological invariants, specifically the Poincaré index, and provide a formal explanation via the Poincaré-Hopf theorem. Additionally, a novel topological regularization term inspired by this theorem is proposed and tested on the Landau-Khalatnikov system to enhance generalization.
Claims And Evidence: The claim that CI-NODEs can "identify" bifurcations and predict post-bifurcation behaviors is intriguing but lacks rigorous justification. The results indicate that the model sometimes hallucinates bifurcations, which contradicts the assertion that the approach reliably generalizes.
The use of the Poincaré-Hopf theorem to explain the model’s behavior is an interesting theoretical contribution, but the practical implications remain unclear. There is no empirical validation showing that the theorem correctly predicts when the model will succeed or fail.
The claim that the proposed regularization enhances generalization is weakly supported. The experiments show some improvements, but they lack robustness tests, such as evaluations across a broader range of bifurcation scenarios.
Methods And Evaluation Criteria: The experimental setup is well-structured but suffers from narrow validation. The datasets used for training and testing are limited to specific types of bifurcations, making it unclear how well the model generalizes to other dynamical systems.
The chosen evaluation metrics (e.g., Mean Squared Error, trajectory consistency) are appropriate but insufficient to fully assess model reliability. Additional evaluations such as robustness to noise, long-term stability, and sensitivity to initial conditions would be beneficial.
The ablation studies focus mainly on the impact of context-informed modeling but do not adequately assess individual architectural choices, such as the specific design of the fusion mechanism.
Theoretical Claims: The theoretical contribution is a highlight of the paper but remains somewhat speculative. While the Poincaré-Hopf theorem provides an insightful perspective, the connection between the theorem and the model’s emergent properties is not formally established.
There is no rigorous proof explaining why CI-NODEs should be able to generalize beyond training data in the specific manner observed. The argument based on topological constraints is heuristic at best.
The discussion of the relationship between NODEs and dynamical systems theory is useful but lacks depth in explaining how these theoretical insights could be leveraged to improve practical performance.
Experimental Designs Or Analyses: The experiments provide some compelling qualitative results, but the lack of quantitative robustness testing is a significant weakness.
The comparisons with baseline methods, such as traditional NODEs, are somewhat superficial. More rigorous benchmarking against established bifurcation detection techniques would strengthen the claims.
The zero-shot generalization experiments are interesting but suffer from a lack of statistical analysis. How often does the model correctly infer post-bifurcation behavior versus hallucinating incorrect structures?
Supplementary Material: No.
Relation To Broader Scientific Literature: The claim that this work offers new insights into OOD generalization in dynamical systems is overstated. While the results are interesting, they do not establish a broadly applicable principle for OOD learning.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: **Strengths**:
The paper presents a novel perspective on learning bifurcating dynamical systems using NODEs.
The use of topological invariants as an implicit learning signal is an interesting theoretical contribution.
The proposed topological regularization is conceptually novel and may inspire further research in constrained learning for dynamical systems.
**Weaknesses**:
The empirical validation is not rigorous enough to support the paper’s ambitious claims.
The theoretical arguments, while insightful, are largely heuristic and lack formal proof.
The experimental design does not adequately explore failure cases, making it difficult to assess the model’s reliability.
The paper is dense and difficult to follow, with crucial details buried in the appendix.
Other Comments Or Suggestions: (1) Provide statistical significance testing for key experimental results.
(2) Clarify how computational complexity scales with increasing problem size and longer time horizons.
(3) Conduct additional experiments to test the model’s robustness to noise and distribution shifts.
(4) Improve clarity by restructuring the theoretical discussion to make the key insights more accessible.
Questions For Authors: (1) Can you provide more evidence that CI-NODEs do not hallucinate bifurcations in cases where no symmetry breaking occurs?
(2) How does CI-NODE compare to physics-informed methods, such as PINNs, in terms of accuracy and interpretability?
(3) What guarantees, if any, can be provided regarding the reliability of CI-NODEs in predicting post-bifurcation behaviors?
(4) Have you tested CI-NODEs on higher-dimensional bifurcation problems, and if so, how does it perform?
(5) How does the choice of context representation affect the model’s ability to generalize?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s thoughtful and constructive feedback. Below, we address each comment carefully. Additional experimental results are available in the **README** at https://github.com/anonymous-account123/icml2025-7637 and will be thoroughly incorporated into our revised paper.
***
**Q1. Reliability of the zero-shot identification.**
We fully agree with your comment that evaluating the reliability of bifurcation identification is of critical importance. To statistically validate the robustness of zero-shot bifurcation identification, we conducted repeated experiments (five runs) using different random seeds and initial conditions. Consistently, the model reliably identified post-bifurcation behaviors, demonstrating statistical robustness and reproducibility (**Figure 1(a)** at the provided link).
In addition, we performed additional robustness tests under realistic scenarios, including noisy observations and limited training data. These challenging settings revealed that the model remains capable of accurately identifying symmetry-breaking bifurcations, albeit with slightly increased variance compared to the ideal noise-free scenario (**Figure 1(b-c)** at the provided link).
***
**Q2. Hallucinated bifurcations and empirical validation of the theorem.**
The hallucinated symmetry-breaking scenario in Section 4.3 is purposefully designed to illustrate the predictive capability of Proposition 4.1. In the analyzed linear system, a simple center-to-saddle bifurcation is expected if the model merely approximates the functional form. However, our empirical results demonstrate a spurious symmetry-breaking bifurcation, as predicted by Proposition 4.1, confirming that the model genuinely leverages topological constraints rather than simple functional approximation. We will clarify this critical interpretation in the revised paper as you suggested.
***
**Q3. How to detect and avoid hallucinated bifurcations.**
We completely agree that identifying whether a model hallucinates bifurcations is vital. We propose a straightforward yet robust criterion: evaluating the variance in bifurcation diagrams generated by multiple independent training runs. Correctly identified bifurcations show minimal variance, indicating stable and consistent identification. Conversely, hallucinated bifurcations yield significantly higher variance due to their structural instability. **Figure 2** at the provided link clearly demonstrates this differentiation approach.
***
**Q4. Other types of bifurcations or dynamical systems.**
Following your suggestion, we investigated a non-Hamiltonian dynamical system exhibiting a codimension-2 cusp bifurcation: $(\dot{x}, \dot{y}) = (y, b + a x - x^3 - d y)$, with variable parameters $(a, b)$ and fixed $d = 0.5$. This system serves as a canonical model for capturing catastrophic transitions and hysteresis phenomena, which are fundamental in science and engineering. Remarkably, despite being trained only on the simple, monostable pre-bifurcation regime $(a,b) \in [-2.0, -1.5, -1.0, -0.5]^2$, the model successfully identified the cusp bifurcation surface, validating the model's broader applicability. Detailed visualizations are available in **Figure 3** at the provided link.
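As a sanity check on the equilibrium structure of this cusp system (an illustration, not part of the authors' experiments), one can count the real roots of $b + a x - x^3 = 0$ across the parameter plane; the function name and tolerance are assumptions:

```python
import numpy as np

def n_equilibria(a, b):
    """Count real equilibria of x' = y, y' = b + a*x - x**3 - d*y.
    Equilibria satisfy y = 0 and -x^3 + a*x + b = 0 (independent of d)."""
    roots = np.roots([-1.0, 0.0, a, b])
    return int(np.sum(np.abs(roots.imag) < 1e-9))
```

Parameters from the training grid (e.g., $a=b=-1$) yield a single equilibrium, while the region inside the cusp ($4a^3 > 27b^2$, $a>0$) yields three, which is the qualitative change the model must identify.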
***
**Q5. Comparison with PINNs.**
We acknowledge that PINNs effectively utilize explicit physical priors. However, they become inapplicable if physical laws are unknown or incorrectly assumed. Our proposed approach circumvents this limitation by leveraging general topological invariants. For instance, models representing vector fields on a sphere (e.g., climate models) naturally adopt a global index constraint of $\chi(S^2) = +2$. Such universally applicable prior knowledge positions our approach as complementary and potentially advantageous in scenarios where explicit physical laws are unavailable. This point will be discussed in the revised manuscript.
***
**Q6. Computational complexity.**
Modeling $N$ environments using vanilla NODEs with $M$ parameters requires $N \times M$ parameters. Conversely, the CI-NODEs reduce this requirement to approximately $M(1 + K)$, where $K$ represents the context dimensionality, showing substantial parameter efficiency, particularly when $N > K$.
Regarding computational overhead from topological regularization, our experiments confirm a moderate increase in computation time (from 53.5 ms/epoch for vanilla models to 80.9 ms/epoch for regularized models under the same environment). Nevertheless, the regularized model benefits from improved learning stability and accordingly demonstrates faster empirical convergence, as shown in **Figure 4** at the provided link. We will clearly address this trade-off in the revision.
***
**Q7. Clarity of the paper.**
We appreciate your suggestions for clarity improvement. We will refine the main text and Appendix to highlight essential details clearly, and will include the additional discussions and experiments made during the rebuttal.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed rebuttal. I will increase my score.
---
Reply to Comment 1.1.1:
Comment: We are pleased that our response has addressed your concerns. We sincerely appreciate your thoughtful suggestions and kind recognition of our work. | Summary: The paper finds that context-informed Neural Ordinary Differential Equations (NODEs) can identify symmetry-breaking bifurcations in dynamical systems (DS). By leveraging topological invariants like the Poincaré index and the Poincaré-Hopf theorem, the paper demonstrates conditions under which context-informed NODEs can out-of-domain (OOD) generalize without explicit physics-based priors. The manuscript further introduces a regularization based on these findings and empirically demonstrate its use on the Landau-Khalatnikov system.
Claims And Evidence: All claims are supported by evidence, even though some more evidence would be helpful (see **Weaknesses**).
Methods And Evaluation Criteria: Methods and evaluation criteria are appropriate.
Theoretical Claims: The paper's theoretical findings are based on established DS theory, such as the Poincaré-Hopf theorem. Appropriate references thereof are provided (e.g. Strogatz 2018). I did not check the proof for Proposition 4.1.
Experimental Designs Or Analyses: Experimental Analyses seem appropriate.
Supplementary Material: I skimmed through the Appendix but did not read the material in detail.
Relation To Broader Scientific Literature: Training on a limited window of ground-truth (GT) control parameter values using a hierarchical/meta-learning approach and extrapolating beyond this range has also been done in a previous study, albeit not across bifurcations [1]. However, similar linear relationships between context vector and GT control parameters have been observed (see Fig. 3a). Also, as mentioned in the current manuscript, [2] showed that *vanilla* DS reconstruction methods (such as vanilla NODEs) struggle to generalize across state space in a multistable (and hence OOD) scenario. In this context, this work is novel as it shows conditions under which OOD generalization is indeed possible.
Essential References Not Discussed: Not essential per se, but [1] backs the findings in that they also find that a similar hierarchical approach to CoDA, but using an recurrent neural network backbone, leads to linear relationships between GT control parameter and context vectors with capabilities of inter-and-extrapolation.
Other Strengths And Weaknesses: **Strengths**: I think the paper does a great job of motivating, explaining and presenting each and every experiment in a detailed fashion.
**Weaknesses**: A current weakness of the manuscript is that the hypothesized mechanism for the possible OOD generalization, i.e. the Poincaré-Hopf theorem, is only validated for NODE backbone based methods and tested on Hamiltonian systems. To remedy this weakness, the authors could try different DSR backbones, e.g. RNNs, and apply the regularizer to different, non-Hamiltonian DS.
Other Comments Or Suggestions: **Typo**: “This raises a pertinent question arises:” (p. 5, ll. 273-274)
**References**
[1] Brenner, M. et al. (2024). Learning Interpretable Hierarchical Dynamical Systems Models from Time Series Data. (ICLR 2025)
[2] Göring, N.A. et al. (2024). Out-of-Domain Generalization in Dynamical Systems Reconstruction. (ICML 2024)
Questions For Authors: 1. The choice of $\chi_{PH}$ (desired index) is a quantity that does depend on physical priors we have about the observed DS, correct? If so, how can the regularizer help in cases where this information is a priori *not* available?
2. I’m struggling a bit with section 4.3; without knowledge of the GT system, is there a way to identify whether the model is hallucinating the wrong bifurcation?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s thoughtful and constructive feedback. Below, we address each comment carefully. Additional experimental results are available in the **README** at https://github.com/anonymous-account123/icml2025-7637 and will be thoroughly incorporated into our revised paper.
***
**Q1. Brenner et al.**
Thank you for pointing out the relevant reference, Brenner et al. (2024), which also enables both interpolation and extrapolation of model parameters via linear modulation of subject-specific features. However, as you noted, it does not address the bifurcation-driven OOD, specifically the case where training is conducted solely in the pre-bifurcation regime while the post-bifurcation regime is explored at test time. We will cite this work and include a discussion in the revised paper.
***
**Q2. How to detect and avoid hallucinated bifurcations.**
The hallucinated symmetry-breaking experiment in Section 4.3 is intentionally designed as a pathological case to illustrate how the model mistakenly classifies a simple center-to-saddle bifurcation as a symmetry-breaking one, as predicted by Proposition 4.1. In the linear system considered in Section 4.3, if the NODE model is merely approximating a known functional form, it should undergo a simple center-to-saddle bifurcation. However, the experimental results show that the NODE model instead undergoes a spurious symmetry-breaking bifurcation. This indicates that the model is leveraging (or in this case misusing) topological constraints. Therefore, its behavior should be interpreted using the topological arguments provided by Poincaré–Hopf theorem and Proposition 4.1.
We completely agree that identifying whether a model hallucinates bifurcations is vital. We propose a straightforward yet robust criterion: evaluating the variance in bifurcation diagrams generated by multiple independent training runs. Correctly identified bifurcations show minimal variance, indicating stable and consistent identification. Conversely, hallucinated bifurcations yield significantly higher variance due to their structural instability. **Figure 2** at the provided link illustrates this distinction clearly, highlighting how increased variance serves as an indicator of hallucinated behavior. We will include this result in the revised version, along with appropriate discussion.
***
**Q3. Non-Hamiltonian DS.**
Following your suggestion, we tested the context-informed NODEs on a non-Hamiltonian, codimension-2 cusp bifurcating system: $(\dot{x}, \dot{y}) = (y, b + a x - x^3 - d y)$, with variable parameters $(a,b)$ and fixed $d=0.5$. This system serves as a canonical model for capturing catastrophic transitions and hysteresis phenomena, which are fundamental in science and engineering. Despite its dissipative, non-Hamiltonian nature, its bounded dynamics imply a preserved total Poincaré index of +1. Training was conducted exclusively on simple, monostable pre-bifurcation conditions within the parameter range $(a,b) \in [-2.0, -1.5, -1.0, -0.5]^2$, yet the model successfully identified the cusp bifurcation surface, confirming broader applicability (**Figure 3** at the provided link). This result will be included in our revised paper.
***
**Q4. Regularization without physical priors.**
We agree with your comment that topological regularization requires a certain level of prior knowledge about the system. In particular, the amount of required knowledge depends on whether we apply only a global constraint or both global and local ones. Local regularization typically requires more detailed, system-specific information. In contrast, using only global constraints demands relatively less prior knowledge. As stated in the Poincaré–Hopf theorem, it depends only on coarse information about the phase manifold on which the ODE is defined. For example, in a 2D closed-orbit system, the total Poincaré index is fixed at +1. Thus, as long as we know the system is non-divergent, this global constraint can often be applied without detailed knowledge of the vector field itself. In other settings—such as when the phase manifold is a sphere—the total index is determined by the Euler characteristic. For instance, when modeling vector fields on Earth (as in climate models), it is typically reasonable to assume a global index of $\chi(S^2) = +2$.
This type of global information can be especially useful when the available training data is confined to a restricted domain of initial conditions. To assess the effectiveness of topological regularization under minimal prior assumptions, we revisited the experiment in Section 5 using only the global constraint. In this additional study, we limited the training domain to $[-1.0, 1.0]^2$ to mimic a restricted coverage scenario. **Figure 5** at the provided link illustrates that the topologically regularized model exhibits improved performance, even when relying solely on global information. We will include this result in the revised paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for this clarifying rebuttal. I would still like to see whether the results of the paper can be applied/reproduced for general flow operator models, not just NODEs. However, I think it is fair to see this as future work. I will increase my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We are pleased that our response has addressed your concerns. We sincerely appreciate your thoughtful suggestions and kind recognition of our work.
While our current study primarily focuses on continuous dynamical systems governed by ODE vector fields, we acknowledge that incorporating alternative DSR backbones and flow models (e.g., Brenner et al. (2024)) offers an interesting opportunity to extend our framework. Such extensions could enable us to further broaden the scope of our topological perspective. We will incorporate this promising direction into our discussion of future work. | Summary: The paper demonstrates that context-dependent Neural Ordinary Differential Equations can identify post-bifurcation behaviors, even when trained only on pre-bifurcation data. It then provides an interpretation for this phenomenon based on the Poincaré-Hopf theorem, and proposes a topological regularizer that mitigates the hallucination for post-bifurcation behaviors.
Claims And Evidence: The claims seem well supported by the experiments.
Methods And Evaluation Criteria: The evaluations make sense for the investigated problem.
Theoretical Claims: I read the proof section in the appendix.
Experimental Designs Or Analyses: The experiments are sound and well motivated.
Supplementary Material: I reviewed the supplementary material.
Relation To Broader Scientific Literature: This paper studies context-dependent Neural Ordinary Differential Equations in depth on its behaviors when predicting OOD data.
Essential References Not Discussed: I am not aware of any essential references not being discussed.
Other Strengths And Weaknesses: Strengths:
1. The paper is well-written and easy to follow.
2. The problem is well-motivated and the arguments and experiments are coherent.
Weaknesses:
1. As acknowledged by the authors, the scope of the investigation is limited.
2. The regularization term seems to require detailed knowledge of the dynamical system beforehand, such as the global and local contour. This may limit the applicability of the regularization.
Other Comments Or Suggestions: 1. On page 3, line 160, left column, the definition of the OOD condition for parameters is not clear for n-dimensional parameters.
Questions For Authors: I do not have additional questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s thoughtful and constructive feedback. Below, we address each comment carefully. Additional experimental results are available in the **README** at https://github.com/anonymous-account123/icml2025-7637 and will be thoroughly incorporated into our revised paper.
***
**Q1. As acknowledged by the authors, the scope of the investigation is limited.**
Our analysis primarily targets closed-orbit systems (e.g., Hamiltonian flows), but the underlying principle of Proposition 4.1—the topological invariance of the total index from the Poincaré–Hopf theorem—has broader applicability. Indeed, Proposition 4.1 follows directly from Theorem 4.1 under closed orbit assumptions. Yet, the essence of the theorem lies in global topological invariants, specifically the Euler characteristic of the phase space $\mathcal{M}$ (e.g., $\chi(S^{2n}) = 2$, $\chi(S^{2n+1}) =0$, $\chi(T^{n}) = 0$, ...), allowing its application even when the associated ODE defined on $\mathcal{M}$ does not exhibit closed orbits or conserved quantities. We will explicitly address this generalization potential in our revision.
***
**Q2. The regularization term seems to require detailed knowledge of the dynamical system beforehand, such as the global and local contour. This may limit the applicability of the regularization.**
We agree with your comment that topological regularization requires a certain level of prior knowledge about the system. In particular, the amount of required knowledge depends on whether we apply only a global constraint or both global and local ones. Local regularization typically requires more detailed, system-specific information. In contrast, using only global constraints demands relatively less prior knowledge. As stated in the Poincaré–Hopf theorem, it depends only on coarse information about the phase manifold on which the ODE is defined. For example, in a 2D closed-orbit system, the total Poincaré index is fixed at +1. Thus, as long as we know the system is non-divergent, this global constraint can often be applied without detailed knowledge of the vector field itself. In other settings—such as when the phase manifold is a sphere—the total index is determined by the Euler characteristic. For instance, when modeling vector fields on Earth (as in climate models), it is typically reasonable to assume a global index of $\chi(S^2) = +2$.
This type of global information can be especially useful when the available training data is confined to a restricted domain of initial conditions. To assess the effectiveness of topological regularization under minimal prior assumptions, we revisited the experiment in Section 5 using only the global constraint. In this additional study, we limited the training domain to $[-1.0, 1.0]^2$ to mimic a restricted coverage scenario. **Figure 5** at the provided link illustrates that the topologically regularized model exhibits improved performance, even when relying solely on global information. We will include this result in the revised paper.
***
**Q3. On page 3 line 160, the left column, the definition of OOD condition for parameters in not clear for n-dimensional parameters.**
Following your suggestion, we formally describe the definition of parameter OOD conditions with codimension-$k$ bifurcation in $n$-dimensional parameter space:
Let $\mathcal{P} \subset \mathbb{R}^n$ be a parameter space and let $\mathcal{B} \subset \mathcal{P}$ be a bifurcation set. The complement $\mathcal{P} \setminus \mathcal{B}$ admits a decomposition into $L$ disjoint connected components: $ \mathcal{P} \setminus \mathcal{B} = \bigcup_{i=1}^L \mathcal{P}_i$, where each $\mathcal{P}_i$ is a maximal connected subdomain. Intuitively, each $\mathcal{P}_i$ is one qualitatively uniform parameter regime. Then, a parameter OOD condition in learning dynamics arises when the support of training distribution is $\mathrm{supp}(p^\mathrm{tr}_e(\mu)) \subseteq \mathcal{P}_l$, but the support of the test distribution is $\mathrm{supp}(p^\mathrm{test}_e(\mu)) \subseteq \mathcal{P}_i$ for some $i \neq l$. Equivalently, there exists no continuous path $\gamma: [0, 1] \to \mathcal{P} \setminus \mathcal{B}$ connecting $\mu^\mathrm{tr}_e$ and $\mu^\mathrm{test}_e$ without crossing $\mathcal{B}$.
This generalized notion of parameter OOD parallels the concept of initial condition OOD, enabling a unified perspective on OOD in learning dynamics. Accordingly, we will revise our paper to correctly describe this more general setting beyond the one-dimensional case.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my comments. I do not have additional questions and will maintain my score.
---
Reply to Comment 1.1.1:
Comment: We are pleased that our response has addressed your concerns. We sincerely appreciate your thoughtful suggestions and kind recognition of our work. | Summary: This paper explores the use of context-informed NODEs to identify symmetry-breaking bifurcations in dynamical systems without relying on physics-based training data. The authors demonstrate that NODEs trained solely on symmetric (pre-bifurcation) data can predict post-bifurcation behaviors in a "zero-shot" manner, meaning they do so without prior experience with such data.
The key to this capability lies in the NODEs' ability to utilize topological invariants, specifically the Poincaré index, which helps them grasp the underlying structure of the data despite significant changes from bifurcations. The authors also introduce a novel topological regularizer inspired by the Poincaré-Hopf theorem to enhance NODE performance.
The paper challenges the assumption that data-driven models struggle with bifurcations and presents a promising approach that demonstrates how these models can extract meaningful dynamics from limited training data, contributing to their robustness and applicability in real-world complex systems.
Claims And Evidence: Overall, their claims are largely well-supported by the evidence presented. In particular, the authors provide a solid theoretical framework grounded in the Poincaré-Hopf theorem and demonstrate the efficacy of their approach through empirical validation with the Landau-Khalatnikov system. The results show that context-informed NODEs can identify symmetry-breaking behavior without direct training on post-bifurcation data, which testifies to the power of their model. However, there are some unclear claims:
- The authors claim that NODEs implicitly learn topological invariants like the Poincaré index, but it is unclear if they truly "learn" them or if this just happens as a side effect of function approximation. The authors could make this claim stronger by showing exactly how the model learns these features.
- The paper presents an example in which NODEs "hallucinate" broken symmetry when trained on a linear system (Sec. 4.3). But, it is unclear whether this phenomenon is a general characteristic of NODEs or if it is specific to the chosen experimental setup.
There are also some concerns that could impact the robustness of their claims more generally in more complex scenarios:
- The study predominantly investigates **codimension-one** bifurcations, focusing solely on **2D** and **Hamiltonian** systems, which narrows the applicability of their findings. Although Theorem 4.1 hints at potential extensions to more complex systems or higher dimensions, it would strengthen the argument if the authors elaborated on potential strategies for tackling **higher-codimension bifurcations**, the challenges they might face in 3D or higher-dimensional systems, and the adaptations required in their approach for non-Hamiltonian systems. It would be helpful to provide insights into how these generalizations might occur and under what specific conditions they would hold.
Methods And Evaluation Criteria: **Proposed Methods**: First off, they have nicely discussed the OOD generalization problem by categorizing it into two aspects: OOD in initial conditions and OOD in model parameters, to better clarify the two different challenges for learning dynamics. The use of context-informed NODEs and Poincaré-Hopf regularization makes sense for studying bifurcations and symmetry-breaking in dynamical systems.
**Evaluation Approach**: The paper evaluates NODEs on well-defined 2D dynamical systems (e.g., Hamiltonian and Landau-Khalatnikov models), which are relevant for studying bifurcations. The experiments mostly involve Hamiltonian or nearly conservative systems and focus only on codimension-one bifurcations. Yet, real-world systems often include dissipation and forcing, which can alter bifurcation behavior and lead to higher-codimension bifurcations. So, evaluating on non-Hamiltonian, higher-dimensional, and multi-parameter bifurcations ($ \mu \in \mathbb{R}^n, n>1 $) would provide a more comprehensive validation.
Theoretical Claims: I tried to verify the correctness of the theoretical claims in the main text as well as the proofs in Appendix A.
**Re the proof of Proposition 4.1**:
- The proposition outlines a condition for local symmetry breaking in a Hamiltonian vector field via a center-to-saddle bifurcation. This condition requires a smooth bijective map that maintains vector fields close in the $C^1$ sense, potentially overlooking complexities in real-world systems where perturbations are not small/easily managed.
- The proof logically follows from bifurcation theory, Implicit Function Theorem and $C^1$ closeness assumptions between the learned and true **Hamiltonian** vector fields. The proof assumes the learned vector field closely matches the true system, but in practice, Neural ODEs are approximations and may not perfectly capture the system's dynamics. If there are errors in approximation (due to limited training data, network capacity, or optimization issues), the conditions stated in the proof might not always hold, making the model’s predicted bifurcations less reliable in real-world applications.
- The proof in Proposition 4.1 is specifically derived for Hamiltonian systems, where energy conservation and structured dynamics naturally enforce certain constraints. However, many real-world systems are non-Hamiltonian which can significantly alter bifurcation behavior. Since the proof relies on preserving closed orbits and the Poincaré-Hopf index, its conclusions may not directly apply to non-Hamiltonian systems where these properties don't hold. The authors should clarify how far their results, particularly Proposition 4.1, can be generalized to non-Hamiltonian systems. This includes discussing the limitations of their findings and what further theoretical or empirical validation is needed to adapt their conclusions for these systems.
Experimental Designs Or Analyses: Re soundness/validity of experimental designs or analyses:
**Topological Regularization**: They introduce a Poincaré-Hopf regularization aimed at enhancing the NODEs' learning capabilities. However, it lacks detailed information on how this regularization is specifically implemented and tuned, which is essential for assessing its effectiveness and reproducibility.
**Model Evaluation**: The evaluation metric relies on checking if the model predictions converge within a distance (σ) from the true dynamics. If this distance threshold is not well-defined or justified, it could lead to misleading conclusions about the model's predictive capabilities and overall performance.
**Parameter Exploration**: A mesh grid approach is used for parameter sampling, but the methodology for how this grid is constructed and the resolution of the sampling are crucial.
Supplementary Material: Yes, I checked Sections A and B.
Relation To Broader Scientific Literature: The key contributions of the paper relate to several areas, particularly regarding modeling dynamical systems and understanding bifurcations through data-driven approaches. It has implications for theoretical advancements in dynamical systems and OOD generalization challenges. Specifically, it discusses OOD in model parameters, which is another category of OOD generalization challenges, in addition to OOD in initial conditions as noted in (Göring et al., 2024), which is very important.
**Key Points**:
- Data-Driven Dynamics: Builds on Brunton et al. (2016), demonstrating effective discovery of dynamical systems without physics-based priors.
- Topological Invariants: Utilizes invariants (Brasselet et al., 2009) to enhance NODEs by linking topology and dynamics.
- OOD Generalization: Challenges beliefs about model generalization across bifurcations (Ye et al., 2021), showing robust NODE performance without diverse training.
- Symmetry Recovery: Contrasts with García Pérez et al. (2023), indicating NODEs' ability to model symmetry-breaking without physics-informed priors.
Essential References Not Discussed: -
Other Strengths And Weaknesses: **Other Strengths**:
- Originality: The paper introduces a fresh approach by showing how context-dependent NODEs, trained on localized data, can effectively identify symmetry-breaking bifurcations. This highlights a creative use of topological invariants in a data-driven setting.
- Significance: By tackling how NODEs can extrapolate behaviors beyond their training domain, this research makes a meaningful contribution to both dynamical systems and machine learning. It could open up new possibilities for automating scientific discovery and deepens our understanding of complex systems.
- Clarity: The paper is generally well-written and well-structured, and explains complex ideas in a way that is easy to follow.
Other Comments Or Suggestions: **Some minor comments**:
- Page 1, second column, line 21: It is mentioned "Formally, a dynamical system is represented by a phase space Ordinary Differential Equation (ODE)". But, it is better to say "Formally, many **continuous-time** dynamical systems ... " as discrete-time dynamical systems can be represented by recursive maps. Also, some continuous-time dynamical systems can be represented by PDEs, not necessarily ODEs.
- Page 3, in Sect. "OOD in model parameters": The definition of bifurcation parameter (starting from line 151) is for the general case $\mu_{crit} \in \mathbb{R}^n$. But, the sentence "Consequently, the OOD condition for parameters arises when a model is trained on parameters $\mu_e^{tr} < \mu_{crit}$ but the support of the test data is $\mu_e^{test} > \mu_{crit}$" is only valid for $\mu_e^{tr}, \mu_e^{test}, \mu_{crit} \in \mathbb{R}$ (codimension-one bifurcation). Please clarify it in the text.
- Page 6, Definition 4.1: The topological degree of a map should be better clarified/explained.
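As a concrete companion to this point (our illustration, not the paper's definition): for planar vector fields the topological degree reduces to the winding number of $v/\lVert v \rVert$ along a small loop, which can be computed numerically:

```python
# Numerical Poincaré index: the winding number (topological degree of
# v/|v| on a small loop) of a planar vector field around a point.
# Our illustration of the notion referenced in Definition 4.1.
import math

def poincare_index(v, center, r=0.1, n=720):
    """Winding number of the vector field v around `center`."""
    total, prev = 0.0, None
    for k in range(n + 1):
        t = 2 * math.pi * k / n
        x = center[0] + r * math.cos(t)
        y = center[1] + r * math.sin(t)
        vx, vy = v(x, y)
        ang = math.atan2(vy, vx)
        if prev is not None:
            d = ang - prev
            d = (d + math.pi) % (2 * math.pi) - math.pi  # unwrap to (-pi, pi]
            total += d
        prev = ang
    return round(total / (2 * math.pi))

# A center (pure rotation) has index +1; a saddle has index -1.
assert poincare_index(lambda x, y: (-y, x), (0.0, 0.0)) == 1
assert poincare_index(lambda x, y: (x, -y), (0.0, 0.0)) == -1
```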
Questions For Authors: 1. In Section 3.1, could you elaborate on the rationale for randomly sampling **four** initial conditions? Furthermore, what is the justification for adapting the model using a **single** trajectory after training it with pre-bifurcation data?
2. In Section 2, you introduce a Poincaré-Hopf regularization to enhance the NODEs' learning capabilities. Could you provide more detailed information on the specific implementation and tuning process of this regularization? How do you ensure that it effectively contributes to the learning without introducing unwanted biases?
3. In Section 5, the evaluation of model performance is based on the convergence within a distance (σ) from the true dynamics. How is this distance threshold (σ) determined, and what justification or heuristics do you provide for selecting its value? Could different choices of σ significantly affect your conclusions regarding the model's predictive capabilities?
4. Could you clarify how far your results, particularly Proposition 4.1, can be generalized to non-Hamiltonian systems? What specific characteristics or conditions of non-Hamiltonian systems would need to be considered for the applications of Proposition 4.1 to remain valid, and how might the presence of dissipation or external forcing impact the conclusions drawn from your study regarding symmetry-breaking behavior?
5. The authors claim that NODEs implicitly learn topological invariants like the Poincaré index, but it is unclear if they truly "learn" them or if this just happens as a side effect of function approximation. Could you elaborate on this further and strengthen this claim by showing exactly how the model learns these features?
I am happy to increase my score if the authors can address my main concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s thoughtful and constructive feedback. Below, we address each comment carefully. Additional experimental results are available in the **README** at https://github.com/anonymous-account123/icml2025-7637 and will be thoroughly incorporated into our revised paper.
***
**Q1. Do NODEs genuinely utilize topology? Is the hallucinated broken symmetry general?**
You highlight a critical point about distinguishing whether context-informed NODEs genuinely leverage topological constraints or merely approximate functions that happen to exhibit topological properties. Indeed, the double-well scenario poses a dilemma: better approximations naturally reflect correct symmetry-breaking behavior. Our hallucinated bifurcation scenario in Section 4.3, grounded explicitly in Proposition 4.1, is intentionally designed to address this challenge. If NODEs were merely approximating a functional form, a simple center-to-saddle bifurcation would emerge. Instead, the model exhibits an incorrect symmetry-breaking bifurcation, confirming the model genuinely utilizes (in this scenario, misuses) topological constraints, precisely as Proposition 4.1 predicts. Hence, our findings demonstrate that NODEs indeed exploit topological invariants. In addition, the hallucinated symmetry-breaking can arise whenever Proposition 4.1's conditions hold. We will highlight this more clearly in the revised manuscript.
***
**Q2. Robustness of bifurcation identification under noisy conditions.**
We acknowledge that approximation errors may increase under noisy perturbations or with limited training samples, as you pointed out. To address this, we conducted further experiments under these realistic conditions. Despite the increased variance, the model consistently identified the symmetry-breaking bifurcation, demonstrating its robustness. The results are illustrated in **Figure 1(b-c)** at the provided link, which will be included in the revised manuscript.
***
**Q3. Non-Hamiltonian and higher-codimension bifurcations.**
Following your suggestion, we tested the context-informed NODEs on a non-Hamiltonian, codimension-2 cusp bifurcating system: $(\dot{x}, \dot{y}) = (y, b + a x - x^3 - d y)$, with variable parameters $(a,b)$ and fixed $d=0.5$. This system serves as a canonical model for capturing catastrophic transitions and hysteresis phenomena. Despite its dissipative, non-Hamiltonian nature, its bounded dynamics imply a preserved total Poincaré index of +1. Training was conducted exclusively on simple, monostable pre-bifurcation conditions within the parameter grid $(a,b) \in \{-2.0, -1.5, -1.0, -0.5\}^2$, yet the model successfully identified the cusp bifurcation surface, confirming broader applicability (**Figure 3** at the provided link). This result will be included in our revised paper.
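For reference, the qualitative behavior of this cusp system in a monostable pre-bifurcation regime can be reproduced with a few lines of forward-Euler integration (our own sketch; the parameter values below are illustrative, with $d = 0.5$ as stated):

```python
# Forward-Euler integration of the cusp system
# (xdot, ydot) = (y, b + a*x - x**3 - d*y), with d = 0.5.
# For (a, b) = (-1.0, 0.0), b + a*x - x**3 = -x - x**3 has the single
# real root x = 0, and the origin is a stable spiral
# (trace -d < 0, determinant -a > 0): a monostable regime.

def simulate(a, b, d=0.5, x=1.0, y=0.0, dt=1e-3, steps=50_000):
    for _ in range(steps):
        dx, dy = y, b + a * x - x ** 3 - d * y
        x, y = x + dt * dx, y + dt * dy
    return x, y

x_end, y_end = simulate(a=-1.0, b=0.0)
# The trajectory spirals into the single equilibrium at the origin.
assert abs(x_end) < 1e-2 and abs(y_end) < 1e-2
```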
***
**Q4. Generalization of Proposition 4.1.**
Our analysis primarily targets closed-orbit systems (e.g., Hamiltonian flows), but the underlying principle of Proposition 4.1—the topological invariance of the total index from the Poincaré–Hopf theorem—has broader applicability. Indeed, Proposition 4.1 follows directly from Theorem 4.1 under closed orbit assumptions. Yet, the essence of the theorem lies in global topological invariants, specifically the Euler characteristic of the phase space $\mathcal{M}$ (e.g., $\chi(S^{2n}) = 2$, $\chi(S^{2n+1}) = 0$, $\chi(T^{n}) = 0$, ...), allowing its application even when the associated ODE defined on $\mathcal{M}$ does not exhibit closed orbits or conserved quantities. We will explicitly address this generalization potential in our revision.
***
**Q5. Experimental designs.**
We have provided a detailed description and implementation of our proposed topological regularization approach in Appendix Section G. Additionally, Figure 17 in the Appendix illustrates consistency scores across various threshold values ($\sigma$) ranging from 0.2 to 1.0 for the results of Section 5. The regularized model consistently outperforms vanilla models, maintaining scores close to 1.0 regardless of $\sigma$. Furthermore, Appendix Section H thoroughly summarizes our experimental setup (e.g., 16 training and 441 testing parameter combinations over an $(\alpha,\beta)$ grid).
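As a toy illustration of how such a $\sigma$-threshold consistency score behaves (our sketch with made-up trajectory endpoints, not the paper's evaluation code):

```python
# Toy sketch of the sigma-threshold evaluation: a predicted trajectory
# "converges" if its endpoint lies within distance sigma of the true
# fixed point. Endpoints below are hypothetical.
import math

true_fp = (1.0, 0.0)
endpoints = [(1.05, 0.02), (0.9, -0.1), (1.4, 0.3), (2.5, 0.0)]

def consistency(sigma):
    hits = sum(math.dist(p, true_fp) <= sigma for p in endpoints)
    return hits / len(endpoints)

# Sweeping sigma (as in the reported 0.2-1.0 range) shows how the
# score depends on the threshold choice; it is monotone in sigma.
scores = {s: consistency(s) for s in (0.2, 0.6, 1.0)}
assert scores[0.2] <= scores[0.6] <= scores[1.0]
```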
***
**Q6. Why four training samples and one for adaptation?**
Our experimental setup closely mirrors Kirchmeyer et al. (2022), who utilized four initial conditions per parameter for training on the Lotka–Volterra system—another 2D closed-orbit system analogous to ours. For adaptation, a single confined trajectory was selected intentionally to simulate an extreme symmetry-breaking scenario. This deliberately restricts the model’s exposure to the global phase space structure, effectively testing simultaneous parameter and initial condition OOD challenges.
***
**Q7. Minor comments.**
We will clarify the manuscript as recommended. For the clarified definition of OOD in parameters, please refer to our response to Question 3 from Reviewer rQs3. | null | null | null | null | null | null |
Sundial: A Family of Highly Capable Time Series Foundation Models | Accept (oral) | Summary: The paper presents a collection of foundation models for time series. To this end, the authors proposed a loss called TimeFlow for predicting the distribution next-patch, enabling Transformers to be pre-trained without the need for discrete tokenization. It is argued that this loss function helps in preventing mode collapse. The pre-training is conducted on a dataset called TimeBench.
## update after rebuttal
Following the rebuttal and the authors' feedback, I have increased my scores accordingly.
Claims And Evidence: Yes, they are clear and convincing.
Methods And Evaluation Criteria: YES
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: The experimental designs and the conducted analysis seem to be acceptable.
Supplementary Material: I skimmed through the supplementary material.
Relation To Broader Scientific Literature: I find the idea interesting and the results pertinent.
Essential References Not Discussed: The following paper is not referenced in the submitted paper, even though it achieves high-quality image generation without the need for discrete tokenization, thus having some connections to the proposed method:
Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, and Kaiming He. Autoregressive image generation without vector quantization. arXiv preprint arXiv:2406.11838, 2024.
Other Strengths And Weaknesses: It is not clear why the authors describe the large-scale TimeBench dataset as “unprecedented”. Moreover, it would have been better to clearly describe this dataset and how it was curated; the description in the paper and appendix is limited to half a column. The paper would have benefited from a clear, in-depth description of the dataset.
Moreover, the influence of the dataset on the obtained results is unclear: specifically, does this dataset have a greater influence on the results than the smaller datasets considered by other methods?
Following the rebuttal and the authors' feedback, I have increased my scores accordingly.
Other Comments Or Suggestions: none
Questions For Authors: none
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Many thanks to Reviewer 4SCU for providing a valuable review.
> **Q1**: Include the relevant citation (Li et al. 2024).
We referred to it in Section 5.3. In this Section, we provided the comparison using different training objectives: MSE Loss, Diffusion Loss (Li et al., 2024), and TimeFlow Loss (Ours). Performance using Diffusion Loss is notably inferior to TimeFlow Loss. The different generative frameworks and target modalities distinguish our work from the previous work.
> **Q2**: Detailed description of the TimeBench datasets.
Thanks for your valuable suggestion regarding the dataset description. We consider TimeBench an unprecedented large-scale dataset: individual time series datasets typically remain at the million level, and **the largest pre-training corpus for time series before our work was 300B time points (Shi et al., 2024)**. TimeBench is the first to extend the scale to the trillion level, following a streamlined curation:
* **Collection**: Different from other modalities, time series data are highly heterogeneous and confidential. Most of them are unavailable on open websites or repositories. There are also limited domains that encompass typical and predictable time series, such as weather, traffic, marketing, energy, etc, leading to slow progress on dataset construction.
* **Preprocessing**: Due to common device faults, it is difficult to pre-train large models using raw time series data. We have conducted tedious preprocessing, including missing-value imputation, outlier exclusion, and normalization techniques.
* **Quality control**: We conduct statistical analysis in our collection, examining time series through the lenses of intrinsic properties, e.g., non-stationarity, forecastability, and seasonality. This approach allows us to characterize the data quality inherent to time series, which affects the training stability of next-token prediction.
* **Diversity and Generality**: We adopt synthetic techniques to improve pattern diversity and the capability of seasonal/trend forecasting. Further, we adopt ERA5, which includes well-defined and systematic real-world temporal observations.
**We will provide a clear and detailed description of TimeBench (see also Q4 of Reviewer VkQC) in the final revision.**
> **Q3**: Does TimeBench have a great influence on the results compared to other methods that consider smaller datasets?
We compare Sundial with other time series foundation models that are pre-trained with smaller datasets. We also conduct pre-training on Sundial using different scales of datasets (Chronos-94B, LoTSA-230B, TimeBench-1032B). These results highlight **the scaling behavior of using larger datasets**.
| Zero-Shot (MSE \| MAE) | Chronos (94B) | Moirai (230B) | Time-MoE (300B) | Sundial (94B) | Sundial (230B) | Sundial (1032B) |
| ---------------------- | -------------- | ------------------ | ------------------ | -------------- | ------------------ | ---------------------- |
| ETTh1 | 0.591 \| 0.468 | 0.417 \| **0.419** | **0.400** \| 0.424 | 0.402 \| 0.429 | 0.403 \| **0.419** | 0.411 \| 0.434 |
| ETTh2 | 0.405 \| 0.410 | 0.362 \| **0.382** | 0.366 \| 0.404 | 0.377 \| 0.414 | 0.364 \| 0.398 | **0.333** \| 0.387 |
| ETTm1 | 0.645 \| 0.500 | 0.406 \| 0.385 | 0.394 \| 0.415 | 0.367 \| 0.402 | 0.352 \| 0.385 | **0.336** \| **0.377** |
| ETTm2 | 0.310 \| 0.350 | 0.311 \| 0.337 | 0.317 \| 0.365 | 0.280 \| 0.341 | 0.273 \| 0.334 | **0.258** \| **0.320** |
| ECL | 0.214 \| 0.278 | 0.187 \| 0.274 | in distribution | 0.172 \| 0.269 | 0.171 \| 0.267 | **0.169** \| **0.265** |
| Weather | 0.292 \| 0.315 | 0.287 \| 0.281 | 0.265 \| 0.297 | 0.254 \| 0.301 | 0.252 \| 0.297 | **0.234** \| **0.270** |
---
Rebuttal Comment 1.1:
Comment: Dear authors,
I thank you for your useful feedback. I have increased my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for providing the insightful review, which helped us a lot in the rebuttal and paper revision. We will elaborate on the dataset curation and include the scaling analysis in our final version. | Summary: The paper introduces Sundial, a novel family of time series foundation models that address fundamental challenges in time series forecasting through a native, flexible, and scalable approach. The work's primary innovation is the proposed TimeFlow Loss, an optimization objective based on flow-matching that enables Transformers to be pre-trained directly on continuous time series data without requiring discrete tokenization. This approach allows the model to generate multiple probable predictions when conditioned on arbitrary-length time series, achieving flexibility in representation learning beyond parametric densities that constrain distribution modelling capacity. The authors make several significant technical contributions to achieve highly capable time series foundation models. First, they employ continuous tokenization through patch embedding that accommodates variable-length inputs alongside minimal but crucial adaptations to the Transformer architecture, including Pre-LN for stability, RoPE for temporal causality, and optimizations like FlashAttention and KV Cache. Second, they curate TimeBench, an unprecedented corpus containing 1 trillion time points from diverse sources, including real-world datasets and synthetic data across various frequencies. This extensive pre-training dataset enables the model to learn comprehensive temporal dynamics and patterns. The paper presents a systematic comparison between generative and deterministic forecasting paradigms, revealing that models pre-trained with MSE loss often produce over-smoothed predictions due to mode collapse on heterogeneous data distributions. In contrast, Sundial generates diverse yet coherent temporal patterns that align well with input patterns. 
The authors validate their design choices through extensive ablation studies that compare TimeFlow Loss with alternative approaches, demonstrating trade-offs between inference speed and prediction quality.
Claims And Evidence: The authors make several significant claims regarding their proposed time series foundation models, and overall, most claims are backed by thorough evidence experimentation. The central contribution—the TimeFlow Loss based on flow-matching—is well-established through comprehensive mathematical formulations in Sections 3.1 and 4.1.3, providing a clear theoretical foundation for the approach. The claim of state-of-the-art performance is convincingly demonstrated through extensive benchmarking across multiple datasets. Table 1 shows Sundial consistently outperforming other advanced foundation models on point forecasting tasks, with quantitative improvements (7.57% MSE reduction and 4.71% MAE reduction compared to Time-MoE). For probabilistic forecasting, Table 2 demonstrates Sundial achieving first place in MASE and second place in CRPS on the GIFT-Eval benchmark across 23 datasets. These results are particularly impressive given the zero-shot nature of the evaluation.
---
The authors' claims regarding model scalability are supported by both empirical performance gains across model sizes (as shown in Table 1) and convergence improvements (Figure 7 shows a 15.38% reduction in training objectives for larger models). However, the scaling analysis could have been strengthened with more model size variants to establish clearer scaling laws. The assertion that TimeFlow Loss mitigates mode collapse is partially supported through ablation studies in Table 3 comparing different training objectives and visualizations in Figures 13-14 that contrast Sundial's diverse predictions with the over-smoothed outputs from MSE-trained models. While these comparisons are informative, a more quantitative evaluation of prediction diversity would have strengthened this particular claim.
---
The paper's ambitious dataset contribution (TimeBench with 1 trillion time points) is well-documented in Table 4, with clear attribution of sources and distributions, lending credibility to the large-scale pre-training claims. The inference speed claims require some nuance—while Figure 6 demonstrates that Sundial achieves an 11.34× speedup compared to Chronos, it's not necessarily faster than all baseline approaches. This represents a reasonable trade-off given the probabilistic capabilities of the model.
Methods And Evaluation Criteria: The methods and evaluation criteria proposed in the paper are fundamentally well-aligned with the challenges of time series foundation modelling. The authors recognize a critical limitation in existing approaches—specifically, the tension between discrete tokenization (which limits the representation of continuous values) and parametric densities (which restrict distribution modelling capacity). Their proposed TimeFlow Loss offers a theoretically sound solution by enabling Transformers to learn flexible predictive distributions without requiring tokenization or prior distribution specification, which is particularly appropriate for the heterogeneous nature of large-scale time series corpora.
---
The patch-based continuous tokenization approach sensibly balances computational efficiency with representation quality, addressing the unique characteristics of time series data while maintaining compatibility with Transformer architectures. Technical adaptations like Pre-LN for stability and RoPE for temporal causality demonstrate thoughtful consideration of the domain-specific challenges in time series modelling.
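As a rough illustration of what patch-based continuous tokenization amounts to (our sketch; the patch length and the subsequent linear projection to model dimension are implementation details not specified here):

```python
# Sketch of continuous (patch) tokenization: split a raw series into
# non-overlapping patches, each of which becomes one Transformer token.
# Patch length 4 below is an illustrative choice, not the paper's.

def patchify(series, patch_len):
    """Split a 1D series into consecutive non-overlapping patches."""
    n = len(series) // patch_len * patch_len  # drop any ragged tail
    return [series[i:i + patch_len] for i in range(0, n, patch_len)]

series = list(range(10))
patches = patchify(series, patch_len=4)
assert patches == [[0, 1, 2, 3], [4, 5, 6, 7]]  # 2 tokens, tail dropped
```

Each patch would then be linearly embedded, so variable-length inputs simply yield a variable number of tokens.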
---
Regarding evaluation criteria, the authors employ a comprehensive benchmarking strategy that convincingly validates their claims. The use of both point forecasting metrics (MSE, MAE) and probabilistic metrics (CRPS, MASE, WQL) across three established benchmarks—Time-Series-Library, GIFT-Eval, and FEV—ensures thorough performance assessment. The GIFT-Eval benchmark is particularly appropriate as it encompasses 23 datasets with diverse characteristics, providing a robust measure of generalization capability. The evaluation against both statistical methods and competing foundation models offers the necessary context for interpreting performance gains.
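For reference, one of the point metrics mentioned, MASE, can be sketched as follows (our illustration with seasonal-naive scaling; the benchmark's exact implementation may differ, e.g., in multivariate aggregation):

```python
# Sketch of MASE (mean absolute scaled error): forecast MAE divided by
# the in-sample MAE of the seasonal-naive forecaster with period m.

def mase(y_true, y_pred, y_train, m=1):
    mae = sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)
    scale = sum(abs(y_train[i] - y_train[i - m])
                for i in range(m, len(y_train))) / (len(y_train) - m)
    return mae / scale

y_train = [1, 2, 3, 4, 5]  # in-sample naive MAE = 1.0
assert mase([6, 7], [6.5, 7.5], y_train) == 0.5
```

Values below 1 mean the forecaster beats the naive baseline, which is what makes the metric comparable across datasets of different scales.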
---
The TimeBench dataset, with its unprecedented scale of 1 trillion time points and diverse sources spanning different frequencies and domains, constitutes an appropriate foundation for pre-training models intended for broad applicability. The careful exclusion of test datasets from pre-training data demonstrates methodological rigour in preventing data leakage that could invalidate zero-shot performance claims.
---
The ablation studies comparing TimeFlow Loss with alternative approaches (Table 3) and the exploration of inference trade-offs (Figure 8) provide crucial scientific validation of design choices. However, the paper would benefit from more domain-specific evaluations to complement the general-purpose benchmarks and from deeper analysis of prediction diversity beyond visual showcases. Nevertheless, the overall approach and evaluation strategy are well-matched to the foundational goals of developing generalizable, probabilistic time series models capable of effective zero-shot performance.
Theoretical Claims: The paper's main theoretical foundation rests on the flow-matching framework originally proposed by Lipman et al. (2022), which the authors adapt to time series forecasting. The primary theoretical contribution—TimeFlow Loss—is formulated in Section 4.1.3, where the authors extend the conditional flow-matching objective to handle sequential time series data. While the mathematical formulation is clearly presented (Equations 4-7), the authors don't provide formal proofs for the new theoretical properties of this adaptation. Instead, they demonstrate its effectiveness through empirical validation across multiple benchmarks.
---
The theoretical formulation builds directly upon the preliminaries established in Section 3.1, where the standard flow-matching framework is introduced. The authors correctly present the existing theoretical elements including the velocity field ODE (Equation 1), the Flow-Matching objective (Equation 2), and the Conditional Flow-Matching objective with Gaussian formulation (Equation 3). These equations are properly cited to the original work. The adaptation to time series involves conditioning the flow-matching process on a learned representation h_i, which appears mathematically sound, but the paper doesn't provide a rigorous proof of its properties beyond empirical results. The inference procedure (Algorithm 1) is a straightforward application of numerical ODE solving, consistent with standard flow-matching literature.
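To make the flow-matching recipe concrete, the following toy (our own sketch, not the paper's code) shows the linear-path training target of the Conditional Flow-Matching objective and the Euler-step inference of Algorithm 1; the conditioning on the learned representation h_i is omitted, and a fixed point target stands in for the data distribution:

```python
# Toy conditional flow matching. Path: x_t = (1 - t)*x0 + t*x1;
# the velocity network's regression target along the path is x1 - x0.
# For a fixed point target x1 = c, the exact velocity is
# v(x, t) = (c - x) / (1 - t), and Euler integration of dx/dt = v
# transports any starting sample x0 onto c.

def cfm_target(x0, x1, t):
    """(noised point x_t, regression target for the velocity network)."""
    return (1 - t) * x0 + t * x1, x1 - x0

def euler_sample(x0, c, steps=50):
    x, h = x0, 1.0 / steps
    for k in range(steps):
        t = k * h
        x += h * (c - x) / (1 - t)  # exact velocity for point target c
    return x

x_t, v = cfm_target(x0=0.0, x1=2.0, t=0.25)
assert x_t == 0.5 and v == 2.0

# A sample started away from the target is carried onto it at t = 1.
assert abs(euler_sample(x0=-1.3, c=2.0) - 2.0) < 1e-9
```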
---
The claim that TimeFlow Loss mitigates mode collapse during pre-training is primarily substantiated through qualitative visual showcases (Figures 13-14) and comparative experiments (Table 3) rather than through theoretical guarantees. This empirical validation approach is reasonable for a systems paper but limits the theoretical depth of the contribution.
---
From my understanding, while the paper presents a coherent mathematical framework for applying flow-matching to time series forecasting, it does not contain novel theoretical proofs requiring verification. The mathematical soundness of the approach derives from its faithful extension of established flow-matching theory, with innovations focused on architecture and application rather than fundamental theoretical advances.
Experimental Designs Or Analyses: The authors present a comprehensive evaluation framework for their proposed time series foundation models, with experiments that are generally well-designed, though I think they contain several methodological limitations that merit consideration. I provide some of them here.
(1) The zero-shot forecasting evaluation methodology demonstrates strong validity through the use of established benchmarks (Time-Series-Library, GIFT-Eval, and FEV) and clear separation between pre-training and evaluation datasets. The authors implement appropriate metrics (MSE/MAE for point forecasting and MASE/CRPS/WQL for probabilistic forecasting) that align with community standards. The careful exclusion of evaluation datasets from the pre-training corpus (explicitly denoted by dashes in Table 1) strengthens the experimental integrity of zero-shot claims. However, the comparison methodology raises concerns regarding controlled evaluation conditions. While Table 6 documents architectural differences between models, the paper doesn't explicitly confirm whether all baseline models had access to identical context lengths during inference, which could significantly impact forecasting performance. Additionally, the reliance on officially reported results from other papers (noted in Table 7) introduces potential inconsistencies in evaluation protocols that could affect fair comparison.
---
(2) The ablation study for TimeFlow Loss (Table 3) is methodologically sound in maintaining architectural consistency while varying only the training objective. However, this analysis is limited to reporting point forecasting metrics (MSE) without evaluating probabilistic metrics, which undermines a complete assessment of the central claim regarding improved distribution modelling. The qualitative comparisons in Figures 13-14 partially address this gap but lack objective quantification of prediction diversity.
---
(3) The scaling behaviour analysis in Figure 7 presents valid training curves but would be strengthened by more intermediate model sizes to establish clearer scaling laws. Similarly, the inference speed versus performance trade-off analysis (Figure 8) systematically varies generation parameters but doesn't properly quantify the relationship between these parameters and actual inference time.
---
(4) Most concerning is the absence of statistical significance testing or confidence intervals across all experimental results, particularly important given the high variance typically observed in time series forecasting performance. This omission limits the robustness of performance comparisons, especially for closely-matched results.
Supplementary Material: Yes, I reviewed the full paper, including all of the supplementary materials.
Relation To Broader Scientific Literature: I think this work makes several notable contributions that both build upon and diverge from established research trajectories in time series modelling. The authors' central contribution—TimeFlow Loss—represents a significant advancement in how foundation models handle continuous-valued time series data. This approach extends the flow-matching framework of Lipman et al. (2022) to the autoregressive time series domain, creating a bridge between continuous generative modelling and sequential forecasting that has been largely unexplored in prior literature. The authors' decision to embrace generative modeling rather than discrete tokenization strategically positions their work as an alternative to the language-modeling inspired approach taken by Chronos (Ansari et al., 2024). While Chronos adapted techniques from NLP by discretizing continuous values, Sundial's native approach avoids the information loss inherent in quantization—addressing a fundamental limitation recognized but unresolved in previous work. Similarly, the TimeFlow Loss offers greater flexibility than the parametric mixture distributions employed by Moirai (Woo et al., 2024), which the authors correctly identify as potentially constraining when modelling heterogeneous time series distributions at scale.
---
The architectural adaptations, while individually derived from existing techniques like RoPE (Su et al., 2024) and Pre-LN (Xiong et al., 2020), represent a thoughtful integration of components that specifically address the challenges of time series forecasting. This integration draws from disparate research streams that have not previously been united for time series foundation models. Particularly, the deliberate incorporation of FlashAttention and KV Cache reflects an understanding of the efficiency challenges faced by practitioners—a consideration often neglected in academic time series research but well-established in large language model literature.
---
TimeBench's trillion-point scale represents an order-of-magnitude increase over previous datasets like those used in Time-MoE (300B, Shi et al., 2024b) and Timer (231B, Liu et al., 2024a,b). This scaling aligns with the broader foundation model literature's emphasis on dataset size as a critical factor in model capability, while specifically addressing the unique challenges of time series heterogeneity. The authors' careful compilation of diverse frequency data connects to emerging work on scaling laws for time series (Shi et al., 2024a), extending these insights into previously unexplored data volumes.
---
I think the empirical results position Sundial at the intersection of deterministic and probabilistic forecasting research streams. The comparative analysis against both MSE-trained models and diffusion-based alternatives provides a valuable empirical bridge between these previously separate approaches. This integration of multiple modelling paradigms within a unified foundation model framework represents a meaningful synthesis of previously disparate research directions in time series forecasting.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: This work demonstrates considerable originality in its approach to time series foundation models. The authors identify a fundamental tension in prior work—the trade-off between discrete tokenization and parametric distributions—and present a creative solution through their TimeFlow Loss framework. This contribution is conceptually significant as it bridges flow-matching techniques with autoregressive time series modelling, enabling Transformers to learn flexible distributions directly from continuous values without imposing restrictive prior distributions. This represents a meaningful step forward in the relatively nascent field of time series foundation models, not merely an incremental improvement over existing methods.
---
The paper's most significant contribution is its comprehensive approach to the generative modelling paradigm for time series. While individual components (flow-matching, Transformer architectures) exist in isolation, their integration into a cohesive framework specifically designed for time series forecasting demonstrates genuine innovation. The TimeBench dataset with 1 trillion time points represents a substantial resource contribution to the research community and enables proper exploration of scaling behaviour in time series foundation models, an important but previously under-explored area.
---
From a technical writing perspective, the paper generally maintains strong clarity. The mathematical formulations in Sections 3 and 4 are precise and well-structured, providing sufficient detail for implementation. The ablation studies effectively isolate the contribution of the TimeFlow Loss compared to alternatives, though they could benefit from more rigorous quantification of prediction diversity beyond the qualitative showcases in Figures 13-14.
---
However, several weaknesses merit attention. While the model shows impressive zero-shot performance, the paper inadequately addresses computational efficiency during training. Training on 1 trillion time points likely requires significant computational resources, but the paper provides a limited discussion of training time, hardware requirements, or environmental impact—considerations increasingly important in foundation model research. Additionally, the inference procedure introduces complexity with its sampling parameters, creating practical deployment challenges that aren't thoroughly addressed.
---
The model's conservative prediction behaviour (noted in the limitations section) represents a substantive weakness that could limit practical utility. Since accurate trend forecasting is critical in many applications, this limitation deserves a more thorough analysis rather than a brief acknowledgment. Furthermore, the univariate pre-training approach, while pragmatic, sidesteps the important challenge of modelling inter-variate correlations in multivariate time series—a limitation that restricts the model's applicability to many real-world scenarios where complex interdependencies exist.
---
I think despite these limitations, the paper makes a compelling contribution to time series foundation model research by establishing a new paradigm for generative modelling in this domain, demonstrating strong empirical results, and providing a solid foundation for future work in this direction.
Other Comments Or Suggestions: N/A
Questions For Authors: Here are my questions and I would love to hear back from the authors on them.
---
(1) Quantification of Prediction Diversity: While you demonstrate TimeFlow Loss mitigates mode collapse through visualizations in Figures 13-14, could you provide quantitative metrics to measure prediction diversity across your generated samples? I think this would strengthen your claims regarding the advantages of flow-matching over MSE-based training and help practitioners better understand when to choose each approach.
---
(2) Computational Resource Requirements: Your work demonstrates impressive results using a trillion-point dataset but lacks details on computational requirements. Could you provide specific information about training time, hardware configurations, and estimated computational costs? This information would be valuable for assessing scalability and environmental impact considerations that are increasingly important in foundation model research.
---
(3) Context Length Sensitivity: How sensitive is Sundial's performance to the availability of historical context? Your experiments maintain fixed context lengths, but real-world applications often face varying historical data availability. An analysis showing performance degradation curves as context length decreases would provide important insights into the model's robustness in practical deployment scenarios.
---
(4) Conservative Prediction Behaviour: You briefly mention conservative prediction behaviour as a limitation. Could you elaborate on the specific scenarios where this manifests most prominently and provide a quantitative analysis of this phenomenon? This information would help users understand when Sundial might underperform on trend forecasting tasks compared to alternative approaches.
---
(5) Controlled Baseline Comparisons: For comparing against baseline models (Table 1), did all models have access to identical context lengths and inference settings? The architectural differences noted in Table 6 raise questions about whether performance differences might be partially attributable to these variations rather than fundamental modelling approaches.
---
(6) Multivariate Extension Strategy: Given your univariate pre-training approach, what specific architectural or training methodology modifications would be required to effectively model complex inter-variate correlations in multivariate settings? Would this require fundamental changes to the TimeFlow Loss formulation or primarily architectural adaptations? I think a clearer roadmap for this extension would significantly enhance the paper's impact.
---
I am happy to change the score if the answers are convincing, and look forward to hearing back from the authors.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Many thanks to Reviewer gDT6 for providing a detailed review and recognizing our contributions.
> **Q1**: Quantitative evaluation of prediction diversity.
Thanks for your suggestion. We extend the ablation study of Table 3, where we provide probabilistic metrics CRPS to evaluate the diversity. Note that MSE-optimized models give deterministic predictions (a peak distribution), while generative forecasters can give non-deterministic results to estimate the distribution (we keep consistent with the original paper by using 20 raw predictions):
|Zero-Shot (CRPS)|ETTh1|ETTh2|ETTm1|ETTm2|ECL|Weather|GIFT-Eval|
|-|-|-|-|-|-|-|-|
|TimeFlow|**0.0059**|**0.0037**|**0.0057**|**0.0029**|0.0082|**0.0021**|**0.505**|
|Diffusion|0.0082|0.0053|0.0070|0.0039|0.0095|0.0032|0.534|
|MSE|0.0063|0.0040|0.0058|0.0032|**0.0080**|0.0023|0.642|
The predictive distribution modeled by TimeFlow is **more coherent and diverse** than its counterparts, notably on the diverse GIFT-Eval, validating TimeFlow's advantages in generative modeling.
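To make the CRPS numbers above concrete, a common sample-based estimator of CRPS for an ensemble forecast is sketched below; whether the authors use exactly this estimator is an assumption, and the sketch is shown only to clarify what the metric measures.

```python
import numpy as np

def crps_ensemble(samples, y):
    """Sample-based CRPS estimator for a scalar observation y:
    CRPS = E|X - y| - 0.5 * E|X - X'|, where X, X' are independent
    draws from the predictive distribution (e.g., 20 raw predictions).
    Lower is better; 0 for a perfect deterministic forecast."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2
```

The second term rewards sharpness-aware spread, which is why CRPS can distinguish a calibrated generative forecaster from an MSE model that emits a single peaked prediction.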
> **Q2**: Computational resources requirements.
Pre-training: We use 32 A100-40G GPUs. Each GPU handles a batch size of 128 and a context length of 2880. Benefiting from the patch tokenization, our model finishes pre-training (100k iterations) on TimeBench in around 24 hours, using 768 GPU hours in total.
Inference (Context length=2880; Sampling steps=50; Number of samples=20; a single GPU with memory > 2GB): It takes 7ms to generate one prediction (140ms for 20 raw samples per forward pass). This cost is approximately proportional to the number of sampling steps and samples.
> **Q3**: Context length sensitivity.
We provide performance under different context lengths.
|Zero-Shot (MSE, Averaged)|480|960|1440|1920|2400|2880|
|-|-|-|-|-|-|-|
|ETTh1|0.404|0.412|0.427|0.422|0.417|0.411|
|ETTh2|0.343|0.346|0.337|0.336|0.340|0.333|
|ETTm1|0.385|0.375|0.361|0.349|0.339|0.336|
|ETTm2|0.288|0.287|0.264|0.252|0.255|0.258|
|ECL |0.171|0.172|0.171|0.169|0.169|0.169|
|Weather|0.248|0.239|0.237|0.235|0.236|0.234|
Note that Sundial handles various context lengths on FEV and GIFT-Eval. Unlike fixed-context models, Sundial can be flexible for practitioners, where the context length can be dynamically adjusted during inference instead of re-training.
> **Q4**: Discussion about the conservative prediction behavior.
We delved into this behavior after the initial submission, focusing on the pre-training distribution: meteorological data (ERA5), which takes a large proportion of the corpus, is less likely to contain extreme trends. The conservative behavior is mitigated by reweighting training samples in TimeBench.
|Zero-Shot (FEV)|MASE|WQL|
|-|-|-|
|Original|0.845|0.712|
|Reweighting|**0.832**|**0.685**|
> **Q5**: Controlled comparisons of baseline models.
We report official results from public leaderboards and papers. We regard the context length as an inherent capability of a TSFM, one that it can continuously improve.
Despite different context lengths in baseline models, our experiments still validate that the performance improvement is not simply attributable to length variations: on TSLib, Sundial (context length=2880) surpasses the previous state-of-the-art Time-MoE (context length=3072); on FEV and GIFT-Eval, more than half of the time series have context lengths shorter than 512, below the maximum context length of most TSFMs. We will also extend the baseline evaluation as in **Q3** in the revised version.
> **Q6**: Extension for multivariate forecasting.
Univariate pre-training addresses different variate numbers of datasets during pre-training. A multivariate extension is crucial for the future improvement of Sundial, including:
* Architecture: Recent works proposed new **attention mechanisms** (e.g., Moirai, Timer-XL) for intra-/inter-variate modeling, which can be seamlessly incorporated into Sundial. Multivariate extension on the **flow-matching network** (e.g., from MLP to iTransformer) is applicable to make the post-merging on univariate representations.
* Post-training: Another roadmap is univariate pre-training and multivariate fine-tuning. Similar to GPT-3, a univariate pre-trained TSFM is a **start-point model**. We will explore multivariate prompting (e.g., special variate token) to instruct TSFMs on downstream tasks.
> **W1**: Experimental supplement to mentioned issues:
* Quantify the relationship between parameters and inference time (Extension of Figure 6):
|FEV Evaluation|Small|Base|Large|
|-|-|-|-|
|Parameters|32 M|128 M|444 M|
|Inference Time|2.36ms|3.13ms|5.31ms|
* Statistical significance testing (five runs):
|Zero-Shot (MSE)|ETTh1|ETTh2|ETTm1|ETTm2|ECL|Weather|
|-|-|-|-|-|-|-|
|Mean$\pm$Std.|0.411 $\pm$ 0.001|0.333 $\pm$ 0.000 |0.336 $\pm$ 0.000|0.258 $\pm$ 0.001|0.169 $\pm$ 0.000|0.234 $\pm$ 0.001|
Different from the high variance typically observed in forecasting performance, generative forecasting aggregates multiple predictions for calibration, achieving a small variance.
---
Rebuttal Comment 1.1:
Comment: Good job for providing a detailed response. I really appreciate the effort you put into it. I am happy with your answer to my first question; it was clear, well-explained, and totally convincing. For the second question, I appreciate all the details you shared about the training setup - things like GPU specs, batch size, context length, and total training duration. That was helpful. That said, I was actually hoping to get a bit more info on computational costs, like FLOPs, energy consumption, or even the monetary cost. These details are super important when thinking about the scalability of large models. But just to be clear, that does not mean your response was not convincing. For the rest of the questions, you covered everything I asked, and I found them convincing as well. I am thankful that you even extended your results - it all makes a lot of sense to me. Thanks again for your hard work. I am happy to bump up your score to a 4, and I truly hope your work gets accepted. Great job to the team.
---
Reply to Comment 1.1.1:
Comment: We’re glad that our responses addressed your concerns, which helped a lot us to improve the quality of this work. We will incorporate the corresponding revisions, including more detailed computational costs, into the final manuscript. Thanks again for your response and raising the score. | Summary: This paper introduces Sundial, a family of native, flexible, and scalable time series foundation models. It proposes a TimeFlow Loss based on flow-matching for model training, which can generate multiple probable predictions. It also proposes some crucial adaptations of Transformers and the TimeBench with 1 trillion time points to enhance the time series foundation model. Sundial shows good generalization performance on zero-shot forecasting and achieves new state-of-the-art on both point forecasting and probabilistic forecasting benchmarks.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the problem.
Theoretical Claims: There is no proof or theoretical claims.
Experimental Designs Or Analyses: Experimental designs and analyses are sound and valid.
Supplementary Material: I reviewed all the appendix.
Relation To Broader Scientific Literature: This paper contributes to the literature of time series foundation models. Existing models mainly use parametric densities or language modeling, while this paper introduces a new perspective of generative modeling. It also proposes a large TimeBench for model training and a family of foundation models with SOTA performance on point and probabilistic forecasting.
Essential References Not Discussed: There are no essential references not discussed.
Other Strengths And Weaknesses: Strengths:
(1) This paper is well presented. Most motivations, designs, and contributions are clearly described. The proposed method is easy to follow.
(2) The idea of introducing generative modeling into time series foundation models is interesting. The proposed TimeFlow loss allows Transformers to be trained without discrete tokenization and make probable predictions.
(3) The proposed Sundial is a family of scalable and efficient time series foundation models, which achieves state-of-the-art zero-shot performance on point forecasting benchmarks and probabilistic forecasting benchmarks, including GIFT-Eval and FEV.
Weaknesses:
(1) Figure 1 needs more detailed explanations, such as the explanations on these two tokenization ways and three modeling techniques (their meanings, advantages and disadvantages) and the differences between them.
(2) The proposed model makes some critical adaptations to the Transformer architecture, such as RoPE, Pre-LN, FlashAttention, and KV Cache. More explanation of why these adaptations help the model is needed. It would also be better to conduct some experimental analysis on the effects of these adaptations.
(3) This paper mentions mode collapse in foundation models. It needs more descriptions of this term and more explanations on how the proposed model overcomes mode collapse.
Other Comments Or Suggestions: (1) Are there any principles in collecting datasets in TimeBench? For example, how can we decide which dataset should or should not be included during collection? How do these different datasets help the pre-training?
(2) With a larger scale of pre-training data, why does Sundial use a smaller model size than Time-MoE and Chronos?
Questions For Authors: Please see the ’Other Strengths And Weaknesses’ and ‘Other Comments Or Suggestions’ parts. Addressing the weaknesses or suggestions above may help improve the paper.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Many thanks to Reviewer VkQC for providing thorough insightful comments.
> **Q1**: Explanations about different tokenization ways and modeling techniques.
Thanks for your suggestion, we provide a comparison to enhance the clarity:
| Tokenization | Meanings| Advantages | Disadvantages |
|-|-|-|-|
| Discrete | quantize time series into a fixed vocabulary| compatible with language modeling| foreign (discrete precision), compute-intensive, OOV risk |
| Continuous | embed time series into latent representations | native (operate on original values), efficient (patch token) | unconstrained output range|
| Modeling | Meanings| Advantages | Disadvantages |
|-|-|-|-|
| Parametric Densities | specify data with prior distributions | competent with suitable prior, fit well on small-scale data | inflexible, risk of mode collapse |
| Language Modeling | predict categorical distributions of word sequences | flexible, scalable | rely on discrete tokens |
| Generative Modeling | learn the underlying distribution that generates data | flexible, scalable, compatible with continuous tokens| require sampling|
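The "discrete precision" and information-loss disadvantage of discrete tokenization in the table above can be illustrated with a toy NumPy sketch of bin-based quantization and reconstruction; the vocabulary size and value range here are illustrative choices, not those of Chronos or Sundial.

```python
import numpy as np

def quantize(series, n_bins=64, lo=-3.0, hi=3.0):
    """Discrete tokenization: map real values to a fixed vocabulary of bin ids."""
    edges = np.linspace(lo, hi, n_bins + 1)
    tokens = np.clip(np.digitize(series, edges) - 1, 0, n_bins - 1)
    return tokens, edges

def detokenize(tokens, edges):
    """Reconstruct values as bin centers; sub-bin precision is lost."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[tokens]
```

Round-tripping a (normalized) series through the vocabulary incurs an irreducible error of up to half a bin width, which a native continuous-token model avoids entirely.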
> **Q2**: Effectiveness of architectural adaptations.
* RoPE maintains the temporal causality, leading to better performance.
| TSLib (Zero-Shot, Averaged) | w/o RoPE | with RoPE|
|-|-|-|
| MSE \| MAE| 0.302 \| 0.360 | **0.290** \| **0.342** |
* Pre-LN improves training stability, leading to stable convergence and better performance.
| TSLib (Zero-Shot, Averaged) | Post-LN (15k iter) | Post-LN (30k iter) | Pre-LN (15k iter) | Pre-LN (30k iter)|
|-|-|-|-|-|
| MSE \| MAE| 0.295 \| 0.348 | 0.297 \| 0.350 | 0.294 \| 0.347| **0.290** \| **0.342** |
* FlashAttention reduces computational costs. KV Cache improves the inference speed of multi-step autoregression.
| Context Length = 2880, Patch Size=16, Batch Size=96 | w/o FlashAttention | with FlashAttention |
|-|-|-|
| Training Speed (s/iter) | 1.2723 | **1.2245**|
| Memory Footprint (GB) | 35.41| **30.18** |
| Prediction Length = 160, Autoregression Steps = 10 | w/o KV Cache | with KV Cache |
|-|-|-|
| Inference Speed (s/iter) | 1.08 | **0.62**|
> **Q3**: Discussion of mode collapse in time series foundation models.
Mode collapse is a failure of representation learning, where a model generates a limited variety of outputs, ignoring the diversity in the training data. For time series foundation models, mode collapse stems from the heterogeneity of time series distribution and sometimes leads to **oversmooth predictions** (See showcases in Figure 13-14).
Our work addresses it by training objectives. We adopt generative modeling to learn flexible distribution **without probabilistic priors**. As an extension of Table 3, we evaluate the distributional metric CRPS to assess the quality of generated predictions using different training objectives.
| Zero-Shot Benchmark (CRPS) | ETTh1| ETTh2| ETTm1| ETTm2| ECL| Weather| GIFT-Eval |
|-|-|-|-|-|-|-|-|
| TimeFlow | **0.0059** | **0.0037** | **0.0057** | **0.0029** | 0.0082 | **0.0021** | **0.505** |
| Diffusion| 0.0082 | 0.0053 | 0.0070 | 0.0039 | 0.0095 | 0.0032 | 0.534 |
| MSE| 0.0063 | 0.0040 | 0.0058 | 0.0032 | **0.0080** | 0.0023 | 0.642 |
Results show that the predictive distribution modeled by TimeFlow is **more coherent and diverse** than counterpart training objectives, especially on the highly diverse GIFT-Eval, which validates TimeFlow's effectiveness in addressing mode collapse.
> **Q4**: Principles in collecting datasets in TimeBench.
The curation of trillion-scale TimeBench includes:
| Preprocessing| Quality Control| Composition |
|-|-|-|
| impute missing values via mean values or ARIMA | measure statistics like ADF-test, predictability, exclude datasets that deviate from normal ranges | collect datasets from typical domains that consist of predictable and applicable data, such as weather, traffic, marketing, energy, etc |
| replace abnormal values (e.g., 3-sigma)| exclude less predictable series based on the performance of simple ML forecasters | adopt synthetic technique (e.g., KernelSynth), which improves the capability of seasonal/trend prediction |
| conduct normalization to mitigate range discrepancies | use statistics to determine the sampling weight during pre-training | use systematic datasets with well-defined temporal dynamics, such as meteorological ERA5, which enhance the understanding of local variations |
> **Q5**: About the smaller model size compared to Time-MoE and Chronos.
The difference mainly comes from the **architectural choice**: Time-MoE adopts MoE, which increases parameter counts greatly. Compared with the encoder-decoder Chronos, Sundial adopts a decoder-only Transformer. To effectively scale the parameter counts, it is crucial to enhance the domain diversity and dataset completeness in the pre-trained corpora, which leaves an important future work. | null | null | null | null | null | null | null | null |
Offline Opponent Modeling with Truncated Q-driven Instant Policy Refinement | Accept (poster) | Summary: This paper proposes to learn a horizon-truncated in-context action-value function and a policy refinement mechanism to tackle the offline opponent modeling problem, especially when the training dataset is suboptimal. The paper theoretically analyzes the rationale of Truncated Q from the perspective of No Maximization Bias probability, and experimental results demonstrate the effectiveness of the algorithm.
Claims And Evidence: Yes, the claims made in the submission are supported by both theory and experiment results.
Methods And Evaluation Criteria: The evaluation criteria make sense, but the proposed algorithm needs more explanation. Please refer to the Questions part.
Theoretical Claims: I have reviewed the theoretical section of the paper, but I have some confusions. For example:
1. What's the definition of $\breve{Q}_h$ in Equation 2.
2. The explanations of the notations in Theorem 3.1 should be included in the main paper
Experimental Designs Or Analyses: Experimental designs are good.
Supplementary Material: No, I have not review the supplementary material.
Relation To Broader Scientific Literature: The key contributions of this paper build upon and extend several important lines of research in offline opponent modeling (OOM), offline reinforcement learning (RL), and in-context learning.
Essential References Not Discussed: I am not very familiar with this field, so I am not sure.
Other Strengths And Weaknesses: Strength:
1. The paper introduces truncated Q-value estimation, which is an interesting approach.
2. This paper provides experimental validation and theoretical analysis, which can serve as an inspiration for future research.
3. The paper is well-structured.
Weakness:
1. Each symbol used in the paper should be clearly defined. The main paper should be self-contained.
2. The pseudocode of the algorithm is necessary in the main part.
3. More intuitive explanation is needed as to why truncating Q-values is better.
Other Comments Or Suggestions: No
Questions For Authors: 1. Can you give a detailed explanation about why the error in the optimization objective of $\bar{Q}$ accumulates primarily through the accumulation of rewards over time $t'$ in $G_t^1$ in Equation 1.
2. How to find the optimal $h^*$? Because experimental results show that truncating Q-values performs worse in certain values of h.
3. Intuitively, h should not be too large or too small, and it should have robust performance within a certain range. However, this paper indicates that TIPR's improvement on OOM algorithms does not exhibit a strict correlation trend with the truncated horizons. This is somewhat counterintuitive. What role does the randomness of hyperparameter tuning experiments play in the experimental results?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. In response, we offer the following clarifications and hope our explanations address your concerns and strengthen our work.
---
> **(A) Theoretical Claims:** 1. What's the definition of $\breve{Q} _\mathsf{h}$ in Equation 2.
>
$\mathsf{h}$ denotes the random variable of the truncated horizon, and $\mathsf{G} _\mathsf{h}(t) := \sum _{t'=t}^{t+\mathsf{h}-1} \gamma^{t'-t} \mathsf{r} _{t'}$ denotes the random variable of the horizon-truncated *Return-To-Go* (RTG). In Eq. (2), $\breve{Q} _\mathsf{h}$ denotes the random variable corresponding to the Truncated Q (i.e., the neural network we aim to learn), and its learning objective is to approximate $\mathbb{E}\mathsf{G} _\mathsf{h}$.
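The horizon-truncated Return-To-Go defined above can be sketched directly in NumPy; the function name and the clipping of the summation window at the episode end are illustrative assumptions.

```python
import numpy as np

def truncated_rtg(rewards, t, h, gamma=0.99):
    """Horizon-truncated Return-To-Go:
    G_h(t) = sum_{t'=t}^{t+h-1} gamma^(t'-t) * r_{t'}.
    The window is clipped if fewer than h rewards remain."""
    window = np.asarray(rewards[t:t + h], dtype=float)
    discounts = gamma ** np.arange(len(window))
    return float(np.sum(discounts * window))
```

The Truncated Q network $\breve{Q}_\mathsf{h}$ is then trained to regress onto these h-step labels rather than the full-horizon return.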
> **(B) Theoretical Claims:** 2. The explanations of the notations in Theorem 3.1 should be included in the main paper
Weakness: 1. Each symbol used in the paper should be clearly defined …
>
We create a notation table in [this link](https://ibb.co/m5QGry8L) to provide detailed explanations of all the notations used in Thm. 3.1. These definitions will be incorporated into the main text in our revision.
> **(C)** Weakness: 2. The pseudocode of the algorithm is necessary in the main part.
>
We provide the pseudocode for the TIPR algorithm framework at [this link](https://ibb.co/9H8Hjwt8). We appreciate your suggestion and will incorporate it into the main text in our revision.
> **(D)** Weakness: 3. More intuitive explanation is needed as to why truncating Q-values is better.
>
For this question, please refer to our response to reviewer **wBej**'s comment "**(B) Methods And Evaluation Criteria:** … However, it is still unclear". You can use Ctrl + F to search for the sentence in quotation marks to jump there quickly.
> **(E) Questions For Authors:** 1. Can you give a detailed explanation about why the error in the optimization objective of $\bar{Q}$ accumulates …
>
In Eq. (1), the original Q-value $\bar{Q}$ regresses onto the RTG label $G_t^1$. As the horizon grows longer, estimation errors for individual rewards accumulate due to the cumulative summation of discounted rewards. This compounding of errors directly reduces the empirical accuracy of the learned $\bar{Q}$, especially when datasets are suboptimal.
From a theoretical view (Thm. 3.1), this accumulation manifests explicitly in the term $\Delta(\mathsf{h})$, whose complexity increases with the $\mathsf{h}$ (specifically $\Delta(\mathsf{h})=O(\mathsf{h}^{2})$). Consequently, as the $\mathsf{h}$ extends, the Empirical Risk NMB Probability (the probability of correctly fitting truncated returns) decreases significantly due to this error accumulation.
The longer horizon in $\bar{Q}$ reduces its reliability in identifying actions with the highest true returns, lowering the Overall NMB Probability. Truncating the horizon limits the error accumulation, improving action selection accuracy and policy refinement—especially in challenging suboptimal dataset settings.
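A toy numeric illustration of this compounding effect (this is our own simulation sketch, not the paper's analysis; the noise model and all parameters are assumptions): if each per-step reward estimate carries i.i.d. noise, the mean squared error of the summed return grows with the horizon $h$.

```python
import random

def return_error(h, sigma=0.1, trials=2000, seed=0):
    # Toy model: each of the h reward estimates has i.i.d. Gaussian noise;
    # return the Monte Carlo MSE of the resulting h-step return estimate.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        err = sum(rng.gauss(0.0, sigma) for _ in range(h))
        total += err * err
    return total / trials
```

Under this toy model the MSE scales roughly like $h\sigma^2$, so longer horizons yield noticeably less reliable return labels.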
> **(F) Questions For Authors:** 2. How to find the optimal $H^{\ast}$? …
>
Our primary focus in this work is to demonstrate, both theoretically and empirically, that **just truncating the horizon for Q-value estimation, without careful optimization of $H$, can already provide significant advantages over not truncating at all**. Theoretical insights from Prop. 3.2 explicitly show the existence of an optimal $H^{\ast}$, validating the conceptual soundness of truncation.
Identifying the precise $H^{\ast}$ is indeed an important direction for future research. Here are some possible methods:
1. **Cross-Validation on the Dataset**: Empirically selecting $H^{\ast}$ by performing cross-validation using subsets of the dataset to minimize prediction errors or maximize validation performance.
2. **Adaptive Methods**: This approach adaptively adjusts $H$ during training or testing based on real-time confidence or uncertainty estimates. For example, an adaptive strategy could start with a small horizon and increase it if confidence stays high, or reduce it when uncertainty rises—balancing accuracy and computation dynamically.
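As a sketch of the adaptive option, one possible (purely hypothetical) rule that grows the horizon while confidence stays high and shrinks it otherwise:

```python
def adapt_horizon(h, confidence, h_min=1, h_max=50, thresh=0.8):
    # Illustrative adaptive rule (an assumption, not the paper's method):
    # increase the truncated horizon when confidence is high, else decrease,
    # clamped to [h_min, h_max].
    return min(h + 1, h_max) if confidence >= thresh else max(h - 1, h_min)
```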
> **(G) Questions For Authors:** 3. Intuitively, h should not be too large or too small, and It should have robust performance within a certain range …
>
In fact, the results in Fig. 5 (**Question 5** in Section 4.2) clearly demonstrate that setting $H$ too large (e.g., $H = T$, which degenerates into the Original Q) or too small (e.g., $H = 1$) is suboptimal. A moderate value of $H$ tends to yield better performance.
We did not perform an exhaustive search for the optimal $H^{\ast}$ in each environment. In Fig. 3, we used a moderate, untuned $H$. Still, Truncated Q effectively improves suboptimal original OOM policies over the Original Q.
---
Your comments have greatly helped improve our manuscript. If you have further feedback, we'll address it promptly. If your concerns are resolved, we hope you'll reconsider your rating. | Summary: The paper proposes an offline opponent modeling approach that enhances the consistency of Q-functions by truncating the horizon length during Q-learning, incorporates in-context learning to mitigate distribution shift caused by sub-optimality in offline datasets, and employs test-time policy refinement to further improve policy performance. Experimental results across four tasks demonstrate the effectiveness of the proposed TIPR method and its key components.
Claims And Evidence: Yes. The claims are reasonable and seem technically sound.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes. I just roughly checked the proof in the appendix and had not found any issues.
Experimental Designs Or Analyses: Yes. I checked the designs and analyses. Most of the experimental designs are reasonable.
Supplementary Material: Yes. I checked all parts (related works, proofs, environment details, hyper parameters, etc.), but I only roughly checked Appendix B (proofs).
Relation To Broader Scientific Literature: This work provides an opponent modeling based approach for MARL.
Essential References Not Discussed: I have not found yet.
Other Strengths And Weaknesses: **Strengths:**
This work integrates truncated Q-learning, in-context learning, and policy refinement to enhance opponent modeling in multi-agent reinforcement learning. TIPR serves as a plug-and-play framework compatible with many offline opponent modeling (OOM) methods, making it a promising tool for broader applications. The experimental design and analysis validated the effectiveness of TIPR.
**Weaknesses:**
1. The authors use a MEP approach to generate 4 opponents across all four environments, which may only partially cover the opponent policy space. While this may be sufficient for the current task, it raises concerns about whether the results generalize to larger scenarios.
2. The in-context learning module uses a dataset of sequence length 15 across four experiments. Given that Transformer architectures typically require large-scale pre-training data, the scalability of TIPR in complex environments (e.g., Google Research Football (GRF)) remains unverified.
Other Comments Or Suggestions: 1. The experiments may conduct on some larger-scale environments.
2. The MSE comparisons between $\bar{Q}$ and $\bar{Q}_V$ may be exaggerated: since the original Q accumulates over a longer horizon, a normalization may be needed.
Questions For Authors: 1. How should the colored and shaded areas of Fig. 3 in MS and PP be read? The shaded areas show a performance drop, which may contradict the legend.
2. Why are the refine condition (RC) threshold and the inner cumulative confidence threshold (indicator condition) set as 0?
3. If I understand correctly, the context length to identify the opponent's newest policy is 15 which is very fast with Transformer (assumed the opponent is seen or unseen but fixed during 20 episodes). So I wonder how the differences between the trained opponents are.
4. The Appendix E.1 states that the size of MEP population is 4. How are 20 policies sampled from this population for Seen and Unseen setting?
5. Does TIPR introduce significant additional computational overhead (in terms of time and memory) during testing?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We'd like to offer the following clarifications and hope our responses address your concerns and strengthen our work.
---
> **(A) Weaknesses:** 1. The authors use a MEP approach to generate 4 opponents …
**Questions For Authors:** 4. … How are 20 policies sampled …
>
MEP uses Population-Based Training (PBT), which maintains a fixed population of 4 policies and evolves them through cross-play and natural selection. During training, we regularly saved checkpoints of all policies in the population, resulting in many policies. We then selected 20 diverse, representative ones (from weak to strong) to form the opponent policy pool.
> **(B) Weaknesses:** 2 … the scalability of TIPR in complex environments (e.g., Google Research Football (GRF)) remains unverified.
>
Regarding scalability, we argue that **TIPR framework is fundamentally built upon highly scalable in-context learning methods and the Transformer architecture**, which perfectly aligns with the principles outlined in Richard S. Sutton’s "*The Bitter Lesson*" [1], emphasizing the inherent scalability of learning-based approaches.
Regarding the choice of environments, although they are not highly complex, existing OOM algorithms still exhibit significant performance drops when the datasets are suboptimal. Moreover, these environments are widely recognized as benchmarks in the OM community and have been extensively used in prior works such as DRON, LIAM, TAO, OMIS [2], etc. While environments like GRF and SMACv2 are indeed more complex, they are typically used in MARL, focusing on agent communication and coordination, not the OM setting.
To further validate TIPR, we also ran preliminary experiments on the more challenging **OverCooked** (OC) environment [2,3], which has high-dimensional image inputs and requires intensive cooperation to complete a sequence of subtasks for serving dishes. The results can be found at [this link](https://ibb.co/XZX5X1m9). Due to time limits, we currently report results only on the representative OOM algorithm TAO, and we will include the full results in the revision. TIPR consistently improves the original OOM policy in OC, effectively addressing the suboptimality of datasets.
[1] http://www.incompleteideas.net/IncIdeas/BitterLesson.html?ref=blog.heim.xyz, Rich Sutton.
[2] Opponent modeling with in-context search, NeurIPS2024.
[3] On the Utility of Learning about Humans for Human-AI Coordination, NeurIPS2019.
> **(C) Other Comments Or Suggestions:** 2. The MSE comparisons between $\bar{Q}$ and $\bar{Q}_V$ may be exacerbated …
>
Our main concern is the absolute Q-value error during Policy Refinement (PR) in unknown test environments. PR relies on the ranking of Q-values across all legal actions, since actions are chosen based on these rankings. Normalization doesn’t affect this ranking—it only scales values uniformly. Therefore, the reported MSE directly reflects confidence in action rankings for each method. Normalization wouldn't change qualitative conclusion that truncated Q-values improve ranking accuracy and thus enhance PR effectiveness.
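The ranking-invariance argument can be checked in a few lines (an illustrative sketch with made-up Q-values; min-max normalization stands in for any monotone rescaling, assuming the values are not all equal):

```python
def rank_actions(q_values):
    # Indices of actions sorted from highest to lowest Q-value.
    return sorted(range(len(q_values)), key=lambda i: -q_values[i])

def min_max_normalize(q_values):
    # Affine rescaling to [0, 1]; monotone, so rankings are preserved.
    lo, hi = min(q_values), max(q_values)
    return [(v - lo) / (hi - lo) for v in q_values]
```

For example, `rank_actions([0.2, 1.5, -0.3])` equals `rank_actions(min_max_normalize([0.2, 1.5, -0.3]))`: normalization changes the scale of the values but never which action is selected.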
> **(D) Questions For Authors:** 1. How to read the colored and shaded areas of Fig.3 in MS and PP? …
>
We use ***Black Error Bars*** (BEB) for the original OOM policy's Standard Deviation (SD) and *Grey Error Bars* (GEB) for the SD after applying TIPR. In both MS and PP, GEB consistently appears above BEB, with the shaded area showing performance gains. This shows TIPR consistently improves OOM performance. We'll refine Fig. 3 in the revision to address any clarity issues.
> **(E) Questions For Authors:** 2. Why are the refine condition (RC) threshold and … set as 0?
>
This design is intuitive: (1) **Indicator Condition**: If a legal action is predicted to yield non-zero reward within $H$ steps, its value estimate is considered high-confidence. (2) **RC**: If any action meets this, we deem it promising to use Truncated Q to refine the policy at that timestep. This helps in sparse-reward environments and doesn't negatively impact performance in dense-reward ones.
> **(F) Questions For Authors:** 3 … So I wonder how the differences between the trained opponents are.
>
Regarding the diversity of the opponent policies, please refer to our response to reviewer **Chog**'s comment "**(D) Experimental Designs Or Analyses:** … However, if the offline dataset". You can use Ctrl + F to search for the sentence in quotation marks to jump there quickly.
> **(G) Questions For Authors:** 5. Does TIPR introduce significant additional computational overhead …?
>
For this question, please refer to our response to reviewer **Chog**'s comment "**(G) Other Strengths And Weaknesses:** 2. The computational efficiency".
---
Your comments have greatly improved our paper. If you have further feedback, we'll address it promptly. If your concerns are resolved, we hope you'll reconsider your rating.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. | Summary: This work aims to model opponent in multi-agent learning scenario from offline data. They argue that the offline data may be suboptimal and lead to suboptimal policies. To address this, they propose to learn a horizon-truncated in-context Q-function and whether to perform policy refinement. Truncated Q-driven Instant Policy Refinement (TIPR) maximize the No Maximization Bias (NMB) probability.
Claims And Evidence: They present somewhat theoretical and experimental results to validate their claim. They provide results by isolating the performance improvement of truncated Q and iterative policy refinement across different testbeds. The experimentation includes different rational ablations.
Methods And Evaluation Criteria: The proposed method appears to perform better in terms of experimental evaluation. However, it is still unclear to me why and how truncated Q-values would address the sub-optimality of the dataset. I understand that truncated Q may reduce the complexity that arises as the number of opponents increases.
Theoretical Claims: The proofs look reasonable. However, as mentioned in the previous point, I am concerned about how truncated Q is a sufficient solution.
Experimental Designs Or Analyses: The experiments are well-designed and throughly conducted.
Supplementary Material: Supplementary material provides additional information regarding related works, proof of the theorems, hyperparameters, neural network architectures, and test environments.
Relation To Broader Scientific Literature: This paper nicely fits in between offline MARL, in-context learning, and transformer-based decision making models. However, it discusses a very specific problem opponent modeling via offline pre-training particularly at the presence of suboptimal data.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The results show that the value of the fixed horizon doesn't impact the performance. It is not clear why this is the case. Also, it would be nice to see whether there is any difference between using a low value of $h$ vs. values close to $T$.
Other Comments Or Suggestions: The paper should define the concept "policy refinement" in a more concrete way as it is central to the paper.
Questions For Authors: Should we consider the method as few-shot adaptation method? It seems during test time it observes few samples first and then decide whether to perform policy refinement.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your valuable feedback. In response to your comments, we would like to offer the following clarifications. We hope our explanations and analyses address your concerns and strengthen our work.
---
> **(A) Other Comments Or Suggestions:** The paper should define the concept "policy refinement" in a more concrete way …
>
In *Offline Opponent Modeling* (OOM), existing algorithms typically assume that the offline dataset is optimal. We refer to the policy trained using these existing algorithms directly on the dataset as the **original policy**. When the dataset is suboptimal, we find that the performance of the original policy drops significantly. Our work aims to design a plug-and-play algorithmic framework to improve the performance of the original policy obtained through OOM algorithms—this is the definition of "**Policy Refinement**" (PR). We will clarify this in our revision.
Notably, our work introduces the novel idea of performing PR instantly at test time, referred to as *Instant Policy Refinement* (IPR), rather than refining the original policy during the offline stage, as done in traditional *Offline Conservative Learning* (OCL) methods. As pointed out by Reviewer **Chog**, this helps mitigate distribution shift, and the advantage of IPR over OCL is also demonstrated in Fig. 4 (**Question 2** in Section 4.2).
> **(B) Methods And Evaluation Criteria:** … However, it is still unclear to me why and how truncated Q value would address the sub-optimality of the dataset …
>
Inspired by OCL methods, we aim to learn a Q-function to refine the original policy obtained through OOM algorithms. However, learning a workable Q-function in multi-agent environments is highly challenging, as the error in return prediction accumulates significantly when learning an Original Q (see Fig. 1). As you noted, this issue becomes more severe with a larger number of opponents—a claim that is also supported by our Theorem 3.1. In contrast, **our Truncated Q can effectively reduce this accumulation, making it possible to distinguish between optimal and suboptimal actions**.
Our in-depth theoretical analysis (Theorem 3.1 and Proposition 3.2) demonstrates that, under the OOM problem setting, **an optimal truncated horizon exists that maximizes the probability of correctly identifying the best actions—formally, the No Maximization Bias probability**. This theoretical result supports the fact that using Truncated Q for PR is more sound and effective than using the Original Q.
In addition, our proposed IPR method can automatically decide whether to perform PR based on the confidence of action-value estimates from the Truncated Q. It effectively addresses the distribution shift problem commonly found in OCL methods: when uncertain, it falls back on the original policy (conservative), and when confident, it refines the policy (greedy). This trade-off mechanism is validated in Fig. 4 (**Question 4** in Section 4.2) to outperform both always being greedy and always being conservative.
> **(C) Other Strengths And Weaknesses:** The results show that the value of fixed horizon doesn't impact the performance …
>
In Fig. 5 (**Question 5** in Section 4.2), we analyze how the choice of different truncated horizons $H$ for the Truncated Q affects the improvement results. In fact, the results in Fig. 5 show that different values of $H$ lead to varying levels of improvement, and this effect is environment-dependent.
When $H$ is set to a small value, the improvement from PR is limited. When $H$ equals $T$ (i.e., degenerating to the Original Q), PR generally degrades the performance of the original policy. However, when $H$ takes on a moderate value, PR is able to effectively improve the original policy. This observation supports our Proposition 3.2: there exists an optimal $H^{\ast}$, and in general, $H^{\ast}$ is not equal to $T$.
> **(D) Questions For Authors:** Should we consider the method as few-shot adaptation method? …
>
You've provided a very interesting insight—indeed, our method can be viewed as a form of few-shot adaptation. This is because our Truncated Q is built upon in-context learning and a Transformer-based architecture, which enables it to collect a small set of contextual samples at test time. Through the IPR method, we can adaptively decide whether to perform PR using the Truncated Q—based on the collected samples—without relying on gradient descent. This allows us to generate a refined policy in an adaptive and efficient manner.
---
All your questions and feedback have greatly contributed to improving our manuscript. With the valuable input from you and all other reviewers, the quality of our work can be significantly enhanced. We welcome further comments from you and will seriously consider your suggestions for revisions. If you feel that we have addressed your concerns, we hope you will reconsider your rating. | Summary: This paper introduces a framework called Truncated Q-driven Instant Policy Refinement (TIPR) to improve Offline Opponent Modeling (OOM) algorithms trained on suboptimal datasets where the self-agent may not always select best-response (BR) actions to its opponents. Unlike prior OOM methods that assume optimal trajectories, TIPR addresses the reality that self-agent policies in offline datasets can be arbitrarily poor. The framework introduces Truncated Q, a horizon-truncated in-context action-value function that estimates returns over a fixed truncated horizon rather than the full episode, reducing estimation complexity and improving reliability. At test time, Instant Policy Refinement (IPR) dynamically refines the self-agent policy by using Truncated Q’s confidence estimates to decide whether policy updates are necessary. Theoretical analysis suggests that Truncated Q optimizes the No Maximization Bias (NMB) probability and there exist optimal truncated horizon lengths which can stabilize training along with ensuring effectiveness of Q-guided action selection. Empirical results across four competitive multi-agent environments show that TIPR consistently improves various OOM algorithms, even with highly suboptimal datasets. TIPR also outperforms conventional offline RL methods like Offline Conservative Learning (OCL) which struggle with distributional shifts at test time.
Claims And Evidence: Claim: Suboptimal datasets degrade OOM performance
- The paper presents experimental results across four competitive environments showing that existing OOM algorithms perform poorly when trained on suboptimal datasets. This supports the claim that suboptimality is a significant issue.
Claim: Truncated Q is more reliable than Original Q
- Theoretical analysis explains why truncating the Q function reduces estimation complexity and improves learning stability.
- Experimental results (Figure 1, Table 1) show that Truncated Q has significantly lower Mean Squared Error (MSE) than Original Q, supporting the claim that it provides more reliable value estimates.
Claim: TIPR consistently improves OOM algorithms across varying dataset optimality levels
- Performance improvements in Figure 3 demonstrate that TIPR enhances multiple OOM algorithms, even with heavily suboptimal datasets.
- The paper includes a thorough ablation study, showing that removing key components (confidence estimation, in-context opponent modeling) reduces performance.
Claim: IPR is more effective than OCL for OOM improvement
- Empirical results (Figure 4) show that IPR outperforms OCL across all environments.
- The explanation that OCL struggles due to distributional shifts is reasonable and aligns with existing challenges in offline RL.
Claim: Theoretical justification of Truncated Q maximizing the No Maximization Bias (NMB) probability
- The analysis in Section 3.3 argues that Truncated Q is superior due to better balancing between empirical risk and natural NMB probability.
- However, the derivation relies on assumptions about the structure of Q-learning errors and the properties of the opponent policies, which may not hold universally.
- Empirical validation supports the effectiveness of Truncated Q, but additional experiments explicitly comparing different horizon truncation strategies could strengthen this claim.
Claim: TIPR maintains stable improvements across all degrees of suboptimality
- While the results show consistent performance gains, some environments (PP) exhibit high variance in TIPR's impact (Figure 3). More analysis on why TIPR works better in some settings than others would improve clarity.
Claim: Optimal truncated horizon $h^*$ exists for every environment
- Proposition 3.2 suggests that there is an optimal $h^*$, but finding it is non-trivial. This paper does not provide a method to determine $h$ optimally and instead treats it as a hyperparameter. The claim would be stronger if supported by an adaptive mechanism for choosing $h$ rather than manually testing different values (as in Figure 5).
Overall, the core claims (that TIPR improves OOM algorithms, that Truncated Q is better than Original Q, and that IPR is more effective than OCL) are well-supported by both theory and experiments. However, some theoretical claims (like No Maximization Bias probability, optimal truncation horizon) rely on assumptions that may not always hold and would benefit from additional empirical validation.
Methods And Evaluation Criteria: Yes
Theoretical Claims: I have briefly looked at the proof in the appendix. Reiterating the comment made in the "Claims and Evidence" section:
The proof relies on a structured relationship between truncation horizon and error growth, which may not generalize across all settings. Specifically, the empirical risk bound in Eq. (4) assumes a specific distribution of Q-learning errors, which depends on well-behaved rewards and opponent policies. If the environment dynamics cause non-monotonic error growth (for example, due to long-term dependencies in strategic games), the proposed bounds might not accurately reflect reality. Further, the assumption that opponent modeling via in-context data is always reliable could be a strong assumption, especially since some environments may have non-stationary opponents whose strategies evolve in unseen ways.
Experimental Designs Or Analyses: Yes, I checked the entire experiments section in the paper.
This work assumes that the opponent policies seen during training provide enough diversity such that Truncated Q generalizes well to new opponents. However, if the offline dataset lacks sufficient opponent diversity, then the confidence and value estimates of Truncated Q might be biased. This issue is hinted at in Table 1, where accuracy for the confidence estimates $Q_C$ drops in environments with multiple opponents (PP for example). The assumption that an in-context model can fully capture opponent dynamics may not hold if the training distribution is too narrow.
Supplementary Material: I have briefly skimmed over the supplementary material.
Relation To Broader Scientific Literature: Opponent modeling has traditionally relied on online learning, with early works like Deep Reinforcement Opponent Network (DRON) (He et al., 2016) and Latent Interaction-based Opponent Modeling (LIAM) (Papoudakis et al., 2021) focusing on learning opponent representations. More recent efforts, such as TAO (Jing et al., 2024), leveraged transformers for in-context learning to adapt to unseen opponents. However, previous Offline Opponent Modeling (OOM) approaches assumed that datasets contained optimal trajectories, a limitation that TIPR overcomes by enabling learning from suboptimal offline datasets. The idea of refining policies based on action-value functions is inspired by conservative offline RL methods, such as Conservative Q-learning (CQL) (Kumar et al., 2020) and Implicit Q-learning (IQL) (Kostrikov et al., 2021), but TIPR differs by performing refinement dynamically at test time rather than during offline training. This paper also contributes to the growing field of transformer-based RL, drawing inspiration from Decision Transformer (Chen et al., 2021) and Prompt-DT (Xu et al., 2022), but instead of using ICL for direct policy learning, TIPR applies it to improve action-value estimation through Truncated Q. Additionally, it addresses challenges in multi-agent RL related to opponent non-stationarity by introducing a more flexible confidence-based policy refinement mechanism, allowing real-time adaptation to shifting opponents. However, unlike adaptive truncation methods used in Q-learning (Poiani et al., 2023), TIPR does not yet provide an automated mechanism to determine the optimal truncation horizon $H$.
Essential References Not Discussed: Related to this paper's discussion on adapting the self-agent to unseen opponents, the problem formulation in [1] focuses on a similar setting from the perspective of principal-agent mechanism design using a meta-RL approach to few-shot test-time adaptation.
[1] Banerjee, A., Phade, S., Ermon, S. and Zheng, S., 2023. MERMAIDE: Learning to Align Learners using Model-Based Meta-Learning. arXiv preprint arXiv:2304.04668.
Other Strengths And Weaknesses: Additional comments on some limitations in this paper:
1. While the proposed approach is evaluated on smaller simulated environments for multi-agent RL, a discussion on real-world deployment challenges (for example, computational cost of real-time policy refinement) is missing. Testing TIPR on more complex strategic settings (like SMACv2) would better demonstrate its practical impact.
2. The computational efficiency of TIPR is not discussed. Since IPR refines policies dynamically, it may be slower than standard OOM baselines.
Other Comments Or Suggestions: The last paragraph on page 2 introduces the variable $M$, but has it been defined somewhere in the paper?
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your feedback. We'd like to clarify a few points in response, hoping our explanations address your concerns and strengthen our work.
---
> **(A) Theoretical Claims: …** the empirical risk bound in eq-4 assumes a specific distribution of Q-learning errors, which depends on well-behaved rewards and opponent policies. If the environment dynamics cause non-monotonic error growth …
>
Our theory (Thm. 3.1 and Prop. 3.2) fundamentally proves that there exists a trade-off between the truncated horizon $h$ and the No Maximization Bias probability. Specifically, the theorem reveals a structured relationship showing that extending the $h$ increases the complexity of accurately estimating returns, reducing the accuracy of Q-values in selecting optimal actions. Thus, using Truncated Q is more sound and effective than using the Original Q for policy refinement.
Importantly, we clarify that **our theory does not explicitly assume or depend on any particular structure or specific distributions of Q-learning errors, nor does it require particular properties or assumptions regarding opponent policies**. The derived bounds in Eq. (4) are general and are based purely on standard statistical inequalities and complexity terms that arise naturally from the estimation problem itself.
We acknowledge that some environment dynamics may cause non-monotonic error growth. However, our theory doesn't specifically assume monotonicity or well-behaved rewards or opponents. Future work could explore more detailed analyses of environments with complex long-term strategic dependencies.
> **(B) Theoretical Claims:** Further, the assumption that opponent modeling via in-context data is always reliable could be a strong assumption …
>
We clarify that assuming reliable Opponent Modeling (OM) via *In-Context Data* (ICD) is not overly strong. TAO (Jing et al., 2024a) has theoretically and empirically proved that using ICD for OM is both feasible and effective, even with non-stationary opponents. Building on this evidence, our Truncated Q uses ICD to enhance OM reliability and performance.
> **(C) Claims And Evidence:** The claim would be stronger if supported by an adaptive mechanism for choosing $h$ rather than manually testing different values (as in Figure 5).
>
For possible automated mechanisms of choosing $h$, please refer to our response to reviewer **RP82**'s comment "**(F) Questions For Authors:** 2. How to find". You can use Ctrl + F to search for the sentence in quotation marks to jump there quickly.
> **(D) Experimental Designs Or Analyses:** … However, if the offline dataset lacks sufficient opponent diversity …
>
We use MEP algorithm to generate opponent policies. From the MEP population, 12 policies were selected as training opponents (Seen Set), and 8 policies were selected as test opponents in the Unseen Set. **A quantitative analysis of the diversity among the 12 training opponent policies** is provided at [this link](https://ibb.co/XZnpC3cm).
To measure their distinctions, we use the **Pair-wise Expected KL Divergence** (PEKLD). If we take 1.0 as a threshold, over 80% of the PEKLD values in all environments exceed this threshold. This indicates that the training opponent policies are generally well-distinguished from one another.
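As an illustration of such a pairwise divergence measure, here is a simplified stand-in for PEKLD over categorical action distributions (our own sketch; the rebuttal's exact expectation over states is not reproduced):

```python
import math

def kl(p, q):
    # KL(p || q) for categorical distributions, with the 0*log(0) = 0 convention.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def pairwise_kl(policies):
    # All ordered pairwise KL divergences between the policies' action
    # distributions; larger values indicate more distinguishable policies.
    n = len(policies)
    return [kl(policies[i], policies[j])
            for i in range(n) for j in range(n) if i != j]
```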
> **(E) Essential References Not Discussed:** … the problem formulation in [1] focuses on a similar setting from the perspective of principal-agent …
>
[1] is an interesting related work. We will cite [1] and discuss its connections with our work in our revision.
> **(F) Other Strengths And Weaknesses:** 1 … Testing TIPR on more complex strategic settings …
>
For experiment results on more challenging settings, please refer to our response to reviewer **LtaJ**'s comment "**(B) Weaknesses:** 2 … the scalability of TIPR".
> **(G) Other Strengths And Weaknesses:** 2. The computational efficiency of TIPR is not discussed …
>
Compared to the original OOM algorithm, **TIPR only requires one additional forward pass using the Truncated Q at each timestep, which can be completed very quickly (on the order of milliseconds to seconds).**
Many other policy refinement methods, such as decision-time planning (e.g., Monte Carlo Tree Search), perform a large number of forward passes to rollout numerous simulated trajectories to refine the policy. In contrast to them, the computational cost of TIPR is negligible.
> **(H) Other Comments Or Suggestions:** … introduces the variable $M$ but has it been defined somewhere in the paper?
>
$M$ denotes the number of $(o^{-1}, a^{-1})$ tuples in the ICD $D$. This value determines how much information is used to characterize the opponent's policy and is typically set as a predefined constant. We will include this in our revision.
---
Your comments have greatly helped improve our paper. If you have further feedback, we're happy to address it. If your concerns are resolved, we hope you'll reconsider your rating. | null | null | null | null | null | null |
Clipping Improves Adam-Norm and AdaGrad-Norm when the Noise Is Heavy-Tailed | Accept (poster) | Summary: This paper studies high probability convergence rate of clip-AdamNorm and clip-AdaGradNorm under heavy-tailed noise. The authors show that Adam/AdaGrad fails to get convergence rate with $log(1/\delta)$ dependence if the noise is heavy-tailed. Then the authors show that gradient-clipping can fix this issue for AdamNorm and AdaGradNorm.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: No
Experimental Designs Or Analyses: Yes, in Section 4
Supplementary Material: Yes, the major framework of proof
Relation To Broader Scientific Literature: It is generally related to theoretical understanding of popular optimization algorithm in machine learning.
Essential References Not Discussed: No
Other Strengths And Weaknesses: This is a theoretically solid paper, providing a complete proof of high-probability convergence of Adam-type algorithms with clipping. The only concern is that the analysis heavily depends on the global adaptive stepsize instead of the coordinate-wise scaling of the original Adam. Although the authors provide results for clip-Adam, these still depend on $\beta_1=0$, which is not aligned with practice. The only paper I know that analyzes clip-Adam without a sub-Gaussian noise assumption is [1], which makes similar assumptions to this paper but in a distributed setting. This might be helpful for the authors' future research.
[1] Cheng, Ziheng, and Margalit Glasgow. "Convergence of Distributed Adaptive Optimization with Local Updates." arXiv preprint arXiv:2409.13155 (2024).
Other Comments Or Suggestions: In Theorem 3.3, $M\rightarrow\Delta$
Questions For Authors: see strength and weakness part
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the very positive evaluation of our paper and constructive feedback.
>1:**Analysis of Clip-Adam with $\beta_1 > 0$**
A: We revisited our proof and generalized it to the case of $\beta_1 > 0$. The sketch of the proof with the key derivation is provided below. We promise to include the complete proof in the final version.
The main idea remains the same: we prove a descent lemma for the coordinate-wise methods, analogous to the scalar-wise nonconvex case (Lemma C.3), and the rest of the proof follows the induction-based arguments (similarly to Theorem C.5). Applying smoothness and rewriting everything in coordinate-wise form, we get an analog of formula (29) for Clip-AdaGradD and Clip-AdamD:
$$f(x_{t+1}) - f(x_t) \leq \sum\limits_{i=1}^d\left[-\gamma \frac{\nabla_{t, i} m_{t,i}}{b_{t,i}} + \frac{L\gamma^2}{2}\frac{m_{t,i}^2}{b_{t,i}^2}\right].$$
The next step is to derive an analog of formula (30) from page 26. We have:
\begin{align}
-\nabla_{t,i} m_{t,i} &= -\beta_1 \nabla_{t,i} m_{t - 1,i} - (1 - \beta_1)\nabla_{t,i} g_{t,i} \\\\
&= -\beta_1 (\nabla_{t,i} - \nabla_{t-1,i}) m_{t - 1,i} -\beta_1 \nabla_{t-1,i} m_{t - 1,i} - (1 - \beta_1)\nabla_{t,i} g_{t,i} \\\\
&\leq \beta_1 |\nabla_{t,i} - \nabla_{t-1,i}| \cdot |m_{t - 1,i}| -\beta_1 \nabla_{t-1,i} m_{t - 1,i} - (1 - \beta_1)\nabla_{t,i}g_{t,i}.
\end{align}
Unrolling the recurrence and summing over $i$, we get
$$ \sum\limits_{i=1}^d \frac{-\nabla_{t,i} m_{t,i}}{b_{t,i}} \leq -(1 - \beta_1)\sum\limits_{k=0}^t \beta_1^{t-k}\sum\limits_{i=1}^d\frac{\nabla_{k,i}g_{k,i}}{b_{t,i}} + \sum\limits_{k=0}^{t-1} \beta_1^{t-k} \sum\limits_{i=1}^d\frac{ |\nabla_{k+1,i} - \nabla_{k,i}| \cdot |m_{k,i}|}{b_{t,i}}. $$
Applying Cauchy-Schwarz inequality to the last term in the right-hand side and then using $L$-smoothness, we obtain
\begin{align}
\sum\limits_{k=0}^{t-1} \beta_1^{t-k} \sum\limits_{i=1}^d\frac{|\nabla_{k+1,i} - \nabla_{k,i}| \cdot |m_{k,i}|}{b_{t,i}} &\leq \sum\limits_{k=0}^{t-1} \beta_1^{t-k} \left(\sqrt{\sum\limits_{i=1}^d (\nabla_{k+1,i} - \nabla_{k,i})^2}\right)\left(\sqrt{\sum\limits_{i=1}^d\frac{m_{k,i}^2}{b_{t,i}^2}}\right)
\\\\
&=\sum\limits_{k=0}^{t-1} \beta_1^{t-k} \|\| \nabla f(x_{k+1}) - \nabla f(x_k)\|\|\left(\sqrt{\sum\limits_{i=1}^d\frac{m_{k,i}^2}{b_{t,i}^2}}\right)
\\\\
&\leq \sum\limits_{k=0}^{t-1} \beta_1^{t-k} L \|\| x_{k+1} - x_k\|\| \left(\sqrt{\sum\limits_{i=1}^d\frac{m_{k,i}^2}{b_{t,i}^2}}\right) \\\\
&= L\gamma\sum\limits_{k=0}^{t-1} \beta_1^{t-k} \left(\sqrt{\sum\limits_{i=1}^d\frac{m_{k,i}^2}{b_{k,i}^2}}\right)\left(\sqrt{\sum\limits_{i=1}^d\frac{m_{k,i}^2}{b_{t,i}^2}}\right) \\\\
&\leq \frac{L\gamma}{c_m}\sum\limits_{k=0}^{t-1} \beta_1^{t-k} \sum\limits_{i=1}^d\frac{m_{k,i}^2}{b_{k,i}^2}.
\end{align}
Then, we plug this bound into the previous ones and get:
$$f(x_{t+1}) - f(x_t) \leq -(1 - \beta_1)\gamma\sum\limits_{k=0}^t \beta_1^{t-k}\sum\limits_{i=1}^d\frac{\nabla_{k,i}g_{k,i}}{b_{t,i}} + \frac{L\gamma^2}{c_m}\sum\limits_{k=0}^{t} \beta_1^{t-k} \sum\limits_{i=1}^d\frac{m_{k,i}^2}{b_{k,i}^2}.$$
Applying Lemma C.2 and summing over $t$, we derive the following bound:
\begin{align}
(f(x_T) - f^\ast) - (f(x_0) - f^\ast) &\leq -(1 - \beta_1)\gamma\sum\limits_{t=0}^{T-1}\sum\limits_{k=0}^t \beta_1^{t-k}\sum\limits_{i=1}^d\frac{\nabla_{k,i}g_{k,i}}{b_{t,i}} \\\\
&+ \frac{L\gamma^2(1-\beta_1)}{c_m}\sum\limits_{t=0}^{T-1}\sum\limits_{k=0}^{t} \beta_1^{t-k} \sum\limits_{i=1}^d\sum\limits_{j=0}^k\beta_1^{k-j}\frac{g_{j,i}^2}{b_{k,i}^2}.
\end{align}
Then, similarly to the proof of Lemma C.3, we denote the coefficients in front of $\nabla_{r,i} g_{r,i}$ and $g_{r,i}^2$ as $-\gamma C_{r,i}$ and $A_{r,i}$ respectively. They have the following explicit formulas
\begin{align}
-\gamma C_{r,i} &= -(1 - \beta_1)\gamma\sum\limits_{t=r}^{T-1}\frac{\beta_1^{t-r}}{b_{t,i}},\\\\
A_{r,i} &= \frac{L\gamma^2(1-\beta_1)}{c_m}\sum\limits_{t=r}^{T-1}\sum\limits_{k=r}^t\frac{\beta_1^{t - r}}{b_{k,i}^2}
\end{align}
After that, we follow exactly the same steps as in the proof of Theorem C.5 and get that for $\gamma \leq \frac{(1 - \beta_1)^2c_m^3 b_0}{2L}$ the following inequality holds:
\begin{align}
\sum\limits_{t=0}^{T-1}\sum\limits_{i=1}^d \frac{\gamma C_{t,i}}{2}\nabla_{t,i}^2 &\leq (f(x_0) - f^\ast) - (f(x_T) - f^\ast) - \sum\limits_{t=0}^{T-1}\sum\limits_{i=1}^d (\gamma C_{t,i} - 2A_{t,i})\nabla_{t,i}\theta^u_{t,i} \\\\
&+ \sum\limits_{t=0}^{T-1} \sum\limits_{i=1}^d 2A_{t,i}(\theta^u_{t,i})^2 + \sum\limits_{t=0}^{T-1} \sum\limits_{i=1}^d \gamma C_{t,i}(\theta^b_{t,i})^2.
\end{align}
The rest of the proof follows very similar steps to the current proofs of Theorems C.5 and C.12 – from Theorem C.5 the same dependence on $1 - \beta_1$ follows, and from Theorem C.12 the dependence on $d$ is preserved.
>2:**In Theorem 3.3, $M \to \Delta$**
A: Thank you for spotting the typo: should be $M$ in the upper bound.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' feedback and I will keep my recommendation for acceptance.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for checking our response and keeping the positive recommendation! We also thank the reviewer for the positive and encouraging feedback.
If possible, we would be grateful if the reviewer could consider championing our paper during the discussion with other reviewers and the AC. | Summary: This paper provides a loss function that AdaGrad and Adam have bad high probability convergence when the noise is heavy-tailed. Then they show a desirable convergence rate for AdaGrad-Norm and Adam-Norm with clipping gradient to fix the heavy-tailed noise issue. They conduct experiments to show that clipping can help Adam optimize faster.
Claims And Evidence: 1. Theorem 2.1 can’t be viewed as an example of bad dependence on $\delta$. See detailed comments in theoretical claims.
2. Algorithm 1 is not a proper definition for Adam/AdaGrad. Instead, it should be Adam-Norm/AdaGrad-Norm because $b_t$ is the weighted average/sum of the square of gradient norm rather than entry-wise square of gradient.
3. There is a gap between the experiments and the theory. All the theoretical results are for Adam-Norm/AdaGrad-Norm but the experiments are done with Adam/AdaGrad.
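To make point 2 concrete, here is an illustrative sketch (not the paper's Algorithm 1; bias correction is omitted and the epsilon constant and names are assumptions) contrasting the scalar accumulator of Adam-Norm with the entry-wise accumulator of Adam:

```python
import numpy as np

def adam_norm_step(x, g, m, b, gamma, beta1=0.9, beta2=0.999):
    # "Norm" version: b is a SCALAR built from squared gradient NORMS,
    # so all coordinates share one adaptive stepsize
    m = beta1 * m + (1 - beta1) * g
    b = beta2 * b + (1 - beta2) * float(np.dot(g, g))
    return x - gamma * m / (np.sqrt(b) + 1e-8), m, b

def adam_step(x, g, m, v, gamma, beta1=0.9, beta2=0.999):
    # Coordinate-wise version: v accumulates ENTRY-WISE squares,
    # giving every coordinate its own adaptive stepsize
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    return x - gamma * m / (np.sqrt(v) + 1e-8), m, v

g = np.array([3.0, 4.0])
xn, _, _ = adam_norm_step(np.zeros(2), g, np.zeros(2), 0.0, 0.1)
xc, _, _ = adam_step(np.zeros(2), g, np.zeros(2), np.zeros(2), 0.1)
# the norm step preserves the gradient direction (xn[0]/xn[1] == 3/4),
# while the first coordinate-wise step is sign-like (equal magnitudes)
```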
Methods And Evaluation Criteria: High-probability convergence is preferable to in-expectation convergence, and heavy-tailed noise is a real issue in practice, as confirmed by this paper and prior work. While the fine-tuning experiment is reasonable, I am curious why the authors chose ALBERT over BERT and RoBERTa. Additionally, CoLA and RTE are relatively small datasets, though I don’t doubt that clipping would also be beneficial for larger datasets, given its common use in practice. However, it would be more informative to compare training loss rather than validation loss, as the focus is on optimization rather than generalization.
Theoretical Claims: I checked the proof of Theorem 2.1 and found the statement and the loss example problematic. I have listed the specific problems below, but in short I find this theorem completely useless. We can easily get a similar result even for deterministic Adam, because the key idea in the loss example is to make $x_0$ far enough from $x^*$ so that it cannot enter the region with small loss, since its moving distance is restricted when $T$ and the stepsize $\gamma$ are not large enough.
1. This **cannot** be viewed as a polynomial dependence on $\delta^{-1/2}$ and $\epsilon^{-1/2}$ because both the function and the noise depend on the fixed $\epsilon$ and $\delta$. They are defined by the specific $\epsilon$ and $\delta$, and then the authors show that $T$ needs to be larger than $\frac{R}{\gamma} (\frac{\sigma}{\nu \sqrt{2 \delta}}-1)$. For each fixed function, you instead need to show the convergence rate as a function of $\epsilon$ and $\delta$.
2. It is inappropriate to choose the initial $x_0$ depending on the stepsize $\gamma$. Specifically, clip-Adam will get the same negative result under the same setting, so Theorem 2.1 isn't a valid example of the failure of Adam without clipping. In comparison, in the positive results (Theorems 3.1-3.3) the stepsize is chosen based on the Lipschitzness of the loss function and the initial distance $R=||x_0-x^*||$.
3. The noise distribution is not heavy-tailed at all since there is only noise at the first update and the noise even has bounded variance. It is very different from the heavy-tailed noise people find in practice.
4. The provided example doesn’t satisfy assumption 1.1 as claimed in the theorem statement. The variance of noise is always $1$ rather than any choice of $\sigma$.
Experimental Designs Or Analyses: 1. The noise in the quadratic experiment seems too large. I feel the failure of Adam comes from the large magnitude of noise rather than heavy tail. If large gradients are accumulated in $b_t$ without clipping, the effective stepsize of AdaGrad will be much smaller than the effective stepsize of Clip-AdaGrad, which may be the real reason of AdaGrad’s failure. I’d like to see how the result will change if you use 100 times smaller noise. The distribution is still heavy-tailed. Or a fair comparison between AdaGrad and Clip-AdaGrad is to thoroughly tune the learning rate in a large range for each one. Comparing them under the same learning rate rather than the optimal learning rate is not appropriate.
2. What is the randomness in the fine-tuning experiment? Is it just the seed that determines the batch order in training set? It is hard to believe that some seeds will lead to no improvement to val loss.
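A minimal sketch of the comparison requested in point 1: a 1-D quadratic with heavy-tailed (Student-t) noise and the stepsize tuned separately for each method. The noise model, grid, and constants here are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def clip(g, lam):
    # standard clipping operator: rescale g to norm lam when it is too large
    n = abs(g)
    return g if n <= lam else lam * g / n

def run_adagrad_norm(gamma, lam=None, T=2000, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, b = 5.0, 0.1
    for _ in range(T):
        g = x + sigma * rng.standard_t(df=3)  # grad of x^2/2 + heavy-tailed noise
        if lam is not None:
            g = clip(g, lam)
        b = np.sqrt(b ** 2 + g ** 2)          # AdaGrad-Norm accumulator
        x -= gamma * g / b
    return 0.5 * x ** 2                        # final loss

# tune the stepsize over a grid for each method separately, median over seeds
grid = [0.01, 0.1, 1.0, 10.0]
best_plain = min(
    np.median([run_adagrad_norm(gm, seed=s) for s in range(10)]) for gm in grid
)
best_clip = min(
    np.median([run_adagrad_norm(gm, lam=1.0, seed=s) for s in range(10)]) for gm in grid
)
```

Comparing `best_plain` with `best_clip` (rather than losses at one shared stepsize) is the per-method tuning the point above asks for.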
Supplementary Material: I read appendix A, B and D.
Relation To Broader Scientific Literature: It can help understand the relationship between vanilla optimization algorithms and the tricks like clipping that people use in practice.
Essential References Not Discussed: [1] conducted experiments to show that heavy-tailed noise is not the major factor for the gap between Adam and SGD. They point out that Adam should be more similar to sign descent when handling heavy-tailed noise. I think the authors should discuss more whether their results on Adam-Norm can really suggest anything for Adam when dealing with heavy-tailed noise because Adam-Norm makes Adam completely different from sign descent.
References
[1] Kunstner, F., Chen, J., Lavington, J. W., and Schmidt, M. Noise is not the main factor behind the gap between SGD and Adam on Transformers, but sign descent might be. arXiv preprint arXiv:2304.13960.
Other Strengths And Weaknesses: Weaknesses:
1. It is hard to understand whether the authors really want to analyze clipping for Adam or Adam-Norm. The title and the theoretical results are all about Adam-Norm. But the motivation for analyzing Adam-Norm under heavy-tailed noise is missing, because the relevant literature mentioned in the introduction focuses on Adam under heavy-tailed noise. If the goal is to analyze Adam through the middle step of Adam-Norm, then it is doubtful how much insight can be provided by the theoretical results, because Adam and Adam-Norm are completely different algorithms, as shown in [2].
2. There is no clear definition of heavy-tailed noise. The failure example provided in theorem 2.1 is not heavy-tailed noise in my opinion.
Reference
[2] Xie, S., Mohamadi, M. A., and Li, Z. Adam exploits $\ell_\infty$-geometry of loss landscape via coordinate-wise adaptivity. arXiv preprint arXiv:2410.08198.
Other Comments Or Suggestions: The introduction should be carefully rewritten since the authors often confuse Adam with Adam-Norm. It seems that the authors initially focused on Adam but only obtained rigorous theoretical results for Adam-Norm, hence the current title. The introduction and the rest of the paper should be revised for consistency.
Questions For Authors: 1. I’d like to see some discussion on Theorem 2.1 to check if I misunderstand anything. See details in theoretical claims.
2. Why can we view Theorems 3.1–3.3 as a positive result? Even if the number of iterations doesn’t depend on $\delta$, its dependence on $\epsilon$ is even worse than in Theorem 2.1, if Theorem 2.1 is valid.
3. I don’t understand why the hyperparameters need to depend on $\delta$ in Theorems 3.1–3.3. If we want very high probability $1-\delta$, then $A$ needs to be very large, and $\gamma$ and $\lambda$ will be very small. Does this suggest that the clipping effect needs to be very strong to achieve a good convergence result?
4. Can you conduct more experiments as suggested in the experimental design?
a. Decrease the magnitude of noise in the quadratic problem and carefully tune learning rate.
b. Report the training loss of the fine-tuning experiment.
c. Repeat the fine-tuning experiments with Clip-Adam-Norm.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback that helped improve our paper. The requested numerical experiments can be found here: https://anonymous.4open.science/r/Clip-Adam-Norm-and-Clip-AdaGrad-Norm-1E8A/ (folder "rebuttal").
>1: **Algorithm 1.**
A: We will add "-norm" to method names in Algorithm 1. Section 1.3 provides coordinate-wise versions of AdaGrad/Adam.
>2:**Gap between experiments and theory.**
A: We include coordinate-wise methods with $\beta_1 = 0$ in Appendix C.5 (end of Section 3). **Our proofs now cover coordinate-wise stepsizes—see our response to Reviewer GE8p**. In Section 2, our results hold in 1D, where coordinate-wise and norm/scalar versions coincide. Hence, the theory aligns well with quadratic problem experiments. The Clip-Adam experiment without delay, though not analyzed theoretically, complements our theory.
>3:**Why ALBERT?**
A: Mosbach et al. (2020) motivate our choice. RoBERTa wasn't used with gradient clipping in their experiments. Due to computational constraints and the need for 100 runs per method for high-probability convergence, we used a smaller model (ALBERT) instead of BERT.
>4:**Small datasets.**
A: We use two of three datasets from Mosbach et al. (2020). We can add results on larger datasets if the paper gets accepted.
>5: **Training loss.**
A: New plots are available in the anonymized repository.
>6: **On Theorem 2.1.**
A: Let us address your concerns on the example of AdaGrad with momentum (Theorem B.4). The same reasoning holds for other methods as well.
1. We focus on worst-case guarantees [1], estimating complexity for the worst possible problem given the method’s parameters. This allows a resisting oracle [2], where the problem depends on $\varepsilon$ and $\delta$. This is a classical approach.
2. We require $x_0 > \sqrt{2\varepsilon} + 3\gamma$. Since typically $\varepsilon, \gamma \ll 1$, this assumption is mild. The stepsizes in Theorems 3.1-3.3 do not contradict Theorem 2.1: the results are derived for *different methods*. For $R, \sigma, M, L, b_{-1} = O(1)$ and large $K$ in Theorem 3.3, Theorem B.4 applies.
3. The fact that AdaGrad/Adam lack logarithmic dependence on $1/\delta$ even for $\alpha = 2$ strengthens our results. Some works [3] label the $\alpha = 2$ case as heavy-tailed. Our setting, where the noise is zero except in the first iteration, reinforces our findings: a large first stochastic gradient drastically reduces the stepsize.
4. The stochastic gradient noise is $\sigma \xi_k$, with $\mathbb{E}[\xi_0^2] = 1 \Longrightarrow \mathbb{E}[\sigma^2\xi_0^2] = \sigma^2$.
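The mechanism in point 3 is easy to see numerically. Below is an illustrative sketch (1-D quadratic, a single noise spike at step 0, all constants assumed), not the exact construction of Theorem B.4:

```python
import numpy as np

def adagrad_norm_path(x0, gamma, T, first_noise=0.0):
    # AdaGrad-Norm on f(x) = x^2 / 2 with noise only at the first step
    x, b2 = x0, 0.01
    for t in range(T):
        g = x + (first_noise if t == 0 else 0.0)
        b2 += g * g                   # stepsize denominator accumulates g^2
        x -= gamma * g / np.sqrt(b2)
    return x

quiet = adagrad_norm_path(5.0, 0.1, 500)                    # no noise spike
spiked = adagrad_norm_path(5.0, 0.1, 500, first_noise=1e4)
# one huge first gradient inflates the accumulator b2, so every later
# effective step is roughly gamma * |x| / 1e4 and the iterate barely moves,
# while the quiet run makes steady progress toward 0
```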
>7: **The noise in the quadratic experiment seems too large.**
A: Figures 1, 3, and 4 show results for different $\gamma$. Methods with clipping consistently outperform those without. Additional experiments with lower noise (scaled by 100) and tuned stepsizes are in the anonymized repository.
>8: **Fine-tuning experiment?**
A: Randomness stems from the seed that determines mini-batch sampling order.
>9: **Results from Kunstner et al. (2023)**
A: We will discuss this paper in the final version. Their work shows Adam outperforming SGD even in full-batch settings -- this observation is unrelated to stochasticity. Our focus is on high-probability convergence of AdaGrad/Adam-based methods, showing that without clipping, they have poor high-probability complexities, similar to SGD. Thus, our results complement Kunstner et al. (2023) by highlighting the necessity of gradient clipping for high-probability convergence.
>10:**Scalar or coordinate-wise methods?**
A: We analyze both scalar (norm) and coordinate-wise versions of AdaGrad/Adam, studying both in experiments.
>11:**Introduction.**
A: We will rewrite the introduction for clarity on considered methods.
>12:**Definition of heavy-tailed noise.**
A: The noise is heavy-tailed if it satisfies Assumption 1.1.
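For intuition (the precise statement of Assumption 1.1 is in the paper; a common form in this literature bounds the $\alpha$-th moment of the noise for some $\alpha \in (1, 2]$), here is a numerical illustration with Pareto-type noise whose low-order moments are finite while the variance is not:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_moment(samples, p):
    return float(np.mean(np.abs(samples) ** p))

# Lomax/Pareto noise with tail index 1.5: moments of order p < 1.5 are
# finite, while the variance (p = 2) is infinite -- heavy-tailed in the
# bounded-alpha-th-moment sense
xi = rng.pareto(1.5, size=1_000_000)
gauss = rng.normal(size=1_000_000)     # light-tailed reference

m_low = empirical_moment(xi, 1.2)      # finite in expectation
m_two = empirical_moment(xi, 2.0)      # infinite in expectation: unstable
m_ref = empirical_moment(gauss, 2.0)   # concentrates near 1
```

Rerunning with larger sample sizes, `m_low` and `m_ref` stabilize while `m_two` keeps drifting upward, which is exactly the failure mode the bounded-$\alpha$-th-moment condition is designed to capture.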
>13:**Discussion of Theorem 2.1.**
A: We will expand the discussion, incorporating the points above.
>14: **Dependence of $K$ on $\varepsilon$ and $\delta$ in Theorems 3.1-3.3.**
A: The complexity results are presented in a standard way for the optimization literature (e.g., see Theorem 2.4 from [4]). The hyperparameters also depend on $\varepsilon$ and $\delta$: even standard SGD with constant stepsize converges only to the neighborhood of the solution for strongly convex smooth problems, i.e., one has to choose the stepsize to be dependent on the target error. We also provide the rates in Appendix C (see Theorems C.5, C.7, C.9, C.12).
---
References:
[1] A. Nemirovsky & D. Yudin. Problem complexity and model efficiency in optimization. J. Wiley @ Sons, New York (1983)
[2] Y. Nesterov. Lectures on convex optimization. Springer (2018)
[3] E. Gorbunov et al. Stochastic optimization with heavy-tailed noise via accelerated gradient clipping. NeurIPS (2020)
[4] S. Ghadimi & G. Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM journal on optimization (2013)
---
Rebuttal Comment 1.1:
Comment: Thanks for your clarifications. Some of my concerns are addressed. But I still have some questions. Most importantly, I can’t raise my score since I still find theorem 2.1 problematic.
I would like to restate my concern on theorem 2.1 because the rebuttal doesn’t address it. Thanks for your explanation and now I agree that we can choose loss function depending on $\epsilon$ and $\delta$. However, I think the bigger issue is that you choose the initialization based on fixed stepsize. You argue that $x_0 > \sqrt{2 \epsilon} + 3 \gamma$ is a mild assumption because $\gamma$ is small. Even if $\gamma$ is small, the fact that $x_0$ must be set as a function of $\gamma$ is the core issue. A proper lower bound or negative example should demonstrate that the algorithm fails on a fixed problem — not just that for each $\gamma$, there exists some adversarial problem where it fails. A fixed step size of course shouldn’t work for any problem. You also mention this in your discussion of theorem 3.1-3.3. For the result to reflect a true limitation of M-AdaGrad, there should exist a fixed function and initialization $x_0$ such that M-AdaGrad fails regardless of $\gamma$. That is not demonstrated here.
I have checked the plots for rescaled noise. AdaGrad and Adam can reach a relatively low loss very quickly, while clip-AdaGrad and clip-Adam optimize very slowly at the beginning. The only advantage of the clipped versions is that they can reach a slightly lower loss than the versions without clipping after several thousand steps. So I don’t think this is a valid experiment for supporting the claim that clipping can improve optimization under noise. Optimization actually becomes slower with the clipping operation.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer RQzs,
Thank you for your further comments.
## On Theorem 2.1
We would like to further clarify the construction used in Theorem 2.1 (and in Theorems B.4, B.8, B.12, and B.16 in the Appendix) and the interpretation of the results.
1. We highlight that Theorem 2.1 is not a classical *lower bound*, but rather a *negative result*. The parameters of the problem and the method (e.g., M-AdaGrad) are: $\varepsilon$ – target optimization error, $\delta$ – failure probability, $b_{-1}$ or $b_{0}$ – initial scaling factor, $||x_0 - x^\ast||$ -- initial distance to the optimum, $\gamma$ -- stepsize. The goal of Theorem 2.1 is not to show that for any set of the above parameters, the number of iterations required for achieving $\varepsilon$-solution with probability $\geq 1-\delta$ is proportional to $\frac{1}{\delta^c}$ for some $c > 0$, but to show the same for some (quite noticeable) range of parameters. We emphasize that $x_0$ in our example is not a function of $\gamma$: our result holds whenever $x_0 > \sqrt{2\varepsilon} + 3\gamma$, which is not equivalent to assuming that $x_0$ is a function of $\gamma$. Moreover, this restriction is typically satisfied in practice since typically $||x_0 - x^\ast|| \geq 1$, $\varepsilon$ is small (not greater than $10^{-3}$) and $\gamma$ is small (e.g., the default value of $\gamma$ in PyTorch is $0.01$). Therefore, Theorem 2.1 is a meaningful negative result with practical implications. Nevertheless, you are right that it is not a lower bound since our construction does not hold for any choice of parameters. It leads us to the natural question: Is Theorem 2.1 sufficient to claim that adaptive schemes like AdaGrad and Adam have bad high-probability convergence? To answer this question, let us assume the opposite.
2. Assume that M-AdaGrad (or Adam) does converge for some range of $\gamma$ with high probability under the settings of Theorem 2.1, and assume that the complexity depends on $\delta$ through a factor of $\mathrm{poly}(\log^{c_1}(1/\delta))$, i.e., the number of steps $K$ sufficient for finding an $\varepsilon$-solution with probability $\geq 1-\delta$ (under convexity, smoothness, and bounded-variance assumptions) is
$$K = \mathcal{O}\left(\frac{1}{ \mathrm{poly}(\gamma^{c_2}) \mathrm{poly}(\varepsilon^{c_3})} \mathrm{poly}(\log^{c_1}\frac{1}{\delta})\right)$$
for some $c_1, c_2, c_3 \geq 0$ and under some conditions on $\gamma$. We are not aware of any results in the literature stating the convergence of AdaGrad/Adam-type methods only for large enough $\gamma$. In contrast, the existing results for these methods hold either for all $\gamma > 0$ (e.g., Theorem 1 from [1]) or for any $0 < \gamma < C$ for some $C$ (e.g., Case 1 of Theorem 1 from [2]). However, the aforementioned hypothetical result for AdaGrad/Adam cannot hold for any $0 < \gamma < C$ regardless of $C$, since it would contradict Theorem 2.1. Therefore, we conjecture that the result of Theorem 2.1 can be extended to any choice of $\gamma$. We will explicitly mention this in the final version.
Overall, the above arguments show that Theorem 2.1 is a quite strong negative result for AdaGrad/Adam without clipping. However, we promise to add more discussion of this result in the final version.
---
References
[1] Défossez et al. A simple convergence proof of Adam and Adagrad, TMLR 2022
[2] Xie et al. Linear convergence of adaptive stochastic gradient descent, ICML 2020
---
## Experiments
In the provided experiments, the goal was to illustrate that the methods with clipping achieve better optimization error with higher probability. Indeed, the plots are given in logarithmic scale in the Y-axis, but “the width of the oscillations” is of the same size on the plots. This means that the methods that achieve better error also oscillate less. We also uploaded additional plots to the same file with illustrations of methods’ behavior with different stepsizes (see the very end of “description_of_the_results_and_plots.md”). These plots indicate that with very small stepsizes AdaGrad and Adam do not reach a reasonable error. Moreover, the small reduction of the stepsizes does not significantly improve the achieved optimization error.
We also highlight that the contribution of our paper is primarily theoretical.
---
**If you have any further comments or questions, we kindly ask you to let us know: we are committed to promptly addressing any remaining concerns.**
Best regards,
Authors | Summary: This work examines adaptive optimization methods, specifically variants of Adam and AdaGrad, in settings with heavy-tailed noise. The authors establish that for Adam and AdaGrad with momentum, achieving a polylogarithmic dependence on the confidence level is impossible. In contrast, they show that the clipped versions of Adam-Norm and AdaGrad-Norm provide high-probability convergence guarantees, with additional results for the clipped versions of Adam and AdaGrad without momentum. Numerical experiments further confirm the improved performance of the clipped variants.
## update after rebuttal
I thank the authors for their response and I updated my score.
I recommend that the authors better reflect that the paper mainly focuses on the norm versions instead of the element-wise versions, per the other reviewers' comments regarding Algorithm 1.
Claims And Evidence: The paper provides an analysis of several variants of Adam and AdaGrad to support the claim that clipping improves performance in the presence of heavy-tailed noise. However, some aspects of the evidence could be more comprehensive.
First, the noise model deviates from the standard gradient oracle model, as the noise may depend on time. This is only briefly mentioned (in line 90 of the second column) and is not explicitly detailed in Assumption 1.1, despite being a crucial component for establishing the lower bound (see the noise distribution in line 928).
Additionally, while AdaGrad and Adam perform poorly without clipping, the results for clipped coordinate-wise methods remain incomplete, as momentum is not incorporated. So, claiming in the abstract that clipping fixes Adam is not fully proven.
Finally, Theorem 3.3 is established using Assumption 1.4, whereas the lower bound does not support this assumption.
Methods And Evaluation Criteria: Yes
Theoretical Claims: The paper contains many technical details, most of which are provided in the appendix. The reviewer focused on the lower bound of M-AdaGrad and the upper bound of Clip-M-AdaGradD/Clip-AdamD-Norm presented there.
Aside from the non-standard noise model, which the reviewer noted is used in the appendix, the examined parts appear to be correct.
Experimental Designs Or Analyses: The experiments align with the authors' claims. Notably, Adam appears to fail completely with a reasonable probability in the CoLA experiment (Figure 2), which is somewhat surprising. The reviewer would expect that, with properly tuned parameters, Adam would still be able to learn, even if the clipped version performs better.
Supplementary Material: As the reviewer noted, the technical proofs are presented in the appendix. The reviewer examined parts of the lower bound for M-AdaGrad and the upper bound for Clip-M-AdaGradD/Clip-AdamD-Norm.
Relation To Broader Scientific Literature: The paper aims to extend observations from Clip-SGD to adaptive methods, which is a valuable direction for advancing our understanding of optimization with heavy-tailed noise.
Essential References Not Discussed: Essential references are discussed.
Other Strengths And Weaknesses: **Strengths**
- The analysis of widely used optimization methods under heavy-tailed noise is an important topic.
- This paper establishes results for a broad range of adaptive variants.
**Weaknesses**
- As mentioned earlier, the connection between scalar and coordinate-wise versions is limited.
- The large number of details, different variants, and assumptions make it challenging for readers to compare the positive and negative results presented in the work.
Other Comments Or Suggestions: No further comments.
Questions For Authors: - Can the lower bounds be modified to accommodate stochastic gradients that do not depend on time? What modifications are required, and how difficult is the task?
- Can an improved lower bound support assumption 1.4?
- What is the difficulty in removing assumption 1.4 from Theorem 3.3?
- What is the main difficulty in accommodating both coordinate-wise algorithms and momentum when discussing heavy-tailed noise?
- With light-tailed noise, AdaGrad-Norm converges even without accurate specification of the hyper-parameters (the bound itself degrades with the hyper-parameters, but the algorithm still converges). Can such results be obtained with heavy-tailed noise?
Overall, the paper provides new insights and results, but the connection between claims and results can be improved.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback that helped to improve our paper.
>1:**Time-dependent noise.**
A: Thank you for spotting this. We will rewrite Assumption 1.1 to explicitly highlight that we allow time-dependent noise. Such assumptions are relatively standard for stochastic optimization with streaming oracle [1-3]. Moreover, many existing convergence upper bounds hold for time-varying noise as long as the corresponding moment bound is satisfied for each step [4-6]. We will add these clarifications to the final version of our paper.
>2:**Coordinate-wise methods & momentum.**
A: In the abstract, we claim that we show that clipping fixes AdaGrad-Norm and Adam-Norm. In Section 1.1, we made a typo in line 65: it should be “without momentum” instead of “with momentum”. We also explicitly mention that Clip-AdaGradD and Clip-AdamD are analyzed without momentum. **To address this limitation of our paper, we generalized our proofs for the methods with coordinate-wise stepsizes to the case of $\beta_1 > 0$ – please see our response to Reviewer GE8p.** We promise to add the complete proof to the final version of our paper.
>3:**Lower bound & Assumption 1.4.**
A: Our lower bounds can be extended to support Assumption 1.4, if we modify $f$ as follows:
$$f(x) = \begin{cases}\frac{1}{2}x^2,& \text{if } |x| < \nu,\\\\ \nu\left(|x| - \frac{1}{2}\nu\right),& \text{if } \nu \leq |x| \leq D,\\\\ \nu\left(D - \frac{1}{2}\nu\right),& \text{if } |x| > D, \end{cases}$$
where $D > |x_0| > \nu$. Then, the proofs remain the same, and the problem satisfies Assumption 1.4.
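As a quick numerical sanity check (ours, not the authors'; the constants $\nu=1$, $D=5$ are illustrative and not taken from the paper), the modified function is continuous at both breakpoints and its gradient is uniformly bounded by $\nu$:

```python
import math

def f(x, nu=1.0, D=5.0):
    """Piecewise objective from the rebuttal: quadratic near 0,
    linear on [nu, D] in |x|, constant beyond D."""
    ax = abs(x)
    if ax < nu:
        return 0.5 * x * x
    if ax <= D:
        return nu * (ax - 0.5 * nu)
    return nu * (D - 0.5 * nu)

def grad_f(x, nu=1.0, D=5.0):
    """Gradient of f: x near 0, +-nu on the linear part, 0 beyond D."""
    ax = abs(x)
    if ax < nu:
        return x
    if ax <= D:
        return math.copysign(nu, x)
    return 0.0

eps = 1e-9
# Continuity at |x| = nu and |x| = D, and a uniform gradient bound of nu.
assert abs(f(1.0 - eps) - f(1.0 + eps)) < 1e-6
assert abs(f(5.0 - eps) - f(5.0 + eps)) < 1e-6
assert all(abs(grad_f(x)) <= 1.0 for x in (-10.0, -3.0, -0.5, 0.5, 3.0, 10.0))
```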
>4:**Lower bound & time-independent noise.**
A: If we assume that the noise is i.i.d. for each step, e.g., the same as for $k = 0$ in formula (16), then for the provided example of function and oracle, M-AdaGrad/Adam should also have inverse-power dependence on $\delta$ in their high-probability complexities – this is our conjecture. However, deriving the lower bounds for this case is technically more difficult because the explicit form of the iterates becomes more involved than in Lemma B.1 (due to the summation of the squared norms of stochastic gradients in the stepsize). We will discuss this more explicitly in the final version of the paper.
>5: **Why Assumption 1.4?**
A: The main difficulty comes from the dependence between $b_t$ and $g_t$ in the methods without delayed stepsizes. The existing approaches typically use boundedness of the variance and the norm of the gradient (see Lemma 5.1 in [7]) or assume that the noise is sub-Gaussian [8] to tackle this issue. In the heavy-tailed noise regime, these assumptions do not hold. Therefore, we use Assumption 1.4, which is a relaxation of the assumption used in [9] (see also our discussion of Theorem 3.3 on page 7). More precisely, to decouple $b_t$ and $g_t$ in the analysis, we multiply the inequality above (67) by $b_t$. However, it eventually leads to a non-trivial weighted sum of function values: the first sum in the RHS of the first row on page 44. After small rearrangements, we get the term $\sum_{t=1}^T \left( \frac{b_t}{p_t} - \frac{b_{t-1}}{p_{t-1}} \right)(f(x_t) - f_\ast)$ that we estimate using Assumption 1.4. We are not aware of alternative ways of analyzing versions of AdaGrad/Adam or closely related methods in the heavy-tailed noise regime.
>6:**On the hyper-parameters**
A: For AdaGrad-Norm *without clipping*, similar results cannot be obtained with logarithmic dependence on $1/\delta$ due to Theorem 2.1. However, it is an interesting open question whether it is possible to show convergence for Clip-AdaGrad-Norm with a choice of hyper-parameters agnostic to $L,\sigma, \alpha$, and $R$, in the heavy-tailed noise regime. We leave this question for future work.
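For readers following the discussion, here is a minimal sketch of one clipped scalar-stepsize ("-norm") AdaGrad step. This is our simplified reading, not the paper's Algorithm 1 verbatim; $\lambda$ (`lam`) denotes the clipping radius and $\gamma$ (`gamma`) the base stepsize:

```python
import math

def clip(g, lam):
    """Scale g onto the ball of radius lam (no-op if already inside)."""
    n = math.sqrt(sum(gi * gi for gi in g))
    scale = min(1.0, lam / n) if n > 0 else 1.0
    return [scale * gi for gi in g]

def clip_adagrad_norm_step(x, b_sq, g, gamma=0.5, lam=1.0):
    """One Clip-AdaGrad-Norm step: clip first, then feed the clipped
    gradient into the scalar AdaGrad accumulator and the update."""
    gc = clip(g, lam)
    b_sq = b_sq + sum(gi * gi for gi in gc)  # grows by at most lam**2 per step
    step = gamma / math.sqrt(b_sq)
    x = [xi - step * gi for xi, gi in zip(x, gc)]
    return x, b_sq

# A heavy-tailed spike is truncated to norm lam, so a single bad sample
# cannot blow up either the iterate or the stepsize denominator.
x, b_sq = clip_adagrad_norm_step([0.0, 0.0], 0.0, [1024.0, 0.0])
```

The point of the sketch is the ordering: clipping is applied before the accumulator update, which is what bounds each increment of $b_t$ and enables the logarithmic dependence on $1/\delta$ discussed above.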
---
References:
[1] S. Ghadimi & G. Lan. Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms. SIOPT 2012
[2] N. J. A. Harvey et al. Simple and optimal high-probability bounds for strongly-convex stochastic gradient descent. arXiv:1909.00843
[3] A. Sadiev et al. High-probability bounds for stochastic optimization and variational inequalities: the case of unbounded variance. ICML 2023
[4] J. Zhang et al. Why are Adaptive Methods Good for Attention Models? NeurIPS 2020
[5] A. Cutcosky & H. Mehta. High-probability bounds for Non-Convex Stochastic Optimization with Heavy Tails. NeurIPS 2021
[6] T. D. Nguyen et al. Improved convergence in high probability of clipped gradient methods with heavy tailed noise. NeurIPS 2023
[7] A. Défossez et al. A Simple Convergence Proof of Adam and Adagrad. TMLR 2022
[8] Z. Liu et al. High Probability Convergence of Stochastic Gradient Methods. ICML 2023
[9] S. Li & Y. Liu. High probability analysis for non-convex stochastic optimization with clipping. ECAI 2023
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and I updated my score.
I recommend the authors better reflect that the paper is mainly focused on the norm versions instead of the element-wise versions, per the comments of the other reviewers regarding Algorithm 1.
I do not have further questions.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for checking our replies and updating the score. We promise to better reflect in the introduction that most of the results are derived for the scalar versions of AdaGrad and Adam with clipping, and we will apply necessary corrections according to the reviews. We will also extend our analysis of coordinate-wise versions of AdaGradD and AdamD with clipping to the case of $\beta_1 > 0$, following the arguments provided in our response to Reviewer GE8p.
Thank you once again for your insightful feedback. | Summary: This paper suggests that clipping enables high-probability convergence (with polylogarithmic dependence on the confidence level δ) for Adam-norm/AdaGrad-norm under heavy-tailed noise. In contrast, without clipping, Adam/AdaGrad has inverse-power dependence.
The authors provide some numerical results to support this result.
Claims And Evidence: The claim of this work is mainly supported by providing the theoretical results and comparing it to the settings / results in previous works.
Overall the paper is written clearly to deliver the ideas.
Methods And Evaluation Criteria: The authors attempt to validate the idea on both synthetic and realistic problems in numerical experiments.
Theoretical Claims: I’ve skimmed through proofs of failure cases in Appendix B. I’ve not seen the proofs for the clipping results.
Experimental Designs Or Analyses: It’s not the specific settings but rather the scale that I’m concerned with. I believe the experiments can be much more comprehensive to claim a solid support of the theory results.
Supplementary Material: I’ve skimmed through proofs of failure cases in Appendix B. I’ve not seen the proofs for the clipping results in Appendix C. I’ve checked the experiments in Appendix D.
Relation To Broader Scientific Literature: This work renders a theoretical contribution to the optimization for machine learning community in particular for adaptive methods and more broadly stochastic methods. The authors present a balanced review over prior works in the Related Work section and the New Upperbound section, where one can find how this work compares to previous works.
Essential References Not Discussed: As far as I’m concerned, the paper seems to address previous / concurrent related works well enough.
Other Strengths And Weaknesses: * The obvious weakness of this work is that the versions of Adam/AdaGrad are actually normed ones, i.e., Adam-norm/AdaGrad-norm, rather than the original form. This is a concern on both theory and practice sides. I think this should be discussed more clearly / up front in the paper rather than at the end of the theory result.
* I kind of received an impression that various versions of algorithms are analyzed not because they are all important but because they can be proved. It would be nice to see how the choice of conditions under which methods are analyzed is made, or why analyzing all of these would be worthwhile.
* Numerical results for CLIP-Adam do not seem strong or much differentiated.
Other Comments Or Suggestions: none
Questions For Authors: none
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your time and positive feedback. We also thank you for your useful comments, which we have addressed below.
>1:**It’s not the specific settings but rather the scale that I’m concerned with. I believe the experiments can be much more comprehensive to claim a solid support of the theory results.**
A: We agree that the scale of our experiments can be further improved, and we can provide additional numerical results for larger models and datasets. However, the main contribution of our paper is primarily theoretical. Therefore, we see our experimental results as complementary contributions extending theoretical findings.
>2: **The obvious weakness of this work is that the versions of Adam/AdaGrad are actually normed…**
A: We point out that we also analyze coordinate-wise versions of AdaGrad and Adam (with $\beta_1 = 0$) in Appendix C.4, as explained in the very end of Section 3. Moreover, we generalized our proof to the case of $\beta_1 > 0$ (please, see our response to Reviewer GE8p). Unfortunately, due to the page limitations of the main part, we had to defer these results to the Appendix. If the paper gets accepted, we will use an additional page in the main part to discuss the results on coordinate-wise versions of AdaGrad and Adam with clipping in more detail. We also kindly ask you to check our response to Reviewer RQzs (comments 2 and 10).
>3: **I kind of received an impression that various versions of algorithms are analyzed not because they are all important but because they can be proved. It would be nice to see how the choice of conditions under which methods are analyzed is made, or why analyzing all of these would be worthwhile.**
A: Thank you for mentioning this point of view. The main goal of this paper is to provide a comprehensive answer to the questions formulated on the second page, right before Section 1.1. In general, the answer could potentially depend on the choice of the convexity and the stepsizes. Therefore, we analyze both non-convex and convex problems and derive the results for the scalar (a.k.a. “-norm”) and coordinate-wise versions of AdaGrad and Adam with and without delayed stepsizes. We believe that the provided analysis is quite comprehensive and covers many different cases (though not all possible ones for the sake of having the paper of a reasonable length; see also our response to Reviewer RQzs, comment 10). That is, we believe that the main research questions formulated in the paper are adequately addressed, i.e., we show that standard AdaGrad and Adam (with and without delayed stepsizes) do not enjoy logarithmic dependence on $1/\delta$ in their high-probability complexities and clipping provably improves their high-probability convergence.
>4:**Numerical results for CLIP-Adam do not seem strong or much differentiated.**
A: We believe that the main theoretical findings of this paper are properly illustrated and complemented by our numerical results. We kindly ask the reviewer to clarify the concern. | null | null | null | null | null | null |
HALoS: Hierarchical Asynchronous Local SGD over Slow Networks for Geo-Distributed Large Language Model Training | Accept (poster) | Summary: This paper presents a framework for geo-distributed LLM training named HALoS. To reduce the staleness effect, HALoS introduces a local parameter server (LPS) for workers within each region, and periodically syncs each LPS with a global parameter server (GPS). This paper also introduces a convergence analysis for this approach under certain assumptions. The results show that HALoS significantly reduces synchronization bottlenecks while maintaining competitive model performance.
## update after rebuttal
I appreciate the reply from the authors during the rebuttal process and have carefully read them. I will upgrade my score because most of the concerns are addressed.
Claims And Evidence: The claims are generally well-supported by the evidences.
Methods And Evaluation Criteria: The methods are mostly sound. But I have a few questions:
Q1: In the geo-distributed training setting, we may not have the same number of workers for each region. The evaluation was conducted on a 4x4 setting. How would you split the batch size among regions if the number of workers differs from region to region?
Q2: When pushing from LPS to GPS, are there mechanisms to prevent stale model weights from "polluting" the global model weights, or are they simply merged with the GPS model weights?
Theoretical Claims: I checked the correctness of the Theorems in Section 4.2, but I am not entirely sure if the assumptions made are valid or realistic.
Experimental Designs Or Analyses: Q3. For the impact of K (Fig. 5), why is the validation loss the highest for smaller K? I assume that a smaller K corresponds to more frequent sync between LPS and GPS, and a smaller K should be closer to the Sync SGD baseline in terms of training performance.
Q4. Can you elaborate a bit more on the claim _"the accumulation of a minimum number of updates ensures stability"_ in Section 5.2?
Q5: Fig. 7 and Fig. 9 show that DiLoCo+DynUpd has the best convergence speed. Is this because Sync SGD still outperforms Async variants?
Supplementary Material: Yes, I have reviewed the supplementary material.
Relation To Broader Scientific Literature: Outside the geo-distributed training area, this paper is also related to model parallel training (under strictly synchronous setting).
Essential References Not Discussed: Not applicable.
Other Strengths And Weaknesses: Strengths:
* The paper presents an important problem -- straggler mitigation for async SGD -- and introduces a hierarchical architecture that effectively hides synchronization communication behind computation.
* The speedup is significant compared to baselines.
* The paper is well-written and easy to follow.
Weakness:
* More models (e.g., Llama, Qwen) and more model-parallelism techniques beyond DP should be explored
* Heterogeneous cluster setups (each region having a different number of workers) should also be explored, especially in the geo-distributed setting.
Other Comments Or Suggestions: Please refer to the previous comments.
Questions For Authors: Please refer to the previous comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Clarification on Questions
**(Q1)** We assign the same mini-batch size per worker. For heterogeneous clusters, we avoid ineffective convergence from imbalanced numbers of workers per LPS by adopting **a consistent grouping strategy** that assigns an equal number of workers to each LPS. For example, if four regions originally have (2,4,4,6) workers, we deploy (1,2,2,3) LPSs, each covering two workers. Empirical results and further discussion are provided in the last section ([W3]) of this response.
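The grouping rule above can be sketched as follows (the helper name `lps_layout` is ours, not from the paper; for simplicity it assumes each region's worker count is divisible by the chosen group size):

```python
def lps_layout(workers_per_region, group_size):
    """Consistent grouping: deploy one LPS per `group_size` workers,
    so every LPS manages the same number of workers."""
    for w in workers_per_region:
        assert w % group_size == 0, "region size must divide evenly"
    return [w // group_size for w in workers_per_region]

# The rebuttal's example: regions with (2, 4, 4, 6) workers, 2 per LPS.
print(lps_layout([2, 4, 4, 6], 2))  # -> [1, 2, 2, 3]
```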
**(Q2)** We apply momentum at both local and global levels to mitigate gradient/model staleness effectively (Figure 5). We observe that this approach surpasses prior methods (e.g., gradient penalty), which aligns with previous findings (see Section 2). Our theoretical analysis (Section 4.2) and empirical results (Section 5.2) demonstrate **optimal momentum configurations that minimize staleness effects while maintaining strong performance.** Exploring more advanced, staleness-aware momentum update techniques is left as our future work.
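A minimal one-dimensional sketch (ours, not the authors' implementation) of applying momentum at both levels, using the momentum values reported in the paper ($\beta_l = 0.9$ locally, $\beta_g = 0.5$ globally):

```python
def momentum_step(param, vel, grad, lr, beta):
    """Generic heavy-ball step, reused at the LPS and GPS levels."""
    vel = beta * vel + grad
    return param - lr * vel, vel

# An LPS folds in three worker updates with beta_l = 0.9 ...
theta_l, v_l = 1.0, 0.0
start = theta_l
for g in (0.2, 0.1, 0.15):
    theta_l, v_l = momentum_step(theta_l, v_l, g, lr=0.1, beta=0.9)

# ... then pushes its accumulated delta to the GPS, which applies the
# (potentially staler) contribution with a more conservative beta_g = 0.5.
delta = start - theta_l
theta_g, v_g = momentum_step(1.0, 0.0, delta, lr=1.0, beta=0.5)
```

The smaller global momentum damps the influence of any single stale LPS push, which mirrors the rebuttal's point that momentum, rather than an explicit staleness penalty, is what mitigates staleness here.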
**(Q3,Q4)** We clarify that each LPS accumulates updates during LPS-GPS communication and then $K$ additional updates (Algorithm 1). When $K$ is too small (e.g., $K=1$), regions with higher LPS-to-GPS bandwidth (up to 0.9 Gbps in Figure 1(a)) can push updates more frequently than those with slower links (as low as 0.1 Gbps), causing high variance. By using a modest $K$, each LPS accumulates enough updates before syncing with GPS, ensuring more balanced contributions and reducing variance (Section 5.2).
**(Q5)** We clarify that HALoS consistently achieves **the fastest wall-clock time** to reach the same target loss (top row in Fig. 7 and 9). For example, when training Pythia-410M (Table 1, Fig. 7), **HALoS achieves 3.9x faster convergence** than DiLoCo+DynUpd, with less than 10% additional tokens.
---
# [W1] Different Models
We evaluate HALoS using different LLM families (Llama and Qwen) and observe that HALoS consistently outperforms the baseline. Please see the 'Empirical Validation on Different LLM Families' section in our response to Reviewer Lzk8.
---
# [W2] Model Parallelism
We thank the reviewer for highlighting model parallelism (MP). In HALoS, each worker is a data-parallel group of accelerators, each maintaining a full model replica, enabling seamless integration of MP techniques (e.g., tensor and pipeline parallelism). Below, we further compare HALoS with strictly synchronous MP methods.
**Comparison with Strictly Synchronous MP:** We evaluate relative convergence times across three methods: synchronous DP (DP), synchronous DP with PP (DP+PP), and a heterogeneity-aware version of DP+PP (Hetero-Aware DP+PP).
|Method|Time-to-Loss (Relative to HALoS)|
|-|-|
|DP|85.12|
|DP+PP|8.34|
|Hetero-Aware DP+PP|8.26|
|HALoS (Ours)|1.00|
As shown in the above table, **HALoS achieves 8.26x faster convergence** compared to the strongest baseline, Hetero-Aware DP+PP. PP allows relatively small activations to be transferred cross-region and improves convergence speed compared to pure DP (85.12 → 8.34). However, PP inherently suffers from computational inefficiency caused by imbalanced pipeline stages (i.e., pipeline bubbles) due to heterogeneous accelerator speeds—even with heterogeneity-aware workload partitioning. In contrast, HALoS effectively mitigates slow inter-region communications and heterogeneous accelerator speeds, achieving superior performance.
**Experiment Details:** We trained Pythia-70M as in Section 5.1. DP+PP method used a DP degree of 4 and a PP degree of 4, placing pipeline stages across distinct regions. Hetero-Aware DP+PP employed heterogeneity-aware partitioning and a simulated annealing-based heuristic for placement from [1].
[1] Dacheng Li et al. AMP: Automatically Finding Model Parallel Strategies with Heterogeneity Awareness. NeurIPS 2022.
---
# [W3] Heterogeneous Cluster Setup
We evaluate HALoS under heterogeneous worker distributions (2,4,4,6 workers per region), comparing against our strongest baseline, Async-Local-SGD.
|Method|Relative Time-to-Loss|
|-|-|
|Async-Local-SGD|1.7|
|HALoS (Naive grouping: 4 LPSs with 2, 4, 4, 6 workers)|-|
|HALoS (Consistent grouping: 8 LPSs with 2 workers each)|1.0|
When workers were naively grouped into one LPS per region, resulting in significantly varying workers per LPS (2 to 6), convergence slowed due to large discrepancies in update progresses across LPSs. To address this, we employ a consistent grouping strategy ensuring each LPS manages exactly two workers, resulting in 1,2,2,3 LPSs per region. Using this strategy, HALoS efficiently coordinates learning across heterogeneous clusters, **achieving 1.7x faster convergence** than Async-Local-SGD.
**Experiment Details:** We trained Pythia-70M as in Section 5.1. For consistent grouping, hyperparameters remained unchanged, except for adjustments to local updates accumulation ($K=8$) and local momentum update delay ($d_l=4$), reflecting the smaller number of workers per LPS. | Summary: This paper presents HALoS, a hierarchical asynchronous optimization framework for training large language models (LLMs) across geographically distributed hardware. HALoS addresses communication bottlenecks by using local parameter servers (LPSs) within each region and a global parameter server (GPS) that merges updates across regions. This design minimizes expensive inter-region communication while leveraging fast intra-region links. The framework allows asynchronous updates at each level, enabling efficient communication and computation overlap. The paper provides theoretical convergence guarantees for HALoS under non-convex objectives, demonstrating how hierarchical momentum affects asynchronous training. Empirical results show HALoS achieves up to 7.5× faster convergence than synchronous methods and faster than existing asynchronous methods in geo-distributed LLM training, while preserving model quality.
Claims And Evidence: Claim1: HALoS enables efficient geo-distributed LLM training.
Evidence1: experiments show the reduced amount of communication and enhanced speed.
Claim2: HALoS has tight convergence.
Evidence2: The proof in Appendix E.
Methods And Evaluation Criteria: I think the methods make sense. HALoS could reduce inter-region communication.
Benchmarks (MMLU, Hellaswag, LAMBADA, and WinoGrande) are commonly-used ones in LLM evaluation.
Theoretical Claims: Convergence analysis includes an order approximation of gradient expectation. I briefly go through Appendix E and found no significant problem.
Experimental Designs Or Analyses: Experiments are carried out in a simulation manner. The designs are okay, with four regions and 16 workers under LPSs. Worker speeds are not constant; rather, they are randomly drawn from a uniform distribution.
Supplementary Material: This paper does not have supplementary material. But I am aware there is an appendix at the back of the PDF that includes the experimental setup, additional experiments, and proofs.
Relation To Broader Scientific Literature: On top of sync optimizers, the proposed HALoS targets cross region training scenarios via async training, and proposes an algorithm that reduces cross-region communication costs, yielding higher training speed. It is also claimed by the authors to be the first giving theoretical proofs on convergence.
Essential References Not Discussed: No (from my knowledge).
Other Strengths And Weaknesses: Strengths:
1. I think the research field is of practical use. During pretraining of LLMs, cross-region training is common.
2. The experiments are sufficient.
Weaknesses:
1. The introduction needs to be polished. It is much longer than I expected, and I think some of its paragraphs could be re-allocated to "related work".
2. Too many hyperparameters are introduced.
Other Comments Or Suggestions: N/A
Questions For Authors: Since many hyperparameters are introduced in this work, how to effectively adjust new hyperparameters under a different scenario (different LLM models, different # of regions, etc.)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # [W1] Refining Introduction and Related Work
We appreciate the suggestion to enhance the clarity and readability of our paper. We designed the introduction to provide sufficient motivation and context for HALoS by outlining both the challenges of geo-distributed LLM training and relevant prior work to help readers understand our design choices early. That said, we greatly value readability for the community and will carefully revise the introduction and related work sections in the final version to further improve their structure and ease of understanding.
---
# [W2] Hyperparameter Search Strategy
We recognize that effective hyperparameter tuning is crucial for the practical deployment of new optimization frameworks across diverse models and environments. To address this challenge, we combined rigorous theoretical analysis with empirical validation, enabling systematic and efficient hyperparameter tuning across different LLMs and geo-distributed setups.
Our paper provides **a thorough theoretical analysis and valuable insights** that guide the identification of optimal hyperparameters. For example, our theoretical results (Section 4.2) demonstrate that local momentum ($\beta_l$) significantly stabilizes training due to relatively homogeneous updates within each region, while global momentum ($\beta_g$) requires moderation to effectively handle stale updates at the global level. Specifically, our theory suggested optimal momentum parameters ($\beta_l$=0.9, $\beta_g$=0.5), which we empirically confirmed through ablation studies (Section 5.2).
To further reduce the hyperparameter search cost, we adopted **a scale-up strategy**. Initially, we conducted comprehensive hyperparameter sweeps on the smallest model (Pythia-70M) and subsequently transferred the best-performing settings to larger models (Pythia-160M and Pythia-410M). This approach consistently improved convergence efficiency, significantly lowering tuning overhead. Similar approaches have been recently validated by other studies as well ([1], [2]).
Our results further indicate **strong generalization of these hyperparameters** across varying infrastructure conditions (e.g., inter- and intra-region bandwidths; see Section 5.3 of the paper) and different LLM families (e.g., Llama and Qwen; for related empirical results, please see the section "Empirical Validation on Different LLM Families" below), demonstrating their robustness.
As a next step, we plan to integrate automated hyperparameter tuning techniques—such as Bayesian optimization or adaptive online adjustment—by combining our theoretical convergence bounds with runtime monitoring. We believe this will facilitate easier adoption of HALoS in diverse scenarios without excessive tuning overhead.
[1] Arthur Douillard *et al*. DiLoCo: Distributed Low-Communication Training of Language Models. WANT@ICML 2024.
[2] Weigao Sun *et al*. CO₂: Efficient Distributed Training with Full Communication-Computation Overlap. ICLR 2024.
---
# Empirical Validation on Different LLM Families
To demonstrate the robustness and practical applicability of HALoS across diverse model architectures, we extended our evaluation beyond the Pythia models to include two additional widely-used LLM families—Llama and Qwen.
|Model|Training Speedup of HALoS (vs. Async-Local-SGD)|
|-|-|
|Llama-70M|2.1x|
|Qwen-70M|2.3x|
|Pythia-70M|1.9x|
As demonstrated in the table above, **HALoS consistently outperforms the strongest baseline (Async-Local-SGD) across all tested LLM architectures**. Importantly, the hyperparameters initially optimized for Pythia-70M—**without any additional tuning**—exhibited strong generalization, achieving even greater relative improvements (**up to 2.3x** with the Qwen-70M model).
**Experiment Details:** We assessed whether the optimal hyperparameters, identified from our theoretical analysis and validated on the Pythia-70M model, could effectively generalize to these models. To ensure a fair comparison, we selected Llama and Qwen models closely matching Pythia-70M in size, using the same number of layers and hidden dimensions, and conducted the same experiments described in Section 5.1 of the paper. | Summary: The paper presents HALoS, an optimization framework designed to enhance cross-region training of large language models (LLMs). To address the challenges of communication costs and imbalanced hardware utilization, HALoS employs a hierarchical architecture with local parameter servers within each region and a global parameter server that aggregates updates across regions. By prioritizing fast intra-region communication and minimizing inter-region synchronization, the framework reduces latency, mitigates straggler effects, and optimizes resource efficiency. HALoS enables asynchronous updates at both local and global scales, facilitating overlap of computation and communication. The authors provide theoretical convergence guarantees for non-convex objectives, analyzing how hierarchical momentum influences training stability in asynchronous settings. Empirical evaluations demonstrate HALoS’s superiority.
Claims And Evidence: HALoS could greatly improve training speed under cross-region scenarios, and it ensures convergence. Both experiment results and proofs in the appendix have verified the claim.
Methods And Evaluation Criteria: The methodology is okay with a good motivation and experiment-verified results.
The work uses common LLM benchmarks. There is no problems in terms of benchmarks.
Theoretical Claims: The convergence analysis is supported by a proof om the supplementary materials.
Experimental Designs Or Analyses: Experiment designs are generally okay under the simulation regime. Randomness is introduced in worker speeds, which reflects real scenarios. But I think real experiments rather than simulations could be carried out to directly demonstrate the efficacy of the method.
Supplementary Material: Yes. I inspected the supplementary experiments, further explanations of the experiment setups, and the theoretical proofs of Theorem 4.6.
Relation To Broader Scientific Literature: The work is contributive to the broader literature in that it has a hierarchical architecture: global & local. It also demonstrates good efficiency.
Essential References Not Discussed: Not at all.
Other Strengths And Weaknesses: Strengths:
1. Practicalness of the proposed method; good simulations that reflect reality.
2. Theoretical proofs on convergence.
3. Scalability across various model sizes and network bandwidth conditions.
Weaknesses:
1. The proposed method assumes i.i.d. data distribution across all workers.
2. Global parameter server may undergo huge computation burden, impacting efficiency.
Other Comments Or Suggestions: No other comments.
Questions For Authors: How would the method perform under non-i.i.d. data? Would the method work without the i.i.d. assumption?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # [W1] HALoS without i.i.d. assumption
We appreciate the reviewer’s insightful question regarding the i.i.d. assumption. While our theoretical analysis in Section 4.2 follows standard practice by assuming i.i.d. data for analytical clarity, **HALoS does not rely on this assumption in practice.** To demonstrate this, we evaluate HALoS under both i.i.d. and non-i.i.d. data distributions.
For the non-i.i.d. setting, we use the Shakespeare dataset [2], assigning each worker distinct speaking roles (e.g., Juliet, Romeo)—a widely adopted setup in federated learning (e.g., [1], [2], [3]). The i.i.d. setting is created by randomly shuffling data across workers. After training on the same amount of data, we report accuracy and relative training time below (see “Experiment Details” at the end for additional information):
|Method|Test Accuracy|Relative Training Time|
|-|-|-|
|Async-Local-SGD|49.2%|1.6|
|HALoS (w/ Global Momentum)|46.9%|1.0|
|HALoS (w/o Global Momentum)|**50.0%**|**1.0**|
Even under non-i.i.d. partitions, **HALoS (w/o Global Momentum) achieves 0.8% higher accuracy and 1.6× faster convergence** than our strongest baseline, Async-Local-SGD. As discussed in Section 4.2, global momentum may slow convergence when worker updates conflict due to data heterogeneity, while local momentum remains effective. This allows HALoS to mitigate inter-region communication delays and converge efficiently.
In the i.i.d. setting, HALoS achieves the same accuracy (50.4%) as Async-Local-SGD but with 1.6× faster training. These results highlight **HALoS’s robustness across both i.i.d. and non-i.i.d. settings,** demonstrating practical effectiveness beyond theoretical assumptions.
The slight accuracy drop under non-i.i.d. data (50.0% vs. 50.4%) is consistent with prior observations in many hierarchical or asynchronous methods, which are *partially robust* to data heterogeneity. To further improve non-i.i.d. performance, we plan to explore FL techniques such as gradient clipping and adaptive regularization ([3]).
Theoretically, **HALoS can be readily extended to non-i.i.d. settings**. Existing literature on local or federated SGD often provides additional bounds involving the divergence across local distributions. In HALoS, we can introduce an extra term in the bound related to the data heterogeneity assumption when bounding the variance of local gradients, similar to prior work ([4], [5]). This would allow our analysis to formally reflect the effect of data heterogeneity.
We thank the reviewer again for prompting this valuable discussion. We will include both the empirical results and an extension of our theoretical analysis to the non-i.i.d. case in the final version.
**Experiment Details:** We train a 6-layer Llama-style model (128 hidden dim) to predict the next character (among 79 unique characters) given the previous 80. We train on 3,145,728 samples with a batch size of 64 per worker using 16 workers in Appendix A. Evaluation uses 524,288 separate test samples. We use a max local learning rate of 0.01 and train for one epoch. For HALoS, we use the same hyperparameters in our paper. For Async-Local-SGD, we found that 32 local steps ($H$) caused divergence and adjusted it to $H=16$.
[1] Brendan McMahan et al. Communication-Efficient Learning of Deep Networks from Decentralized Data. AISTATS 2017.
[2] Sebastian Caldas et al. LEAF: A Benchmark for Federated Settings. arXiv preprint arXiv:1812.01097, 2018.
[3] Junbo Li et al., FedNar: Federated Optimization with Normalized Annealing Regularization, NeurIPS 2023.
[4] Xiang Li et al., On the Convergence of FedAvg on Non-IID Data, ICLR 2020.
[5] Tian Li et al., Federated Optimization in Heterogeneous Networks, MLSys 2020.
---
# [W2] Computation in Global Parameter Server (GPS)
We acknowledge that minimizing computation in the Global Parameter Server (GPS) is crucial for scalable geo-distributed training, and this goal fundamentally guided our design of HALoS. Our method significantly reduces GPS computation by employing local server-side update accumulation: each local parameter server (LPS) aggregates multiple local updates before synchronizing with the GPS. This design reduces both the frequency of inter-region communication and the number of updates processed by the GPS—preventing it from becoming a bottleneck.
|Method|# Global Updates|Time-to-Loss|Tokens-to-Loss|
|-|-|-|-|
|FedAH|5857|3.3|38.1B|
|HALoS (Ours)|478|1.0|12.3B|
As shown in the above table (extending Table 2 in the paper), when training Pythia-70M, **HALoS reduces GPS updates by 12.3×** compared to FedAH, our baseline hierarchical asynchronous training algorithm. This reduction comes from two key factors: (i) 32 local update accumulations per LPS before GPS communication, and (ii) 3.1× fewer total tokens needed to reach the same loss due to HALoS’s improved efficiency. Together, these design choices substantially reduce GPS computation and ensure HALoS achieves scalable, efficient coordination.
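For intuition, the accumulation arithmetic can be sketched numerically. This is an illustrative toy calculation, not the HALoS implementation; the update counts are taken from the table above.

```python
# Illustrative sketch (not the HALoS code): each LPS folds `acc` local
# updates into one synchronization with the global parameter server,
# so the GPS processes far fewer updates than the workers produce.

def gps_updates(total_local_updates, acc):
    """Number of GPS updates after `acc`-step server-side accumulation."""
    return total_local_updates // acc

# With 32 accumulations per LPS sync (the setting above), the 478
# global updates correspond to 478 * 32 underlying local updates.
print(gps_updates(478 * 32, acc=32))  # 478
```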
LETS Forecast: Learning Embedology for Time Series Forecasting | Accept (poster) | Summary: In this paper, the authors introduce DeepEDM, a framework that extends Empirical Dynamical Modeling (EDM) to learn complex latent non-linear dynamics from observed time series for improved forecasting. Building upon Takens’ theorem, DeepEDM first constructs time-delayed embeddings of the input data and then projects these embeddings into a high-dimensional latent space. Next, it employs kernel regression on this latent representation, and uses a multilayer perceptron (MLP) to transform the resulting predictions back to the original time-series space.
According to the authors, DeepEDM offers three key advantages over existing methods:
1. It remains robust even in the presence of noisy observations.
2. It trains a unified parametric model that readily adapts to new time-series data.
3. It combines the rigor of dynamical systems with the flexibility and scalability through its deep learning–based architecture.
In addition to describing fundamental concepts in empirical dynamical modeling and reviewing relevant literature on deep learning for time-series forecasting, the paper also draws connections between DeepEDM and the Attention mechanism. Extensive experiments on both simulated and real-world multivariate forecasting benchmarks show that DeepEDM performs competitively with state-of-the-art approaches, highlighting its potential for robust and adaptable time-series forecasting.
Claims And Evidence: Authors claim that DeepEDM:
1. remains robust even in the presence of noisy observations.
2. trains a unified parametric model that readily adapts to new time-series data.
3. combines the rigor of dynamical systems with the flexibility and scalability through its deep learning–based architecture.
Overall, the paper effectively supports its first claim—that DeepEDM remains robust in the presence of noisy observations, by demonstrating strong performance in both synthetic noise-injected scenarios and on real-world datasets against various state-of-the-art models.
The second claim, involving a “unified parametric model” that readily adapts to new time-series data, is the least substantiated: after introducing the idea, the paper does not provide further elaboration or experiments to confirm how DeepEDM generalizes to new time series. Consequently, while the framework appears promising, further clarity is needed on the role and validation of the latent dynamics, as well as concrete evidence of adaptation to new data.
It also partially supports its third claim of merging dynamical systems rigor with deep learning flexibility, as the authors present a novel architecture grounded in Takens’ theorem and empirical dynamical modeling. While the overall design is innovative, key details are missing about the latent-projection step. Specifically, it is unclear whether the “latent dynamics” it learns genuinely reflect the underlying system or directly improve the forecast. The paper does not delve into interpretability or show whether these latent dynamics match “true” hidden states in controlled scenarios (beyond broad performance metrics).
Methods And Evaluation Criteria: Yes, the paper’s methods and chosen benchmarks do make sense for time-series forecasting, as it uses standard datasets and builds upon widely studied dynamical systems for its simulations. However, the limited scope of comparisons in the simulated scenarios—focusing mostly on EDM-based approaches such as Simplex, Koopa, and iTransformers—raises some concerns. On the other hand, their comparisons on standard benchmark datasets are reasonably thorough, which partly compensates for the narrower set of baselines in the simulation experiments.
Theoretical Claims: There are no proofs to verify beyond the statements of Takens’ theorem, which the authors cite rather than prove themselves.
Experimental Designs Or Analyses: In the simulation study, the authors inject Gaussian noise into well-known chaotic systems (e.g., Lorenz, Rössler) to test the model’s robustness; this is a reasonable way to demonstrate DeepEDM's resilience to noise. However, as previously noted, it only compares DeepEDM with a limited set of methods (mostly EDM-based). This narrow comparison leaves open the question of how DeepEDM would fare against a broader set of state-of-the-art models in similarly controlled, noise-injected scenarios.
Supplementary Material: The Appendix offers additional insights and clarifications to support the main paper’s findings. Section A.1 provides an overview of the key terminology used throughout, while Section A.2 details the DeepEDM architecture and its implementation. Section A.3 expands upon the short-term forecasting benchmark, Section A.4 focuses on analyzing the impact of varying lookback windows, and Section A.5 presents ablation studies on core design choices. Section A.6 evaluates the stability of the results across multiple runs, and Section A.7 delves deeper into the simulation-based experiments.
Relation To Broader Scientific Literature: The paper appears to be the first to systematically merge Empirical Dynamical Modeling with deep neural architectures for time-series forecasting. While prior work has explored EDM (e.g., Simplex, S-Map) and separate lines of research have focused on deep learning (e.g., Transformers, MLP-based approaches), no previous study has explicitly integrated these two strands under a single, end-to-end framework.
Essential References Not Discussed: The paper covers most of the relevant literature on empirical dynamical modeling (EDM) and deep time-series forecasting. However, despite DeepEDM’s direct connection to chaotic time series prediction, the authors do not provide a literature review of this field. Foundational work, such as Farmer & Sidorowich (1987), established key principles for short-term chaotic forecasting using time-delay embeddings, which are directly relevant to DeepEDM’s methodology. More recent studies have extended these ideas using neural networks, including:
1. Karunasinghe, D. S., & Liong, S. Y. (2006). Chaotic time series prediction with a global model: Artificial neural network. Journal of Hydrology, 323(1-4), 92-105.
2. Li, Decai, Min Han, and Jun Wang. "Chaotic time series prediction based on a novel robust echo state network." IEEE Transactions on Neural Networks and Learning Systems 23.5 (2012): 787-799.
Including a discussion of this literature would better position DeepEDM in the context of prior neural-based approaches to chaotic forecasting and clarify its novelty in comparison to existing methods.
Other Strengths And Weaknesses: The paper presents a novel combination of Empirical Dynamical Modeling (EDM) and deep learning, which is an original and promising direction for time-series forecasting. However, one key concern is that the model’s performance is primarily evaluated against standard forecasting benchmarks, which may not fully demonstrate its advantage in capturing underlying system dynamics. While these benchmarks are widely used in time-series forecasting, they are not necessarily designed to evaluate methods that explicitly model chaotic or nonlinear dynamical systems. A more extensive comparison against known chaotic time series datasets, beyond the current synthetic experiments in the simulation study, would help clarify whether DeepEDM provides meaningful improvements over deep learning-based methods specifically designed for chaotic time-series prediction.
Other Comments Or Suggestions: I have no additional comments.
Questions For Authors: 1. How do you validate that the latent space learned by DeepEDM accurately captures the underlying system dynamics?
The paper presents the latent projection as a core component, but it is unclear whether it is reconstructing meaningful system dynamics or simply acting as a high-dimensional feature extractor.
2. Can you clarify what is meant by DeepEDM being a “unified parametric model” that generalizes to new time series?
The introduction suggests that DeepEDM can adapt to new time series data more effectively than EDM, but this claim is not well-supported in the rest of the paper.
3. Why does the paper not reference prior work on chaotic time series forecasting, despite its direct relevance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your detailed feedback. We’re pleased that you recognize the novelty of combining Empirical Dynamical Modeling with deep learning and the strong performance of DeepEDM, especially in noisy settings. We also appreciate your acknowledgment of our connection to the Attention mechanism and the flexibility of our approach. Below, we address your questions.
### 1. Validation of Latent Space Representation
Great question! The latent space in DeepEDM functions as a learned kernel. We hypothesize that with noise, the learned kernel retrieves nearest neighbors more faithfully in the underlying state space compared to the time-delay embedding, which, as dictated by Takens' theorem, relies on nearly noise-free data for accurate reconstruction. This is indirectly supported by our results in Table 6, where DeepEDM outperforms EDM in time series forecasting.
To provide a more direct validation, we design and conduct a new experiment using simulated time series generated from the Lorenz system, where true neighbors in the state space are known. We evaluate the ability to retrieve these neighbors using different distance metrics. Specifically, for a given state at time $t$, we first determine $k$ true neighbors in the original state space. For the same data point at time $t$, we then retrieve the top $m$ nearest neighbors in (i) the time-delay embedded space using Euclidean distance, and (ii) DeepEDM's latent space using the learned kernel. For each query point, we evaluate the recall of the retrieved neighbors, quantifying the fraction of true neighbors successfully retrieved. For a comprehensive evaluation across different settings, we conduct this experiment with varying values of $m \in \{7, 14, 28\}$, fixing $k = 7$ (the minimum number of neighbors necessary for recovering the dynamics). We also consider two noise conditions: no noise ($\sigma = 0$) and w. noise ($\sigma = 2.5$). A high recall score indicates a distance metric can successfully identify true neighbors in the state space necessary to reconstruct the dynamics. The results of this experiment can be found here:
https://anonymous.4open.science/r/icml_rebuttal-B440/nearest_neighbor_retrieval_exp_results.md
As expected, under noise-free conditions, both methods perform similarly. However, when noise is added, the vanilla time-delayed embeddings exhibit a sharp decline in recall. In contrast, our learned kernel in DeepEDM degrades more gracefully, maintaining more accurate retrieval of neighbors. This suggests that the state space reconstructed using the learned kernel more accurately reflects the underlying state space, as it preserves the local neighborhood structure.
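For clarity, the recall metric used in this experiment can be computed in a few lines. The sketch below is illustrative only; the neighbor indices are toy values, not the Lorenz-system results.

```python
# Minimal sketch of the recall metric described above: given the k true
# state-space neighbors of a query point and the top-m neighbors
# retrieved under some distance metric, recall is the fraction of true
# neighbors that were successfully retrieved.

def neighbor_recall(true_neighbors, retrieved_neighbors):
    true_set = set(true_neighbors)
    return len(true_set.intersection(retrieved_neighbors)) / len(true_set)

# Toy query: k = 7 true neighbors, m = 14 retrieved candidates.
true_nn = [3, 8, 15, 21, 34, 40, 52]
retrieved = [3, 8, 12, 15, 19, 21, 27, 33, 34, 41, 47, 52, 60, 71]
print(round(neighbor_recall(true_nn, retrieved), 3))  # 6 of 7 found -> 0.857
```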
### 2. Generalization as a Unified Parametric Model
Please refer to our response #1 to reviewer Ywcy.
### 3. Selection of baselines
We respectfully disagree with the comment, 'baselines (for synthetic dataset experiments) are predominantly EDM-based'. The following baselines were considered (for this experiment) in the paper:
- **Simplex**: A classical EDM approach, as the starting point of DeepEDM.
- **Koopa**: A recent deep model based on Koopman operator, as a representative method that combines deep learning and dynamical system modeling (Koopman).
- **iTransformer**: A recent deep learning-based model, as a representative method that is pure learning based using a Transformer.
This selection ensures a well-rounded comparison against strong and methodologically diverse baselines. We note that the full set of results using these baselines is reported in Table 6 of the Appendix.
### 4. Missing References on Chaotic Time Series Forecasting
Your suggestion to incorporate prior work on chaotic time series forecasting is well taken. While our focus has been on EDM and deep learning-based forecasting, we recognize the relevance of research in chaotic time series forecasting. We drafted the following paragraph to add to our related work section. We welcome your further feedback.
*A related research direction focuses on forecasting chaotic time series via state space reconstruction, mirroring the underlying principles of EDM. Pioneering work by Farmer & Sidorowich (Phys. Rev. Lett. 1987) introduced local approximation techniques within reconstructed state spaces using delay embeddings, facilitating short-term predictions. Subsequent studies explored the application of feedforward neural networks for learning direct mappings from reconstructed phase states to future states (Karunasinghe & Liong, Journal of Hydrology, 2006). Recurrent neural networks, particularly Echo State Networks (ESNs), have also shown promise, with adaptations like robust ESNs (Li, Han & Wang, IEEE TNNLS 2012) addressing the inherent sensitivity of chaotic signals to noise and outliers. However, a significant gap remains: the development of a fully differentiable, end-to-end trainable neural network architecture that seamlessly integrates dynamical systems theory with deep learning methodologies.*
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses and the additional experiment on latent space validation. I found that particularly helpful. The new paragraph on prior chaotic time series work is also a welcome addition.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for your thoughtful follow-up and for raising the score. We are pleased that you found the latent space validation experiment helpful. We are also grateful to you for pointing out the missing related work, and we will be sure to include the paragraph in the final paper.
Since you recognized the novelty and the thorough comparison on standard benchmarks of our work, further noting it [appears to] be the first to systematically merge Empirical Dynamical Modeling with deep neural architectures in an end to end framework, we would greatly appreciate any further reconsideration of the score if you feel the paper merits it.
Regards,
Authors | Summary: This paper introduces DeepEDM, a framework that integrates nonlinear dynamical systems modeling with deep neural networks for time series forecasting. Built on empirical dynamic modeling (EDM) and Takens' theorem, DeepEDM employs time-delayed embeddings in a latent space and uses kernel regression to approximate underlying dynamics. This approach aims to enhance forecasting accuracy by explicitly modeling complex time series dynamics.
Claims And Evidence: Overall, the claims made in the paper are clear.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are appropriate for the problem of time series forecasting.
Theoretical Claims: The paper presents theoretical claims related to nonlinear dynamical systems and their integration with deep learning. While I have reviewed the theoretical justifications, I am not fully confident in the correctness of all derivations.
Experimental Designs Or Analyses: I have reviewed the experimental design and analyses, and they appear to be sound.
Supplementary Material: I have reviewed the supplementary material, particularly the additional experiments.
Relation To Broader Scientific Literature: The paper contributes to the broader scientific literature by integrating nonlinear dynamical systems modeling with deep learning for time series forecasting.
Essential References Not Discussed: The paper appropriately discusses related work, and I did not identify any missing essential references.
Other Strengths And Weaknesses: Strengths
1. The paper is well-organized, making it easy to follow the motivation, methodology, and results.
2. The integration of empirical dynamic modeling (EDM) with deep learning for time series forecasting is novel and addresses the challenge of capturing nonlinear dynamics.
3. The proposed approach is supported by extensive experiments and analyses, demonstrating its effectiveness across multiple datasets.
Weaknesses
1. The paper claims that DeepEDM mitigates EDM's sensitivity to noise and is potentially noise-free. However, the underlying mechanism for this claim is not explicitly clear. Further clarification on how DeepEDM addresses this limitation would strengthen the argument.
2. The proposed method relies on the time delay hyperparameter, which is crucial in EDM-based approaches. How is the time delay selected in practice? Additional ablation studies on the effect of time delay and embedding dimension would be beneficial.
3. The method underperforms on certain datasets, particularly those with a high number of variables (e.g., Electricity, Traffic). Providing an explanation for this performance drop would improve transparency. Additionally, results using a more standard MSE-based objective would help verify that the reported performance gains are due to model improvements rather than loss function optimization.
Other Comments Or Suggestions: N/A
Questions For Authors: Please refer to the Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and recognition of our paper’s clarity, structure, and experimental design. We appreciate the acknowledgment of our contribution to nonlinear dynamical systems through EDM-integrated deep learning. Below, we address your suggestions and will incorporate these responses into the final version.
### 1. Noise Robustness of DeepEDM
We hypothesize that DeepEDM’s noise robustness stems from its learned kernel, which enables adaptive distance assessments for nearest-neighbor retrieval, rather than using the fixed Euclidean distance in traditional time-delay embeddings. Fixed metrics are more sensitive to perturbations, while the learned kernel adapts to the data, thus improving neighbor identification under noise.
To validate this, we conducted additional experiments (please refer to our response #1 to Reviewer XFih for more details) on nearest-neighbor retrieval. In short, we compare (i) the time-delay embedded space using Euclidean distance, and (ii) DeepEDM's latent space using the learned kernel for retrieving true neighbors in the state space using synthetic data with known dynamics. Our results show that the learned kernel in DeepEDM achieves higher recall under noise, confirming that the learned kernel indeed enhances robustness against noise. This robustness is further supported by the superior forecasting performance of DeepEDM compared to EDM w. Simplex, under noisy conditions (see Table 6).
### 2. Sensitivity to Time Delay and Embedding Dimension
We conducted additional experiments to study the effects of time delay and embedding dimensions. Specifically, DeepEDM has two parameters that control time delay and embedding dimensions. First, $m$ defines the number of time-delayed steps, and $\tau$ controls the interval between these steps. For example, $m=3$ and $\tau=1$ results in a 3-dimensional embedding $[y_t, y_{t-1}, y_{t-2}]$, while $m=3$ and $\tau=2$ would yield an embedding $[y_t, y_{t-2}, y_{t-4}]$. In our work, $m$ is determined empirically for each dataset, while $\tau$ is set to 1. Following your suggestion, we conduct ablation experiments to analyze the impact of these hyperparameters. For the experiments on $m$, we fix $\tau = 1$ and vary $m$ over $[3, 5, 7, 11, 15]$. Conversely, for the experiments on $\tau$, we set $m = 5$ and explore $\tau$ in $[1, 2, 3]$. The results are linked below.
**Effects of time-delayed steps $m$ (i.e., embedding dimension)**: Our findings indicate that the impact of $m$ is dataset-dependent. In some cases, performance remains relatively stable across different values of $m$, suggesting that the intrinsic state-space dimensionality might either be very low or very high. If the state-space dimension is small, larger values of $m$ will capture the underlying dynamics and thus perform similarly well. On the other hand, if the state-space dimension is large, different small values of $m$ will not be able to model the dynamics, yielding results at a similar performance level. This behavior aligns with the intuition that effective embedding reconstruction depends on the underlying system complexity.
https://anonymous.4open.science/r/icml_rebuttal-B440/m_delay_results.md
**Effects of delay interval $\tau$:** Our results show that in most cases, $\tau = 1$ yields the best performance, suggesting that a small stride is sufficient. This is also the case considered in Takens’ theorem. However, it remains possible that an optimal balance between $\tau$ and $m$ could further improve results, which we leave for future investigation.
https://anonymous.4open.science/r/icml_rebuttal-B440/tau_results.md
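For reference, the interplay of $m$ and $\tau$ studied in these ablations can be sketched as follows. This is illustrative pure-Python code, not the DeepEDM implementation.

```python
# Illustrative sketch of time-delay embedding with m delayed steps and
# stride tau, matching the examples above (m=3, tau=1 gives
# [y_t, y_{t-1}, y_{t-2}]; m=3, tau=2 gives [y_t, y_{t-2}, y_{t-4}]).

def delay_embed(y, m, tau):
    start = (m - 1) * tau  # first index with a complete delay vector
    return [[y[t - i * tau] for i in range(m)] for t in range(start, len(y))]

y = list(range(10))                   # toy series with y_t = t
print(delay_embed(y, m=3, tau=1)[0])  # [2, 1, 0]
print(delay_embed(y, m=3, tau=2)[0])  # [4, 2, 0]
```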
### 3. Performance degradation on High-Dimensional Datasets
Please refer to our response to Q5 from Reviewer TNFN.
### 4. Loss Function Impact
Our ablation study (Table 4) shows that our DeepEDM significantly enhances forecasting performance (MLP vs. MLP+EDM, without any loss function optimization) and the loss optimization further reduces the errors (MLP+EDM w/o TDT loss vs. Full Model w. TDT loss). To further delineate the contributions of DeepEDM and loss optimization, we conducted an additional experiment. Specifically, we compare the DeepEDM trained with standard MSE loss against DeepEDM trained with the optimization objective incorporated in our paper. The results of this experiment are available at the table linked below. Notably, the differences remain marginal—while in some cases, the MSE loss further improves DeepEDM’s performance, in others, it results in slight declines.
https://anonymous.4open.science/r/icml_rebuttal-B440/loss_exp_results.md
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their detailed and thoughtful responses. I appreciate the clarifications and additional results provided, which help address several of the concerns I raised in my initial review.
While the rebuttal has clarified some points and improved my understanding of the work, it does not substantially shift my overall assessment. Therefore, I will maintain my original score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for your thoughtful response and for taking the time to review our rebuttal. We are pleased to hear that our rebuttal has addressed several of your concerns, and we appreciate the opportunity to further clarify our work. We deeply value your assessment.
As you kindly noted, our rebuttal has addressed several of your concerns. We also believe that the additional experiments and clarifications (specific to your review, as well as others in the rebuttal) address weaknesses #1, #2, and #3 that you highlighted, while further strengthening the contributions of our work. Given these updates, as well as your initial recognition of the novelty of our work, the well-organized nature of our paper, and our extensive empirical analyses, we would be grateful if you could please consider raising your score. If there are any remaining concerns or points that still require attention, we welcome your insights and would be happy to incorporate any feedback or suggestions you may have.
Thank you once again for your valuable feedback and continued engagement with our work.
Best regards,
Authors | Summary: The paper resorts to first principles, to examine the usefulness of using embedology (as in Takens' embedding theorem) in conjunction with neural networks. This is important and has been missing in the literature.
Claims And Evidence: DeepEDM claims three key advantages.
(1) By learning a latent space of the time-delayed embeddings, it mitigates the sensitivity of EDM to input noise;
(2) Unlike EDM, which requires a separate model for each time series, it learns a single parametric model that generalizes to new time series; and
(3) It offers flexibility and scalability, providing theoretical insights for Transformer-based time series forecasting models.
Methods And Evaluation Criteria: The integration of dynamical systems theory with time series forecasting is important and relatively under-explored.
Theoretical Claims: The paper employs an efficient attention-based kernel regression as a general framework for considering Transformer-based time series models. It maps time-delay embeddings, derived from Takens’ theorem, into a learnable, higher-dimensional latent space that is potentially noise-free, facilitating a more precise reconstruction of the system’s underlying dynamics. DeepEDM can be viewed as employing a modified cross-attention framework. Similar to cross-attention, the focal vectors act as queries (Q), the historical data points (Y) serve as keys (K), while the future states (y_{t+∆t}) act as the values (V). Further, the relationship between queries and keys is quantified using the dot product as the similarity metric, modulated by θ. However, DeepEDM diverges from traditional attention mechanisms by enforcing a structural differentiation where keys and values are explicitly distinct.
I am familiar with the Takens type of embedding and, in my opinion, the approach in this paper is correct.
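The cross-attention reading of kernel regression described above can be made concrete with a small sketch. The code is illustrative only; theta and the toy vectors are assumptions for the example, not values from the paper.

```python
import math

# Illustrative sketch of kernel regression read as cross-attention:
# a query (focal delay vector) is compared to keys (historical delay
# vectors) via a theta-scaled dot product, and the resulting softmax
# weights average the values (the observed future states).

def kernel_forecast(query, keys, values, theta):
    scores = [theta * sum(q * k for q, k in zip(query, key)) for key in keys]
    mx = max(scores)  # subtract max for a numerically stable softmax
    w = [math.exp(s - mx) for s in scores]
    z = sum(w)
    return sum(wi / z * v for wi, v in zip(w, values))

keys = [[1.0, 0.0], [0.0, 1.0]]  # two historical delay vectors
values = [2.0, -2.0]             # their observed next states
print(kernel_forecast([1.0, 0.0], keys, values, theta=4.0))  # close to 2.0
```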
Experimental Designs Or Analyses: The approach is evaluated on both synthetic and real world time series. The synthetic series involve the Lorenz and Rossler chaotic signals (which exhibit clear attractors).
Supplementary Material: Yes
Relation To Broader Scientific Literature: The paper extends and combines two classical approaches, within the framework of transformers:
- Empirical Dynamical Modeling (EDM) (Sugihara & May, 1990),
- Takens’ embedding theorem (Takens, 1981; Sauer et al., 1991),
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
- the paper introduces a theoretical framework, but the experiments do not fully justify the approach
- the empirical results on real-world data are good but do not consistently outperform SOTA
- the proposed method is not very sensitive to the lookback length (time segment size), which is unexpected
Weaknesses:
- are the curves in Fig. 2 plotted in absolute terms or on a dB scale?
- There is little consistency in the empirical analysis.
- Is the proposed method better suited for some kinds of time series than for other kinds?
- the simulations vs lookback length in Fig 4 in the supplement are important and better suited to be part of the main paper.
- since the paper deals with multivariate time series, I would have expected a deeper analysis of the performance with respect to the number of variates in a multivariate time series
- the performance under noise is not convincing
Other Comments Or Suggestions: None
Questions For Authors: See Weaknesses
Ethical Review Concerns: None
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback and recognition of our theoretical contributions. We appreciate your acknowledgment that embedology, despite its foundational role, has been largely overlooked and that our work helps bridge this gap by integrating Takens' embedding theorem with neural networks. Your recognition of our approach as a step toward extending EDM and providing theoretical insights into Transformer-based forecasting is also greatly valued. Below, we address your suggestions and will integrate these responses into the final version.
### 1. Clarification on Figure 2B
The axis in Figure 2B is presented in absolute terms, which we will clarify in the final version. We chose to not normalize the data (e.g. mean subtraction, standardization) to preserve the raw performance of the model. While this increases error ranges—especially in chaotic Lorenz settings where noise sensitivity is high—it provides a more transparent evaluation. Detailed results are in Appendix A.7.
### 2. Consistency of Empirical Analysis
Our empirical analysis consists of experiments and results using (a) synthetic time series generated from known, challenging dynamical systems (Lorenz and Rossler); and (b) real-world time series from multiple application domains. For synthetic data, we demonstrated that DeepEDM is more robust to noise when compared to the vanilla EDM, and is more accurate when compared to other representative learning-based methods (Koopa and iTransformer). For real-world data, we showed that DeepEDM consistently outperforms prior methods across different settings (e.g., fixed vs. varying lookback length). This consistency was evidenced by the overall ranking of our method (e.g., in Table 1, DeepEDM ranks first for 36 out of 72 metrics, while the next best model achieves 11) as well as the results of multiple runs (Table 5 in the Appendix). Additionally, we presented extensive ablation studies of our design and lookback length. We believe our current empirical analysis is sufficient to support the main claim of the paper. Yet we welcome specific suggestions to improve our empirical analysis.
### 3. Sensitivity to lookback length
Theoretically, Takens’ theorem puts constraints over the number of time-delayed steps (>2d for d-dimensional state space), yet does not speak to the number of samples needed (i.e., lookback length). Empirically, we observe that the model exhibits minimal sensitivity to the lookback length beyond a certain threshold. Once the model has sufficient historical data, additional time steps—particularly those with similar dynamics—contribute little to improving forecast performance. On the other hand, too short a lookback window can degrade performance, as there may be insufficient information to generate an accurate forecast. This observation is shown in our paper: the existence of an optimal lookback length, or "sweet spot," as illustrated in Figure 4.
### 4. Moving Figure 4 to the main paper
Given the additional one-page allowance in the final version, we plan to incorporate this analysis into the main paper.
### 5. Benefit of DeepEDM for Some Kinds of Time Series than for Others
Great question! We will include a discussion in the main paper. Our main findings from the empirical analysis are twofold. First, there is a sweet spot in the lookback length, which is dataset-dependent (discussed in our response #3). Second, DeepEDM is best suited to time series with a low to moderate number of input variates, as seen in datasets like the ETT set, Weather, ILI, and our simulated data. For datasets such as Traffic and ECL, which contain a larger number of variates, we observe a slight decline in performance. One possible reason is that DeepEDM performs channel-wise prediction using a shared model across all channels, and thus assumes independence between input variates. With a growing number of variates, there is likely an increasing strength of correlation among these variates, which our DeepEDM cannot effectively model. We plan to further explore this correlation among input variates in future work.
### 6. Performance Under Noise
We highlight that noisy conditions present a significant challenge for forecasting models, leading to performance degradation in strong baselines such as iTransformer, Koopa, and the vanilla Simplex. However, DeepEDM demonstrates greater robustness, exhibiting less performance degradation than these baselines. As discussed in Sec 5.1 and further shown in Table 6, DeepEDM achieves the best performance among strong baselines in the presence of noise. These results support our claim of noise robustness, as also noted by reviewer XFih: “[DeepEDM] remains robust even in the presence of noisy observations”. | Summary: This paper introduces a new framework, DeepEDM, for time series forecasting, inspired by studies in Empirical Dynamic Modeling (EDM). The approach is fundamentally based on Takens' theorem, which states that a dynamical system can be reconstructed using a delay embedding of the series in phase space.
In practice, the method starts by constructing a time-delay representation of the series, from which an embedding is learned in a latent space. Forecasting is then performed using a regression kernel, which assigns weights to the closest representations from the training set in the latent space. Finally, a classic MLP decoder generates the forecast.
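As I read it, the forecasting step amounts to a kernel-weighted readout over training windows in the learned latent space; a rough sketch (with a hypothetical `kernel_forecast` helper, and the learned encoder abstracted away) would be:

```python
import numpy as np

def kernel_forecast(z_query, z_train, y_train, bandwidth=1.0):
    """Weight training futures by latent-space proximity to the query.

    z_query: (d,) latent code of the query window.
    z_train: (n, d) latent codes of training windows.
    y_train: (n, h) futures observed after each training window.
    The softmax over negative squared distances mirrors self-attention.
    """
    d2 = np.sum((z_train - z_query) ** 2, axis=1)
    w = np.exp(-d2 / bandwidth)
    w = w / w.sum()
    return w @ y_train  # (h,) kernel-weighted forecast

z_train = np.array([[0.0], [5.0]])
y_train = np.array([[1.0, 1.0], [9.0, 9.0]])
pred = kernel_forecast(np.array([0.1]), z_train, y_train, bandwidth=0.5)
# query sits near the first training window, so pred is close to (1, 1)
```

This distance-weighted readout is also where the similarity to self-attention, noted later in this review, comes from.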
Experiments on both synthetic and real-world datasets demonstrate the performance of the proposed model.
Claims And Evidence: The main claim is that the proposed method achieves strong performance compared to the state of the art and is theoretically grounded. The experimental campaign indeed demonstrates the effectiveness of the proposed approach.
A secondary claim is mentioned in the introduction: "it learns a single parametric model that generalizes to new time series." However, I am not entirely sure what the authors mean by this statement. My understanding might be incorrect, but I expected to see experiments on adaptation or transfer to previously unseen time series in the datasets. However, the paper does not include any experiments addressing this issue.
Methods And Evaluation Criteria: The selected benchmarks are standard in the field, although they are known for their simplicity and stability. The two evaluation metrics, MAE and MSE, are widely used in forecasting. The chosen baselines are also comprehensive, ensuring broad coverage of existing architectures.
Theoretical Claims: The formalism is derived from Empirical Dynamic Modeling (EDM). It is clearly presented and easy to understand, with a summary of the mathematical tools provided in the appendix. The paper does not include any additional theoretical elements that require validation.
Experimental Designs Or Analyses: The paper follows recent publications in terms of benchmark selection and evaluation protocols. However, the benchmarks commonly used in the field are known to be simple and highly stable.
The authors chose to present results in the main body of the paper based on a very long lookback window (twice the forecasting horizon), while results for other lookback windows are only provided in the appendix. However, these results are not detailed per lookback window; instead, only the best result is displayed (Table 3). I believe these results should be presented in greater detail, with a breakdown by lookback window size.
Additionally, Table 4 (ablation study) reports averaged results, which somewhat obscures key insights. Specifically, it shows that a simple linear model achieves results very close to both the baselines and the proposed model, on average, across all forecasting horizons. To better understand the challenge posed by the task and the added value of more complex methods over a simple linear model, these results should be detailed more thoroughly.
It is unfortunate that there are no experiments on transfer learning or domain adaptation. The proposed method seems well-suited for such tasks, given the learned latent space, and this could have been a key strength of the approach.
Supplementary Material: Yes, all of it.
Relation To Broader Scientific Literature: The paper is in line with recent works in time series forecasting, but it shares the same limitations, namely the use of simple benchmarks that may no longer be suitable.
The paper also draws a connection between the framework used and self-attention, which is indeed very similar once the latent projection method is defined.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: As mentioned above, it would be beneficial to provide detailed experiments for the different lookback windows and the ablation study. Additionally, it would be valuable to include experiments on more competitive tasks.
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and detailed review. We now address your suggestions. Our response will be integrated into the final version.
### 1. Clarification on Generalization Statement
This was meant to contrast DeepEDM with EDM using simplex: EDM trains a separate model for each time series, whereas DeepEDM learns a single shared model across all series (within the same dataset). The statement is partially supported by our main results (Table 1), where models were evaluated on unseen time series. In this widely-used evaluation protocol, the training series and test series may be extracted from the same long sequence. Due to this design, we realize that part of this statement (“generalization to new time series”) is ambiguous and will revise it in the final version.
To further clarify, we conducted an additional experiment on ETT, Exchange, and Weather datasets to evaluate DeepEDM's ability to generalize across “different” time series (within the same dataset). Specifically, for each dataset, we trained the model on a subset of sequences (seqs.) and tested it on a different subset.
- For the ETT datasets, which consist of 7 seqs., we trained on the seq. 0-2 (using only timesteps from the standard training split) and tested on seq. 4-6 (using timesteps from the standard test split). We use this 3:3 split as several baseline models are unable to handle different channel dimensions between training and testing.
- Similarly, for the Exchange dataset (8 seqs.), we trained on the first 4 sequences and tested on the latter 4.
- For the Weather dataset (21 seqs.), we trained on sequences 0-9 and tested on sequences 10-19.
We compared DeepEDM against strong baselines including Koopa, iTransformer, and PatchTST. The prediction horizon (H) varied over {48, 96, 144, 192} and the lookback window was set to $2*H$. The summarized results are presented below along with a link to the full results. Notably, DeepEDM ranks 1st in MAE and MSE, winning in 39 out of 48 cases. The results thus provide more direct support for the statement of “a single model that generalizes to new time series."
| Dataset | DeepEDM (MSE/MAE) | Koopa (MSE/MAE) | PatchTST (MSE/MAE) | iTransformer (MSE/MAE) |
|:----------|:----------------------|:------------------|:----------------------|:-------------------------|
| ETTh1 | (**0.230**/**0.310**) | (0.248/0.330) | (0.253/0.330) | (0.259/0.330) |
| ETTh2 | (**0.151**/**0.230**) | (0.170/0.260) | (0.183/0.270) | (0.170/0.250) |
| ETTm1 | (**0.214**/**0.280**) | (0.242/0.310) | (0.216/0.290) | (0.245/0.300) |
| ETTm2 | (**0.075**/**0.160**) | (0.086/0.180) | (0.088/0.180) | (0.096/0.190) |
| Exchange | (0.106/**0.200**) | (0.146/0.240) | (**0.105**/0.210) | (0.131/0.230) |
| Weather | (**0.293**/**0.250**) | (0.325/0.280) | (**0.293**/**0.250**) | (0.423/0.320) |
https://anonymous.4open.science/r/icml_rebuttal-B440/generalization_exp_results.md
### 2. Lookback Window Reporting
Since historical data is typically accessible, any reasonable lookback window can be assumed, and the lookback length is often treated as a hyperparameter [Bergmeir, 2023]. Therefore, we tuned the lookback length separately for each model and forecasting horizon, and only reported the best results without a detailed breakdown. This protocol was also used in prior works [TimeMixer, PatchTST, DLinear]. The effect of lookback length on forecasting performance is discussed in Appendix A.5.2 (Figure 4). If needed, we can add a detailed breakdown to the Appendix.
### 3. Ablation Study Reporting and Results of Linear Models
Our ablation study is designed to concisely highlight the method's design choices. These detailed experiments span 9 datasets, 4 lookback settings, and 4 configurations with multiple runs. To maintain clarity and conciseness, we presented aggregated results describing key trends. Per the reviewer’s request, we have included the full table below, which we will add to the Appendix.
https://anonymous.4open.science/r/icml_rebuttal-B440/ablation_full_results.md
**Linear models**:
As prior work [DLinear, RLinear] has noted, linear models are effective for forecasting, particularly for simpler datasets. However, they struggle with more complex datasets. As shown in the above table, DeepEDM significantly outperforms the linear model on benchmarks like ILI, Weather, ECL, and Traffic by a large margin, e.g., on ECL the relative improvement in MSE from the linear model to our method is ~10%.
### 4. Transfer learning / domain adaptation
This is an exciting future direction! Our work focuses on single-domain forecasting but shows transfer within-domain (see Q1). Cross-domain transfer learning requires further study and we leave it as future work. | null | null | null | null | null | null |
Power Mean Estimation in Stochastic Continuous Monte-Carlo Tree Search | Accept (poster) | Summary: The submission introduces a new MCTS algorithm for continuous and stochastic MDPs. The method combines power mean backups (known to be more stable than simple averaging) with polynomial exploration bonuses (known to lead to improved convergence compared to logarithmic exploration) and an HOO-like partitioning scheme for the continuous domain. The convergence rate of the method in continuous and stochastic MDPs is analyzed and found to match that of POLY-HOOT for continuous and deterministic MDPs. In the presence of stochasticity, the method empirically performs better than its deterministic counterpart.
Claims And Evidence: I did not find any problematic claims.
Methods And Evaluation Criteria: * Analyzing the convergence rate makes sense for an optimization method.
* The method is empirically evaluated on classic RL benchmarks, which seem appropriate to me. A small caveat is that the benchmarks are originally deterministic and only made stochastic after the fact by adding noise. In order to support the practical relevance of extending this class of algorithms to stochastic environments, I think it would have been a bit nicer to find an inherently stochastic application.
Theoretical Claims: I checked the proofs in Appendix F and did not find an issue with them.
Experimental Designs Or Analyses: If I understand correctly, POLY-HOOT and discretized-POWER UCT are as closely related to the method as the current baselines discretized-UCT and HOOT. Therefore, including them as baselines could provide an interesting additional layer of ablation.
Supplementary Material: I reviewed Appendix A and Appendix F.
Relation To Broader Scientific Literature: The suggested method is most closely related to POLY-HOOT:
Both methods adopt polynomial exploration and both methods achieve the same convergence rate. The difference is that POLY-HOOT is designed for deterministic settings, while the proposed method works for stochastic settings.
Since POLY-HOOT is related to HOOT and PUCT (by replacing the logarithmic exploration bonus in HOOT with the polynomial one from PUCT) and to HOO (by using it for action selection), the proposed method is also related to these two pieces of work.
Essential References Not Discussed: **Generalized Mean Estimation in Monte-Carlo Tree Search** Dam, Klink, D’Eramo, Peters, Pajarinen. Published in IJCAI 2020.
This paper introduces „POWER-UCT“, which is related to the proposed method in the sense that it is also an MCTS variant with power mean backups (e.g. Section 4, Eq. 11). The difference from the proposed method is that POWER-UCT is designed for discrete and deterministic environments.
**Power Mean Estimation in Stochastic Monte-Carlo Tree Search** Dam, Maillard, Kaufmann. Published in UAI 2024.
This is another very similar power mean MCTS version. The difference is that the newly proposed version is for continuous, stochastic MDPs, whereas this citation is for discrete, stochastic MDPs.
Other Strengths And Weaknesses: I think this work nicely complements and fits into the existing line of work by extending this class of algorithms to continuous and stochastic settings.
Other Comments Or Suggestions: * I would mention PUCT *with its name* in the related work section. I know it’s mentioned later on, but I was slightly confused.
* One could highlight the best performing method in Table 2 by using bold font for it.
* Typo in the headlines in the Appendix: „Convergence of Stochastic-Power-HOOT [i]n Monte-Carlo Tree Search“.
* In the outline in Appendix A, the third bullet point should be „Technical Lemmas“.
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive review and for suggesting additional related work we hadn't cited. We appreciate your thorough analysis of our theoretical claims. We address each of your concerns in detail and kindly ask you to consider updating your scores after reading the rebuttal.
### Including Additional Baselines
| Algorithm | Humanoid-v0 | Hopper-v0 |
|-----------|---------------------|-------------------|
| UCT (baseline) | -136.98 ± 44.84 (1.0x) | 5216.93 ± 179.64 (1.0x) |
| POLY-HOOT (p=1) | -44.40 ± 3.33 (3.1x) | 13230.66 ± 2844.33 (2.5x) |
| **Stochastic-Power-HOOT** | **-44.12 ± 6.22 (3.1x)(p=2)** | **13303.61 ± 3070.34 (2.5x)(p=8)** |
| HOOT | -57.35 ± 10.46 (2.4x) | 10452.83 ± 3885.12 (2.0x) |
| PW-UCT | -89.74 ± 24.16 (1.5x) | 5218.73 ± 1384.90 (1.0x) |
| Voronoi MCTS | -48.69 ± 9.52 (2.8x) | 411.70 ± 29.03 (12.7x worse) |
Following your suggestion, we added Kim et al.'s Voronoi MCTS (AAAI 2020) to our experiments. This recent algorithm uses Voronoi partitioning for continuous action spaces but is designed for deterministic settings only:
1. **Voronoi MCTS comparison**:
   - Performs well on Humanoid (3rd best performer after our method)
- But shows inconsistent results on Hopper (worst performer)
- This highlights the need for methods specifically designed for stochasticity
### Missing Citations
Thank you for pointing out the important related work. We'll add these citations in our revised paper:
1. "Generalized Mean Estimation in Monte-Carlo Tree Search" (Dam et al., 2020)
2. "Power Mean Estimation in Stochastic Monte-Carlo Tree Search" (Dam et al., 2024)
These works complement our approach and help position our contribution in the broader literature on power means in MCTS.
### Additional Changes
We'll implement your suggested edits:
- Bold formatting for best results in Table 2
- Fix typos in Appendix headings
- Properly name PUCT in the related work section
Kim, B., Lee, K., Lim, S., Kaelbling, L., & Lozano-Perez, T. (2020). Monte Carlo Tree Search in Continuous Spaces Using Voronoi Optimistic Optimization with Regret Bounds. Proceedings of the AAAI Conference on Artificial Intelligence, 34(06), 9916-9924. | Summary: The paper considers MCTS for continuous action space and non-stochastic environments. The authors propose to use a power mean operator for value estimates. They also propose a polynomial exploration bonus. They show convergence results for stochastic environments that matches previous results on deterministic environments. Finally they perform experiments that show improvement over previous methods.
**Post rebuttal**
I'm raising my score to 2 as my concerns were answered. I think it would benefit the paper to better discuss the theoretical results that allow obtaining guarantees on stochastic environments.
Claims And Evidence: The theoretical claims looks sound (even though I didn't read the proofs).
The experimental results seem a bit lacking - the environments and rewards were modified to fit the claims. I'm not sure if these changes are reasonable and would have preferred some benchmark environment that wasn't tampered with.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No, but the theoretical claims are reasonable.
Experimental Designs Or Analyses: No.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper improves on previous algorithms empirically (e.g. HOOT) and extends previous convergence rates to stochastic environments.
Essential References Not Discussed: No to the best of my knowledge.
Other Strengths And Weaknesses: Strengths:
1. Solid theoretical result.
2. Paper is written clearly.
3. Experiments show improvements of the current method over previous similar works.
Weaknesses:
1. The power mean estimation method seems to be the main technical novelty, but why it is beneficial is not really discussed. I understand that it's closer to the max function than the average function, but so is softmax. This choice seems arbitrary and unclear; perhaps the theory still works, but a complex averaging scheme is not a strong novelty in my opinion. It also obviously places practical limitations on negative or zero rewards, which adds unnecessary complications.
2. The polynomial bonus was mentioned to also exist in POLY-HOOT; how do the two compare? You mention in the introduction that it's tailored to stochastic MDPs, but this tailoring is not explained in the text.
3. The environments in the experiments were tailored to show the solution's benefits. But if stochastic benchmarks couldn't be found, maybe the problem is not that interesting.
Other Comments Or Suggestions: If there is a benchmark which is stochastic with continuous action space it would make a more convincing case.
Questions For Authors: Following my previous comments:
1. Can you give a strong justification for mean power estimation?
2. Can you provide a better explanation of the polynomial bonus and why it works for stochastic environments, unlike the previous polynomial bonus?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review of our submission. We appreciate your recognition of our theoretical contributions and have addressed your concerns with additional justification and expanded experiments.
## Power Mean Estimation: Detailed Justification
- The power mean effectively balances between underestimation and overestimation of optimal values in MCTS. As planning progresses, we need a backup operator that captures the potential of promising actions without being overly optimistic.
- Standard arithmetic means (p=1) tend to underestimate optimal values by averaging across all samples, including suboptimal early explorations. Conversely, max operators (p→∞) overestimate true values by focusing solely on the best sample, which may be an outlier.
- Power means with p>1 provide an elegant trade-off: they give more weight to higher values without fully committing to the maximum. As p increases above 1, the power mean moves smoothly from the arithmetic mean toward the max operator. This creates a controlled optimistic bias that encourages exploration of promising regions while maintaining some robustness to outliers.
- Our experiments confirm this advantage—Stochastic-Power-HOOT with appropriate p values consistently outperforms both UCT (which uses arithmetic means) and other approaches, particularly in stochastic environments where balancing optimism and realism is critical.
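The interpolation between averaging and the max operator described above can be seen in a few lines; `power_mean` is our own illustrative helper, assuming positive rewards (which is also where the caveat about zero or negative values arises):

```python
import numpy as np

def power_mean(values, p):
    """Generalized (power) mean of positive values.

    p = 1 recovers the arithmetic mean; as p grows, the estimate moves
    monotonically toward max(values), i.e. a controlled optimistic bias
    that never fully commits to a single (possibly outlier) sample.
    Positive rewards are assumed.
    """
    v = np.asarray(values, dtype=float)
    return np.mean(v ** p) ** (1.0 / p)

rewards = [1.0, 2.0, 10.0]
m1 = power_mean(rewards, 1)  # arithmetic mean
m8 = power_mean(rewards, 8)  # much closer to max(rewards) = 10
```

With one large sample among small ones, raising p moves the estimate smoothly from the mean toward (but never past) the max, which is the trade-off exploited in the backup operator.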
## Polynomial Bonus for Stochastic Environments
Our approach addresses a critical gap in the POLY-HOOT paper: convergence in stochastic MDPs. While POLY-HOOT proved convergence for deterministic environments, our contribution extends these guarantees to settings with stochastic transitions.
The key factors:
- Our polynomial bonus separates local action exploration from global uncertainty adjustment, creating an adaptive mechanism that responds to varying levels of stochasticity in the environment.
- In stochastic environments, value estimates have higher variance from both transition randomness and downstream policy changes. Our approach maintains sufficient exploration even as planning progresses, unlike logarithmic bonuses that decrease too rapidly.
- The combination with power mean backups creates a robust mechanism for stochastic settings: the power mean controls the trade-off between underestimation and overestimation in value estimates, while the polynomial bonus guides exploration appropriately.
Stochastic-Power-HOOT's superior performance demonstrates the effectiveness of our approach in bridging this important theoretical gap.
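The decay-rate difference between the two bonus types can be illustrated in a few lines (the exponents and constants below are illustrative choices, not the exact bonus used in Stochastic-Power-HOOT):

```python
import math

def log_bonus(n, c=2.0):
    """UCT-style bonus sqrt(c * ln(n) / n): shrinks quickly with visits n."""
    return math.sqrt(c * math.log(n) / n)

def poly_bonus(n, alpha=0.25):
    """Polynomial bonus n^(-alpha): decays more slowly, keeping
    exploration alive while stochastic transitions keep estimates noisy."""
    return n ** (-alpha)

# As visit counts grow, the polynomial bonus dominates the logarithmic one,
# so exploration is not shut down prematurely in high-variance settings.
ratios = [poly_bonus(n) / log_bonus(n) for n in (10, 1_000, 100_000)]
```

The growing ratio shows why a polynomial bonus keeps exploring long after a logarithmic one has effectively stopped.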
## Experimental concerns
In response to the reviewer's concern about the relevance of stochastic environments:
- While our experiments use simulated environments, they reflect fundamental challenges present in real-world applications, which are inherently stochastic. Robotics, autonomous vehicles, healthcare, and resource management all operate under multiple sources of uncertainty (action, dynamics, and observation uncertainty).
- The growing deployment of autonomous systems in unstructured environments makes planning under stochasticity increasingly critical. Current methods often rely on deterministic approximations that fail when uncertainty accumulates. Our approach addresses this gap, demonstrating how planning can be made robust to the types of stochasticity that practitioners face daily.
- The contribution is not creating artificial noise, but rather developing theoretical guarantees for continuous planners in settings where stochasticity cannot be ignored - a limitation of prior work that restricts real-world applicability.
| Algorithm | Humanoid-v0 | Hopper-v0 |
|-----------|---------------------|-------------------|
| UCT (baseline) | -136.98 ± 44.84 (1.0x) | 5216.93 ± 179.64 (1.0x) |
| POLY-HOOT (p=1) | -44.40 ± 3.33 (3.1x) | 13230.66 ± 2844.33 (2.5x) |
| **Stochastic-Power-HOOT** | **-44.12 ± 6.22 (3.1x)(p=2)** | **13303.61 ± 3070.34 (2.5x)(p=8)** |
| HOOT | -57.35 ± 10.46 (2.4x) | 10452.83 ± 3885.12 (2.0x) |
| PW-UCT | -89.74 ± 24.16 (1.5x) | 5218.73 ± 1384.90 (1.0x) |
| Voronoi MCTS | -48.69 ± 9.52 (2.8x) | 411.70 ± 29.03 (12.7x worse) |
We ran further experiments on Humanoid and Hopper with noise introduced in actions, dynamics, and observations. Our approach significantly outperforms existing methods on higher-dimensional robotics tasks with multiple sources of stochasticity. On Humanoid-v0 (17D action space), Stochastic-Power-HOOT achieves a 3.1x improvement over UCT, while on Hopper-v0, it shows a 2.5x improvement. Interestingly, tuning the power mean matters: p=2 works best for Humanoid while p=8 excels for Hopper.
The catastrophic performance of Voronoi MCTS on Hopper (an algorithm designed only for deterministic settings) further shows the challenge of designing robust planners for stochastic settings and validates our theoretical contribution.
After reading the rebuttal, we kindly invite you to consider updating your scores. | Summary: This paper introduces Stochastic-Power-HOOT, a novel Monte-Carlo Tree Search (MCTS) algorithm designed for continuous, stochastic MDPs. The authors propose integrating a power mean as a value backup operator along with a polynomial exploration bonus to address the related challenges. The paper provide theoretical analysis that Stochastic-Power-HOOT converges at a polynomial rate, extending the theoretical guarantees of POLY-HOOT to stochastic environments. And the authors also conduct experiments on stochastic tasks adapted from OpenAI Gym to support their claims.
Claims And Evidence: The paper makes several strong claims, most of which are mainly supported by theoretical analysis, while the empirical experiments are a bit weak:
- Stochastic-Power-HOOT converges at a polynomial rate in stochastic environments. The authors provide a theoretical analysis in Section 6, showing that the algorithm achieves convergence at the mentioned rate. This is supported by Lemma 1, Theorem 1, and Theorem 3, which establish the polynomial concentration of the power mean estimator and the convergence of the expected payoff. However, I don't find that the paper discusses the computational complexity of Stochastic-Power-HOOT compared to other methods, which could be a limitation in practical applications.
- The power mean backup operator effectively handles non-stationary dynamics. The theoretical analysis in Section 5.2.4 and the experimental results in Section 7 support this claim. The power mean operator is shown to balance exploration and exploitation better than simple averaging, especially in stochastic environments.
- Stochastic-Power-HOOT outperforms existing methods in continuous, stochastic domains. The experimental results in Section 7 demonstrate that Stochastic-Power-HOOT achieves higher average returns compared to HOOT, discretized-UCT, and PUCT across several modified OpenAI Gym tasks. The results are presented with standard deviations, showing consistent performance improvements. However, these experiments are limited to simple or modified OpenAI Gym tasks. The dimensionality of the continuous space and the complexity of the stochasticity are far below those of environments commonly used in the academic community and in real-world decision-making scenarios.
Methods And Evaluation Criteria: The proposed method, Stochastic-Power-HOOT, is well-suited for the problem of continuous, stochastic MDPs. The use of a power mean backup operator and polynomial exploration bonus addresses the limitations of existing methods like HOOT, which rely on logarithmic bonuses that lack theoretical guarantees in non-stationary settings.
The authors compare Stochastic-Power-HOOT against several established MCTS variants (e.g., HOOT, discretized-UCT, PUCT). And the evaluation criteria, including average returns and standard deviations over multiple runs, are standard and suitable for assessing the performance of planning algorithms.
Theoretical Claims: The theoretical claims in the paper are well-supported by rigorous proofs. The authors provide detailed proofs for the polynomial concentration of the power mean estimator (Theorem 1) and the convergence of the expected payoff (Theorem 3). The proofs rely on assumptions about the smoothness of the reward function and the hierarchical structure of the MCTS tree, which are reasonable for the problem at hand.
The proof of Theorem 1 relies on the assumption that the reward sequence satisfies certain concentration properties (Assumption 3). While this assumption is standard in bandit literature, it would be helpful to discuss its implications in the context of MCTS. The paper could benefit from a more detailed discussion of the conditions under which the theoretical guarantees hold, especially in relation to the choice of hyper-parameters.
Experimental Designs Or Analyses: The experimental design is sound, with the authors testing Stochastic-Power-HOOT on modified versions of classic control tasks. The modifications introduce stochasticity in actions, transitions, and observations, making the tasks more challenging and suitable for evaluating the algorithm's robustness. But totally, the experiments are limited to relatively simple tasks. While the results are promising, they may not generalize to more complex or high-dimensional environments.
Supplementary Material: No
Relation To Broader Scientific Literature: This paper extends the theoretical guarantees of POLY-HOOT to stochastic environments.
Essential References Not Discussed: The cited references cover some classical works in the MCTS field, but offer little discussion of recent deep RL + MCTS methods, such as AlphaZero/MuZero, and their variants designed for continuous action spaces and stochastic environments, namely Sampled MuZero and Stochastic MuZero.
Other Strengths And Weaknesses: Strengths:
- The paper addresses an important gap in the literature by extending MCTS to continuous, stochastic environments with theoretical guarantees.
- The use of a power mean backup operator is novel and well-motivated, providing a flexible mechanism for value estimation.
Weaknesses:
- The empirical validation is limited to relatively simple tasks. Testing on more complex or real-world environments would strengthen the paper.
- The theoretical analysis, while rigorous, relies on assumptions that may not hold in all practical scenarios.
Other Comments Or Suggestions: - The paper could benefit from a more detailed ablation study to isolate the impact of the power mean backup operator versus the polynomial exploration bonus.
- The paper could benefit from a clearer explanation of the intuition behind the power mean backup operator and how it helps handle non-stationarity in stochastic environments.
Questions For Authors: - How do the assumptions made in the theoretical analysis impact the applicability of Stochastic-Power-HOOT in more general settings? Are there any limitations or potential extensions to address these concerns?
- For sparse-reward tasks like MountainCar, does explicit reward engineering obscure the algorithm’s performance in raw sparse-reward settings?
- How do you address the trade-off between exploration and exploitation when the stochasticity of the environment varies significantly over time or across different regions of the state space?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your insightful review and recognition of our theoretical contributions. We address each of your concerns in detail and kindly invite you to consider updating your scores
### Complex Environments
We've expanded our experiments to include high-dimensional MuJoCo environments with noise added to actions, dynamics, and observations:
1. **Humanoid-v0** (17D action space):
- Stochastic-Power-HOOT achieved 3.1x better results than baseline UCT
- Even in this complex environment, our method maintained consistent improvement
2. **Hopper-v0**:
- Stochastic-Power-HOOT achieved 2.5x better results than baseline UCT
- Stochastic-Power-HOOT (p=8) showed further improvements (2.6x)
| Algorithm | Humanoid-v0 | Hopper-v0 |
|-----------|---------------------|-------------------|
| UCT (baseline) | -136.98 ± 44.84 (1.0x) | 5216.93 ± 179.64 (1.0x) |
| POLY-HOOT (p=1) | -44.40 ± 3.33 (3.1x) | 13230.66 ± 2844.33 (2.5x) |
| **Stochastic-Power-HOOT** | **-44.12 ± 6.22 (3.1x)(p=2)** | **13303.61 ± 3070.34 (2.5x)(p=8)** |
| HOOT | -57.35 ± 10.46 (2.4x) | 10452.83 ± 3885.12 (2.0x) |
| PW-UCT | -89.74 ± 24.16 (1.5x) | 5218.73 ± 1384.90 (1.0x) |
| Voronoi MCTS | -48.69 ± 9.52 (2.8x) | 411.70 ± 29.03 (12.7x worse) |
**Voronoi MCTS comparison:** We added Kim et al.'s Voronoi MCTS (AAAI 2020) to our experiments. This recent algorithm uses Voronoi partitioning for continuous action spaces. Interestingly, it performs well on Humanoid but poorly on Hopper, highlighting complementary strengths of different approaches.
### Ablation Study on Power Parameter
We conducted additional experiments varying the power parameter p (1, 2, 4, 8) and found:
- For Humanoid-v0, p=2 performs best (-44.12 ± 3.11)
- For Hopper-v0, p=8 shows highest returns (13303.61 ± 3070.34)
- Higher p values generally help in more stochastic environments
## Power Mean Justification
- The power mean effectively balances between underestimation and overestimation of optimal values in MCTS. As planning progresses, we need a backup operator that captures the potential of promising actions without being overly optimistic.
- Standard arithmetic means (p=1) tend to underestimate optimal values by averaging across all samples, including suboptimal early explorations. Conversely, max operators (p→∞) overestimate true values by focusing solely on the best sample, which may be an outlier.
- Power means with p>1 provide an elegant trade-off: they give more weight to higher values without fully committing to the maximum. As p increases above 1, the power mean moves smoothly from the arithmetic mean toward the max operator. This creates a controlled optimistic bias that encourages exploration of promising regions while maintaining some robustness to outliers.
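This interpolation between the arithmetic mean (p=1) and the max operator (p→∞) can be illustrated with a minimal sketch; the sample values below are purely illustrative:

```python
def power_mean(values, p):
    """Power mean M_p = ((1/n) * sum(v**p))**(1/p); requires positive inputs."""
    n = len(values)
    return (sum(v ** p for v in values) / n) ** (1.0 / p)

# Hypothetical return samples with one promising outlier.
samples = [1.0, 2.0, 3.0, 10.0]
for p in [1, 2, 8, 64]:
    # p=1 gives the arithmetic mean (4.0 here); as p grows, the value
    # increases monotonically toward max(samples) without exceeding it.
    print(p, power_mean(samples, p))
```

For any fixed set of positive samples, the power mean is non-decreasing in p and bounded above by the maximum, which is exactly the controlled optimistic bias described above.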
### References to Deep RL + MCTS Methods
In our revised paper, we'll include a discussion connecting and contrasting our approach with deep RL + MCTS methods for continuous spaces.
### Addressing Your Specific Questions
1. **Theoretical assumptions in general settings**:
To clarify:
**Concentration Properties (Assumption 3)**
This assumption is actually **automatically satisfied by construction** in our algorithm:
- As proven in Section 6, the concentration property is established by induction through the MCTS tree structure
- The power mean backup operator inherently preserves and strengthens this property
- Unlike alternative approaches requiring external assumptions, our concentration guarantees emerge directly from the algorithm design
**Smoothness (Assumption 2)**
The smoothness assumption is both theoretically sound and practically validated:
- It is a standard requirement across all HOO-based algorithms (HOO, HOOT, POLY-HOOT)
- For robotic control tasks (Hopper, Humanoid, etc.), smoothness naturally emerges from:
* Physics-based dynamics (governed by differential equations)
* Continuous control inputs and their effects on system states
* Reward functions based on physical quantities (distance, energy, etc.)
**Empirical Validation**
Our extensive experiments confirm these assumptions hold in practice:
- Across 5 standard benchmarks (CartPole, Pendulum, etc.)
- In higher-dimensional MuJoCo environments (17D Humanoid, 3D Hopper)
- Under various stochasticity conditions (dynamics noise, observation noise)
The superior performance of Stochastic-Power-HOOT over alternatives (UCT, HOOT, Voronoi MCTS) provides strong empirical evidence that our theoretical assumptions align well with practical settings.
2. **Sparse-reward tasks**: For MountainCar, our reward transformation preserves the sparse nature while ensuring numerical stability. Our results show robust performance even with the original reward structure.
3. **Handling varying stochasticity**: The power mean parameter (p) together with the polynomial bonus allows adaptation to different noise levels. Our ablation studies show p=2 works well for Humanoid, while p=8 works well for Hopper. | Summary: This paper introduces Stochastic-Power-HOOT, an extension of HOOT designed to handle stochastic and continuous-action Markov Decision Processes (MDPs), where prior methods primarily focused on deterministic settings. The core contributions include:
- Power mean backup operator to mitigate non-stationary reward estimation issues in stochastic environments.
- Polynomial exploration bonus to ensure convergence at a polynomial rate in stochastic settings.
- The paper evaluates Stochastic-Power-HOOT on four modified OpenAI Gym tasks (CartPole, Pendulum, MountainCar, and Acrobot), demonstrating that it outperforms existing baselines such as HOOT, PUCT, and discretized-UCT.
Claims And Evidence: The paper provides strong theoretical justification for its proposed modifications, particularly the use of power mean backups and polynomial exploration bonuses, along with a clear convergence rate analysis. However, the empirical evaluation is somewhat limited:
- The experiments are conducted on toy domains with relatively simple dynamics. The applicability of the method to more complex, real-world tasks (e.g., MUJOCO environments or robotics control benchmarks) remains uncertain.
- The baselines used for comparison are dated, with the most recent one being from 2013. It is unclear whether the observed improvements would hold against state-of-the-art approaches.
While the theoretical contributions are compelling, the experimental validation does not fully justify the practical impact of the proposed method.
Methods And Evaluation Criteria: The evaluation benchmarks are too simple to meaningfully assess the advantages of Stochastic-Power-HOOT in complex stochastic environments. More challenging and widely-used benchmarks (e.g., MUJOCO or real-world robotic control tasks) would have provided a stronger demonstration of the algorithm’s scalability and practical relevance.
Theoretical Claims: The theoretical results appear correct and well-structured. The proofs provide clear convergence guarantees, and I did not find any obvious flaws in the derivations.
Experimental Designs Or Analyses: See issues listed in 'Methods And Evaluation Criteria' section.
Supplementary Material: N.A.
Relation To Broader Scientific Literature: The paper extends prior work in Monte-Carlo Tree Search (MCTS) for continuous action spaces and stochastic domains, which is relevant for broader AI research community.
Essential References Not Discussed: N.A.
Other Strengths And Weaknesses: N.A.
Other Comments Or Suggestions: N.A.
Questions For Authors: See issues listed in 'Claims And Evidence' section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: | Algorithm | Humanoid-v0 | Hopper-v0 |
|-----------|---------------------|-------------------|
| UCT (baseline) | -136.98 ± 44.84 (1.0x) | 5216.93 ± 179.64 (1.0x) |
| POLY-HOOT (p=1) | -44.40 ± 3.33 (3.1x) | 13230.66 ± 2844.33 (2.5x) |
| **Stochastic-Power-HOOT** | **-44.12 ± 6.22 (3.1x)(p=2)** | **13348.45 ± 6110.36 (2.6x)(p=8)** |
| HOOT | -57.35 ± 10.46 (2.4x) | 10452.83 ± 3885.12 (2.0x) |
| PUCT(PW) | -89.74 ± 24.16 (1.5x) | 5218.73 ± 1384.90 (1.0x) |
| Voronoi MCTS | -48.69 ± 9.52 (2.8x) | 411.70 ± 29.03 (12.7x worse) |
Thank you for your review and recognizing the theoretical contributions of our work. We appreciate your suggestions for improvement, particularly regarding empirical evaluation. We address each of your concerns in detail and kindly ask you to consider updating your scores after reading the rebuttal.
### Addressing Limited Experimental Evaluation
We expanded our experiments to include more complex environments. Specifically, we tested our approach on MuJoCo tasks with higher-dimensional action spaces:
1. **Humanoid-v0** (17D action space):
- Stochastic-Power-HOOT achieved 3.1x better performance than baseline UCT
- Stochastic-Power-HOOT (-44.12 ± 6.22) significantly outperformed HOOT (-57.35 ± 10.46)
2. **Hopper-v0**:
- Stochastic-Power-HOOT achieved 2.6x better performance than baseline UCT
- Performance ranking: Stochastic-Power-HOOT ≈ POLY-HOOT > HOOT > PUCT (Progressive Widening) > UCT >> Voronoi MCTS
These results demonstrate that our theoretical advantages translate to practical improvements in more complex domains with higher-dimensional action spaces.
We agree that comparing against newer state-of-the-art approaches would strengthen our evaluation. We added Kim et al.'s Voronoi MCTS (AAAI 2020) to our experiments as you suggested. This recent algorithm uses Voronoi partitioning for continuous action spaces. Interestingly, it performs well on Humanoid (2.8x better than UCT) but poorly on Hopper, highlighting complementary strengths of different approaches.
We will update our revised paper and discuss connections with recent deep RL + MCTS methods. | Summary: In the setting of continuous state/action MCTS, this work proposes replacing the empirical mean node value estimate with a power mean-based estimate. They also propose a tree action selection bonus based on this power mean. They provide convergence proofs for this method in the setting of stochastic MDPs, and show empirical results on 5 continuous control tasks.
## Update after rebuttal
The rebuttal addresses almost all of my significant concerns, so I have updated my recommendation.
My one reason for not choosing a weak or full accept recommendation is that I cannot say I understand the action branching behaviour, despite two author responses. I understand how the power mean focuses search, but cannot see how that affects lines 183-190 of the algorithm: a previously unselected sub-interval **must** be chosen each time the HOO tree is queried, other than when the HOO tree node is at max depth.
The node statistics are a bit difficult to understand, because the method takes an unusual approach to MCTS: **multiple** MCTS leaf nodes can be added during one trial, up to the specified max depth. In the tree structure analysis, very few nodes have a branching factor >1. This suggests to me that the algorithm may be behaving more like a flat Monte Carlo evaluation than MCTS with meaningful branching factors below the root node.
I appreciate I am the only reviewer to query this, so would not push for rejection based on my not understanding this action branching factor point.
Recommendation after rebuttal/discussion: weak reject.
Claims And Evidence: The theoretical claims seem well-supported by proofs. There are shortcomings with claimed contributions and the experiment scope, as discussed below.
Methods And Evaluation Criteria: The key novel change compared to POLY-HOOT (Mao et al 2020), i.e. the use of power means, is well-motivated to better track non-stationary rewards. It would have been nice to have more discussion of why the power mean in particular was chosen, instead of other estimators that can be shown to be consistent in the non-stationary setting.
The evaluation criteria are standard for MCTS methods with MDPs.
Theoretical Claims: I checked the convergence proofs at a high level and they seem to be correct.
Experimental Designs Or Analyses: I have some concerns about the experiments, which I list below.
- The experiments are limited to 5 low-dimensional continuous control tasks without much long-horizon planning needed. It would be nice to see results in higher-dimensional longer-horizon tasks, for example as in (Kim et al., "Monte Carlo Tree Search in Continuous Spaces Using Voronoi Optimistic Optimization with Regret Bounds", AAAI 2020).
- Line 435 describes MountainCar as "high-dimensional", but it is only 2D.
- More detail/justification should be given on the progressive widening parameters chosen. The optimal parameter values generally vary by task (Sunberg et al., "Online algorithms for POMDPs with continuous state, action, and observation spaces", ICAPS 2018).
- Similarly, it would be nice to investigate the effect of changing the max HOO tree depth parameter rather than arbitrarily fixing it at 10.
- Insufficient justification is given for scaling the rewards of the continuous control tasks. It is well-known that shifting rewards (especially changing the sign of rewards) can significantly change the optimal policy behaviour and returns.
- Conflictingly, the description of the MountainCar domain seems to suggest that it still has negative rewards most of the time (l405-406).
- Additionally: "Details of the experimental setup, including noise levels and reward-scaling techniques, are provided in Appendix G." (l424-426), however appendix G has no discussion of reward scaling.
Supplementary Material: I reviewed the entire supplementary material, which is largely proofs. As an emergency reviewer, I did not have time to check the proofs in detail, but they seem sensible at a high level.
Relation To Broader Scientific Literature: In my view the claimed contributions of the paper over the current literature are overstated in places:
1. Applicability to stochastic MDPs. It is true that POLY-HOOT (Mao et al. 2020) did not apply to stochastic MDPs. However, I don’t agree that there are general “significant challenges… in adapting [MCTS] to stochastic MDPs” (l12-14), and that "adapting MCTS to... stochastic domains remains non-trivial" (l67-68). UCT (Kocsis et al. "Bandit based Monte-Carlo Planning", ECML 2006.) and many other MCTS-based algorithms are straightforwardly applicable to stochastic MDPs.
2. “Our approach avoids naive action-space discretisation by focusing search adaptively on where it matters most” (l39-41, RHS) -- this is true of many existing works, including the one that this work is most closely related to, POLY-HOOT.
However, results show this work's method outperforming common approaches to continuous state/action MCTS. Novel methods to continuous MCTS are relatively rare, so this is a good contribution and the method may be practically useful to the community.
It would have been nice to mention and potentially experimentally compare to a similar recent work that addresses continuous action sampling: (Kim et al., "Monte Carlo Tree Search in Continuous Spaces Using Voronoi Optimistic Optimization with Regret Bounds", AAAI 2020). That work only applies to deterministic MDPs, but the same is true of POLY-HOOT which is compared to in this work.
Essential References Not Discussed: I do not know of any missing essential references.
Other Strengths And Weaknesses: 1. Notation is not the best:
- Clarity would be improved by consistently using MCTS depth d = 0..D, rather than using h/H in the text formalisation of MCTS and d/D in Algorithm 1. The POLY-HOOT paper used d=0..D for MCTS trees and h=0..H for HOO trees, which is clearer.
- $H$ is also used for both the MCTS horizon and the chosen $h$ node value in Algorithm 1 (Line 190).
- $\hat{H}$ seems to be the maximum depth of a HOO tree but is not defined anywhere. It may be a typo of $\bar{H}$, defined later on line 290 RHS.
2. A few improvements to clarity could be made:
- Referring to “non-stationary settings” in the abstract is confusing, as it makes it seem that the work applies to non-stationary MDPs instead of non-stationary value estimation in the MCTS tree.
- In my view calling the optimism term in arm selection an “exploration bonus” is confusing. The term “exploration bonus” is most commonly used to refer to additional reward added to the ground-truth reward in order to encourage exploration by RL (Taiga et al, “On bonus-based exploration methods in the arcade learning environment”, 2021) (Kolter et al., "Near-Bayesian exploration in polynomial time", ICML 2009). Here, the bonus term affects only action selection, not the real incurred reward during the trajectory. Mao et al. (2020) call it a “bonus” without calling it an “exploration bonus”, likely to avoid confusion.
- "If s_{l_t} is a new leaf with l_t < H, new Q-value nodes (s_{l_t} , a) for all actions in Aslt are added." (l134-135 RHS) -- this only applies for the discrete action case, not the continuous action case.
- As the algorithm is largely a modification of POLY-HOOT, it would be helpful to use highlighting in Algorithm 1 to show the novel parts of the algorithm.
Other Comments Or Suggestions: ### Minor points
- Line 122-124 RHS: "If $\pi_0$ is deterministic, it provides a fixed value" -- this is not true for stochastic MDPs; a deterministic policy will give a distribution over returns.
- Equation 1 seems to be missing magnitude bars around the expectation operator
- Line 118 A_s not defined, and surely the sum should be over outcomes not actions?
- Appendix section headers E and F say “n” instead of “in”
- Line 155 “UCT like” should be “UCT-like”
- Line 179 overrun of line into RH column
- Line 207 awkward spacing
Questions For Authors: 1. I have one key question about HOO-based action selection, which I may be misunderstanding.
- From the description of the HOO action selection process here and in the original POLY-HOOT paper, it seems that a different action will be returned every time until the tree is fully expanded to max depth, down at least one path. Only max-depth actions may be repeated, otherwise a new action is selected and a new node created (lines 217-219).
- This means the MCTS tree action branching factor must be somewhere between $H$ and $2^H$, which may be very high. H=10 in your experiments, so the branching factor could be up to 1024. This seems very high for a continuous action space, and would make the tree very wide and shallow. In progressive widening, the branching factor dynamically changes over time to balance the tree width/depth, but I don't see how this could happen here.
- Given that your max MCTS horizon is $T=150$, surely my interpretation cannot be correct. Please can you explain?
2. Please explain the justification behind the reward scaling in the experiments.
Ethical Review Flag: Flag this paper for an ethics review.
Ethics Expertise Needed: ['Research Integrity Issues (e.g., plagiarism)']
Ethical Review Concerns: There is substantial duplicated proof content between the supplementary material and another paper (Mao et al, "POLY-HOOT: Monte-carlo planning in continuous space mdps with non-asymptotic analysis.", NeurIPS 2020), without attribution. The other paper is cited elsewhere in the text as a reference, but the proofs are not attributed to the other paper.
- proof of lemma 4 (i) is copied from lemma 4 in Mao et al (2020)
- proof of lemma 4 (ii, iii) is copied from lemma 6 in Mao et al (2020)
- proof of lemma 5 is copied from lemma 5 in Mao et al (2020)
- proof of lemma 6 is copied from lemma 7 in Mao et al (2020)
- proof of lemma 7 is substantially the same as in Mao et al (2020)
"copied from" refers to the proof logical structure, mathematics and text structure being identical other than renaming of two constants.
The proofs of these lemmas should instead be a citation to Mao et al (2020), rather than a direct copy-paste of the proof.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your detailed review. We address each of your concerns in detail and kindly ask you to consider updating your scores after reading the rebuttal.
## Experimental Design
**Higher-dimensional tasks:** We added results on Humanoid (17D action, 376D state) and Hopper (3D action) with added noise to action, dynamic, and observation. For both envs, we incorporated heuristic physical knowledge of the robots during MCTS rollouts across all algorithms to improve sample efficiency:
| Algorithm | Humanoid-v0 | Hopper-v0 |
|-----------|---------------------|-------------------|
| UCT (baseline) | -136.98 ± 44.84 (1.0x) | 5216.93 ± 179.64 (1.0x) |
| POLY-HOOT (p=1) | -44.40 ± 3.33 (3.1x) | 13230.66 ± 2844.33 (2.5x) |
| **Stochastic-Power-HOOT** | **-44.12 ± 6.22 (3.1x)(p=2)** | **13348.45 ± 6110.36 (2.6x)(p=8)** |
| HOOT | -57.35 ± 10.46 (2.4x) | 10452.83 ± 3885.12 (2.0x) |
| PUCT(PW) | -89.74 ± 24.16 (1.5x) | 5218.73 ± 1384.90 (1.0x) |
| Voronoi MCTS | -48.69 ± 9.52 (2.8x) | 411.70 ± 29.03 (12.7x worse) |
**Voronoi MCTS comparison:** We added Kim et al.'s Voronoi MCTS (AAAI 2020) to our experiments as you suggested. This recent algorithm uses Voronoi partitioning for continuous action spaces. Interestingly, it performs well on Humanoid (2.8x better than UCT) but poorly on Hopper, highlighting complementary strengths of different approaches.
**MountainCar dimensionality:** We will correct this to "low-dimensional" (2D state, 1D action).
**PW parameters:** We tune environment-specific PW parameters (α, k, c) following Sunberg et al. (2018). Each environment has tailored values and action selection strategies based on its dynamics:
- Acrobot: α=0.6, k=2.5, c=1.2 (timeseries strategy) > (77.86±0.00)
- MountainCar: α=0.55, k=2.2, c=1.3 (momentum strategy) > (-0.025 ± 0.004)
- CartPole-IG: α=0.5, k=2.0, c=1.0 (adaptive strategy) > (75.80±11.84)
- Pendulum: α=0.45, k=2.5, c=1.1 (optimistic strategy) > (1315.89±85.43)
We will add to the revised paper.
**HOO tree depth:** We will add an investigation varying H from 1-20 and report performance.
**Reward scaling:** Power means require positive inputs. Our reward transformation `max(0.01, (r + offset) * scaling_factor)` preserves positivity and ensures fair comparisons by applying the same scaling to all methods evaluated.
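A minimal sketch of such a transformation (the `offset` and `scaling_factor` values here are illustrative, not the paper's actual constants):

```python
def transform(r, offset=100.0, scaling_factor=0.01):
    """Affine reward map, clipped away from zero so power means stay defined."""
    return max(0.01, (r + offset) * scaling_factor)

rewards = [-1.0, 0.0, 5.0]
scaled = [transform(r) for r in rewards]
assert all(s > 0 for s in scaled)  # positivity required by the power mean
assert scaled == sorted(scaled)    # the ordering of these rewards is preserved
```

Since the map is affine (and the clip only binds for very negative rewards), the ranking of actions by expected reward, and hence the optimal policy, is unchanged wherever the clip is inactive.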
**MountainCar reward:** We clarify that scaling transforms negative rewards to positive while preserving optimal policy.
## Claimed Contributions
**Stochastic MDPs:** We will revise to clarify our contribution is extending POLY-HOOT (limited to deterministic MDPs) to stochastic settings, not introducing MCTS for stochastic MDPs broadly.
**Adaptive search:** We will rephrase to acknowledge building on POLY-HOOT's approach, emphasizing our novel power mean application to stochastic envs.
## Notation and Clarity
We will improve clarity by fixing all the mentioned notions.
## HOO-Based Action Selection in POLY-HOOT
To clarify:
**HOO Tree**
A different action isn't returned every time until max depth; here's how HOO-based action selection works:
- HOO (Hierarchical Optimistic Optimization) uses a binary tree structure to adaptively partition the continuous action space. Each node in this HOO tree represents a subset of the action space, with child nodes representing further subdivisions.
- When selecting an action, HOO follows a path from the root to a leaf by choosing at each level the child with the larger upper confidence bound (B-value). This doesn't necessarily mean creating a new unique action each time.
The key insight is that HOO strategically explores promising regions of the continuous action space rather than uniformly creating new actions. The algorithm uses upper confidence bounds to balance exploration and exploitation within the action space.
**Bounded-Depth HOO Tree**
POLY-HOOT introduces a bounded-depth parameter H̄ for the HOO tree, which serves an important purpose: Once a HOO tree path reaches depth H̄, it will repeat the action rather than creating a new one. This mechanism helps prevent the excessive branching you're concerned about.
**Practical Branching Factor**
In practice, the effective branching factor is much smaller than the theoretical maximum of 2^H̄ because:
1. HOO concentrates exploration on promising regions of the action space
2. The power parameter (p) further controls exploration vs. exploitation balance
3. The algorithm uses polynomial upper confidence bounds to focus on high-reward actions
4. The bounded-depth mechanism encourages exploitation of good actions already found
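To make the descent concrete, here is a simplified 1-D sketch of the selection mechanism described above, with B-value updates and the power mean backup omitted (all names are ours, not from the paper):

```python
class HOONode:
    """A node of the binary HOO tree; covers the action interval [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.b_value = float("inf")  # unvisited cells are maximally optimistic
        self.children = None         # leaf until expanded

    def midpoint(self):
        return 0.5 * (self.lo + self.hi)

def select_action(root, max_depth):
    """Descend by larger B-value; split the reached leaf unless at max depth."""
    node, depth = root, 0
    while node.children is not None and depth < max_depth:
        node = max(node.children, key=lambda c: c.b_value)
        depth += 1
    if node.children is None and depth < max_depth:
        mid = node.midpoint()  # halve the cell, as in HOO's binary partition
        node.children = [HOONode(node.lo, mid), HOONode(mid, node.hi)]
    return node.midpoint()     # play the centroid of the selected cell
```

In this stripped-down form, each call below the depth bound returns the centroid of a newly reached cell, while calls that hit the bound repeat an existing action, which is the bounded-depth mechanism discussed above; the power mean backup (not shown) is what concentrates the descent on a few promising paths.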
## Academic Integrity
We acknowledge the reviewer's concern about proof attribution. Due to the foundational nature of POLY-HOOT to our work and the necessary parameter renaming ($b,\alpha,\beta$ in our work versus $\eta,\alpha,\xi$ in Mao et al.), we included detailed proofs to ensure consistency and correctness. However, we recognize that proper attribution is crucial.
We will make explicitly clear in our revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. Two additional experiment domains and one additional baseline have addressed some of my concerns. I'm still unconvinced it was necessary to recreate lemma 4/5/6/7, with identical language and mathematics as Mao et al., just to rename some parameters. However, as long as it's properly attributed in the final version, this should be fine in my opinion.
I am still not convinced by your answer on HOO-based action selection. I understand that HOO focuses search effort on promising regions of the action space, the question is about the effective branching factor because nodes can only be reselected at the max depth.
1. A new HOO leaf node is definitely created each time an action is queried (lines 183-190) -- unless the max depth is reached, in which case an existing node can be returned.
2. You say that "This doesn't necessarily mean creating a new unique action each time". This would imply that different nodes/intervals can sometimes choose the same action -- for example, a child node could pick the same action as the parent node. However, in the algorithm it's described as "Choose arbitrary arm X in $P_{H,I}$".
3. The most obvious way to choose an action is to choose the action in the centroid of the interval. A good example is shown in Figure 1 of the original HOO paper ("X-Armed Bandits, Bubeck et al, 2011). In Figure 1, the pulled point is always the centre of the 1-dimensional interval the node represents. It is therefore clear that the pulled point $X_n$ of a node can never line up on the X axis with its parent, so whenever a new node is added a new pulled point $X_n$ is returned.
Given that your algorithm describes action selection as "Choose **arbitrary** arm X in $P_{H,I}$", it seems that unless you had some extra functionality to ensure the same actions can be selected at different nodes (H,I), then by default different nodes would always lead to different actions being selected.
In most practical scenarios I can't imagine that the branching factor is close to the minimum, $\overline{H}$.
For example, in Figure 2 of the original HOO paper, although most of the search effort has gone into the most promising region, lots of nodes have been added elsewhere. After 1000 trials, their HOO tree has a depth of 15, but the tree is full width up to around depth 7.
It may be useful to hear in practice how actions are selected in your implementation, and what the branching factors are in practice given N number of trials run.
---
Reply to Comment 1.1.1:
Comment: Thank you for your detailed feedback. We appreciate your thorough review and would like to address your specific concerns about HOO-based action selection and branching factor.
Your question about "branching factors in practice given N trials" has been addressed with our extensive empirical analysis.
Below are key insights from our tree structure analysis across multiple power values (p=1.0, 2.0, 4.0, 20.0) with the Hopper environment, using 5000 samples and 100 maximum depth:
1. **Branching Factor Analysis:**
Average branching factors are remarkably efficient:
- p=1.0: 2.17
- p=2.0: 1.86
- p=4.0: 1.94
- p=20.0: 2.10
While root nodes have high branching (84-116 children), this rapidly decreases:
For p=2.0:
Depth 0: 84.00
Depth 1: 2.27
Depth 2: 1.65
Depth 3: 1.35
...
Depth 10: 1.15
This demonstrates that Stochastic-Power-HOOT efficiently focuses computational resources.
2. **Branching Distribution:**
Most nodes (>95%) have just 1 child, with few having higher branching. For p=2.0 at depth 1:
- 1 child: 80 nodes (95.2%)
- 2 children: 2 nodes (2.4%)
- 3 children: 1 node (1.2%)
- 104 children: 1 node (1.2%)
This concentrated branching pattern persists throughout the tree. The results stem from our power mean backpropagation mechanism, which more aggressively concentrates exploration on promising regions compared to the standard approach.
3. **Branching Factor and Tree Depth:**
Branching factor approaches 1.0 earlier with higher power values:
- p=1.0: Converges at depth 20 (80% with branching = 1.0)
- p=2.0: Converges at depth 16 (84% with branching = 1.0)
- p=4.0: Converges at depth 15 (85% with branching = 1.0)
- p=20.0: Converges at depth 13 (87% with branching = 1.0)
Higher power values lead to earlier convergence, demonstrating how the power mean effectively controls the exploration-exploitation trade-off.
4. **Depth and Value Distribution:**
Trees reach maximum depth (100) across all power values, but with clear patterns:
- High-value nodes: average depth ~58-59
- Low-value nodes: consistently at maximum depth (100)
- ~99% of nodes classified as high-value
This indicates that the algorithm allocates resources efficiently, exploring valuable regions deeply while getting to the maximum tree depth.
5. **Performance Correlation:**
Power mean significantly affects branching patterns and performance:
- p=1.0: 7933.50 ± 7.61
- p=2.0: 18617.72 ± 20.38 (best performance)
- p=4.0: 17651.10 ± 6.63
- p=20.0: 11732.32 ± 9.38
p=2.0 achieves optimal performance despite not having the earliest branching convergence, suggesting that moderate amplification of value differences provides the best exploration-exploitation balance.
Regarding your specific concerns about action selection, we apologize for the misunderstanding. In our implementation, actions are selected as the midpoint of each cell/interval (consistent with Bubeck et al., 2011). While you're correct that new HOO leaf nodes are created during exploration (unless max depth is reached), the power mean backpropagation fundamentally changes how these nodes are selected for further exploration.
The "arbitrary arm X in $P_{H,I}$" is indeed implemented as the centroid of the interval, but the power mean causes the algorithm to heavily favor revisiting successful regions rather than exploring new ones. This creates a natural bottleneck where, although theoretically different nodes could select different actions, in practice, the algorithm's value-driven selection consistently favors a small subset of promising paths.
At max depth, as you noted, existing nodes can be returned, which further contributes to the concentration of exploration in high-value regions. However, our empirical results show this convergence to narrow exploration paths happens well before reaching maximum depth limitations.
The data demonstrates that our algorithm efficiently navigates the exploration-exploitation trade-off, achieving strong performance with a tree structure that rapidly narrows to focus on promising regions.
We conduct a further analysis on HOO. The HOO analysis on the Hopper environment shows a focused tree with 24,698 nodes reaching depth 20. Value-based exploration is evident with 86.7% high-value nodes (average depth 12.95) versus 13.3% low-value nodes (all at depth 20). Node distribution increases with depth, peaking at depth 20 (13.3%). Node values decrease linearly from 651.12 at the root to 0 at maximum depth. Immediate rewards initially increase (peaking at 29.76 at depth 7), slightly decrease mid-tree, then surge again to 30.72 at maximum depth, demonstrating HOO's effective balance between exploration and exploitation.
Since "novel methods to continuous MCTS are relatively rare," we hope to present our contribution at ICML. We'll ensure the revised version is properly attributed to Mao et al. and add all experimental results. | null | null | null | null |
Armijo Line-search Can Make (Stochastic) Gradient Descent Provably Faster | Accept (poster) | Summary: Establishes rates for Armijo line search in several settings.
Claims And Evidence: There are some interesting results in this paper, and I like the expansion of our understanding of the (L0, L1)-smoothness condition and its generalizations.
I object to the use of "stochastic" to refer to analysis under the interpolation condition. These are profoundly different settings and they should not be confused. The title of the paper is misleading and would suggest to readers that there is some novelty in the paper that addresses the full stochastic setting.
My major concern with this paper is that it doesn't have a related work section, and seems to only cite very old papers concerning line searches, as well as a nearly 20 year old textbook. Research papers on line searches are normally published in the optimization literature, not so much the machine learning literature, and I am not familiar with the state of the art here, but it's an enormous and heavily studied research area, and there is surely related work from the last 50 years since the Armijo line search was introduced that should be cited.
I would like to ask the authors to write a 1 page related work section that includes some recent state-of-the art research (at least some citations from the last few years), as well as older works. Please include this in your rebuttal and add it to the final camera ready.
EDIT: The additional literature review is reasonable. Please also include the missing references mentioned by other reviewers also. I have updated my score.
Methods And Evaluation Criteria: N/A
Theoretical Claims: Established results seem solid.
Experimental Designs Or Analyses: N/A
Supplementary Material: I did not review the appendix.
Relation To Broader Scientific Literature: See Above.
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: Typo line 146 “satisfiesAssumptions”
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback and address their concerns below.
> *(1) My major concern with this paper is that it doesn't have a related work section, and seems to only cite very old papers concerning line searches, as well as a nearly 20 year old text book...Please include this in your rebuttal and add it to the final camera ready.*
Even though we do not have an explicit related work section, we believe that we have cited the most relevant work. This includes recent work on Armijo line-search (Vaswani et al., 2019a; Galli et al., 2024; Hubler et al., 2024), Polyak step-sizes (Loizou et al., 2021) and variants of normalized gradient descent (Mei et al., 2021; Axiotis & Sviridenko, 2023; Taheri & Thrampoulidis, 2023). Moreover, we have added the most relevant and recent references for each example – logistic regression (Axiotis & Sviridenko, 2023; Freund et al., 2018; Wu et al., 2024), generalized linear models (Mei et al., 2020; Hazan et al., 2015), softmax policy gradient (Mei et al., 2020; Asad et al., 2024; Lu et al., 2024), two layer neural networks (Taheri & Thrampoulidis, 2023). In the final version of the paper, we will add the following extended comparison and the corresponding references.
**Comparison to GD with adaptive step-sizes on uniform smooth functions**: There has been substantial work on designing adaptive step-sizes for GD. However, most of this literature (for example (Orabona & Pal, 2016; Malitsky & Mishchenko, 2019; Carmon & Hinder, 2022; Khaled et al., 2023)) considers either the uniform-smooth, convex or non-smooth but Lipschitz, convex settings. In the uniform-smooth, convex case, the focus of these papers is to design more efficient ways to adapt to the smoothness constant, and the resulting methods achieve the same rate as GD with constant step-size. In contrast, we consider Armijo line-search which is the classic way to set the step-size for GD on smooth functions. The focus of our work is to identify when $\texttt{GD-LS}$ can be *provably* faster than $\texttt{GD(1/L)}$. To that end, we have identified a class of non-uniform smooth functions and shown that numerous examples in machine learning satisfy these properties. We believe that we are the first to identify such a broad range of examples and demonstrate the provable improvement of $\texttt{GD-LS}$ over $\texttt{GD(1/L)}$.
**Comparison to GD on non-uniform smooth functions:** There has been recent work on analyzing GD (with/without clipping) on functions satisfying (L0, L1) non-uniform smoothness (Zhang et al., 2019; 2020; Koloskova et al., 2023). This work requires the knowledge of L0 and L1. In contrast, as explained in Contribution 1 and in lines 133-142 (after Proposition 1) of our paper, the proposed non-uniform smoothness condition is different and the $\texttt{GD-LS}$ algorithm does not require the knowledge of L0, L1 or other problem-dependent constants.
**Comparison to $\texttt{GD-LS}$ on non-uniform smooth functions**: The most relevant paper to our setup is (Hubler et al., 2024) and we have compared to it in Lines 58-64. In particular, this paper analyzes the convergence of $\texttt{GD-LS}$ on functions satisfying a different notion of (L0, L1) non-uniform smoothness. However, their algorithm requires the knowledge of L1 and the resulting rate is the same as that of $\texttt{GD(1/L)}$.
We hope that this addresses the reviewer’s concerns and helps them better contextualize our contributions. If the reviewer is aware of a key reference we have missed, we would be happy to include and compare to it.
> *(2) I object to the use of stochastic to refer to analysis under interpolation condition... The title of the paper is misleading and would suggest to readers that there is some novelty in the paper that addresses the full stochastic setting.*
As is evidenced by the numerous papers analyzing SGD under interpolation, we first note that the interpolation setting is an important special case. Moreover, this setting can serve as a building block towards analyzing the full stochastic setting. For example, the stochastic line-search (Vaswani et al., 2019b) and stochastic Polyak step-sizes (Loizou et al., 2021) were originally designed for the interpolation setting, and have been adapted to the full stochastic case by combining these techniques with appropriately decreasing step-sizes (Orvieto et al., 2022; Vaswani et al., 2022; Jiang & Stich, 2023). Moreover, the proof for $\texttt{SGD-SLS}$ in Section 6 is more involved compared to the standard SGD proofs. Given this, we think it is important to highlight that our paper also considers the stochastic setting, and hence we added "stochastic" to the title. Note that we clarify in the abstract that we are only considering the interpolation setting.
We do understand the reviewer's point, and will change the title to *Armijo Line-search **Can** Make (Stochastic) Gradient Descent Go Fast* to suggest that the speedup is for some special settings. | Summary: The paper analyzes functions where the local smoothness constant is given by \\( L(\\theta) = L_0 + L_1 f(\\theta) \\), which is satisfied by common objectives like logistic regression or regression problems with generalized linear models. The authors prove that Gradient Descent with Armijo Line-Search (GD-LS) converges faster on these functions by showing that the step size selected is lower-bounded by \\( 1/L(\\theta) \\). They extend their analysis to non-convex functions such as generalized linear models (GLMs) with a logistic link function and demonstrate empirically that GD-LS outperforms standard GD with a fixed step size. The paper also discusses the benefits of using a line-search in the stochastic setting by analyzing SGD with a stochastic variant of the Armijo line-search (SGD-SLS). By introducing assumptions on non-uniform smoothness and relating gradient norms to function values, the authors provide a framework that facilitates the analysis of GD with Armijo line-search for functions with exponential-type losses.
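For readers less familiar with the method under review, a minimal sketch of gradient descent with Armijo backtracking may be useful. All constants here ($c$, $\beta$, the finite `eta_max` cap, the step count, and the `1e-12` floor) are my own illustrative choices and are not taken from the paper's Algorithm 1:

```python
def gd_armijo(f, grad, theta, c=0.5, beta=0.8, eta_max=1e6, steps=20):
    """GD where each step size is found by Armijo backtracking:
    shrink eta until f(theta - eta*g) <= f(theta) - c*eta*g^2 (1-D sketch)."""
    for _ in range(steps):
        g = grad(theta)
        eta = eta_max
        # the 1e-12 floor is a numerical safeguard, not part of the method
        while eta > 1e-12 and f(theta - eta * g) > f(theta) - c * eta * g * g:
            eta *= beta
        theta -= eta * g
    return theta

# demo on the 1-D quadratic f(t) = t^2 / 2, whose largest admissible step is eta = 1
theta = gd_armijo(lambda t: 0.5 * t * t, lambda t: t, theta=3.0)
```

On the quadratic, backtracking accepts a step just below 1 and the iterates collapse geometrically; on the non-uniformly smooth losses the summary describes, the accepted step is lower-bounded by a multiple of $1/L(\theta)$, which is the adaptivity mechanism at issue.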
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I browse the proof of Theorem 1 and it looks correct to me.
Experimental Designs Or Analyses: The experiment designs look good.
Supplementary Material: I have browsed Section A and B.
Relation To Broader Scientific Literature: The main difference between this paper and other papers analyzing Armijo line search is that this paper considers a specific setting given by Assumption 2. It can be viewed as a relaxed version of the symmetric (L0, L1) smoothness discussed in [1]. As this setting is novel, the theoretical contribution of this paper is new.
[1] Gorbunov, Eduard, et al. "Methods for convex $(l_0, l_1) $-smooth optimization: Clipping, acceleration, and adaptivity." arXiv preprint arXiv:2409.14989 (2024).
Essential References Not Discussed: Not applicable.
Other Strengths And Weaknesses: Overall this paper provides a solid theoretical contribution for the study of Armijo line search for proposed (L0-L1) non-uniformly smooth optimization problems.
Nevertheless, the writing of the paper can be improved. For instance, the authors list 6 contributions in the introduction section, but each of them looks like the organization of a section. To be specific, contribution 1 (i.e., the content of Section 2) is simply the problem formulation. In other words, the main contribution of this work is not easy to find. One potential improvement would be for the authors to list one or two informal versions of their main theorems and provide a clear and succinct description of their contributions.
The targeted optimization problem in this paper is limited from an application perspective. The authors primarily focus on logistic regression and generalized linear models. Although the authors show in the appendix that the objective for softmax policy gradient also satisfies the assumptions, the applicability of this paper to real-world use cases is not significant enough for a venue like ICML.
Other Comments Or Suggestions: The description of taking $\eta_\max=\infty$ in GD-LS is very unclear. At line 3 in Algorithm 1, the authors initialize $\tilde \eta_t$ to $\eta_\max$. By taking $\eta_\max=\infty$, every step $\tilde \eta_t$ in Algorithm 1 will simply be $\infty$. From lines 193-194 in the right column and from the proof, it seems that this assumption on $\eta_\max$ is effectively equivalent to taking a sufficiently large $\eta_\max$. Moving the explanation of taking $\eta_\max=\infty$ before Theorem 1 would minimize the confusion.
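For what it's worth, the "sufficiently large $\eta_\max$" reading can be checked numerically: with backtracking, any cap above the largest admissible step yields an accepted step within a factor $\beta$ of that largest admissible step, independent of the cap. A toy sketch (my own constants, not the paper's):

```python
def armijo_step(f, grad, theta, c=0.5, beta=0.8, eta_max=1e6):
    """Return the step accepted by Armijo backtracking starting from eta_max (1-D)."""
    g = grad(theta)
    eta = eta_max
    # the 1e-12 floor is only a numerical safeguard
    while eta > 1e-12 and f(theta - eta * g) > f(theta) - c * eta * g * g:
        eta *= beta
    return eta

quad = lambda t: 0.5 * t * t   # largest admissible step for c = 1/2 is eta = 1
steps = [armijo_step(quad, lambda t: t, 3.0, eta_max=cap) for cap in (1e3, 1e6, 1e9)]
```

All three accepted steps land in $(\beta, 1]$, so the behaviour is insensitive to the cap once it is large enough, which is presumably what "$\eta_\max=\infty$" is shorthand for.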
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback and address their concerns below.
> *(1) The targeted optimization problem in this paper is limited in terms of application perspective. The authors primarily focus on logistic regression and generalized linear models. Although the authors showed that object for softmax policy gradient also satisfies the assumptions in appendix, the application sides of this paper to real-world use cases are not significant enough for a venue like ICML.*
It is not required for papers in ICML to have "application sides...to real-world use cases''. In fact, the call for papers lists "Theory of Machine Learning (statistical learning theory, bandits, game theory, decision theory, etc.)'' explicitly as a topic of interest.
Having said that, we believe that our paper does have practical implications. In particular, our paper analyzes when $\texttt{GD-LS}$ (a standard optimization method widely used in practice) can be *provably* faster than $\texttt{GD(1/L)}$. Besides achieving adaptivity to the smoothness in a principled manner and hence requiring minimal hyper-parameter tuning (an important consideration from a practical standpoint), $\texttt{GD-LS}$ can result in faster convergence on numerous convex (e.g. logistic regression) and non-convex problems (e.g. generalized linear models, 2 layer neural networks, softmax policy gradient) important in machine learning. We have argued that $\texttt{GD-LS}$ can thus replace more specialized algorithms tailored for these specific problem settings. *Having a single algorithm that can work well across different applications is important from a practical standpoint.*
Furthermore, as evidenced by the experimental results in [Vaswani et al, 2019], $\texttt{SGD-SLS}$ (analyzed in Section 6 of our paper) results in good empirical performance on convex problems that satisfy our assumptions. Under the interpolation assumption, Vaswani et al, 2019 can only prove an $O(1/\epsilon)$ rate for $\texttt{SGD-SLS}$ on smooth convex losses such as logistic regression. However, empirically, the convergence seems to be faster than $O(1/\epsilon)$ (for example, see the linear convergence on the $\texttt{mushrooms}$ dataset in Figure 3 of [Vaswani et al, 2019]). Our paper (specifically Corollary 5) can explain this fast convergence. *We believe that it is important to understand the behaviour of optimization algorithms, and that these insights can often lead to developing better methods in practice.*
Finally, it is important to note that this same method - $\texttt{SGD-SLS}$ and its non-monotone variants [Galli et al, 2024] result in strong empirical performance on non-convex losses with deep neural networks [Vaswani et al, 2019, Galli et al 2024] where our assumptions are not necessarily satisfied. *This demonstrates that developing methods that are theoretically principled in simple settings can result in empirical gains for real-world use cases.*
> *(2) One potential improvement for this is that the author may list one or two informal version of their main theorem and provide a clear and succinct description of their contributions.*
Thank you for the good suggestion. We will consider making this change in the final version of the paper.
> *(3) Moving the explanation on taking $\eta_{\max} = \infty$ before Theorem 1 will minimize the confusion.*
We will make the change. | Summary: This paper studies gradient descent using Armijo line search to choose the stepsize. They prove convergence rates for the algorithm under a non-uniform smoothness condition, and specialize these results to logistic regression, softmax policy gradient for multi-armed bandit problems, and GLMs with the logistic link function. They also have convergence guarantees for a stochastic version of the algorithm for finite-sum structured separable logistic regression.
Claims And Evidence: For the most part, the claims made in this paper are clear and backed up by convincing evidence. There are three main places where this is not the case:
1) At various times in the paper, the authors claim that GD-LS is better than GD-(1/L) by comparing the upper bounds they prove for GD-LS against upper bounds for GD-(1/L) derived in other papers. This is not a sound way of making the comparison---how do we know that those guarantees for GD-(1/L) are tight? Without lower bounds for GD-(1/L), it is possible that GD-(1/L) is just as good (or even better) than GD-LS in some or all of the cases considered in this paper. I agree with the authors that this seems unlikely given that GD-(1/L) needs to use a fixed stepsize, but to know this definitively, we'd need an actual proof.
2) Proposition 3 seems to require that the reward vector (and therefore the optimal action) are known---otherwise, it would not be possible to implement GD-LS to solve the problem. The authors later argue that GD-LS is superior to some alternatives because they require knowledge of r(a^*). Even though this is true, GD-LS requires knowledge of the reward vector (and therefore the optimal action), so there isn't much difference. I understand what they are trying to get at---their algorithm is less "specialized" to the problem---but I think the advantage would be clearer if this distinction were made more explicit. I believe they are trying to gesture towards GD-LS being a good method for more interesting cases when we don't just know the reward vector / optimal arm, but I think the problem setup in this section could use a little more explanation. In general, you wouldn't know the reward vector, so you'd probably need to do some kind of online/stochastic optimization here---is there reason to think GD-LS would be good for that? Does the line search work well enough when you only have online/stochastic gradients and can only estimate the function value? Section 6 and the last sentence of page 6 suggest that they think the answer is yes, but it's not completely clear to me precisely what they are showing vs just claiming for the specific problem here.
3) A lot of the claims in this paper are based on logistic regression or similar objectives where, for most gradient methods, convergence is initially fast and then slows down as the loss / gradients / hessian become very small. This is a rare example in optimization where very large stepsizes are possible / desirable, and it is not necessarily clear that these gains from GD-LS would extend to more typical problems where the maximum possible stepsize is not so large.
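To make point (3) concrete, here is a toy 1-D experiment of my own construction (not from the paper) on $f(\theta) = \log(1 + e^{-\theta})$, a single-example logistic-type loss with $f^* = 0$ and global smoothness constant $L = 1/4$. Its flat exponential tail is exactly the regime where backtracking accepts growing step sizes; all algorithmic constants below are illustrative:

```python
import math

def f(t):  # 1-D logistic-type loss, infimum f* = 0 as t -> +inf
    return math.log(1.0 + math.exp(-t))

def g(t):  # f'(t) = -1 / (1 + e^t)
    return -1.0 / (1.0 + math.exp(t))

# GD with the fixed step 1/L, where L = 1/4 is the global smoothness constant
theta = 0.0
for _ in range(15):
    theta -= 4.0 * g(theta)
f_fixed = f(theta)

# GD with Armijo backtracking (c = 1/2, beta = 0.8, cap 1000; the 1e-12 floor
# is a numerical safeguard). The accepted step grows as the loss shrinks.
theta = 0.0
for _ in range(15):
    gr = g(theta)
    eta = 1000.0
    while eta > 1e-12 and f(theta - eta * gr) > f(theta) - 0.5 * eta * gr * gr:
        eta *= 0.8
    theta -= eta * gr
f_ls = f(theta)
```

After 15 iterations the line-search run reaches a markedly smaller loss than the fixed-step run, consistent with the fast-vs-slow rates discussed in the paper (though, per point (1), such a plot does not by itself prove a lower bound for GD-(1/L)).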
Methods And Evaluation Criteria: Experimental evidence is provided for GD-LS versus other alternatives, but I think the stochastic setting is much more interesting for typical ML applications, and there are no experiments covering this. Experiments for SGD-SLS on logistic regression is one thing, but given that the upshot of Lemma 3 is not very clear outside of the application to Corollary 5, it would probably be interesting to see how well SGD-SLS works on other (non-logistic regression) supervised learning problems, both convex and non-convex.
At the bottom of pg 5, the authors point out that GD with the Polyak stepsize can achieve the same thing as Theorem 2. This raises the question of whether the Polyak stepsize can also match the guarantee, e.g., in Corollary 1. On the one hand, to implement the Polyak stepsize you need to know $f^*$, which is not necessarily available. On the other hand, you only need to evaluate the function value once (versus multiple times for the backtracking line search). I feel that making this comparison more thorough would provide important context for this work.
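For comparison, the Polyak rule referred to here can be sketched as follows (a generic textbook illustration, not the paper's variant). Note the trade-off from the paragraph above: it needs $f^*$, but uses only one function and one gradient evaluation per step, with no backtracking loop:

```python
def gd_polyak(f, grad, theta, f_star=0.0, steps=25):
    """GD with the Polyak step size eta_t = (f(theta_t) - f*) / ||grad f(theta_t)||^2
    (1-D sketch; assumes f* is known)."""
    for _ in range(steps):
        g = grad(theta)
        if g == 0.0:  # stationary point: stop
            break
        eta = (f(theta) - f_star) / (g * g)
        theta -= eta * g
    return theta

# on f(t) = t^2 / 2 with f* = 0, the Polyak step is always 1/2, so theta halves each step
theta = gd_polyak(lambda t: 0.5 * t * t, lambda t: t, theta=3.0)
```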
Theoretical Claims: I read the proofs and they appear basically sound as far as I can tell.
The only issues I see are:
- I believe Corollary 2 and 6 are supposed to be the same thing? But, Corollary 6 in the appendix appears to be slightly different than Corollary 2 in the main text.
- Proposition 3: Shouldn't the reward gap be defined as \min_a rather than \max_a?
- Assumption 4: As stated, this appears to be just a complicated way of saying that $f$ has no stationary points that are not global minima (for arbitrary \zeta, just take $\mu(\theta) = \|\nabla f(\theta)\|^\zeta / [f(\theta) - f^*]$, which is strictly positive for such $f$). I think it would be better to say that "$f$ satisfies the $(\zeta,\mu)$-gradient domination condition if..." because the $\mu$ function plays an important quantitative role (beyond just the fact that it exists).
- Pg 4: I'm being a little pedantic here, but this sentence "As a concrete example, consider the case when $f^* = \delta \epsilon$ where $\delta \geq 1$ is a constant independent of $\epsilon$" does not make very much sense. Of course $\delta$ depends on $\epsilon$, it is exactly $\delta = f^* / \epsilon$! Calling it a "constant" lets you ignore it in the big-O convergence rate $O(R \ln 1/\epsilon)$, but I would say this is cheating a little bit. To make a similar statement, I would phrase it differently, something like: if you are only trying to reach accuracy $\epsilon = \Theta(f^*)$, then GD-LS will result in $O(R \ln 1/\epsilon)$ convergence. This isn't a big deal and I apologize for the pedantry, but I am not a fan of using a sleight of hand to hide things in big-O notation.
Experimental Designs Or Analyses: There is not a ton of detail given for the experiments in the paper (this is probably fine given their simplicity), so it is hard to say.
Supplementary Material: I read through the proofs in the appendices
Relation To Broader Scientific Literature: This paper is trying to show that GD-LS compares favorably to alternatives (especially GD-(1/L)). I am honestly not very familiar with prior analysis of Armijo line search beyond the most basic cases, so this work appears novel to the best of my knowledge.
Given the emphasis of this paper, I think it would be useful to spend some additional time putting this work into the context of related work. E.g. see my comments above about the Polyak stepsize rule. Also, this paper puts an emphasis on the fact that GD-LS is adaptive to the problem parameters like the smoothness constant, so it would probably be useful to make some comparison to recent work on adaptive stepsizes for GD (e.g. [Carmon and Hinder 2022] and many more).
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: Nothing else to add.
Other Comments Or Suggestions: Small comments:
Assumption 2: Are parts (a) and (b) equivalent statements, or are these separate conditions that need to both be satisfied? It is not obvious whether or not (a) <=> (b).
Theorem 1: probably have $\| \nabla f(\theta) \|_2^2 \geq [f(\theta) - f^*]^2 / R$ be "Assumption 4" rather than sneaking it into the theorem statement? Then for e.g. Corollary 1 I would just note that convexity -> assumption 4.
Theorem 1: it might be better to write this as $\max\{ 2 R \lambda_1, 1 \}\left( \frac{f^* + \frac{\lambda_0}{\lambda_1}}{\epsilon} + 1 \right) \ln\left( \frac{f(\theta_0) - f^*}{\epsilon} \right)$ in all cases. In case 1, the bound is a little looser, but the lambda_0 / lambda_1 term is less than f^*, so by at most a factor of 2. And in case two, similarly, this is only looser by a log factor. On the other hand, it is easier to read and understand if there is only one case. Relatedly, in the last full paragraph on page 4, you describe the guarantee as exhibiting two phases, but unless you are considering the ln(1/eps) term to be "large", I don't see what is so different in the first/second phase---it is 1/eps or ln(1/eps)/eps convergence in all cases. The proof of Theorem 1 involves breaking things up into two phases, but that is more about the analysis than the actual convergence guarantee, at least as it is written.
Questions For Authors: See other comments.
Ethical Review Concerns: No concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback and address their concerns below. **For all unaddressed comments, we agree with the reviewer's suggestion and will make the corresponding change.**
> *(1) Lower-bounds for $\texttt{GD(1/L)}$*
For logistic regression, Theorem 3 in Wu et al, 2024 shows that $\texttt{GD(1/L)}$ (or more generally, GD with any constant step-size that guarantees monotonic descent) cannot have a convergence rate faster than $\Omega(1/\epsilon)$. Hence, both $\texttt{GD-LS}$ and $\texttt{SGD-SLS}$ on logistic regression are provably faster than their constant step-size analogs.
For the softmax policy gradient in Section 5.1, Theorems 9, 10 in Mei et al, 2020 show that GD with any constant step-size cannot achieve a rate faster than $\Omega(1/\epsilon)$ on both bandit and MDP problems. We mention this after Corollary 4 in the paper. For the GLM problem in Section 5.2, we are not aware of a lower bound for $\texttt{GD(1/L)}$ or normalized GD. We will include these comparisons.
> *(2) Softmax PG - Setup*
We note that the linear convergence result in Corollary 3 also applies to the general MDP problem. See Proposition 6 for the problem setting. $\texttt{GD-LS}$ in Section 5.1 is equivalent to softmax policy gradient in the "deterministic" or "exact" setting. This setting is commonly used as a testbed for analyzing policy gradient algorithms [Mei et al 2020; Section 4, Lu et al 2024]. For general MDPs, the deterministic setting corresponds to knowing the rewards and transition probabilities, and is the same setting under which classic RL algorithms such as value iteration or policy iteration are analyzed. This was our motivation to consider the problem setup in Section 5.1.
> *(3) Softmax PG - Handling stochasticity*
For the general MDP problem, in cases where the rewards/transition probabilities are unknown, the policy gradient is estimated by interacting with the environment. Standard policy gradient results (e.g [Theorem 29, Agarwal et al, 2020] can then prove convergence up to an error incurred because of the estimation.
For the specific case of stochastic multi-armed bandits with unknown rewards, recent work [Mei et al, 2023] has proven stronger global convergence results. In particular, Mei et al, 2023 use importance sampling to construct the stochastic softmax policy gradient and use it with constant step-size SGD. The resulting algorithm can be proven to converge to the optimal arm. From Proposition 3, we know that the objective function satisfies Assumptions 2, 3, and 4. It also satisfies the interpolation condition [Theorem 4.2 in Mei et al, 2023], required for the $\texttt{SGD-SLS}$ analysis in Section 6. Hence, we believe that a variant of $\texttt{SGD-SLS}$ could be a good algorithm for the stochastic multi-armed bandit problem. However, since the softmax policy objective is non-convex, we cannot directly use the results in Section 6 and leave this interesting direction to future work.
> *(4) ....not clear if gains from GD-LS would extend to more typical problems..*.
We have tried to argue that losses that satisfy our non-uniform smoothness assumptions are in fact, not as rare. To illustrate this, we have given numerous examples ranging from classification in supervised learning to policy gradient in reinforcement learning. In general, applications where we use losses with an exponential tail (e.g. exponential, logistic loss) or those that use a softmax function to parameterize probabilities can benefit from using a line-search. Another example that we did not study in detail is noise contrastive estimation [Lu et al 2021], where GD is slow because of the flatness of the landscape, and $\texttt{GD-LS}$ can potentially improve the convergence.
We agree that large gains from $\texttt{GD-LS}$ are not always possible (e.g. for quadratics) and if the function is only uniformly smooth, $\texttt{GD-LS}$ will converge at the same rate as $\texttt{GD(1/L)}$. However, even in this case, $\texttt{GD-LS}$ enables setting the step-size without estimating or conservatively bounding the smoothness constant and thus has a practical benefit.
> (5) *Comparison to Polyak step-size*
The proof of Theorem 1 (and non-convex results) relies on the monotonic decrease in the function values guaranteed by $\texttt{GD-LS}$. GD with the Polyak step-size is not a monotonic descent method, and it is unclear how to extend the proofs.
> (6) *Experimental Evaluation*
See Point (4) in our response to Rev. jUfU
> *(7) Comparison to recent work on adaptive step-sizes*
See Point (1) in our response to Rev. 8uuj
> (8) Cor. 2 vs Cor. 6:
Cor. 2 is for $\texttt{GD-LS}$, Cor. 6 is for GD with the Polyak step-size.
> (9) Proposition 3: Def. of reward gap
It is the difference between the rewards of the best (optimal) and second-best arm [see Lemma 17 in Mei et al, 2020].
> (10) Assumption 2 (a) and (b)
These are separate (but related) conditions that need to both be satisfied. | Summary: This paper investigates the effectiveness of the Armijo line-search (Armijo-LS) method for step-size selection in gradient descent (GD) algorithms. The authors introduce a class of functions that satisfy a non-uniform smoothness condition and show that GD with Armijo-LS (GD-LS) can adapt to local smoothness, leading to faster convergence compared to standard GD with a fixed step-size of $1/L$ (GD(1/L)). The analysis is further extended to the stochastic setting.
## update after rebuttal
Thanks the authors for answering my questions. I decide to keep my score.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Almost no experiment.
Supplementary Material: Yes. I have checked the proof for main theorems.
Relation To Broader Scientific Literature: See below.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Weaknesses:
* Independence of Assumptions: Assumption 2(b) is a direct consequence of the generalized smoothness condition used in prior literature. The proof primarily relies on Assumptions 2(c), 3, and 5, with Assumption 2(c) being implied by the other two. Since assumptions should ideally be independent, I suggest presenting only Assumptions 3 and 5 to clarify their distinct roles in the proof. Alternatively, would it be possible to provide an example satisfying Assumption 2(c) and 3 without Assumption 5?
* Positivity Assumption (Assumption 1): The paper assumes that $f$ is positive over the entire space. While it is possible to shift $f$ by subtracting $f^*=\min f$, in most cases $f^*$ is unknown. Would it be possible to remove Assumption 1 and instead consider $|f|$ in the relevant conditions?
* Missing Reference: The paper should reference [1], which gives tight rates for gradient descent with exact line search that are faster than those for GD(1/L) in the strongly convex setting. This aligns with Assumption 4 when $\zeta=2$.
[1] De Klerk E, Glineur F, Taylor AB. On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions. Optimization Letters, 2017, 11:1185-99.
* Experimental Validation: More experiments should be conducted to further validate the efficiency of the proposed line-search algorithm, particularly in multi-armed bandit (MAB) problems and stochastic settings.
Strengths:
* The newly proposed conditions are well-discussed and provide valuable insights.
* The paper reveals an interesting result: GD-LS provably adapts to local smoothness, leading to improved convergence rates.
* The paper is well-written and easy to follow.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback and address their concerns below.
> *(1) Independence of Assumptions*
Assumptions 3 and 5 do indeed imply Assumption 2(b). The reason we did not use Assumption 5 as our main assumption is because Assumption 5 cannot be satisfied with $L_c = 0$ for logistic regression and multi-class classification. We explained this in Lines 137-142, and note that having $L_c = 0$ is essential to derive a linear convergence rate for logistic regression in Corollary 2. To see this from a technical perspective, consider $f$ to be a finite sum $f(\theta) = \frac{1}{n}\sum_i f_i(\theta)$ (for example, the logistic regression loss in Proposition 1). If $f_i$ satisfies Assumption 5 with constants $L_c, L_g$, it does not necessarily imply that $f$ also satisfies Assumption 5 with the same constants (see Proposition 7 for a counter-example).
In contrast, using Assumption 2 as our main assumption enables us to prove that the losses corresponding to logistic regression, multi-class classification satisfy this assumption with $L_0 = 0$, and this enables us to prove the linear convergence rate in Corollary 2. Again, from a technical perspective, we can do this because if $f_i$ satisfies Assumption 2 with constants $L_0$ and $L_1$, then $f$ also satisfies this assumption with the same constants (please refer to the proof of Lemma 5).
In summary, we only use Assumption 5 for $f_i$ (and not $f$) corresponding to common finite-sum losses as a way (using Lemma 5) to show that $f$ satisfies Assumption 2. For example, see the proof of Proposition 1, where we instantiate this logic for logistic regression.
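To spell out why a condition of Assumption 2's form composes over finite sums, suppose (purely as an illustration; the paper's exact statement may differ) that Assumption 2 is a Hessian bound of the form $\|\nabla^2 f_i(\theta)\| \le L_0 + L_1 f_i(\theta)$ for each $f_i$. Then averaging is immediate:

$$\|\nabla^2 f(\theta)\| \;\le\; \frac{1}{n}\sum_{i=1}^n \|\nabla^2 f_i(\theta)\| \;\le\; \frac{1}{n}\sum_{i=1}^n \bigl(L_0 + L_1 f_i(\theta)\bigr) \;=\; L_0 + L_1 f(\theta).$$

In contrast, a per-example bound involving $\|\nabla f_i(\theta)\|$ (as in an Assumption 5-style condition) does not average to a bound in $\|\nabla f(\theta)\|$, since individual gradients can be large while cancelling in the sum.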
> *(2) Positivity Assumption (Assumption 1)*
From an algorithmic perspective, Assumption 1 is quite benign. As the reviewer suggests, it is possible to shift $f$ by subtracting $f^*$ and ensure that the resulting function $h(\theta) := f(\theta) - f^\ast$ is always non-negative. Since $f^\ast$ is a constant, $\nabla h(\theta) = \nabla f(\theta)$, meaning that we do not require the knowledge of $f^\ast$ to implement GD. Moreover, the Armijo condition on $h(\theta)$ has $f^\ast$ on both sides, implying that implementing $\texttt{GD-LS}$ on the non-negative function $h(\theta)$ does not require the knowledge of $f^*$. We use this property in Section 5.1 (see Proposition 3 and the corresponding discussion). Finally, we note that most common losses in machine learning (cross-entropy, exponential, and squared) are non-negative.
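The cancellation invoked here can be written out explicitly. With $h(\theta) := f(\theta) - f^\ast$ and $\nabla h = \nabla f$, the generic Armijo sufficient-decrease condition for $h$,

$$h(\theta - \eta \nabla h(\theta)) \le h(\theta) - c\,\eta\,\|\nabla h(\theta)\|^2,$$

is equivalent to

$$f(\theta - \eta \nabla f(\theta)) - f^\ast \le f(\theta) - f^\ast - c\,\eta\,\|\nabla f(\theta)\|^2,$$

and $f^\ast$ cancels from both sides, so running $\texttt{GD-LS}$ on the non-negative function $h$ requires neither the knowledge of $f^\ast$ nor any modification of the algorithm. (The constant $c$ here is generic; the exact condition in the paper's Algorithm 1 may differ.)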
> *(3) Missing Reference*
Thank you for directing us to this paper. The assumption in their paper is more restrictive than ours, as they assume strong convexity, whereas we only rely on gradient domination. This enables us to consider non-convex losses such as generalized linear models. Additionally, they focus on exact line search, while we consider Armijo line search. We will cite this paper in the final version.
> *(4) Experimental Validation*
In the deterministic setting (with access to full gradients), $\texttt{GD-LS}$ is a standard algorithm for (non)-convex minimization and is widely used in practice. In the stochastic setting, Vaswani et al. (2019b); Galli et al. (2024) have empirically evaluated the performance of $\texttt{SGD-SLS}$ and its non-monotone variant for convex losses corresponding to linear and logistic regression and non-convex losses with deep neural networks. These supervised learning experiments have demonstrated the efficacy of $\texttt{SGD-SLS}$. As we explain in Section 6, under the interpolation assumption, Vaswani et al. (2019a) can only prove an O(1/ϵ) rate for SGD-SLS on smooth convex losses such as logistic regression. However, empirically, the convergence seems to be faster than O(1/ϵ) (for example, see the linear convergence on the mushrooms dataset in Vaswani et al. (2019b, Figure 3)). Our paper (specifically Corollary 5) can explain this fast convergence.
For the softmax policy optimization problem in RL, we note that Lu et al. (2024, Figure 1) have shown the linear convergence of $\texttt{GD-LS}$ on tabular Markov decision processes. However, they could only prove an O(1/ϵ) convergence rate for the resulting algorithm (Lu et al., 2024, Theorem 1). In order to attain a linear convergence rate in theory, they propose a non-standard line-search scheme that requires the knowledge of the optimal value function. On the other hand, Corollary 4 in our paper proves a linear convergence rate for $\texttt{GD-LS}$ (the standard line-search) on bandit problems, and the same proof extends to the MDP setting (using Proposition 6 in our paper).
We will include these comparisons and explanations in the final version of the paper. | null | null | null | null | null | null |
Learning to Stop: Deep Learning for Mean Field Optimal Stopping | Accept (poster) | Summary: The authors consider mean field control problems where agents must perform optimal stopping, i.e., taking an action that stops the evolution of their state. The cooperative optimal stopping problem is theoretically reduced into a standard mean field control problem by extending the state with information of whether an agent already stopped or not, similar to prior work in the continuous setting. Authors then propose direct learning-based solution algorithms minimizing the costs, which are appropriate due to the dimensionality of the problem (space of mean fields is continuous).
## update after rebuttal
I thank the authors for their detailed response. My questions have been addressed and the clarifications have been helpful. I see no reason to reject the work, and increased my score.
Claims And Evidence: The theoretical claims in the submission are supported rigorously by proofs. The algorithms are supported by experiments and qualitative analysis in the main text. Additional details were printed in the Appendix.
Methods And Evaluation Criteria: The proposed algorithms use neural network learning methods by minimizing the loss directly, which makes sense given the dimensionality of the problem (space of mean fields is continuous). The new benchmark datasets make sense for the paper as well, as it considers a new special case of mean field control problems.
Theoretical Claims: The theoretical claims look largely correct to me (Thm. 3.2, 4.1, 4.2) and follow convergence rates and DPP results in the literature.
Minor question:
- In induction step of Lemma A.1, line 694, why is $\mathbb E[\lVert \nu^N_n - \nu_n \rVert] \leq (L_F (1 + L_P))^n \mathbb E[\lVert \nu^N_0 - \nu_0 \rVert]$? It looks like the induction assumption only specifies $\mathcal O(1 / \sqrt N)$ and since we have two terms, there should be an additional term from $|\mathcal S| / 4 / \sqrt N$?
Experimental Designs Or Analyses: The experiments contain an extensive verification on example problems, covering the entire set of possibilities in the model. They look good to me, in particular Figs. 1-7. Some of the example problems are somewhat artificial. It may be worth mentioning that the common noise in experiment 6 is an empirical extension beyond the theoretical framework, but I understand it should not be a problem to extend the framework accordingly.
Supplementary Material: I reviewed the theoretical parts of the supplementary material (Appx. A, B, C).
Relation To Broader Scientific Literature: The discussion with prior work in the areas of mean field control and continuous-time mean field stopping problems is sufficient. Moreover, references to prior work are made whenever appropriate in proofs or around theoretical claims.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: - The exposition and clarity of the paper is good, as is the experimental verification and theoretical rigor.
- The originality of the theoretical results is slightly limited, as mean field optimal stopping problems and the reduction to MFC have been proposed in prior work.
- The significance is slightly limited, as the paper deals with a subset of two types of problems, though nevertheless both optimal stopping and mean field control problems in themselves are still sufficiently general.
Other Comments Or Suggestions: Typo line 788 "folliwng"
Questions For Authors: I may have missed it, but is it mentioned that agents must always stop before the time horizon T? Or is it allowed to never stop? If it is not mentioned, I think it is an important assumption.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the constructive suggestions to improve our paper. We provide detailed responses to each of your questions below.
**(Theoretical Claims)** We would like to thank the reviewer for spotting a typo in the proof of Lemma A.1. We corrected the proof by modifying lines 694, 695, and 747 in the original manuscript. Following the argument presented, in the end we obtain with the following inequality:
$E\left[\lVert\nu_{n+1}^{N,p} - \nu_{n+1}^{p}\rVert\right]\leq \frac{|S|}{4\sqrt{N}}\left(\frac{1 - K^{n+1}}{1- K}\right) + K^{n+1}E\left[\lVert \nu^{N,p}_0 - \nu^p_0 \rVert\right],$
where $K=(L_F (1 + L_P))$. We remark that this updated inequality does not affect our conclusion as the convergence rate remains the order of $\mathcal O (1/\sqrt N)$.
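The corrected bound is the unrolled geometric series of the one-step recursion $E_{n+1} \leq a + K E_n$ with $a = |S|/(4\sqrt{N})$. A quick numerical check (with hypothetical constants $K$, $a$, and $E_0$ of our own choosing, not values from the paper) confirms the closed form:

```python
# Sanity check: unroll E_{n+1} = a + K * E_n (the worst case of the
# inequality) and compare against the closed-form geometric-series bound.
K = 1.7          # stands in for L_F * (1 + L_P), illustrative value
a = 0.25         # stands in for |S| / (4 * sqrt(N)), illustrative value
E = E0 = 0.1     # initial distribution gap, illustrative value
n_steps = 12

for _ in range(n_steps):
    E = a + K * E

closed_form = a * (1 - K**n_steps) / (1 - K) + K**n_steps * E0
assert abs(E - closed_form) < 1e-8
```

Since $a = \mathcal{O}(1/\sqrt{N})$ and the prefactors do not depend on $N$, the overall rate indeed remains $\mathcal{O}(1/\sqrt{N})$.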
**(Experimental Designs or Analyses)** Although we agree that the experiments are relatively simple (compared with real-world applications), establishing a solid theoretical framework and validating our algorithms through progressively more complex examples was a necessary step towards tackling real-world applications. We want to emphasize that all the experiments presented are intended as proof-of-concept demonstrations. Each of them was carefully designed to demonstrate specific features of MFOS. As mentioned in lines 406-411, we include common noise in Example 6 to show that our method has potential beyond the theoretical framework covered in our proofs (see Appendix E.6 for more details on this example). We will add more explanations about how to extend the theory to cover the common noise.
**(Originality and Significance)** Multi-agent problems have become a central topic in the machine learning community. The optimal stopping problem has not yet been extensively studied in the literature in a multi-agent scenario. MAOS fits neither in classical optimal stopping, nor in classical MDP theory.
To the best of our knowledge, our work is the first to propose deep learning algorithms to efficiently address this class of problems, through the lens of mean field MDPs. If the reviewer has specific suggestions about potential improvements, we would be grateful and we would try to provide a more detailed answer.
**(Question)** Thank you for this clarification question. In the setting we consider, all the agents are implicitly forced to stop when the problem reaches the time horizon $T$. Concretely, we enforce $p_T^i(\cdot) = 1$. In this way, the stopping time $\tau^i$ defined at line 146 is always less than or equal to $T$. We will clarify this point in the text.
Claims And Evidence: The two central claims are:
1) The convergence of the mean-field approximations
2) The DP solution to the mean-field approximation
To the best of my knowledge, the proofs for both statements appear correct and support by evidence.
Methods And Evaluation Criteria: The focus of the paper is primarily theoretical with simple experimental setup demonstrating the functioning behaviour of the proposed method.
Theoretical Claims: I reviewed theorem 3.2 (A.2 and A.3) which appears to be correct.
Experimental Designs Or Analyses: The experiments presented in the paper supports the core theoretical results that mean-field approximation can solve multi-agent stopping problem at least in the small scale (small state dimension).
Supplementary Material: I reviewed theorem 3.2 (A.2 and A.3)
Relation To Broader Scientific Literature: I would be curious if the authors could expand on the connections between their work and the options framework in reinforcement learning. They have a similar concept of termination / cost switching function and it would be worth drawing the parallel between both. For example "When Waiting Is Not an Option: Learning Options with a Deliberation Cost". The option framework effectively models when it is valuable to "stop" the current policy and switch to another one by comparing the value of both decisions. This seems to mirror the paradigm presented in this paper.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: l78 "time yields to an approximate optimal" -> " an approximate optimal"
l89 "relay on the interpretation of MFOS" -> "rely on the interpretation of MFOS"
l117 "then she is incurred the cost:" -> "then it incurs the cost"
l788 "folliwng" -> "following"
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the helpful suggestions to improve our work, especially for raising the connection to the options framework in reinforcement learning. While we agree that this seems connected to MFOS in spirit, we believe that we cannot make one fit in the other. We list specific points below.
- **Options framework**: There are three main challenges that make it difficult to compare our work directly with the options framework.
1. Once an agent decides to stop, it stays in its current state until the terminal time $T$, whereas in the options framework, when the current option terminates, the agent can pick a new option and its state keeps changing.
2. The "termination" criterion in our framework is controlled by the agent itself, whereas in the options framework, the termination of the option is exogenous to the agent and most of the time completely random.
3. We are working in a finite-horizon setting with non-stationary policies, whereas the options framework seems particularly relevant for the infinite-horizon discounted setting.
- **SMDP**: Our problem's non-Markovian nature differs from the semi-MDP formulation found in options theory. In a semi-MDP, the next state depends only on the current state and action, while the transition time can be random. In contrast, in our original framework, the transition depends on the agent's entire history: the dynamics depend on whether the agent has stopped before or not. To make the problem Markovian again, we need to introduce the augmented state space (Section 2.3) to capture the status of stopping decisions.
Given these limitations, we believe that it is not possible to represent mean-field optimal stopping as an option framework, although the two share some similarity in spirit. We are open to further discussions.
In the final version, we will include a summary of the above discussion on the options framework in RL. The typos will be corrected in the final version.
We hope these arguments clarify our approach and may improve the reviewer’s assessment. Of course, we remain open to further discussion on these points. | Summary: The paper introduces a mean field optimal stopping (MFOS) framework as an approximation to multi-agent optimal stopping (MAOS) problems where many agents must decide when to stop a stochastic process. The authors establish a theoretical foundation for MFOS, proving that solving the infinite-agent (mean field) version is a good proxy for the finite-agent problem. In particular, they show that the MFOS solution yields an optimal stopping policy for the $N$-agent case with an $O(1/\sqrt{N})$ optimality gap. They also derive a dynamic programming principle (DPP) for MFOS, analogous to Bellman’s principle in classical optimal stopping. Algorithmically, the paper proposes two deep learning methods to compute near-optimal stopping policies: (i) a Direct Approach (DA) that learns the stopping decision by forward-simulating full trajectories and training a neural network to decide stop/continue at each time, and (ii) a Dynamic Programming (DP) Approach that leverages the DPP by training neural networks backward in time (from the horizon to start) to approximate the value function and optimal stopping rule. Both methods output a time-dependent stopping probability for each state of an agent, effectively learning a randomized stopping policy that depends on the entire population distribution. The paper demonstrates these methods on six benchmark scenarios of increasing complexity. The experiments show that the learned policies closely achieve the intended objectives and scale to state spaces with up to 100 states (making the mean-field state distribution a 300-dimensional input). This work appears to be the first to formally define and solve MFOS in discrete time with finite state spaces, providing both new theory and scalable computational tools.
Claims And Evidence: Overall, the paper’s claims are credible and backed by clear evidence, with no glaring over-claims.
The paper’s main claims are well-supported by both theoretical arguments and empirical evidence. First, the authors claim that the mean-field model provides a good approximation for the original $N$-agent problem. This is backed by a rigorous theorem (Theorem 3.2) establishing an $ε$-approximation: the optimal cost achieved by the mean-field solution is within $O(N^{-1/2})$ of the true $N$-agent optimum. They not only prove this result under appropriate assumptions, but also corroborate it with a numerical experiment: applying the learned MFOS policy to finite-$N$ simulations shows that the optimality gap and distribution error shrink on the order of $N^{-1/2}$, consistent with the theorem. In Figure 2, for example, the L2-distance between the empirical $N$-agent distribution and the mean-field distribution, as well as the cost suboptimality, clearly decays at the predicted $\sim N^{-1/2}$ rate. Secondly, the claim of a valid dynamic programming principle for MFOS is supported by a formal derivation (Theorem 4.1) and reference to known results in mean-field control theory. The authors provide a proof outline (with full details in the appendix) showing the DPP holds for their mean-field state-value function, thus substantiating this theoretical claim. Thirdly, the paper claims that the proposed deep learning algorithms can effectively learn optimal stopping rules and handle high-dimensional problems. This is evidenced by multiple experiments: for simpler cases where the true optimum is known, the training loss converges to the provably optimal value and the learned stopping decisions match the expected optimal behavior (e.g. in the “Towards the Uniform” example, the loss converges to the analytically computed optimal value and the population indeed spreads to a uniform distribution). In more complex scenarios without closed-form solutions, the authors analyze the learned policies and outcomes to show they make intuitive sense (e.g. 
under congestion, agents slow down stopping; under asynchronous vs synchronous stopping, the costs differ as expected). All major claims – from theoretical approximation to the success of the algorithms – are supported by either proofs or convincing experimental results. We did not identify any broad claims that lacked support. If anything, the assertion that this approach “opens new directions for scalable MAOS” is forward-looking but reasonable given the demonstrated scalability.
Methods And Evaluation Criteria: The methods chosen are very well suited to the MFOS problem setting, and the evaluation — using custom but relevant scenarios rather than off-the-shelf datasets — is thorough and appropriate for demonstrating effectiveness on this problem.
Theoretical Claims: I reviewed the provided proofs for correctness and clarity, focusing on Theorem 3.2 and Theorem 4.1, 4.2, and both were convincing and free of apparent error.
Experimental Designs Or Analyses: I found the experimental design to be well thought-out and the analysis of results to be sound and credible. Each experiment’s outcome is interpreted with respect to the theory (e.g., checking if mean-field and multi-agent results agree, or if costs decrease as expected), which strengthens the validity of the conclusions.
Supplementary Material: Yes, we reviewed the supplementary material, including technical appendices and additional experiment details.
Relation To Broader Scientific Literature: The paper demonstrates a strong grasp of the relevant literature: it bridges the gap between single-agent optimal stopping methods and multi-agent mean field methods. The key contributions – establishing the MFOS approximation and solving it via deep learning – are clearly contrasted with prior works: none before have provided a computational solution for MFOS in discrete time. This positions the paper as a novel contribution that extends known ideas (optimal stopping, mean field control) into a new combined realm. I did not identify any major prior study that was directly relevant and omitted, aside from some classical methods discussed below. Overall, the authors’ related work section and citations indicate a high level of familiarity with the broader scientific context.
Essential References Not Discussed: While the paper covers most of the recent and theoretical literature relevant to MFOS, there are a couple of classical works in optimal stopping that were not mentioned and perhaps could have been cited for completeness. In particular, the Least-Squares Monte Carlo (LSM) method by Longstaff & Schwartz (2001), a seminal technique for approximating optimal stopping policies (especially in the context of pricing American options), is not referenced. LSM is a widely known baseline for high-dimensional single-agent optimal stopping problems and could provide context on how practitioners traditionally handle large state spaces (via simulation and regression). Including it would have highlighted the differences: LSM uses basis function regression and does not naturally extend to multi-agent interactions, whereas the proposed approach uses neural networks and mean-field coupling. Similarly, the approximate dynamic programming approach by Tsitsiklis & Van Roy (1999) for optimal stopping is an important early work in the machine learning literature that demonstrated how to approximate the value function for stopping problems in high dimensions. This work was not cited either. While these omissions do not detract from the paper’s contributions, mentioning them could have strengthened the background.
Other Strengths And Weaknesses: Strengths:
- Innovative Problem Formulation: This work is the first to formalize MFOS in a tractable discrete setup and solve it computationally. This is a significant theoretical step – bridging multi-agent optimal stopping and mean-field control – that opens up a new line of research. The conceptual connection made between MFOS and mean-field control (MFC) is novel and non-trivial, providing a fresh perspective on stopping problems by leveraging control theory tools.
- Combination of Theory and Practice: The paper excels in providing both theoretical guarantees and practical algorithms. It’s commendable that the authors prove an approximation rate for the multi-agent problem and derive a Bellman-like principle, and then use those insights to design algorithms. This synergy of theory and deep learning is a strong point – it lends credibility to the methods and also guides their design (e.g., the DP approach directly follows from the DPP theorem).
- Scalability and Demonstrated Performance: The proposed methods show the ability to handle high-dimensional state distributions (hundreds of dimensions) and complex scenarios. Solving an optimal stopping problem in a 100-state environment (with a 300-dimensional input when including the distribution) is non-trivial, and the paper demonstrates this feat. The fact that they can train neural networks on distribution inputs and still converge to near-optimal policies suggests the approach is robust and scalable. This is a crucial strength since one major goal was to enable solving very large multi-agent problems that are otherwise infeasible.
- Comprehensive Experiments and Insights: The range of experiments is a strength in itself. By examining six different examples, the authors validate their method under various conditions and also extract interesting insights (e.g., showing how asynchronous stopping outperforms synchronous stopping in terms of cost, which provides guidance for practitioners on what type of stopping rule to allow). They also examine the trade-offs between the two algorithms (DA vs DP), noting memory vs speed considerations, which adds depth to the evaluation. The paper doesn’t treat the method as a black box; it investigates why and when each approach works better, which improves the clarity and usefulness of the contribution.
Weaknesses:
- Restricted to Finite State Mean-Field: A notable limitation is that the approach is confined to finite state spaces (discrete state distributions). The authors themselves acknowledge that continuous-state MFOS leads to an infinite-dimensional problem which is intractable. While focusing on a finite state approximation is a reasonable and necessary step, it means the method might require state discretization for truly continuous problems (e.g. stopping problems in $\mathbb{R}^d$), which could be a source of approximation error or computational burden if the state must be finely discretized.
- Computational Cost: The paper does not deeply discuss the runtime or sample complexity of the methods. Training deep networks for each new MFOS problem could be computationally intensive, especially as the state space or time horizon grows. The authors do highlight the memory issue and how DP circumvents it, but there is little quantitative data on training times or how performance scales with problem size. Thus, a practical weakness is that the method might be computationally heavy for extremely large-scale problems (though still far better than brute force on the $N$-agent problem, of course).
Other Comments Or Suggestions: The paper is generally well-presented, but we have a few minor comments and suggestions that could improve it further:
- Typos: There are a handful of minor grammatical issues. For example, in the introduction the authors state “We will refer to this setting at multi-agent optimal stopping (MAOS)”, which should read “refer to this setting as multi-agent optimal stopping.” Another instance is in the Contributions section: “Our theoretical results relay on the interpretation of MFOS problems as MFC problems” – here “relay” should be “rely”. The same sentence ends with “opens up new direction to study MFOS problems,” where it should likely be “opens up new directions.” These are very minor and do not affect understanding, but fixing them would improve the polish of the text.
- Highlight Randomization Requirement: The need for randomized stopping policies is an important insight of this work (and of mean-field decision processes in general). The appendix example (Appendix A.1) illustrates it excellently. It might be worth alluding to this point in the main text as well, perhaps in Section 2.1 (Motivation and challenges) when discussing why single-agent methods cannot be applied. The authors do mention that in multi-agent setting “we allow agents to stop at different times” and that treating the whole system as one agent is flawed, which is related. But an explicit statement like “(Unlike single-agent stopping, an optimal policy in the multi-agent case may require randomizing: e.g., having 30% of agents stop now and others later.)” would prepare the reader for the introduction of $p_n(x) \in [0,1]$ as a control. This is a minor suggestion for pedagogical clarity.
- Include Classical Baselines: As noted, the paper omits references to classical optimal stopping solution methods (LSM, etc.). While those methods can’t handle the multi-agent aspect, it could be instructive to compare at least in the single-agent limiting case. For instance, for Example 1 (which is essentially a single-agent OS with an extra population cost term), one could compute the solution via a backward dynamic programming (since state space is small) or LSM and show the neural network achieves the same result. The authors did something similar by analytical calculation, which is great. If space permits in a final version, a brief mention that classical methods (like regression-based DP) would struggle as state dimension grows or with the need to incorporate distributions could strengthen the argument for why a new deep learning approach is warranted.
- Experimental Details: The appendix provides a lot of details on architecture and hyperparameters. We think it might be useful to mention in the main text (Section 6) a summary of the network architecture used for the experiments. For example, one or two sentences like “In all experiments, we use feed-forward neural networks taking the state (and distribution) as input; for asynchronous stopping the network input is [state, distribution] and for synchronous it is just [distribution], with appropriate embedding layers for state. We train using Adam with learning rate X, etc., as detailed in Appendix E.” This would give readers a concrete sense of the model complexity. Currently, one has to jump to Appx. E or C to find that information. Again, this is a minor suggestion for completeness.
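Regarding the randomization point above, a minimal sketch (our own toy numbers, not the paper's code) shows how a randomized stopping probability $p_n(x) \in [0,1]$ splits the mean-field mass between a frozen "stopped" pool and the mass that keeps evolving:

```python
import numpy as np

nu_active = np.array([0.6, 0.4])   # active population over two states (toy)
p = np.array([0.3, 0.0])           # stop 30% of the agents in state 0
P = np.array([[0.5, 0.5],          # transition matrix for agents that continue
              [0.2, 0.8]])

stopped = p * nu_active                   # mass frozen in place from now on
nu_active = ((1.0 - p) * nu_active) @ P   # remaining mass evolves one step

# Total mass is conserved: stopped + active still sums to one.
assert np.isclose(stopped.sum() + nu_active.sum(), 1.0)
```

A deterministic single-agent stopping rule cannot express the intermediate split `stopped = [0.18, 0.0]`, which is exactly why the fractional control $p_n(x)$ is needed in the mean-field formulation.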
Questions For Authors: What are the computational requirements of your approach as the problem size grows? In particular, how does training time scale with the number of states or the time horizon $T$? For example, if we were to double the state space size or consider a horizon $T=10$ instead of $T=4$, would the approach still be feasible? Any information on the runtime or memory usage for your largest experiment (Example 6 with a 100-state grid and presumably a larger $T$) would help assess the practical scalability of the method.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: First of all, we would like to sincerely thank the reviewer for their detailed review, which highlights our contributions and originality, and for the constructive suggestions to improve our manuscript.
**(Restricted to Finite State Mean-Field)**. We agree with the reviewer that focusing on a finite‐state mean‐field setting is a limitation, and we plan to explore continuous settings in future work. However, we would like to emphasize that this is the first work to both model and propose deep learning algorithms for multi‐agent optimal stopping in this setting. We believe this represents a fundamental step before tackling more complex scenarios.
**(Typos)**. We thank the reviewer for taking the time to carefully read our work. We have corrected the typos and improved the readability in the final version.
**(Highlight Randomization Requirement)**. The randomized stopping policies are a key aspect of this work, as emphasized by the reviewer. Following the proposed modifications, we have strengthened the argument in the main text.
**(Essential References Not Discussed and Classical Baselines).** We thank the reviewer for directing us to these important works on optimal stopping. In response, we will update the introduction to more clearly position our method relative to LSM and the work of Tsitsiklis & Van Roy (1999), comparing the different approaches. Although our first two examples already compare our algorithm’s solution against the analytical solution obtained via dynamic programming, we agree that conducting additional comparisons with established classical methods will strengthen our argument.
**(Experimental Details)**. We agree with the reviewer that including a description of the network´s architecture in the main text (section 6) will help the reader better understand the results. We thank the reviewer for the valuable suggestions, which we will incorporate into the final version.
**Runtime and Memory Usage** -
Regarding the reviewer's question and observations on runtime and memory usage, we have briefly commented on the training time and computational resources in Appendix D. To provide a more ***quantitative analysis***, we list the memory usage and required training time per 100 iterations in the tables below, for problems of varying size with different time horizons and dimensions. We fix the batch size to 128 and the deep networks to have around 260k parameters.
- From the table, we see that **memory usage** for the direct approach (DA) scales as $O(DT)$, where $D$ is the problem dimension and $T$ is the time horizon, while the dynamic programming approach (DP) only requires memory of $O(D)$. We also observe that DA in general requires more memory than DP, which makes DP the preferable approach for problems with long time horizons or very high dimensions, as we have discussed in the paper.
- As for the **running time**, the time horizon $T$ has a crucial impact on the required time per $100$ iterations, while the dimension plays a relatively minor role. While DA tends to run slower per hundred iterations, we want to stress that in practice DA training takes fewer than $2000$ iterations, whereas DP training usually requires around $200 \times T$ total iterations. Therefore, DA can still be faster in total training time compared with DP when the memory usage is affordable.
Based on these quantitative data, we believe that our proposed approaches are quite scalable, and therefore still feasible for long-horizon, high-dimensional problems with practical impact in real scenarios.
| Memory| T=10 dim=100|T=30 dim=100|T=50 dim=25|T=50 dim=64|T=50 dim=100|
|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
|Direct Approach|3.8GB|10.0GB| 4.7GB | 10.1GB |16.1GB|
| Dynamic Programming | 0.9GB | 0.9GB | 0.7GB | 0.8GB | 0.9GB |
| Time/100 iter| T=10 dim=100|T=30 dim=100|T=50 dim=25|T=50 dim=64|T=50 dim=100|
|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
|Direct Approach| 14s | 42s | 33s | 56s | 71s|
| Dynamic Programming | 2s | 4s | 6s | 6s | 6s |
Accelerating PDE-Constrained Optimization by the Derivative of Neural Operators | Accept (poster) | Summary: The paper improves on the reference neural operator (RNO) for PDE-constrained optimization (PDECO) by (1) using the full trajectory data of PDECO and the sensitivity loss, (2) introducing the virtual Fourier layer, and (3) using numerical solutions to correct the neural operator solution.
Claims And Evidence: The claims are generally supported by the evidence.
Some clarification is needed regarding Figure 6:
- Why is the objective increasing as the number of calls increases? From the appendix, this example (Microreactor2D) is a minimization problem.
- As mentioned in the caption, "R-VF w/o reference represents the optimization process relying solely on the R-VF model without any calls to the numerical solver". I think that means the number of calls should be 0 for R-VF w/o reference? How do we get the blue line?
Methods And Evaluation Criteria: The proposed methods make sense, but the evaluation requires improvement.
- Table 1 seems to come from 1 testing data set. The performance should be evaluated on multiple testing problems and report the mean/std.
- From table 1, the improvement from VF also seems to be marginal. For example, comparing the best and the second best, the objectives (L) decreases by roughly 0.03, 0.02, 0.02, 0.01 etc. The improvement might not be robust when the number of test case increases.
And some clarification are needed:
- Algorithm 2, what is $u_{gt}$, is it the numerical solution corresponding to the current $\lambda$? Or is it the "ground truth solution to the PDECO"?
- Figure 4, how do we make the predicted $u$ and $J$ exact (blue arrows)? From my understanding of Algorithm 2, we don't have access to the red trajectory.
Theoretical Claims: There is no theoretical claim.
Experimental Designs Or Analyses: See Claims And Evidence.
Supplementary Material: The appendix includes calculations and describes the PDE numerical examples. No other supplementary material.
Relation To Broader Scientific Literature: The work improves on an existing method (RNO) for PDECO. While effective, the broader impact might be limited.
Essential References Not Discussed: The authors should discuss how their method of handling different geometries differs from [1],
and how their use of derivatives differs from [2].
[1] Li, Z., Kovachki, N., Choy, C., Li, B., Kossaifi, J., Otta, S., Nabian, M.A., Stadler, M., Hundt, C., Azizzadenesheli, K., Anandkumar, A., 2023. Geometry-Informed Neural Operator for Large-Scale 3D PDEs.
[2] O’Leary-Roseberry, T., Chen, P., Villa, U., and Ghattas, O. Derivative-informed neural operator: an efficient framework for high-dimensional parametric derivative learning.
Other Strengths And Weaknesses: Strength:
- the example problems are challenging.
Weakness:
- see other boxes.
Other Comments Or Suggestions: No other comments.
Questions For Authors: Since numerical solutions are used to assist RNO, it's unclear that the hybrid approach is more effective than using a numerical solver alone. In particular, the key questions for practitioners will be:
- How many sample/trajectories are needed to generate the training data?
- What is the total time for pre-training and what is the time for each solve?
From Figure 4, it seems that the performance stays flat after 4 function calls, which suggests that the accuracy cannot be improved with more function calls for the hybrid approach. It's totally reasonable that the accuracy is lower than the numerical solver's, but the authors need to show that the speed makes up for the loss of accuracy.
Another important question is: does using the sensitivity make the neural operator less general? An important advantage of NO is that, once pre-trained, it can be used for different objective functions. With the proposed method, it seems modifying the objective function would require retraining the whole neural operator.
Since RNO is an essential part of the work, I think the authors should review RNO in more detail, in addition to the high-level introduction in section 3.3. Some related questions:
- Are both $u$ and $\lambda$ nodal functions on a mesh, or maps $\Omega\rightarrow \mathbb{R}$? Is it really possible to have $\varphi(\lambda_r)=\lambda_q$ and $u_r \circ \varphi ^{-1} = u_q$?
- What is the training pipeline for RNO? How to represent and sample $\varphi$?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your careful and detailed review; it is highly valuable and helpful to us!
1. **Claims and Evidence**
Nice catch on the mismatch of objective! As you pointed out, the objective of Microreactor is to minimize $J = -\int…$ in eq. (17) in appendix C.2. This objective is equivalent to maximize $J=\int…$ without the negative sign. We will revise the form of objectives to be consistent. In our implementation, we used maximization, please see `Line 726 in utils.py`.
The blue line in Fig. 6 is obtained for illustration purposes. R-VF w/o ref. indeed runs without being given the ground-truth solution along the optimization process. We check the quality of its output in the same way as R-VF with ref. The only difference is dropping the reference GT solution from the input.
2. **Methods And Evaluation Criteria**
We respectfully disagree with the evaluation of marginal improvement. First, let us clarify that in Table 1, errors are shown by each component, and $L_{\cdot}$’s are sensitivity errors. We will modify the caption of Table 1 accordingly. We should emphasize that our main improvement is on derivative learning, which achieves error reductions of 3.2%, (1.8%, 2%), (1.2%, 3.5%), 6.1% (or relatively 15.8%, (3.9%, 7.2%), (3.8%, 6.85%), 10.8%) from the second best. Second, the baseline models GNOT and Transolver are SOTA neural operators (NO), which leaves small room for improvement at operator learning (on physical fields). However, our model achieves the Pareto front in the tradeoff between operator learning and derivative learning (see Fig. 5 and Section 4.1.1). Please see also **Item 1 for Rebuttal for Reviewer 1** for the reason for the improvement of VF.
We appreciate the rigorousness of the reviewer and decide to supplement some experiment to address the question on the robustness of the improvement. Due to space limit, please check the table in **Item 2 of the rebuttal for Reviewer 2**.
$u_{gt}$ indicates the numerical solution corresponding to the current $\lambda$, NOT the numerical solution of PDECO.
It is a good question on how to make predicted $u$ and $J$ (blue arrow) exact, which is the key to our method. Given a GT solution ($\lambda_r, u_r$) obtained from numerical solver, let $\lambda_q =\lambda_r$ and thus $\varphi=0$. The input would be $(\lambda_q, u_r, 0)$. The output of reference neural operators (RNO) should be **close** to the exact reference solution $u_r$ due to our training algorithm. For each query, dataloader would randomly pick neighboring samples from the same optimization trajectory as references, including the queried data itself (See App. C.1). It forces RNO to learn an identity mapping when the deformation $\varphi$ vanishes.
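The reference-sampling scheme described above can be sketched in a few lines. This is a hypothetical illustration only: the function name, window size, and trajectory format are our own assumptions, not the paper's implementation.

```python
import random

def sample_reference(trajectory, q_idx, window=2):
    """Hypothetical reference sampler: pick a reference step near the query
    from the same optimization trajectory, possibly the query step itself,
    so the model sees the zero-deformation (identity) case during training.
    `trajectory` is a list of (lam, u) pairs; all names are illustrative."""
    lo = max(0, q_idx - window)
    hi = min(len(trajectory) - 1, q_idx + window)
    r_idx = random.randint(lo, hi)                 # may equal q_idx -> phi = 0
    lam_q, _u_q = trajectory[q_idx]
    lam_r, u_r = trajectory[r_idx]
    phi = [a - b for a, b in zip(lam_r, lam_q)]    # discrete deformation
    return lam_q, u_r, phi                         # the RNO input triple

# toy trajectory: 12 optimization steps of a 1-D control and its "solution"
traj = [([float(t)], [float(t) ** 2]) for t in range(12)]
lam_q, u_r, phi = sample_reference(traj, q_idx=5)
assert len(phi) == 1 and abs(phi[0]) <= 2.0        # reference stays in window
```

When `r_idx == q_idx` the input reduces to $(\lambda_q, u_q, 0)$, which is exactly the case that forces the identity mapping.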
3. Reference not discussed.
On geometry handling, [1] belongs to the framework of traditional NOs, i.e., learning the mapping between geometry and solution. RNO is a generalization of NO that learns the change of the solution according to the deformation of the geometry (and more general parameters; see Remark 3.2).
[2] is a relevant work, and we thank the reviewer for pointing this out. It has been cited on line 160-161, right column. [2] is a general work on training NOs in Sobolev space efficiently by exploiting the intrinsic low-dimensionality of the derivatives when handling the high-dimensional Jacobian. Our work learns the directional derivative of the NO, since the derivative is taken wrt some objective function, which can be viewed as a special case of [2].
4. Other questions
We apologize for missing the information of dataset sizes. Please see **Item 1 in Rebuttal 2** for details due to space limitation.
Please see **Item 3 in Rebuttal 2** for discussion on wall-clock time for optimization.
It is a great question on if derivative learning would make NO less general. On the problem Inductor2D, we consider a flexible objective with a changing weight $\gamma$ between two terms (See eq. (19)). The dataset is drawn with random sampling of $\gamma$. Therefore, the NOs are able to predict different objectives for different $\gamma$. In principle, the dataset determines the generalization of NO, including objective functions.
Regarding RNO, $u$ and $\lambda$ are nodal functions on mesh. It is indeed possible to have $\varphi$ satisfying the relation between reference and query. To give a simple example, let $\varphi$ be a spatial rotation, and then the nodal value of $u$ remains unchanged, while coordinates $\lambda$ will be rotated accordingly. In the domain of shape optimization, the existence of $\varphi$ is the cornerstone for all applications. See Remark 3.2 for more details.
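The rotation example above can be checked numerically. This is a toy sketch with made-up mesh nodes, field values, and angle: under a rigid rotation, nodal values of $u$ are carried along unchanged, while the discrete deformation is the per-node coordinate shift $\lambda_r - \lambda_q$.

```python
import math

theta = math.pi / 6                           # illustrative rotation angle
nodes_q = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]   # query mesh coordinates
u = [0.3, -0.7, 1.1]                          # nodal field values

def rotate(p):
    x, y = p
    return (math.cos(theta) * x - math.sin(theta) * y,
            math.sin(theta) * x + math.cos(theta) * y)

nodes_r = [rotate(p) for p in nodes_q]        # reference mesh: rotated coords
phi = [(xr - xq, yr - yq)                     # discrete deformation per node
       for (xr, yr), (xq, yq) in zip(nodes_r, nodes_q)]
u_r = list(u)                                 # same nodal values, new positions
assert u_r == u and len(phi) == len(nodes_q)
```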
The training pipeline of RNO is summarized in Algorithm 1 in Appendix D. To represent and construct $\varphi$ would be rather simple with the data from optimization. One can use the shift of mesh nodes between ref. and query for shape optimization, or use the difference between control variable $\lambda$ of ref. and query for topology optimization. See line 256-265 right in the paper. | Summary: The authors propose a neural-operator based approach for solving PDE-constrained optimization problems. While this approach is not new in the literature, the authors come up with several innovations to improve existing approaches: (i) data-driven training by using trajectories generated by traditional optimization algorithms, (ii) enhanced derivative learning through virtual-Fourier layers, and (iii) hybrid optimization integrating neural operators with numerical solvers. The authors show the potential of their approach on several datasets from shape and topology optimization.
## update after rebuttal
I thank the authors for responding to my comments. I have raised my score from 1 -> 2. I am hesitant to raise my score further because of the initial quality of presentation in the paper.
Claims And Evidence: I do not believe that the claims made in the submission are properly supported.
The authors claim in the abstract that their “extensive experimental results demonstrate the effectiveness of our model in accurately learning operators and their derivatives”. However, what is the threshold for accuracy? I am also skeptical of this claim because their method is outperformed by the numerical method in Figure 6. To me this suggests that the derivatives are not being learned to sufficient accuracy. Furthermore, I’m not fully convinced that experiments on four different datasets are enough to be considered extensive.
Also, how exactly do the authors address the data efficiency issues that they describe in the abstract? I think the authors should compare to other data sampling approaches (other than generating data from traditional optimization algorithms) to show that their method yields an improvement.
Methods And Evaluation Criteria: How much data is needed to train the neural operator in this framework? Is this data easy to generate? If the data generation itself is very expensive, would this method be useful for PDE-constrained optimization?
Another thing the authors should address is the runtime (in seconds) of their method, especially since it seems like the classic numerical method outperforms their approach in Figure 6. Does Hybrid R-VF at least have a faster runtime than the numerical method?
Theoretical Claims: There are no theoretical claims made in the paper.
Experimental Designs Or Analyses: The experimental design and analysis seem fine.
Supplementary Material: I skimmed all of the sections in the supplement. I appreciate that the authors made an effort to describe the datasets and optimization tasks they solved in the experiments section.
Relation To Broader Scientific Literature: To the best of my knowledge, the paper makes an interesting contribution to the broader literature on PDE-constrained optimization. In particular, the virtual-Fourier layer seems to be novel, and could be used in future work to improve the derivative accuracy in neural operators.
Essential References Not Discussed: The paper is missing references to DeepONet, which is also popular in the operator learning literature.
There is also work within the last two years that generalizes FNO to irregular meshes, e.g., geo-FNO (https://www.jmlr.org/papers/volume24/23-0064/23-0064.pdf) — the authors should address this in the related work section, where they claim that the applicability of FNO to irregular meshes is limited.
Other Strengths And Weaknesses: Combining neural operators with PDE-constrained optimization is an interesting idea. However, there is a lot of room for improvement in the presentation (see “Other Comments Or Suggestions” and “Questions For Authors”). I found section 3.2, which introduces virtual-Fourier layers, particularly hard to follow. I had a hard time distinguishing what aspects of the virtual-Fourier layer were being introduced by the work, and what aspects were just based off of the FNO.
Other Comments Or Suggestions: Using $\hat \cdot$ to denote variables with ground-truth values could be confusing to readers skimming the paper. Typically $\hat \cdot$ is used to denote an estimate of a ground-truth value.
I’d recommend adding a notation subsection at the end of the introduction. For example, the bold notation for tensors and the use of $\mathcal F$ for the Fourier transform could be part of this subsection.
“Softmax” and “Project” are italicized in Remark 3.1, but they are not italicized in the rest of the paper.
Questions For Authors: 1. Page 2: The paper refers to “sequences of solutions and sensitivities” that are computed by the adjoint method. However, the mathematical expressions for the sensitivities are never shown in the paper. Presenting the mathematical expressions for these sensitivities (and perhaps, a brief overview of the adjoint method) will improve the quality of presentation in the paper.
2. Page 4: What does it mean for the Fourier transform of Fourier-based layers to be “practically linear”. Does this mean that the Fourier transform of these layers is a linear operator, or that it has linear time complexity. This statement should be clarified in the paper.
3. Page 4: What does it mean for the derivative to “not add additional bias to the layer”? Is this supposed to motivate why the authors use FNO rather than transformer-based architectures?
4. Page 5: After the discussion of virtual-Fourier layers in section 3.2, the paper abruptly jumps to training and optimization with RNO. How are the virtual-Fourier layers related to the RNO? Are they the building block of RNO?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper; your feedback is detailed and comprehensive. Among your comments we notice that you particularly expressed “having a hard time following virtual-Fourier (VF) layers and distinguishing VF from FNO”. We would like to clarify this point first since it lies at the core of our motivation for this work.
FNO and its variants require a uniform-grid domain due to the need to apply the fast Fourier transform. geo-FNO uses a 1-to-1 mapping to deform a uniform grid to an arbitrary shape, and the computation essentially still happens on uniform grids. Simply put, the input shape (H, W, C) must be fixed. VF, on the other hand, can be applied to an arbitrary number of mesh grids (T, C), where the length of the input T can change. This is similar to transformer-based neural operators and preferable since we are dealing with data from shape/topology optimization, where **mesh grids can be arbitrary and uniform grids are inapplicable.**
The way of handling arbitrary length input with VF layer consists of, 1. Project the input onto a **fixed** number of virtual sensors; 2. Process the virtual signals by Fourier layers; 3. Project the processed signals back to physical space (See Figure 3). The projection technique is entirely independent from FNO and inspired by Transolver instead. Additionally, our projection is different from Transolver due to our consideration of learning derivatives (See Remark 3.1).
We emphasize that, to our best knowledge, this is the first work that extends Fourier layers to arbitrary mesh grids. The benefit is its advantage in learning derivatives due to its simple structure compared to transformer counterparts (See items 2&3 below).
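The three-step scheme can be caricatured in plain Python. This is a minimal sketch under our own assumptions, not the paper's implementation: fixed projection weights stand in for learned ones, a single scalar channel per token, and a naive $O(n^2)$ DFT replaces the FFT.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform, pure Python."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * j * k / n) for k in range(n)) / n
            for j in range(n)]

def vf_layer(tokens, weights, spectral_filter):
    """Virtual-Fourier-style layer sketch. `weights` is a T x M table of
    projection scores (fixed here, normally learned); M is the fixed number
    of virtual sensors."""
    M = len(weights[0])
    # 1) physical -> virtual: weighted average of tokens per sensor
    virt = []
    for m in range(M):
        w = [weights[t][m] for t in range(len(tokens))]
        virt.append(sum(wi * xi for wi, xi in zip(w, tokens)) / sum(w))
    # 2) spectral mixing on the fixed-length virtual signal
    spec = [f * v for f, v in zip(spectral_filter, dft(virt))]
    virt = [v.real for v in idft(spec)]
    # 3) virtual -> physical: project back to the T mesh tokens
    return [sum(weights[t][m] * virt[m] for m in range(M))
            for t in range(len(tokens))]

tokens = [0.1, 0.4, -0.2, 0.9, 0.0]           # T = 5 mesh values; T may vary
weights = [[1.0, 0.5], [0.5, 1.0], [1.0, 1.0], [0.2, 0.8], [0.9, 0.1]]
out = vf_layer(tokens, weights, spectral_filter=[1.0, 0.5])
assert len(out) == len(tokens)                # output length follows the mesh
```

The same layer accepts a different T without any change, which is the point of projecting onto a fixed number of virtual sensors before the Fourier step.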
To address your other questions:
1. Sensitivity is defined on line 69 left by equation (2), $\frac{\partial \tilde{J}}{\partial \lambda} = \frac{\partial J}{\partial u} \frac{\partial u}{\partial \lambda} + \frac{\partial J}{\partial \lambda}$. We will bold the word sensitivity here.
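As a sanity check of this total-derivative formula, a toy problem with a closed-form state map (our own example, not from the paper) confirms $\frac{\partial \tilde{J}}{\partial \lambda} = \frac{\partial J}{\partial u} \frac{\partial u}{\partial \lambda} + \frac{\partial J}{\partial \lambda}$:

```python
# Toy "PDE" u(lam) = lam**2 with objective J(u, lam) = u * lam,
# so the reduced objective is J~(lam) = lam**3 and dJ~/dlam = 3*lam**2.
lam = 1.7
u = lam ** 2                    # state as a function of the control
dJ_du = lam                     # partial J / partial u
du_dlam = 2 * lam               # state sensitivity du/dlam
dJ_dlam_partial = u             # explicit partial J / partial lam
total = dJ_du * du_dlam + dJ_dlam_partial   # = 3 * lam**2

# finite-difference reference on the reduced objective
h = 1e-6
fd = (((lam + h) ** 2) * (lam + h) - u * lam) / h
assert abs(total - fd) < 1e-4
```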
2. We meant Fourier transform is a linear operator, which has a simple derivative, i.e., $\frac{d}{dx}\mathcal{F}(z)= \mathcal{F}(\frac{dz}{dx})$. This motivates us to integrate Fourier layer into our design to learn derivatives. We will modify this part to improve clarity.
3. Consider the derivative of the attention of a transformer, $\frac{d}{dx} softmax(QK)V$. The derivative must be taken with respect to $Q$, $K$, and $V$, hence involving the product rule. Also, the derivative of softmax is $s_i(\delta_{ij}-s_j)$ (see line 189-191). Thus the derivative of attention would be, roughly and informally, $s_i(\delta_{ij}-s_j)(Q’K+QK’)V + S(QK)V’$. Hence, we pointed out that the derivative of a transformer carries a strong inductive bias (probably undesired). Imagine learning derivatives of operators with layers constructed as above; the expressiveness is therefore restricted.
On the other hand, the derivative of Fourier transform is as simple as $\frac{d}{dx}\mathcal{F}(z)= \mathcal{F}(\frac{dz}{dx})$, and hence has “no additional bias”. We conducted more analysis of the derivatives of VF in Remark 3.1 and Appendix A.
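This contrast can be checked numerically: because the (discrete) Fourier transform is linear, its Jacobian-vector product on a perturbation is just the transform of the perturbation, with no product-rule terms. A pure-Python sketch with illustrative values:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2)), pure Python."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

x  = [0.3, -1.2, 0.5, 2.0]          # toy input signal
dx = [1e-3, -2e-3, 5e-4, 1e-3]      # perturbation direction

# linearity: F(x + dx) - F(x) == F(dx) exactly (up to rounding),
# i.e., the "derivative" of the transform is the transform itself
lhs = [a - b for a, b in
       zip(dft([xi + di for xi, di in zip(x, dx)]), dft(x))]
rhs = dft(dx)
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```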
4. Indeed, VF is the building block of RNO. Namely, according to the general architecture of neural operators $\mathcal{G}_{\theta}:= \mathcal{Q}\circ\mathcal{L}\circ\cdots\circ\mathcal{L}\circ\mathcal{P}$ (On line 149 right, in the beginning of Section 3), VF instantiates the integral operator $\mathcal{L}$. To smoothly transit between sections, we will clarify the connection in the beginning of section 3.3.
5. **Claims and Evidence**
The reason why we did not compare with traditional sampling method is the following. For example, consider random sampling in parametric space of the drone problem (Fig. 10). The parameter space is $[0,1]^{30000}$, given 3E4 mesh nodes. The likelihood of sampling an optimal design is almost surely zero. Therefore, an optimal design would be an out-of-distribution (OoD) data for random sampling method. Since neural operators would fail on predicting OoD data, they are inapplicable for optimization tasks. Thus, in order to make data near optimality in-distribution, in our opinion, it is necessary to adopt optimization data sampling. See the discussion on line 97-109 left in our paper.
The data efficiency of our approach is exemplified in Table 1. Due to space limit, please see discussion in **Item 1 of Rebuttal 2**.
6. **Methods And Evaluation Criteria**
We apologize for missing the detail of data size in our submission. Please also see **Item 1 of Rebuttal 2** for details. The dataset size we use is around 1E3, which is similar to the setup of many traditional neural operators. Therefore, the cost of dataset is NOT significantly more than traditional datasets.
Regarding runtime, please see discussion in **Item 3 of Rebuttal 2**. | Summary: This paper proposed a novel framework to enhance PDE-constrained optimization (PDECO) with neural operators, addressing data efficiency and robustness. Key innovations include data-driven training, a Virtual-Fourier layer for improved derivative learning, and a hybrid optimization approach integrating neural operators with numerical solvers. Experiments show accurate operator learning, robust convergence, and improved optimization performance.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Looks good to me.
Theoretical Claims: No such
Experimental Designs Or Analyses: All Look good to me.
Supplementary Material: All look good to me.
Relation To Broader Scientific Literature: We need such optimization works using operator learning.
Essential References Not Discussed: The authors may add some reviews of papers from 'AI-aided geometric design of anti-infection catheters' and 'Physical Design using Differentiable Learned Simulators'
Other Strengths And Weaknesses: No such.
Other Comments Or Suggestions: No such.
Questions For Authors: No such.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper; your summarization is precise and your suggestion on additional references is well received. In particular, we thank you for suggesting the second work, whose distinction from our work we would like to discuss. The first distinction is that their work targets time-dependent fluid problems, while ours targets static problems, e.g., steady flow, electromagnetics in the frequency domain, solid mechanics. Second, their approach intends to replace numerical simulation with surrogate models, whereas our work complements the surrogate approach by hybridizing with numerical solvers. It is an interesting question how to apply our hybrid method to problems with rollout operations, which can be explored in the future.
Below are some frequently asked points we’d like to address:
1. Dataset sizes and Table 1
Microreactor, Fuelcell, Inductor, Drone have 100x12, 30x20, 100x10, 10x20 samples respectively as in num_trajectory x num_steps. We added the information in appendix.
In Table 1, all errors are listed by components, and $L_{\cdot}$’s are sensitivity errors (we will clarify this in the caption). The naïve supervised learning paradigm for neural operators performs poorly due to overfitting to highly correlated optimization data: the data from successive optimization steps are quite similar to each other. Our solution of learning by reference neural operators (RNO) is much more efficient in terms of a low error rate (compare models with and without “R-”).
To see the improvement due to virtual-Fourier (VF), we observe that the sensitivity errors are reduced by 3.2%, (1.8%, 2%), (1.2%, 3.5%), 6.1% (or relatively 15.8%, (3.9%, 7.2%), (3.8%, 6.85%), 10.8%) from the second best. See also the balance of tradeoff between error and sensitivity error in section 4.1.1, VF achieves the best Pareto front.
2. Robustness of improvement of R-VF
The following results are obtained on 4 runs on Microreactor with different random seeds. R-VF consistently outperforms the baselines. Note that, LA: Linear Attention, PA: Physics Attention, and R-: reference neural operators.
| Models | p | u | v | c| $L_s$ |
|---|---|---|---|---|---|
|R-LA| $1.29e-2 \pm 3.3e-4 $ | $2.33e-2 \pm 1.04e-3 $ | $6.85e-2 \pm 1.94e-3$ | $2.32e-2 \pm 9.3e-4$ | $2.16e-2 \pm 3.55e-4$ |
| R-PA | $1.25e-2 \pm 4.1e-4 $ | $2.22e-2 \pm 6.6e-4 $ | $6.6e-2 \pm 1.23e-3$ | $2.28e-2 \pm 2.1e-4$ | $2.02e-2 \pm 7.8e-4$ |
| R-VF | $\textbf{1.2e-2} \pm 3.2e-4 $ | $\textbf{2.06e-2} \pm 6.1e-4 $ | $\textbf{6.42e-2} \pm 1.42e-3$ | $\textbf{2.14e-2} \pm 6.89e-4$ | $\textbf{1.76e-2} \pm 3.81e-4$ |
Also, we doubled the size of the Microreactor dataset to 200 trajectories (each of 12 steps, effectively 2400 samples). 40 trajectories are kept as the test set. We train models on 80 and 160 trajectories respectively. We observe that with the larger dataset, R-VF keeps its advantage over the baselines in predicting both physical fields and sensitivities. The main improvement is still on sensitivities.
| | p | u | v | c| $L_s$|
|---|---|---|---|---|---|
|R-LA-80| 1.64e-2 | 3.25e-2 | 7.83e-2 | 2.65e-2 | 2. 69e-2 |
| R-PA-80 | 1.51e-2 | 2.74e-2 | $\textbf{7.46e-2}$ | 2.48e-2 | 2.54e-2 |
| R-VF-80 | $\textbf{1.48e-2}$ | $\textbf{2.57e-2}$ | 7.56e-2 | $\textbf{2.42e-2}$ | $\textbf{2.30e-2}$ |
|R-LA-160| 1.40e-2 | 2.42e-2 | 7.16e-2 | 2.29e-2 | 2.37e-2 |
| R-PA-160 | 1.32e-2 | 2.18e-2 | $\textbf{6.81e-2}$ | $\textbf{2.15e-2}$ | 2.30e-2 |
| R-VF-160 | $\textbf{1.31e-2}$ | $\textbf{2.17e-2}$ | 6.95e-2 | $\textbf{2.15e-2}$ | $\textbf{2.18e-2}$ |
3. Wall-clock time for optimization with R-VF
The runtime of our hybrid method consists of the time spent optimizing with the NO and the time spent calling numerical solvers. The latter dominates the runtime; optimization with the NO is much cheaper. A potential cost saving of the hybrid method lies in fewer calls to numerical solvers.
One crucial aspect of optimization with neural operators is the method of optimization. In our implementation we adopted GD and MMA, but there are many other choices such as Adam, L-BFGS, SIMP, etc. A good optimization method can reach higher objective values and save both the numbers of iterations and function calls. Thus, the optimization method significantly affects wall-clock time. Besides, there are some important techniques in optimization, e.g., gradient filtering, projection, clipping, etc. A comprehensive and fair comparison requires all these techniques. Therefore, in our opinion it is premature to compare the runtime in this work.
---
Rebuttal Comment 1.1:
Comment: The derivative of Neural Operator in general is of interest and importance.
---
Reply to Comment 1.1.1:
Comment: Thank you again for reviewing our paper. Your comments on our contribution are precise and concise. Your suggestion on reference is valuable and helpful. | Summary: Authors propose a general architecture-agnostic framework that can be used for PDE-constrained optimization and a Virtual Fourier Layer suitable for data on irregular grids.
The framework consists of three essential parts: (i) a particular structure of inputs and outputs for selected neural networks, (ii) a way to generate training data and train the model, (iii) an inference strategy that utilises classical solver.
(i) Roughly, the inputs to the neural networks are (solution for current parameters, desired parameters, difference between current and desired parameters) and the output is the solution for the desired parameters.
(ii) Data is generated along the trajectories of the optimization process and the training is performed with intermittent masking of two inputs: (solution for current parameters, difference between current and desired parameters). The latter encourages neural networks to be more robust and learn mapping from parameters to solution.
(iii) At the inference stage authors propose to sparingly correct optimisation trajectory using classical solver. This is possible because of the special structure of the neural network described in (i).
## update after rebuttal
Summarised in https://openreview.net/forum?id=LFF7kUQ5Rp&noteId=gR2VRvNh9Z
Claims And Evidence: In my view most claims made by authors are well-supported.
The one missing metric is wall-clock time, required by different methods to reach the optimal solution or an approximation thereof. Without this data it is not possible to claim that the proposed method leads to faster PDE-constrained optimization, as stated in the title.
Methods And Evaluation Criteria: Authors consider several challenging PDE-constrained optimisation problems and perform reasonable ablation study. I believe that both methods and evaluation criteria are adequate.
Theoretical Claims: Authors do not provide novel theoretical claims.
Experimental Designs Or Analyses: The most important part of experiments is the ablation study done by authors.
The proposed approach consists of three techniques: (i) training strategy and the overall design of architecture (inputs and output), (ii) Virtual Fourier Layer, (iii) inference strategy. Given that, it is reasonable to evaluate the effect of each component.
Quite appropriately authors perform such ablation tests and present the results in Table 1. From this table I can conclude that corrections with reference at inference play the most significant role, the effect of training strategy is the second most important factor and the improvement from the use of Virtual Fourier Layer is the least important but still pronounced.
Supplementary Material: I reviewed all presented supplementary materials.
Relation To Broader Scientific Literature: PDE-constrained optimization is a mature field with a large number of classical techniques available https://link.springer.com/book/10.1007/978-3-319-13395-9.
The authors' approach is within the scope of surrogate modelling https://arc.aiaa.org/doi/10.2514/6.2000-4891. In PDE-constrained optimization, repeated solution of the PDE is required, so the idea is to replace the accurate but numerically expensive classical solver with a numerically cheap surrogate model. Various surrogate models have been developed, including ROM https://link.springer.com/chapter/10.1007/978-3-642-55508-4_16 and neural networks https://arxiv.org/abs/2111.04941, https://arxiv.org/abs/2110.13297.
The main innovation of the current contribution is a design of a special training scheme and network structure that allows one to use a numerical solver in conjunction with a surrogate model.
Essential References Not Discussed: Authors do not discuss development of surrogate modelling at all. In place of that they provide references for related problems: prediction with neural operators, training that promotes smoothness, hybrid solvers that combine neural networks and classical techniques.
I suggest authors provide a brief review of PDE-constrained optimization methods that incorporate neural networks for surrogate modelling. Several references are provided above, but more thorough review will improve the overall quality of the contribution.
Other Strengths And Weaknesses: **Strengths:**
1. The approach by authors is clearly more robust than alternative. This robustness is achieved by learning small corrections to the reference solution and by the use of classical solvers that can provide unbiased reference.
2. The developed framework is general and can be applied with arbitrary architectures.
**Weaknesses:**
1. Classical solver is still required at the inference stage.
2. More challenging collection of training data: (i) data is collected along optimisation trajectories, (ii) in addition to input-output pairs the derivative of the loss with respect to parameters is collected.
Other Comments Or Suggestions: The article is generally well-written, so I have only a few minor points:
1. The need for Figure 1 is not evident, since it does not explain much beyond the fact that optimization trajectory with approximate solver diverges from trajectory with more accurate solver. Figure 4 seems to demonstrate the same effect.
2. Line 408, right column. "without numerical solver feedback quickly derails." -> "Without numerical solver feedback quickly derails."
Questions For Authors: 1. Section 3.2. contains several fragments on bias and complexity of derivatives. For example:
1. Lines 187-188 "For a transformer, the nonlinearity of attention unit causes complex calculation of derivatives ..."
2. Lines 196-197 "Fourier-based layers is practically linear, and therefore does not introduce additional biases ..."
3. Lines 190-192, right column "The derivative of equation (8) introduces less bias due to its simpler structure, which motivates us to adopt this form."
4. Lines 210-213 "Since Fourier-based layer is practically linear, its derivative does not add additional bias to the layer. This is highly favorable for derivative learning."
I do not understand these claims. Can the authors please clarify what they mean by the "complex calculations" of derivatives? Do they refer to numerical complexity? Isn't it the case that numerical complexity of automatic differentiation (AD) is roughly the same as forward pass through the architecture?
Next, what do authors mean by "bias" of derivative? Again, since AD is used the derivative is computed up to machine precision for all but pathological cases. Since authors use fairly stable architectures and compute derivatives with respect to input, I would not suspect to see instabilities in AD.
2. In equation (12) authors use $\varphi$ and explain that for topological optimization this is $\lambda_r - \lambda_q$. From Appendix one can find that this is the case for all considered examples. Can the authors please provide an example where $\varphi$ has a different form or reflect in the main text that $\varphi$ is a difference of controlled variables for most of the problems.
Ethical Review Concerns: na
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper in such depth and being very informative, especially on reference. It is highly valuable to us!
1. **Experimental Designs Or Analyses**
We respectfully disagree that the Virtual Fourier (VF) layer is the least important. It reflects our thoughts on designing architectures for learning derivatives. In this sense, it is as crucial as inference with reference in our framework. In Table 1, all errors are listed by components, and $L_{\cdot}$’s are sensitivity errors (we will clarify this in the caption). We observe that the sensitivity errors are reduced by 3.2%, (1.8%, 2%), (1.2%, 3.5%), 6.1% (or relatively 15.8%, (3.9%, 7.2%), (3.8%, 6.85%), 10.8%) from the second best. This improvement is due to the architecture of VF. See also the balance of the tradeoff between error and sensitivity error in section 4.1.1; VF achieves the best Pareto front.
Specifically, the derivative of the VF layer has a simpler form than that of baseline models such as transformers. Consider the derivative of the attention of a transformer, $\frac{d}{dx} \mathrm{softmax}(QK)V$. The derivative must be taken with respect to Q, K, and V, hence involving the product rule. Also, the derivative of softmax is $s_i(\delta_{ij}-s_j)$ (see lines 189-191). Thus the derivative of attention would be, roughly and informally, $s_i(\delta_{ij}-s_j)(Q’K+QK’)V + S(QK)V’$. Hence, we pointed out that the derivative of a transformer carries a strong (and probably undesired) inductive bias. Imagine learning derivatives of operators with layers constructed as above; the expressiveness is therefore restricted.
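The softmax Jacobian quoted above, $s_i(\delta_{ij}-s_j)$, can be verified with a quick finite-difference check. The NumPy sketch below is illustrative only and is not code from the paper or rebuttal.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def softmax_jacobian(x):
    # Analytic Jacobian: J[i, j] = s_i * (delta_ij - s_j)
    s = softmax(x)
    return np.diag(s) - np.outer(s, s)

x = np.array([0.5, -1.0, 2.0])
J = softmax_jacobian(x)

# Central finite differences as an independent check
eps = 1e-6
J_fd = np.zeros((3, 3))
for j in range(3):
    d = np.zeros(3)
    d[j] = eps
    J_fd[:, j] = (softmax(x + d) - softmax(x - d)) / (2 * eps)

assert np.allclose(J, J_fd, atol=1e-6)
```

Note that the rows of the Jacobian sum to zero (probabilities are conserved), which is one source of the structured inductive bias discussed above.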
On the other hand, the derivative of the Fourier transform is as simple as $\frac{d}{dx}\mathcal{F}(z)= \mathcal{F}(\frac{dz}{dx})$, and hence has “no additional bias”. We conduct further analysis of the derivatives of VF in Remark 3.1 and Appendix A.
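The “no additional bias” claim rests on the linearity of the Fourier transform. A minimal NumPy sketch (illustrative, not the authors' code) confirms that the FFT commutes with linear combinations, so its Jacobian is a constant linear map and differentiation passes straight through it.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(64), rng.standard_normal(64)
a, b = 2.0, -0.5

# Linearity: F(a*x + b*y) == a*F(x) + b*F(y),
# hence dF(z)/dz is constant and d/dx F(z) = F(dz/dx).
lhs = np.fft.fft(a * x + b * y)
rhs = a * np.fft.fft(x) + b * np.fft.fft(y)
assert np.allclose(lhs, rhs)
```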
2. **Questions For Authors**
Following the analysis from last question, we hope that it is clearer to see the derivative of VF having more expressiveness compared to transformer counterparts. We will modify the paper to reduce the ambiguity on “complex derivatives” and “additional bias”.
In our paper, there is another type of example of $\varphi$ besides topology optimization, namely shape optimization. Let $\varphi$ be a deformation between two shapes, which is instantiated as mesh-grid warping and finally discretized as mesh-grid shifts. See the Fuelcell and Inductor examples in our experiments. Thus, in this paper, for both topology and shape optimization, the discrete $\varphi$ can be represented as $\lambda_r-\lambda_q$.
It is definitely an interesting question to consider other forms of $\varphi$. In fact, the deformation $\varphi$ is not unique. For example, given two shapes of the same topology, there are infinitely many ways of deforming one into the other. How does $\varphi$ affect the operator learning? Is it possible to integrate RNO along a path that warps a shape into a much more different shape, beyond small perturbations? We believe there is a vast field to explore in this area.
3. **Weakness**
We appreciate your precise judgement in recognizing the reliance of our method on a classic solver and the nature of the data as optimization trajectories. However, we’d like to discuss these points with you, as in our opinion they are not weaknesses.
Hybrid methods have great potential in the scientific computing area. One of the main reasons is that modern computing algorithms are developed with profound theories and abundant ecosystems, e.g., open/closed-source software. Their guaranteed accuracy and reliability are still indispensable in the era of data-driven approaches. We are glad you mentioned the work and effort on surrogate models. Advocates embrace them for their dominating advantage in cost. We absolutely agree and would love to contribute to this community. However, we hold a slightly different view: AI will not replace numerical computing but rather play a complementary part, which will be different from what happened in CV or NLP. It is for this reason that hybrid methods are getting popular in AI4PDE, e.g., the Multigrid method and other hybrid methods mentioned in our paper.
The cost of the optimization dataset is NOT larger than that of traditional neural operators. We apologize for omitting the dataset sizes and have added them to the appendix. Please see **item 1 in Rebuttal 2** for details due to the space limit. Also, in our opinion, derivative information would be necessary if we aim to obtain surrogate models with good gradient properties.
4. Missing metric of wall-clock time
Please see **Item 3 in Rebuttal 2** due to space limit.
---
Rebuttal Comment 1.1:
Comment: In my view the techniques proposed by the authors are valuable and reasonable. Besides that, the claims are well supported numerically. I revise my recommendation accordingly.
I read the comment on computation time required for different methods and I agree that it is not straightforward to perform a fair comparison. My suggestion for the author is to include this discussion in the main body of the paper and provide at least some reference running times for the readers.
---
Reply to Comment 1.1.1:
Comment: Thank you for your quick reply and for being such a responsible reviewer! We warmly agree with your suggestion; the discussion on runtime will be included in the main body.
Below we summarize the runtime of optimization based on RNO and the numerical method. We ran the experiment both on a laptop CPU (11th Gen Intel i7, 2.5 GHz) and on an Nvidia V100 GPU. Times are averaged over 10 iterations and reported in seconds per iteration. The Microreactor case has 4.9E3 mesh nodes, and the Drone case has 2.1E4 mesh nodes.
| | R-VF (GPU) | R-VF (CPU) | Numerical (CPU) |
|---|---|---|---|
| Microreactor2D | 0.53 | 1.89 | 3.20 |
| Drone3D | 0.62 | 3.20 | 49.40 |
The runtime of the numerical method increases severely as the problem scales. This is because R-VF has linear complexity with respect to the number of mesh nodes, whereas the numerical method is known to scale poorly as the problem size grows.
We sincerely thank you again for your overall reviews; your expertise has truly helped us improve the comprehensiveness, clarity, and rigor of the paper. | null | null | null | null | null | null |
Accelerating Quantum Reinforcement Learning with a Quantum Natural Policy Gradient Based Approach | Accept (poster) | Summary: This paper presents a quantum-based NGD method for RL that improves the asymptotic scaling of the sample complexity. Building on previous work that analyzes quantum mean estimation and variance reduction, this paper proves its algorithm achieves O(eps^-1.5) sample complexity (improving on the classical O(eps^-2)).
Claims And Evidence: The main claim of the paper is supported by a proof, given its theoretical nature, this is sufficient evidence.
Methods And Evaluation Criteria: There are no benchmarks or evaluation criteria. The only method of evaluating the result is what is theoretically derived (which is expected given the nature of the paper).
Theoretical Claims: I did not find any issue with the theoretical proofs, however, I was not able to check them meticulously.
Experimental Designs Or Analyses: There were no experiments.
Supplementary Material: I reviewed the entirety of the SM, but only focused on the algorithms in Sec B.
Relation To Broader Scientific Literature: The key contribution is the idea that you can take these quantum mean estimation policies and build something that has a provable advantage with it. This is also more relevant than some theory work, since recent developments in the parameterized quantum RL world have shown that this quantum enabled policy is a feasible idea.
Essential References Not Discussed: Although much of modern quantum RL focuses on parameterized quantum circuits as the focus, some of the early work of quantum RL is missing (e.g. https://arxiv.org/pdf/0810.3828) which would be worth discussing. Additionally some more recent MAB quantum work would be worth highlighting in the discussion on MAB (e.g. https://arxiv.org/abs/2007.07049).
Other Strengths And Weaknesses: A central question for me is the novelty. While originality does arise from creative combinations of existing ideas, the main algorithm is heavily based on the existing literature, and I wonder how much this paper adds to those works which also had asymptotic bounds.
Other Comments Or Suggestions: Some of the paragraphs appear to have smaller line spacing than the others, e.g., line 250 on the right side of the page. Additionally, although I recognize this is out of scope for the paper, an integration of the algorithm with something like raw-PQC would be exceedingly interesting.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and insightful comments. Below we clarify and address the points raised in your review:
## **Regarding Novelty:**
We appreciate the reviewer’s concern regarding the novelty of our work. To clarify, our paper is indeed the first in quantum parameterized reinforcement learning to provide a rigorous sample complexity analysis with concrete bounds that surpass classical asymptotic limits. To achieve this, we introduce a novel quantum algorithm incorporating truncated NPG estimators (Algorithm 1), along with a detailed and novel theoretical analysis of the bias-variance tradeoff (specifically Lemma 5 and Lemma 6). This thorough analytical framework constitutes a significant step forward compared to previous quantum reinforcement learning literature.
## **Regarding Integration with raw-PQC:**
The reviewer’s suggestion on integrating our QNPG approach with raw-PQC is indeed interesting. We note, however, that the policy parameterizations of QNPG and raw-PQC differ fundamentally: QNPG utilizes classical parameterization, whereas raw-PQC explicitly leverages quantum circuit parameterization. Despite this difference, exploring integration or hybrid approaches between these two setups could be an intriguing direction for future research, which we will mention in the future work directions.
## **Regarding Formatting Issues:**
We carefully re-examined our LaTeX source file following the reviewer's comment on the line-spacing discrepancies. Upon checking, we found no command that reduces the line spacing. We believe this formatting inconsistency is due to LaTeX's line-spacing behavior: $\hat{g}_\rho$ appears in-line and takes more vertical space than plain text, so the spacing around it appears reduced.
## **Regarding Essential References:**
We sincerely thank the reviewer for highlighting the additional references. These valuable references will be explicitly cited and discussed in the related-work section of the final version. The mentioned RL paper does not study sample complexity guarantees. Further, we will describe the literature on quantum multi-armed bandits, where a speedup has been shown.
Thank you once again for your insightful feedback, which significantly strengthens our manuscript.
---
Rebuttal Comment 1.1:
Comment: I appreciate the responses for clarifying my understanding. I think the inclusion of references and the changes that will be implemented as a result of other reviewers comments will improve the paper and I have adjusted my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your time, effort, and the updated score. Thank you for your valuable feedback and support. As noted, we will incorporate the additional references you suggested and revise the draft in response to the comments from all reviewers to further enhance the clarity and quality of the paper. | Summary: In this paper, the authors introduce a novel algorithm called Quantum Natural Policy Gradient (QNPG). In QNPG the classical Natural Policy Gradient (NPG) estimators are replaced with a deterministic gradient estimation approach. Theoretically they show that QNPG achieves a lower sample complexity for queries to the quantum oracle/MDP in comparison to the classical lower bound for queries to the classical MDP.
Claims And Evidence: The authors state that their method is a quantum analog to classical NPG, which is a model-free RL method:
“In this work, we introduce the first quantum model-free reinforcement learning (RL) algorithm with theoretical guarantees, addressing the challenges posed by large state and action spaces. To our knowledge, this is the first work to coherently embed the entire Natural Policy Gradient (NPG) into a quantum state by leveraging only the standard environment oracles from reinforcement learning.”
As I understood the approach, the claim “model-free RL algorithm” is only valid if the environment of the RL problem is the actual “quantum transition oracle”. For any classical real-world RL problem, QNPG could not be used in a model-free way, since the transition oracle U_P would have to be created from prior knowledge or existing data to produce meaningful transition probabilities. However, this would make the method a model-based RL method for nearly every known RL use case. Creating U_P as a surrogate of the real environment from data and optimizing policy parameters in an N-step rollout sounds like PILCO (Gaussian processes) or MOOSE (neural networks). And of course, no one would say that PILCO is a model-free algorithm under the assumption that your environment is actually a provided Gaussian process.
Methods And Evaluation Criteria: No experiments on benchmarks are conducted. However, I think it would be very helpful to show at least a small example how QNPG is applied and whether empirical results support the theoretic bounds.
Theoretical Claims: No.
Experimental Designs Or Analyses: N/A
Supplementary Material: No.
Relation To Broader Scientific Literature: Finding novel QRL algorithms with actual advantages over classical ones is of huge interest. I’m not convinced that QNPG can be considered a model-free method, since a quantum transition oracle is needed, which can be considered a model.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: It is an important topic and an interesting new method.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and valuable insights. We address your primary concerns clearly below:
## **Clarification on the "Model-Free" Nature of QNPG:**
We appreciate the insightful discussion about whether our Quantum Natural Policy Gradient (QNPG) algorithm should be considered "model-free". Indeed, we agree with the reviewer that we explicitly assume the existence of a quantum transition oracle. However, our use of the term "model-free" aligns with the standard distinctions within reinforcement learning literature:
- Classical **model-based** reinforcement learning algorithms explicitly attempt to learn or estimate the transition dynamics (state-action probabilities) of the environment from sampled data. This learning process typically introduces a complexity that scales directly with the size of the state-action space \(|S||A|\), making it computationally expensive for large environments.
- In contrast, our QNPG algorithm does not explicitly attempt to estimate the environment's transition probabilities. Instead, we directly use the provided quantum transition oracle to sample trajectories and estimate gradients without explicitly estimating or storing transition dynamics. Consequently, the sample complexity for parameterized model-free methods (like our proposed QNPG) does not scale with \(|S||A|\). Instead, it depends primarily on mixing times and related properties, reflecting a fundamentally different complexity structure from traditional model-based methods.
We note that the use of a quantum transition oracle is like the use of generative models in classical decision making; thus, unless the transition dynamics are learned, the algorithm is denoted model-free. This important distinction will be further emphasized in the final version of the manuscript.
## **Regarding Numerical Results:**
We acknowledge the reviewer's point regarding numerical experimentation. The current focus of our work is theoretical, and comprehensive numerical experiments are planned for future studies. We clarify that while implementing quantum oracles in actual quantum computers is feasible with polynomial complexity and no fundamental conceptual barriers, simulating these oracles classically is NP-hard.
We greatly appreciate your constructive comments and hope these clarifications clearly address your points.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I will maintain my score. | Summary: The paper introduces a novel Quantum Natural Policy Gradient (QNPG) algorithm aimed at accelerating reinforcement learning (RL) in quantum computing environments. The authors introduce QNPG, a quantum-compatible variant of the classical Natural Policy Gradient (NPG) algorithm. The key innovation involves replacing the classical stochastic gradient estimators with deterministic gradient estimators based on truncation methods. The sample complexity is improved from $\mathcal{O}(\epsilon^{-2})$ to $\mathcal{O}(\epsilon^{-1.5})$.
Claims And Evidence: The sample complexity is supported by proofs.
Methods And Evaluation Criteria: Using quantum computing to accelerate the classical NPG algorithm makes sense for RL.
Theoretical Claims: I have roughly checked the proof of Theorem 1 and Lemma 5, and it looks reasonable and correct.
Experimental Designs Or Analyses: No experimental designs.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper is related to application of quantum computing in RL.
Essential References Not Discussed: This paper has discussed essential references.
Other Strengths And Weaknesses: Strengths:
1. Application of quantum computing in RL may be a new trend in recent years. The method of this paper looks applicable to real-world problems.
Weaknesses:
1. One possible weakness is the lack of novelty. It seems that quantum computing is only used in estimating $g_h$ and $F_h$ in Algorithm 1, and the quantum computing algorithm (Lemma 1) is also a result from previous literature. The proof also follows the proofs of classical NPG with the application of Lemma 1.
2. The paper lacks numerical results. As the paper is about accelerating an algorithm with quantum computing, numerical experiments are important to show the acceleration effects.
Other Comments Or Suggestions: It would be better to explain why quantum computing can accelerate mean estimation. I also suggest adding more explanation of the quantum computing notation in Section 2.3.
Questions For Authors: 1. Could you explain why quantum computing can accelerate mean estimation in Lemma 1? It seems that this is the key step to improving the sample complexity, right?
2. In the aspect of convergence analysis, could you mention how quantum computing improves the complexity? If I understand correctly, it only appears in Lemma 5, right?
3. Based on previous two questions, could you explain more on the novelty? It looks like that quantum computing algorithm is old and the proof just follows NPG.
4. Could you add numerical results to show the accelerating effects?
## Update after rebuttal: I raise the score from 2 to 3. Please refer to the comment below for the reason.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s thoughtful comments and valuable feedback. We address the main points and questions raised as follows:
## **Response to Weakness 1 and Questions 1-3:**
**Why Quantum Computing Accelerates Mean Estimation (Lemma 1)**:
Quantum computing fundamentally accelerates mean estimation through the Quantum Amplitude Estimation (QAE) algorithm [1]. Quantum states inherently encode probability distributions directly into their amplitudes, allowing QAE to estimate probabilities and expectations using methods similar to Grover iterations [2]. This results in a quadratic speedup in terms of accuracy and the required number of queries or experiments compared to classical sampling methods.
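To make the quadratic gap concrete, the following back-of-the-envelope sketch compares query counts at a fixed target accuracy. The exponents are those quoted in the paper and rebuttal (constants and log factors omitted); the numbers are purely illustrative, not a quantum simulation.

```python
# Query counts (up to constants and log factors) at accuracy eps.
eps = 1e-2
classical_mean = eps ** -2    # classical Monte Carlo mean estimation
quantum_mean = eps ** -1      # quantum amplitude/mean estimation (quadratic speedup)
classical_npg = eps ** -2     # classical NPG lower bound
quantum_npg = eps ** -1.5     # QNPG (this paper)

assert quantum_mean < classical_mean
assert quantum_npg < classical_npg
print(classical_npg / quantum_npg)  # ~10x fewer queries at eps = 1e-2
```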
**Quantum Acceleration in Convergence Analysis (Lemma 5)**:
We agree that Lemma 5 is central to the speedup presented in our paper. Specifically, quantum computing provides variance-reduced estimators for $F_h$ and $g_h$. Compared with classical NPG, our quantum estimators achieve quadratically reduced variance. However, this reduction introduces a bias-variance trade-off absent from classical NPG, which traditionally provides unbiased gradient estimators.
**Key Novelties**:
We thus sum up our key novelties. The introduction of biased estimators in our quantum NPG algorithm is due to the use of truncated estimators for the Fisher information and policy gradients, which are necessary for compatibility with quantum RL oracles (Equations 16-17). This truncation introduces a bias absent in classical estimators. Specifically:
- Classical NPG methods rely on stochastic algorithms that sample the NPG gradient from trajectories of random length, typically following a geometric distribution. However, we cannot achieve quantum superpositions of trajectories with random lengths due to fundamental quantum computational constraints. Consequently, we adapt the classical algorithm to deterministically truncate trajectory lengths.
- This deterministic truncation changes the classical unbiased estimation approach into one that inherently introduces bias. Therefore, we modify the methods of estimating the Fisher information and gradient estimators, replacing geometric distribution sampling with deterministic truncation.
- To rigorously address the bias introduced by truncation, we significantly adjust the analytical framework. Specifically, Lemma 4 quantifies complexities associated with obtaining quantum superpositions, and Theorem 1 explicitly bounds the biases and variances resulting from truncation. Lemmas 5 and 6 subsequently utilize these bounds, meticulously handling bias accumulation and variance reduction through modified analytical techniques and recursive error bounding methods.
These methodological adaptations and theoretical analyses represent substantial novelty over classical NPG algorithms and are crucial to achieving our main theoretical results.
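The effect of deterministic truncation can be illustrated on a plain discounted sum: truncating $\sum_{t \ge 0} \gamma^t$ at horizon $H$ leaves a geometric tail $\gamma^H/(1-\gamma)$, which is the kind of bias the analysis must control. The sketch below illustrates only this mechanism; it is not the paper's estimator.

```python
gamma, H = 0.95, 200

full = 1.0 / (1.0 - gamma)                    # infinite-horizon discounted sum
truncated = sum(gamma ** t for t in range(H)) # deterministic truncation at H
bias = full - truncated

# The tail of the geometric series: gamma**H / (1 - gamma)
assert abs(bias - gamma ** H / (1.0 - gamma)) < 1e-9
```

The bias decays exponentially in $H$, which is why truncated estimators can still achieve the stated sample-complexity guarantees.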
## **Response to Weakness 2 and Question 4:**
**Numerical Experiments**:
We acknowledge the reviewer's concern regarding the absence of numerical results. The current focus of our work is theoretical, and comprehensive numerical experiments are planned for future studies. We clarify that while implementing quantum oracles on actual quantum computers is feasible with polynomial complexity and no fundamental conceptual barriers, simulating these oracles classically is NP-hard. We can of course simulate at a very small scale, but that will not help validate the speedup in a larger example.
Thank you again for your valuable comments. We hope these clarifications clearly articulate the novelty and significance of our contributions.
## **References**
[1] Quantum Sub-Gaussian Mean Estimator, 2021
[2] A fast quantum mechanical algorithm for database search, 1996
---
Rebuttal Comment 1.1:
Comment: I appreciate the responses for my questions. Now I can understand the novelty of your algorithm compared with classical NPG, and I have adjusted my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thanks a lot for your thoughtful reconsideration and updated score. We truly appreciate your time, insightful questions, and constructive feedback. We're glad the clarifications helped convey the novelty of our algorithm. As suggested, we will add further explanations about the quantum computing notations in Section 2.3 to improve the clarity and accessibility of the draft. | Summary: This paper introduces a Quantum Natural Policy Gradient (QNPG) algorithm aimed at accelerating quantum reinforcement learning. The key idea is to replace the classical random sampling in natural policy gradient estimation with a deterministic gradient estimation method that can be integrated into quantum systems via quantum oracles. The authors claim that by doing so, they can achieve a sample complexity of O ̃(ϵ^(-1.5) ), which improves over the classical lower bound of O ̃(ϵ^(-2) ). The paper presents a thorough theoretical analysis, including convergence proofs and bias–variance trade-off evaluations, and provides detailed constructions of the required quantum oracles.
Claims And Evidence: Most of the submission's claims are supported by relatively clear and convincing evidence. The paper first builds on established concepts in quantum computing and reinforcement learning, such as quantum mean estimation and MDP formulations. For the QNPG algorithm, the authors derive the estimators for the policy gradient and Fisher information matrix and analyze their biases and variances. A series of lemmas and theorems derives the sample complexity and convergence results.
However, the assumption of access to quantum oracles might be a bit idealized. Implementing these quantum oracles could be extremely challenging in real-world scenarios. The paper claims that any classically computable policy can be converted into a quantum-evaluatable unitary operation, but it doesn't fully address the practical difficulties and overheads associated with this conversion.
Methods And Evaluation Criteria: The proposed methods make sense for the problem at hand. Using quantum oracles to sample from the environment and initial state distribution and constructing quantum-compatible estimators for policy gradients are innovative ways to apply quantum computing to reinforcement learning. The evaluation criteria, mainly based on sample complexity and convergence analysis, are appropriate for measuring the algorithm's performance in the context of policy optimization.
Theoretical Claims: I have checked the correctness of the proofs for the theoretical claims. The proofs of lemmas and theorems are generally well-structured. For example, in the proof of Theorem 1, the authors compare the truncated estimators with their infinite-horizon counterparts to analyze the bias introduced by truncation. The use of Jensen's inequality and other mathematical techniques is appropriate.
However, in some of the proofs, the steps could be made more explicit. For instance, in the proof of Lemma 6, the bounding of some terms might be a bit difficult to follow for readers who are not extremely familiar with the specific notations and concepts in the paper. Some additional explanations about the inequalities used and how they are derived would be beneficial.
Experimental Designs Or Analyses: Since the paper is mainly theoretical, there are no traditional experimental designs in the sense of running experiments on physical quantum systems. The analysis is mainly based on theoretical derivations of sample complexity and convergence. The soundness of these analyses depends on the validity of the assumptions made, such as the smoothness and Lipschitz continuity of the score function. If possible, some numerical simulations on quantum simulators to demonstrate the practical performance of the QNPG algorithm would strengthen the paper. Also, sensitivity analyses of the parameters in the algorithm (e.g., the discount factor γ, the learning rate η) could be included to show how they affect the performance.
Supplementary Material: I have reviewed the supplementary material, specifically the proofs in Appendices A-F. The supplementary material provides detailed derivations and explanations that are crucial for understanding the main paper. For example, Appendix A shows the construction of the unitary U_P(τ_N ) and the query complexity of quantum NPG estimators, which helps to clarify the implementation details of the proposed algorithm.
Relation To Broader Scientific Literature: In the area of quantum computing, the paper builds on the concept of quantum mean estimation, which has shown quadratic improvement over classical methods. In reinforcement learning, it extends the classical Natural Policy Gradient algorithm to the quantum domain. Compared to previous works, this paper is the first to coherently embed the entire NPG into a quantum state using standard RL environment oracles. It also demonstrates quantum speedups for parameterized model-free infinite-horizon MDPs, while most existing QRL works are limited to tabular setups.
Essential References Not Discussed: There are no obvious essential references that are not discussed in the paper. The authors have cited a comprehensive set of related works in the fields of quantum computing, reinforcement learning, and quantum reinforcement learning.
Other Strengths And Weaknesses: Strengths
1: It introduces a deterministic gradient estimation in quantum RL, offering a new way to use quantum computing in RL, which is different from traditional methods.
2: Conducts thorough convergence, sample complexity, and bias-variance analyses.
3: It has an extensive literature review that effectively positions the research within the existing knowledge in quantum computing and RL.
4: The QNPG algorithm may enhance RL-based decision-making in various fields, like robotics and finance, due to its improved sample complexity.
5: It integrates quantum computing concepts well, leveraging quantum features through oracles and exploring a new area of quantum-enhanced RL.
Weaknesses
1: Assumes accessible quantum oracles without discussing real-world implementation challenges.
2: Some sections, especially those with proofs and complex math, are hard to follow. Simplifying notations and adding explanations is needed.
3: Mainly compares with classical algorithms, lacking sufficient comparison with modern quantum RL algorithms to show relative performance.
4: Lacks experimental validation
Other Comments Or Suggestions: Some symbols in the Markov Decision Process (MDP) and in quantum oracles are confusing. They may be unified or require more explanations.
Questions For Authors: 1. Given quantum oracle implementation challenges, how can the QNPG algorithm be made more practical in real-world scenarios?
2. In convergence analysis, score function assumptions may not hold for all policies. How would the QNPG algorithm performance change if so?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback and constructive comments. We address your concerns and questions clearly below:
## **Response to Weaknesses 1, 4, and Question 1:**
**Practicality of Quantum Oracle Implementations and Experimental Validation**:
We acknowledge the reviewer’s valid concerns about the real-world implementation of quantum oracles and experimental validation. Our current work is primarily theoretical, and extensive numerical experiments and implementations are planned as future work. Importantly, the quantum oracles required by our algorithm can be implemented with polynomial cost on quantum computers, as shown in [1]-[3], posing no conceptual barriers. However, classically simulating these oracles is NP-hard. We can of course simulate at a very small scale, but that will not help validate the speedup in a larger example. Future studies will aim to implement and validate our methods on quantum simulators or quantum hardware.
## **Response to Weakness 2:**
Given an additional page in the final version, we will work on adding explanations. Further, we appreciate the reviewer’s suggestion regarding the complexity of Lemma 6's proof. While we have provided detailed explanations and derivations for each step of the proof in the Appendix, we agree that further clarification could help readers who are unfamiliar with the specific concepts involved. To clarify, we summarize the high-level idea of the proof as follows, which we will incorporate into the final draft:
"The proof of Lemma 6 proceeds by first expressing the error between the estimated gradient and the ideal gradient as a recursive update from the inner-loop iteration. It then demonstrates that each update contracts the error exponentially, meaning the error shrinks by a fixed multiplicative factor with each iteration. Subsequently, the proof carefully bounds the additional errors introduced by truncation bias and estimator variance, leading to explicit residual terms. Finally, these contraction and residual terms are combined through a recursive argument to establish the overall error guarantees."
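The contraction-plus-residual argument sketched in the quoted summary can be illustrated with a scalar recursion $e_{k+1} \le \rho e_k + r$, where $\rho < 1$ is the contraction factor and $r$ collects the truncation bias and estimator variance. The numbers below are illustrative, not from the paper.

```python
rho, r = 0.5, 0.01  # illustrative contraction factor and per-step residual
e = 1.0             # initial error e_0

for _ in range(30):
    e = rho * e + r  # one inner-loop update: contract, then add residual

# Unrolled bound: e_K <= rho**K * e_0 + r / (1 - rho)
assert e <= rho ** 30 * 1.0 + r / (1 - rho) + 1e-12
```

The error contracts exponentially toward a floor of $r/(1-\rho)$ set by the residual terms, matching the structure of the recursive error bounding described above.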
If you can provide some specific comments for confusing symbols, we would appreciate that so that we can revise them in the final version.
## **Response to Weakness 3:**
**Comparison with Existing Quantum RL Algorithms**:
We clarify that, to our knowledge, there are currently no quantum reinforcement learning algorithms with provable guarantees for general parameterization setting in the existing literature. Thus, our approach introduces significant novelty in providing theoretical foundations and complexity improvements as compared to classical and prior quantum algorithms.
## **Response to Question 2:**
**Validity of Score Function Assumptions**:
As we stated after the assumption, Lipschitz continuity and smoothness of the score function are common assumptions in the reinforcement learning literature, and they can indeed be verified for commonly used parameterized policies, such as softmax policies [4] and Gaussian policies, where the action is modeled as a Gaussian distribution with parameters (mean and standard deviation) learned from the environment [5]. The assumption is necessary: without it, gradients could be unbounded.
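In the form standard in the policy-gradient literature (with generic constants $G$ and $M$, not necessarily the paper's notation), bounded and Lipschitz score functions read

$$\|\nabla_\theta \log \pi_\theta(a \mid s)\| \le G, \qquad \|\nabla_\theta \log \pi_{\theta_1}(a \mid s) - \nabla_\theta \log \pi_{\theta_2}(a \mid s)\| \le M \,\|\theta_1 - \theta_2\|$$

for all states $s$, actions $a$, and parameters $\theta, \theta_1, \theta_2$; these bounds are what keep the policy gradient finite.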
Thank you again for your thoughtful comments. We hope these clarifications enhance the clarity and impact of our contributions.
## **References**
[1] Implementing Grover oracles for quantum key search on AES and LowMC, 2019
[2] Automatic Generation of Grover Quantum Oracles for Arbitrary Data Structures, 2021
[3] Verified Compilation of Quantum Oracles, 2022
[4] On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift, 2021
[5] An Improved Analysis of (Variance-Reduced) Policy Gradient and Natural Policy Gradient Methods, 2020 | null | null | null | null | null | null |
Preference Adaptive and Sequential Text-to-Image Generation | Accept (poster) | Summary: The paper proposes a novel method for text-to-image (T2I) generation that adapts to user preferences over the course of multiple turns. The problem is framed as a Markov decision process (MDP) where the initial state is some prompt $p_0$ provided by the user, each subsequent state is given by the history of interactions up to the current turn (we assume that there are at most $H$ turns), the action is to propose a selection of $L$ text prompts for the next batch of images ($M$ images are generated for each prompt), and the transition is induced by the user's choice of his/her preferred prompt among the proposed selection. The reward is given by a _user utility function_, which quantifies the user's satisfaction with a given prompt based on the images that were generated with it.
The task now becomes to maximize the cumulative reward, i.e. the user utility, over the course of $H$ turns by proposing an appropriate set of prompts in each turn. To this end, an agent is trained with offline reinforcement learning (implicit Q-learning) to select the best $L$ prompts from a set of $L_C$ candidate prompts generated using a fixed multi-modal language model (Gemini 1.5 Flash). Training data is obtained using a baseline prompt generator that interacts with human annotators who, for each turn, label their preferred prompt among the presented slate. In order to quantify the _utility function_ encoded in these preference labels, a _user model_ is trained on the gathered data as well as preference datasets from the literature. The user model's objective is to predict which prompt was chosen as the preferred one by the human annotator. To bulk up the dataset, a simulated dataset of user interaction is generated by using the trained user model as a proxy for human annotators. Finally, the prompt selection agent is trained using offline RL where the reward signal is obtained from the user model.
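The interaction loop summarized above can be sketched as follows. This is an illustrative reading of the setup with hypothetical names (`State`, `rollout`, and the toy generator/agent/user stubs), not the paper's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class State:
    """Interaction history: the user's initial prompt plus the prompts chosen so far."""
    p0: str
    chosen: list = field(default_factory=list)

def rollout(p0, candidate_generator, select_slate, user_choice, H=5, L=2):
    """One episode: H turns of candidate generation, slate selection, and user choice."""
    state = State(p0)
    for _ in range(H):
        candidates = candidate_generator(state)     # L_C expansions (the LMM's role)
        slate = select_slate(state, candidates, L)  # agent's action: pick L prompts
        picked = user_choice(state, slate)          # transition: user's preferred prompt
        state.chosen.append(picked)
    return state

# Toy stand-ins for the LMM generator, the agent, and the user (illustrative only).
def toy_gen(state):
    return [f"{state.p0}, variation {i}" for i in range(5)]

def toy_agent(state, candidates, L):
    return candidates[:L]

def toy_user(state, slate):
    return slate[0]

final = rollout("a scenic railroad journey", toy_gen, toy_agent, toy_user, H=3)
```

In the paper's formulation the reward (user utility) is then computed from this trajectory; here the sketch only tracks the state evolution.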
Evaluation consists of three components: The user model's accuracy on human preference prediction, the user model's accuracy on user choice prediction, and the selection agent's ability to adapt the selected prompts to the user's preference. Positive results are reported in each case.
### Update after rebuttal
After reading the other reviews and the authors' rebuttals, and taking into account the additional experimental results, I believe that the paper proposes a good idea and executes it well. Of course there is still room for improvement: ablating the importance of the LLM and/or prompt as well as improving the clarity of the writing comes to mind. However, if the promised clarifications can be implemented, and given the significance of the investigated problem and the potential impact of releasing the full training data, I believe that the paper meets the acceptance threshold. To reflect this, I will increase my score from 3 (weak accept) to 4 (accept).
Claims And Evidence: The paper’s claims are generally well-supported, with a couple opportunities for improvement.
1. The fact that the proposed agent is trained with offline RL (as opposed to online) is worth highlighting more clearly, as this is a big distinction.
2. The paper claims that there are two separate models, one user _utility_ and one user _choice_ model (L239 ff.), but from what I can tell, there is only a single model being trained (the preference model). Besides the added temperature parameter, is the user choice model a different model?
3. The reported 99% confidence intervals in Figure 5 may be overly pessimistic. As presented, all confidence intervals overlap, making the results not statistically significant, which, if it’s the case, should be discussed. A different statistical test may offer a higher significance level.
4. Releasing the code and trained models in addition to the dataset would go a long way to bolster the paper’s claims, as it resolves any concerns about reproducibility stemming from somewhat limited details/clarity on the model architecture and training procedure.
Methods And Evaluation Criteria: The conducted experiments make sense for evaluating both the user model as well as the T2I agent, and the use of human raters for evaluation is a big strength.
1. Since we have a model-based way to measure user preference, it would have been nice to see an improvement over multiple turns in that regard as well. E.g. what is the average satisfaction/improvement according to the user model in each round, and does it saturate or keep increasing?
2. Given that there already exist methods for multi-turn text-to-image generation (e.g. von Rütte et al., 2023; Liu et al., 2024), some sort of comparison (qualitative or quantitative) would feel appropriate. What does the proposed method bring to the table that previous approaches didn’t?
Theoretical Claims: As far as I can tell, there are no major theoretical claims.
Experimental Designs Or Analyses: The conducted experiments seem sound besides what is already mentioned above.
Supplementary Material: I have skimmed all of the appendix, and have read the parts on training of the model in detail.
Relation To Broader Scientific Literature: The paper introduces an adaptive prompt-refinement agent that is trained using offline RL, which to the best of my knowledge has not been done before and is a valuable addition to the literature of both prompt-refinement and personalization through iterative feedback. In terms of performance, it is not clear how the proposed method stacks up to existing approaches, and by extension, whether it is worth the fairly considerable increase in complexity.
Besides the proposed model, the collection and release of iterative preference data is a valuable contribution to the field that may lay the foundation for future work.
Essential References Not Discussed: The background of preference-adaptive image generation is covered fairly well. Additionally, there exist methods for iterative prompt improvement based on human feedback [1] or based on automated scoring [2] that may be worth mentioning.
- [1] Martins et al., 2023. https://cdv.dei.uc.pt/wp-content/uploads/publications-cdv/martins2023metaprompter.pdf
- [2] Mañas et al., 2024. https://arxiv.org/abs/2403.17804
Other Strengths And Weaknesses: Strengths:
- The paper tackles an important problem in text-to-image generation in trying to reduce the burden of prompt engineering, continuing a line of work that consists of incorporating user feedback to adapt the generation process.
- The proposed approach to do this is novel and makes sense, leveraging value-based RL to train a user-adaptive prompting agent.
- Releasing the training data will allow future work to build and improve on the results of this work.
- Evaluating the trained model using human raters directly measures the downstream performance.
Weaknesses:
- Clarity of writing is the paper’s biggest weakness. This includes multiple separate aspects:
- Flow: It may be helpful to switch the order of sections to more accurately reflect the logical flow of the project, which is dataset creation -> user model training -> synthetic data generation -> offline RL training -> evaluation (consisting of user model evaluation -> agent evaluation). As is, the paper (to me) felt somewhat out-of-order, and it took me a couple of passes to wrap my head around it.
- The formalism introduced in Section 2.2 feels a bit contrived and unnecessarily sophisticated. This may obfuscate the actual problem formulations and confuse some readers, a clear and simple textual introduction may be more appropriate. For an MDP, we just need to know the state space, the action space, and the reward function. For the sake of Section 2.2, it may suffice to state this in natural language.
- Important details are deferred to the appendix (presumably due to lacking space caused by, IMO, unnecessary formalism), leading to obfuscation. The model architecture and training is a core contribution of the paper and should be covered properly in the main text. In fact, I would argue that all of the formalism introduced at the beginning of Sections 2 and 5 is fluff that should be replaced with the actual meat that is the design and training of the user model and prompting agent. This is what the paper is about, not whether the user actually does or does not choose the image that they prefer most (Section 5).
- The lack of clarity somewhat extends to the evaluation setup: How can accuracy on Pick-a-Pic and Spearman correlation on HPS be computed if the user type is considered given in the user model? How is the user type determined at inference time? Or, when evaluating PASTA, is the preferred prompt chosen by the human rater or by the user model?
- Minor questions regarding clarity are deferred to the “Questions For Authors” section.
- Not releasing the code and trained models is closely behind as the paper’s second biggest weakness. As-is, I would not feel confident in being able to reproduce the results presented in the paper, due to lacking details and/or vague/confusing writing (just one example: the main text uses $R$ to denote user utility, but the same quantity seems to be called $s$ in the appendix).
- The prompt generation/expansion module of the proposed agent is a significant factor that potentially has major impacts on performance and warrants dedicated evaluation/ablation, or at the very least discussion. For example, does the proposed way of sampling 5 prompts from 5 different categories actually help? And if so, by how much? Other things like the “system prompt” (which is chosen arbitrarily), model, output format, etc. can also plausibly affect the performance of the final agent in a significant way.
- Impact statement: The field of T2I modeling has a potentially big impact on the economics of visual content creation. Agentic content recommendation also has potential risks associated with it. Providing some perspective on this feels appropriate.
Given the relevance of the tackled problem and the strength of the contributions, I am inclined to recommend an accepting decision. However, considering the clarity of the writeup and the mentioned concerns regarding reproducibility, I cannot give a strong recommendation.
Other Comments Or Suggestions: Nits:
- Line 31, Column 2, also other occurrences: Gemini report is referenced as “Team et al.”, should be “Gemini Team”.
- Line 45, Column 2: missing period after “LMM”
- Section 2.1, RL: The value function, strictly speaking, measures the expected cumulative reward of a given state, not state and action (this would be the state-action value function, or Q function).
- Footnote 4 is of little value without further details about the contractors (e.g. country/method of employment country, compensation)
- Eq. 3: Why introduce a generic aggregator if we are anyways only going to use softmax?
Questions For Authors: 1. Will the model weights and training code be released?
2. How is the human-rated sequential data generated? Specifically, how are the prompt slates generated and, precisely, how is the slate selected from candidate prompts? It feels like this would already require an agent, which, presumably, we don’t have yet.
3. In what sense is the user model _not_ simply a reward/preference model? This term is much more common in the literature and very easy to understand, so if appropriate, it should be preferred.
4. What is the setup and sample size for human evaluation of the final agent?
5. L302, Col. 2: “Rewards provided after the final round” – what are the rewards? Based on a model or based on human raters?
6. Assumption 5.2: What does it mean for Equation 2 to be satisfied? Equation 2 seems to be an expression, not an equality.
7. What are baseline performances on Pick-a-Pic and HPS? Without even the scores from the original papers, the presented numbers are rather meaningless.
8. Figure 3: What is the shaded area? X- and y-labels are missing. Unclear what “cross-turn accuracy” is.
9. L304, Col. 2: What does “softmax sampling” mean? Do we sample a random index based on the softmax distribution? Or do we compute a weighted average over the scores, with the weights given by the softmax distribution?
10. L306, Col. 1: How does the temperature parameterization “ensure” that Assumption 5.2 is satisfied? Did we not make this assumption beforehand in order for this parameterization to be valid?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s helpful feedback, as well as their recognition of the significance and originality of our work and the value of sharing our distinctive dataset. Below, we respond to the reviewer’s comments.
**A: Model weights and training code release**
We are currently reviewing options for releasing our user model in addition to the already open-sourced complete datasets.
**B: “...proposed agent is trained with offline RL (as opposed to online) is worth highlighting more clearly..”**
This is a good point. Offline RL was chosen because: 1) it allows training directly on the rich human rater dataset, capturing authentic complexities, whereas online RL with real people is impractical and forces reliance on simplified user models; 2) offline RL is faster, avoiding the slow environment interactions and inference with large user models/LLMs required by online RL. We will highlight this.
**C: “...is the user choice model a different model?”, “the main text uses $R$ to denote user utility, but the same quantity seems to be called $s$ in the appendix”**
In the paper we assume an equivalence between choice and preference, and consistency in utility. A trained score function $s_\theta$ builds the utility function (Eq 3), scoring image slates (per prompt/user type). Applying a softmax over the utility scores (Eq 4) yields a user choice probability consistent with the utility and the preference model. A learned temperature adds flexibility while preserving these assumptions (scaling the entire vector of utility scores by a constant $\tau_\theta$ does not alter the ranking). Thus, both models share $s_\theta$, with the choice model derived from the utility output. $R$ refers to the overall utility concept, while $s$ is the specific learned score function implementing it.
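A small numerical sketch of the point above, with generic scores standing in for the trained $s_\theta$: dividing the utility scores by a temperature before the softmax changes how sharp the choice probabilities are, but not their ranking.

```python
import math

def softmax(scores, tau=1.0):
    """Choice probabilities from a slate's utility scores at temperature tau."""
    m = max(s / tau for s in scores)                 # subtract max for stability
    exps = [math.exp(s / tau - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

scores = [2.0, 0.5, 1.0]            # hypothetical slate utilities
p_sharp = softmax(scores, tau=0.5)  # low temperature: near-deterministic choice
p_flat = softmax(scores, tau=5.0)   # high temperature: near-uniform choice
```

Both distributions rank the three options identically; only the concentration of probability mass changes, which is why the learned temperature does not break the utility/preference consistency.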
**D: “The reported 99% confidence intervals in Figure 5 may be overly pessimistic”**
Confidence intervals do not overlap versus the base model, but some overlap exists when comparing our method trained on different data types. We will run another rater study assessing full-trajectory improvement (final vs. first turn), where we expect clearer results, as cumulative improvement is what we ultimately care about. We will share updates during the discussion period.
**E: “How can accuracy on Pick-a-Pic and Spearman correlation on HPS be computed if the user type is considered”**
The test sets for Pick-a-Pic and HPS are designed such that each test sample consists of samples from a specific annotator. To assess an annotator's posterior over user types, we sampled a subset (of size 3) from that annotator's full set of annotated samples, then calculated the accuracy or rank of the remaining held-out samples and averaged the results over the posterior. We found this comparable to using the full set for the posterior computation. This detail was omitted due to space and will be added in revision.
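The procedure described above can be sketched as follows. The numbers are placeholders, not the user model's outputs: `likelihoods` stands in for each type's probability of the 3 held-in choices, and `per_type_accuracy` for the held-out accuracy under each type.

```python
def type_posterior(prior, likelihoods):
    """Bayes posterior over discrete user types given a few observed choices.

    prior: P(type); likelihoods: per-type probability of the observed
    subset of choices (e.g., 3 held-in samples per annotator).
    """
    joint = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(joint)
    return [j / z for j in joint]

def expected_accuracy(posterior, per_type_accuracy):
    """Held-out accuracy averaged over the type posterior."""
    return sum(p * a for p, a in zip(posterior, per_type_accuracy))

# Two types; type 0 explains the three observed choices far better.
post = type_posterior([0.5, 0.5], [0.9 * 0.8 * 0.7, 0.2 * 0.3 * 0.4])
acc = expected_accuracy(post, [0.75, 0.55])
```

The posterior concentrates on the type that better explains the observed choices, and the reported metric is the accuracy averaged under that posterior.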
**F. Questions For Authors:**
1. See comment A.
2. For human data creation, we used Gemini 1.5 Flash as the candidate generator. Instead of a trained selector, a random selector proposed the slate, picking up to one prompt per category from the generator. This combination provides quality expansions and broad exploration, crucial for building an unambiguous offline latent MDP dataset.
3. Our user model consists of the utility component as well as the user choice model that translates this reward into user choices, as is standard in utility theory, choice modeling, econometrics, etc. Indeed, for a fixed choice model, the user model is a generalized (user-specific, set-based) preference model.
4. Our evaluation consists of 60 human raters, each experiment was evaluated over 200 trajectories. Exact metrics will be in the appendix.
5. The agent is trained with sparse rewards (only at the final, fifth round), determined by the learned user utility function. The human rater data was also labeled using this same learned utility function post-collection.
6. Eq 2 is the generalized preference probability of preferring image set $\ell^*$ over the other sets. This is not an equality but a description of the preference probability.
7. Since the user model's goal is mimicking users for data generation, our metrics focus on performance across user types (averaged over posterior). However, we will add original baseline scores from the Pick-a-Pic/HPS papers for comparison.
8. The y-axis represents the accuracy outlined in the subtitles, while the x-axis indicates the number of user types used in the user model. The shaded area is 95% CI. We will clarify this in the paper. Appendix D.3 details cross-turn accuracy: preference accuracy between selected images at consecutive test set turns.
9. Softmax sampling: The Eq 3 aggregation function can be max, average, softmax sampling, etc. Softmax sampling means selecting an element randomly based on the softmax distribution over scores. We chose it over max to add randomness reflecting user selection variability observed during data collection and sequential T2I.
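A toy sketch of the aggregation choice described in answer 9 (the scores are illustrative, not the model's): softmax sampling draws a random index from the softmax distribution over scores, as opposed to deterministically taking the max or computing a softmax-weighted average.

```python
import math
import random

def softmax_sample(scores, rng):
    """Sample one index with probability proportional to exp(score)."""
    m = max(scores)                                  # subtract max for stability
    weights = [math.exp(s - m) for s in scores]
    return rng.choices(range(len(scores)), weights=weights, k=1)[0]

rng = random.Random(0)
scores = [3.0, 1.0, 0.0]  # hypothetical per-image scores in a slate
draws = [softmax_sample(scores, rng) for _ in range(1000)]
```

Unlike `max`, the top-scored image is chosen most often but not always, mirroring the selection variability observed in the rater data.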
10. See comment C.
Main Contributions:
1. Formulate the challenge of multi-turn T2I generation as a sequential decision-making problem.
2. The use of IQL for offline RL and user category-based prompt generation is innovative, enhancing diversity and personalization.
3. Use LMM(Gemma) and the Image Diffusion Model (SD-XL) to construct the generation framework.
4. Collect sequential rater data and simulated data for multi-turn T2I generation, facilitating further research.
Claims And Evidence: 1.In section 7, “PASTA operates by directly adapting the input prompt, offering a model-agnostic approach to adaptive preference learning in T2I generation”. Could you provide more evidence for this? As different T2I models may perform different given the same prompt settings, also LMM’s capability could affect this prompt expansion process. The lack of experiments with other SOTA T2I generators (e.g., DALL-E 3) or LMMs weakens this claim. Additional validation across diverse models is necessary to substantiate model-agnostic adaptability.
2. The presented user model improves user satisfaction compared to the baseline (Gemini 1.5 Flash). Could you compare yours with other preference-alignment reward models (e.g., ImageReward), T2I refiners (G-Refine, Idea2Img), or more advanced LMMs (e.g., GPT-4o)?
Methods And Evaluation Criteria: 1. Strict word limits (12 for human, 10 for simulated) may not reflect real-world usage, limiting applicability.
2. What is the intention of not allowing users to see intermediate prompts?
Theoretical Claims: 1. What is the essence of recognizing a user's category or type in a multi-turn generation process instead of explicitly using textual interactions? For example, if a user wants realistic and scenic images of a railroad journey (Example #1 in Appendix G), why couldn't we lay out several generation styles to choose from and then generate more style-aligned images efficiently?
Experimental Designs Or Analyses: 1. Human evaluations rely on subjective turn-over-turn judgments (Better/Worse/Same), lacking quantitative, objective metrics such as image quality scores (e.g., FID, IS). This limits the assessment of PASTA's performance in terms of visual fidelity and semantic accuracy, which are critical for T2I tasks. More experiments on image generation benchmarks that evaluate both preference and objective metrics (e.g., DreamBench++) are needed to show that the framework generates images that significantly outperform the baseline on objective metrics.
2. Better/Worse/Same selections between turns could be strongly influenced by psychological factors: merely by sampling different images, users may select "Better" with some probability even when there is no significant prompt change between turns. The paper does not provide "blank" (control) experiment results to measure this bias.
Supplementary Material: 1. Is the “sample rater data” provided in the supplementary material the full version of your curated human rater data?
2. If so, could 30 initial prompts effectively reflect varied and personalized user requests in a real-life scenario?
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: None
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: I am willing to raise my score if the authors can address my concerns.
Questions For Authors: Illustrations (e.g., "An image of happiness/love," Figure 6) focus on abstract prompts, which may not be representative enough of typical T2I applications. How do these prompts align with real-world usage, and what is PASTA’s performance on different categories of prompts?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed review and hope that the following response addresses the reviewer’s concerns.
**A: "The lack of experiments with other SOTA T2I generators.."**
**Response:** To address this, we are running additional test-time experiments evaluating models our agent wasn't trained on. This includes a rater study using our method with Flux.1 and comparing our method to a GPT4.5-based agent. Results will be shared during the discussion period.
**B: "Could you compare your with other preference-alignment reward models?", "lacking quantitative and objective metrics"**
**Response:** We agree further analysis is helpful. We will add additional evaluation metrics to our experiments during the author-reviewer discussion phase (by next week) including: user model score, FID, and LPIPS.
**C: “Strict word limits..”**
**Response:** The constraint arises from the pretrained CLIP text encoders used in the user-score model, which are limited to 77 tokens. To stay below this threshold after five turns, we set the initial prompt to 12 words. Additionally, the text-to-image (T2I) model has a limited context capacity, restricting the level of detail that can be included. Despite this, the model still accommodates a variety of starting prompts that can evolve into detailed and rich expansions. Moreover, keeping initial prompts short shifts the focus toward feedback derived from user choices rather than critiques, which aligns more naturally with generalizing the standard single-turn preference setup. We intentionally minimize the influence of critiques to maintain this balance. That said, using improved encoders like LLM2CLIP is left for future exploration.
**D: “What is the intention of not allowing users to see intermediate prompts?”**
**Response:** PASTA refines prompts while preserving user intent. Users choose the prompt-image pair best matching their initial prompt, preserving its core elements. This keeps the sequence of generated images closely tied to the content of the starting prompt, reducing the likelihood of producing images that stray too far from the user's starting point (e.g., beginning with a prompt about a train and ending up with an image of a dog). We experimented with displaying intermediate prompts to raters but found that presenting these prompts, especially given their often complex nature, placed too great a cognitive load on raters. We also believe this could compromise the quality of feedback provided by raters. Prompt refinement and expansion are natural (and commonly used) interaction modes for users engaged in iterative image generation. Surfacing the prompts as suggestions along with the images is something to be explored, but might require more "controlled" prompt expansion to elicit proper responses. We will elaborate further in our paper.
**E: “What is the essence of recognizing user’s category or type in a multi-turn generation process instead of explicitly using textual interactions?”**
**Response:** PASTA doesn't identify user types explicitly; types are only used to simulate consistent preferences during synthetic data creation. The agent adapts to the user's preferences by leveraging the interaction history. Our method offers advantages over textual instructions because users are often unaware of the prompt possibilities provided by the agent, and selecting images is significantly more user-friendly than drafting instructions.
That said, PASTA can incorporate textual instructions by integrating them into the candidate generator’s prompt, allowing it to focus on specific categories or directions specified by the user. Our dataset includes rollouts (20%) with text critiques for potential model training. We kept the framework simple, leaving text integration for future work.
**F: “Better / Worse / Same selection results between different turns could be influenced a lot by psychological factors”**
**Response:** User feedback inconsistency due to psychological factors is expected. However, average human evaluations remain a valuable performance measure. Our results show the trained PASTA agent significantly outperforms a standard LLM.
**G: “Is the “sample rater data” provided in the supplementary material the full version of your curated human rater data?”**
**Response:** Our human rater data, collected with a random policy selector, consists of over 7k five-step rollouts (>500k images), covering hundreds of prompts and approximately 100 raters. The simulated data comprises over 30k rollouts (>2.5M images). The supplementary sample is small due to size limits; a link to the full dataset will be provided in the final version of the paper.
**H: “How do these prompts align with real-world usage..”**
**Response:** Abstract prompts help discern user type preferences during interactions better than typical prompts (e.g., 'dog running'), which often have universally preferred styles and biases. Supplementary materials include full rollouts for typical T2I prompts. | Summary: This paper introduces PASTA, a RL framework for interactive text-to-image (T2I) generation.
This method enables multi-turn collaboration between models and users to refine prompts/images iteratively to align the preferences of users.
A novel dataset of sequential user interactions is proposed, and a user simulator is trained via EM strategies to model diverse preference types. Evaluations show PASTA outperforms baseline models in human ratings.
The paper also compares training on combined real and simulated data against training on either dataset alone.
## update after rebuttal
The additional evaluation provided addressed most of the reviewer's concerns, so the reviewer chose to raise the rating to 3.
Claims And Evidence: Refer to the questions in "Experimental Designs Or Analyses".
Methods And Evaluation Criteria: Strength:
Prompt rewriting is essential for text-to-image generation. This paper introduces PASTA, the first RL-based multi-turn interactive text-to-image generation framework, which generates better prompts for T2I by progressively refining the generated results through dynamic prompt expansion.
This paper constructed the first dataset that encompasses multi-turn interactive behaviors of human evaluators.
Weakness:
This method's main drawback lies in the costly data collection process. The paper does not mention the scale of user annotations. If a different generative model were used, would the same annotations be required? The reviewer is concerned that the annotation scale might be too large, costly, and not reusable.
This paper does not analyze whether the synthetic data contains noise, the extent of such noise, or the overall quality of the data. Whether there exists a domain gap between single-turn and multi-turn preferences for models trained on single-turn preference datasets requires thorough analysis.
Generalizability of the method. It remains to be verified whether this approach would still be effective if applied to other LMMs (Large Multimodal Models) and visual generators. Additionally, the generalizability of the trained value model (in this case, Gemma) needs to be assessed.
Theoretical Claims: The theoretical claims are sound.
Experimental Designs Or Analyses: The article only conducted experiments on Gemini + SDXL, failing to demonstrate the generalizability of the method.
Why was the number of interaction turns set to five? According to Figure 5, the results continue to improve even in the fifth turn, necessitating a convergence analysis.
This paper should evaluate more metrics, in addition to Pick-a-Pic and HPS accuracy, other metrics such as FID (for assessing quality) and LPIPS (for measuring diversity) could also be evaluated.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: This paper holds significant implications for the application of reinforcement learning in the realm of text-to-image synthesis.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: No.
Other Comments Or Suggestions: Conduct experiments on a wider array of generative models.
Questions For Authors: Could the authors perform a manual or automated analysis of the synthetic data's quality?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are grateful for the reviewer’s constructive feedback and for recognizing our dataset contribution as the first T2I dataset for the multi-turn interaction setting. Below we address the concerns raised by the reviewer:
**Comment 1: "This paper does not mention the scale of user annotations."**
**Response:** Thank you for highlighting this oversight. Our human rater data consists of over 7,000 five-step rollouts (totaling more than 500,000 images), annotated by approximately 100 human raters using a random candidate selector. The simulated user data includes over 30,000 rollouts (exceeding 2.5 million images). We will add this information to the paper.
**Comment 2: "The reviewers are concerned that the annotation scale might be too large, costly and not reusable."**
**Response:** To encourage research in this novel setting, we are making all datasets used during training publicly available; we hope this will benefit researchers who lack the resources to conduct such studies. Furthermore, we have proposed that exploiting trained user simulators, which are independent of large language models and easily trainable with the provided dataset, is an effective way to mitigate data scarcity. Finally, we are sharing the synthetic dataset created by our user simulator to further help researchers address these challenges.
**Comment 3: “Generalizability of the method.”**
**Response:** Thank you for raising this point. To address this we are now running additional experiments with different models during test-time (i.e., evaluating using models that our agent was not trained on). Specifically, we are conducting an additional rater study using our method with the Flux.1 T2I model. We are also running a study to compare our method to a GPT4.5-based model as an agent. We will comment here with these new results during the discussion period.
**Comment 4: Analyze the synthetic data.**
**Response:** Our approach to analyzing the synthetic data involves assessing the user model, which represents the sole distinction between the real human data and the data produced by the simulated user model. Visually, we can observe a clear distinction between user types by either scoring the top five highest-rated images in the HPS test set or by reviewing the rollouts of PASTA against the user model, both of which highlight distinct preferences across different user types. For a numerical perspective, we refer to Section 6.2, particularly Figure 3, where our user model achieves a prediction accuracy of 70% compared to real human raters, suggesting that the synthetic dataset effectively captures essential aspects of real user interactions. However, we agree with the reviewer’s perspective that further analysis is necessary. As such, we will add additional evaluation metrics to our experiments, including: user model score, FID, and LPIPS. We will comment here with these new results during the discussion period.
**Comment 5: “...domain gap between single-turn and multi-turn preferences..”.**
**Response:** Indeed, there is a distributional shift between single-turn and multi-turn preferences. However, our user model parameterization, encompassing both the user choice model and utility, relies on a single-turn score function $s_\theta$. Consequently, we can utilize single-turn datasets like Pick-a-Pic to augment the data available for training the user model. That said, due to the distributional disparity and our emphasis on the multi-turn context, these single-turn datasets are employed solely in the initial phase of the user model training to provide a stronger starting point for the subsequent phase training with the human rater multi-turn dataset, which is our primary focus. This point may not be entirely transparent in our exposition: we will elaborate further in our discussion of the user model training process in the paper.
**Comment 6: “Why was the number of interaction turns set to five?”**
**Response:** We settled on five steps to balance between providing a challenging enough interaction length and meeting the real-time expectation of users for relatively swift engagement with the agent. Exploring the (variable) optimal rollout lengths is something we have deferred to future research, given that the PASTA framework includes numerous components that warrant deeper study. | Summary: This paper introduces PASTA, a novel reinforcement learning framework for interactive text-to-image generation. It addresses the challenge of capturing precise user intent through iterative prompt expansion. The core ideas involve using a large multimodal language model (LMM) for prompt candidate generation, a value-based RL agent for prompt selection, and a user model trained on both human and simulated data to guide the agent. The paper also contributes a new dataset of sequential user preferences.
Claims And Evidence: The claims are generally well-supported by experimental evidence. The human evaluation demonstrates a significant improvement in user satisfaction compared to baseline methods, particularly when PASTA is trained on a combination of real and synthetic data. However, the performance gains from the user simulation require further scrutiny.
Methods And Evaluation Criteria: The proposed methods are well-suited for the problem of interactive text-to-image generation. The decomposition of the problem into candidate prompt generation (using a large multimodal language model) and candidate selection (using value-based reinforcement learning) is a sensible approach. The LMM allows for exploration of a diverse set of prompt expansions, while the RL agent learns to choose the most promising prompts based on user feedback. The implicit Q-learning (IQL) algorithm is a reasonable choice for offline RL in this context, given its ability to handle overestimation issues.
The evaluation criteria are also appropriate. The human evaluation, where raters judge the improvement of images across turns, directly measures the effectiveness of the interactive refinement process. The use of both absolute satisfaction scores and turn-over-turn comparisons provides a comprehensive assessment of user experience. The simulated user experiments offer a valuable means to analyze the impact of different training data regimes and reward settings.
However, the reliance on specific, potentially proprietary, pre-trained models might raise concerns about reproducibility and accessibility. It would be beneficial to discuss the potential impact of model choice on the overall performance.
Theoretical Claims: No Obvious Problem.
Experimental Designs Or Analyses: The experimental design appears sound, with appropriate baselines and evaluation metrics. The analysis of the impact of different training data regimes is insightful. More discussion is needed on limitations on rater bias and diversity.
Supplementary Material: NAN
Relation To Broader Scientific Literature: This paper builds upon recent advances in interactive and preference-adaptive image generation, particularly those leveraging LLMs and RL. It extends existing methods by framing multi-turn image generation as a sequential decision-making problem, allowing for iterative refinement towards a desired visual outcome.
Essential References Not Discussed: NAN
Other Strengths And Weaknesses: Strengths: The paper tackles an important problem with a novel and well-designed framework. The dataset contribution is valuable. The use of both human and simulated data is clever.
Weaknesses: The gains from the user simulation require further justification. The dependency on particular pre-trained models (Gemini, Gemma, SDXL) might limit the generality of the results.
Other Comments Or Suggestions: Consider adding a section discussing the limitations of the user simulation and potential biases it may introduce.
Questions For Authors: Could you provide a more detailed analysis of the specific dynamics captured by the user simulator that contribute to the performance gains observed in the simulated user experiments? (Understanding this better would strengthen the justification for using simulated data).
What are the computational costs for implementing and training PASTA?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your positive feedback on the novelty of our approach and our dataset contribution for the community. Below we address the concerns raised by the reviewer:
**Comment 1: "...the performance gains from the user simulation require further scrutiny."**
**Response:** The user simulator is used for two reasons: (1) generating synthetic data that is user-coherent in order to extend the real human rater dataset; and (2) labeling the human rater dataset. Our experiment with real users shows that adding additional synthetic data improves the model’s performance, indicating that our generated dataset is beneficial for training in real-world scenarios. In addition, we examine the ability of our user model to predict real users' choices with 70% accuracy (Fig. 3), which further indicates that our user simulator does capture some key dynamics of real user interactions. We will add a plot showing the user model score over time in order to add an additional performance evaluation metric. We will comment here with these new results during the discussion period.
**Comment 2: "The dependency on particular pre-trained models (Gemini, Gemma, SDXL) might limit the generality of the results."**
**Response:** Thank you for raising this point. In order to address it we are running additional experiments with different models during test-time (i.e., evaluating using models that our agent was not trained on). Specifically, we are running an additional rater study using our method with the Flux.1 T2I model. We will also compare our method to the use of the GPT4.5 model as an agent. We will comment here with these new results during the discussion period. | null | null | null | null | null | null |
Expected Variational Inequalities | Accept (oral) | Summary: This paper considers a relaxation of the variational inequality problem (VIP), the $\phi$-Expected Variational Inequality problem (EVI) (see Def. 1.2). This definition relaxes the 'classical' definition of the VIP in three ways: (1) the objective is to find a distribution $\mu$ such that the inequality holds in expectation, (2) the comparator is not wrt any $x \in X$, but with respect to $\phi(x)$ where $\phi:X \rightarrow X$ is a map (which is assumed to be linear for most results) and (3) the inequality is relaxed to hold wrt $-\epsilon$.
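Based on the review's description (a sketch in standard VI notation; the exact sign conventions and quantifiers are my reading of the summary, not copied from the paper), the relaxation can be written as:

```latex
% Classical VI: find a single point x^* \in X such that
\langle F(x^*),\, x - x^* \rangle \;\ge\; 0 \qquad \text{for all } x \in X.
% \Phi-EVI: find a distribution \mu over X such that, for every map \phi \in \Phi,
\mathbb{E}_{x \sim \mu}\big[\langle F(x),\, \phi(x) - x \rangle\big] \;\ge\; -\epsilon .
```

All three relaxations described above are visible here: the expectation over $\mu$, the comparator $\phi(x)$ in place of an arbitrary point $x$, and the $-\epsilon$ slack.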
The authors provide complexity results (PPAD hardness for non-linear $\phi$), a polynomial bound if $\phi$ is linear (Thm 4.1), a regret bound and give application examples for games. Further, they give some conditions under which an $\epsilon$ solution for a EVI is also an $\epsilon$ solution for a VI.
Claims And Evidence: This is theoretical work, all claims are supported by proofs.
Methods And Evaluation Criteria: This is theoretical work, thus benchmarks etc do not apply.
Theoretical Claims: I did check some, but not all.
Experimental Designs Or Analyses: No experiments are provided. (but this is fine since this is theoretical work)
Supplementary Material: I did review some parts of the supplementary material, i.e., some proofs (see above).
Relation To Broader Scientific Literature: To the best of my understanding, the paper follows a similar idea as in Cai et al. 2024 and (to some extent) generalizes this to VIP. Further, it falls into the broad area of 'local (cc) equilibria/first order optimality' for non-concave games and approaches this from the VIP perspective.
Essential References Not Discussed: To the best of my knowledge, all relevant references are discussed.
Other Strengths And Weaknesses: The paper is well written. I like the versatility of the proofs and results.
I am a bit unsure about the relevance of the solution concept of $\phi$-EVI. To the best of my understanding, if $\Phi$ contains all functions mapping into $X$, EVIs would correspond to VIs if $\mu$ is restricted to Dirac measures. However, the class of mappings $\Phi$ is quite restrictive. Thus I am not yet convinced of the relevance of the solution concept, but I may also miss something. Thus I am happy to revise my evaluation.
Also see my questions.
Other Comments Or Suggestions: none.
Questions For Authors: I am a bit confused regarding the interpretation of the solution for $\phi$-EVIs. For example, in the setting of games: Is there a clear relation to the $\phi$-equilibria considered in Cai et al.?
Is there a similar solution concept in optimization to which you can relate $\phi$-EVIs (as VIs relate to convex optimization)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and service.
*"To the best of my understanding, if $\Phi$ contains all functions [...] EVIs would correspond to VIs if $\mu$ is restricted to Dirac measures."*
We clarify that this equivalence holds without any restriction on $\mu$; it doesn’t have to be a Dirac measure.
*"Thus I am not yet convinced of the relevance of the solution concept"*
Our main positive result captures the setting where $\Phi$ contains all linear functions, which we argue is a strong solution concept. Even in the special case of normal-form games, it is tighter (that is, closer to Nash equilibria) than the usual notion of a correlated equilibrium (see Figure 1). Moreover, our hardness results preclude efficient computation when $\Phi$ contains even quadratic functions; as a result, there is a fundamental sense in which our solution concept is the best one can hope for using efficient algorithms.
*"In the setting of games: Is there a clear relation to the $\Phi$-equilibria considered in Cai et al.?"*
In the special case of nonconcave games, our solution captures the one considered by Cai et al., as we show and discuss in Appendix G. In fact, when $\Phi$ contains all linear functions, our solution concept is tighter (see Figure 1). We also argue that our definition is more natural and simple, while also applying to any variational inequality problem. Besides those points, we provide significant algorithmic improvements compared to Cai et al., as we point out in Appendix G.
*"Is there a similar solution concept in optimization to which you can relate?"*
To the best of our knowledge, there is no existing solution concept in optimization that directly relates to $\Phi$-EVIs, except in more specific classes of problems (see Section 6). Given how natural this concept is, we believe that introducing it to the optimization community is an important contribution. | Summary: The authors relax VI to a new notion of EVI. They then take a game-theoretic approach to design polynomial-time approximation algorithms for EVIs.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: N/A
Supplementary Material: No.
Relation To Broader Scientific Literature: VIs are a general way to capture many important problems across different fields. This paper addresses the complexity issues of VI problems by introducing a new notion that retains the key flavor of VI while making them computationally tractable. As such this work has the potential to lead to further insights to make VI problems more efficiently solvable, and out of the box gives efficient methods that could be promising heuristics to known VI problems. Overall, I believe this paper has the potential to have significant impact on the literature.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper seems well written and not only introduces a reasonable relaxation of a known useful framework, but also presents provable approximation algorithms for the relaxed problem. The only weakness I can see is whether EVI is too great of a relaxation. Considering it brings a known hard problem down to polynomial time complexity, there must be some loss in modeling power or guarantees that result. Either way, I believe the insights are still meaningful even if EVI is too much of a relaxation.
Other Comments Or Suggestions: It would be helpful if the authors could compare the strength of VI to EVI in terms of modeling power. For example, are there still many VI problems for which EVI suffices? Furthermore, are there any problems where EVI is actually more natural of a model than VI?
Questions For Authors: See my suggestions above.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and service.
*“It would be helpful if the authors could compare the strength of VI to EVI in terms of modeling power. For example, are there still many VI problems for which EVI suffices? Furthermore, are there any problems where EVI is actually more natural of a model than VI?”*
This is a great question. If we look at the special case of games, $\Phi$-EVIs (which subsume correlated equilibria) are in many ways more natural than VIs (that is, Nash equilibria). Indeed, they often lead to better outcomes in terms of welfare as they enable correlation between the players, which in many applications is more reasonable than assuming that players are acting independently (a traffic light is a common example of a correlation device). This shows that there are problems in which EVIs are, in many ways, more natural than VIs. Furthermore, given how expressive games are from an optimization standpoint, we argue that this already makes a compelling case for EVIs compared to VIs. That being said, we believe that further understanding the power of EVIs and how they compare against VIs is an important question for follow-up work. | Summary: The paper studies the Variational Inequality (VI) problem, a well-known problem used in various optimisation problems, often in settings where equilibria need to be computed. The general problem is intractable, so there is a wide literature identifying subclasses of the VI problem for which it is tractable.
This paper proposes an alternative approach: relaxing the definition of the solution to the problem. Instead of requiring a single point in a convex space that should satisfy the central variational inequality, the *expected* variational inequality (EVI) problem asks to find a *distribution* of points that *in expectation* satisfy the inequality.
This problem is parametrised by the size of the set of deviations ($\Phi$) from the convex space for which the inequality must hold. The larger the set of deviations, the smaller the set of distributions satisfying the inequality will be.
The paper considers different types of maps $\Phi$, including constant and linear maps. A key contribution is a complexity result for when $\Phi$ contains only linear maps. In this case, the $\Phi$-EVI problem can be solved in time polynomial in the dimension $d$ of the problem space and the logarithm of the inverse of the precision parameter $\epsilon$. Since this result relies on the ellipsoid against hope (EAH) algorithm, which is slow in practice, the paper proposes an alternative algorithm that improves upon the state of the art in terms of per-iteration complexity.
Furthermore, the paper presents equivalence and performance results, and identifies applications of (constant-$\Phi$) EVIs.
Claims And Evidence: I honestly do not have the background to be an accurate judge of this. I am unfamiliar with many of the concepts discussed in this paper, and Wikipedia can only get you so far. I have no obvious reasons to doubt any of the claims that are made.
Methods And Evaluation Criteria: The main claims are theoretical and are supported by proofs in the appendix. That seems appropriate to me.
Theoretical Claims: I read through some of the proofs, but do not have the expertise to confidently assess their correctness.
Experimental Designs Or Analyses: N/A
Supplementary Material: I read Appendix A and B, skimmed through Appendix C, and did not read beyond that.
Relation To Broader Scientific Literature: To a layperson like me, at least, it seems like the paper positions the main contributions well in relation to the existing literature. In doing so, it covers applications and theoretical results spanning several decades. The appendix contains more details on the relationship between EVIs and different game theory concepts.
Essential References Not Discussed: I do not consider myself knowledgable enough to judge this.
Other Strengths And Weaknesses: I really appreciate the efforts to make the paper somewhat standalone by including a recap of the ellipsoids against hope (EAH) algorithm. I also find that the paper is well-written in terms of structure and signposting. The writing is clean, with hardly any typos. The paragraph labelling and subsection labelling are inconsistent and confusing to me.
The main research direction and contribution seem valuable and interesting to me. I also like the efforts made by the authors to connect their results to different fields, and their discussion of their finding's implications for game-theoretic questions.
I personally dislike the writing style. Take the second sentence of the introduction. It spans 14 lines: almost a quarter page. I would've preferred that to be split up in a few sentences, organised by application category or something. Elsewhere in the paper there's also needlessly lavish prose. Why write "in lieu of" when "instead of" is right there? Going for the simpler option would even have saved the authors an `{\it }`. I personally believe that scientific writing should be as accessible as possible. Hence, I wish that the authors would have kept the language more simple.
Other Comments Or Suggestions: - The bibliography is a bit sloppy and inconsistent. For some entries the full name of the venue is given, for others just an abbreviation. Many entries seem to be missing page numbers.
Questions For Authors: I do not feel like I have the expertise to formulate questions whose answers would make a difference in my judgement.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and service.
We are happy to adjust the writing style and follow any other suggestions for the revision. Regarding the bibliography, we follow the convention that page numbers are omitted for conference publications, which explains why some references have page numbers and others don’t. We are happy to change that if the reviewer finds it inconsistent. | null | null | null | null | null | null | null | null |
From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information? | Accept (poster) | Summary: This paper introduces AR-Bench, a benchmark designed to test if large language models can actively gather missing information by asking the right questions. The benchmark includes tasks like detective cases, lateral thinking puzzles, and a number-guessing game, pushing models into multi-turn interactions rather than relying on a single pass of reasoning.
Claims And Evidence: The findings indicate that even state-of-the-art models often ask vague or repetitive questions, and advanced tweaks yield only modest improvements. Overall, the work underscores the gap between passive reasoning and the more dynamic, active reasoning required for complex problem solving, suggesting that new training strategies are needed. But this is left for future work.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-motivated for tackling active reasoning. The benchmark’s design captures the challenges of acquiring missing information through multi-turn dialogues. That said, the reliance on LLM-based judges, whose reliability is somewhat limited (84% at best), does inject some noise into the evaluation process, potentially affecting direct comparisons.
Theoretical Claims: The paper does not include any formal proofs.
Experimental Designs Or Analyses: The benchmark design is fine. However, the reliance on an LLM-based judge is unfortunate. Using GPT-4o is not necessarily reproducible, and Llama 3.1 405B is difficult and costly to host. This will likely lower the impact of the benchmark. It would have been better to design the task such that a simpler method could be used to provide information to the player, i.e. the LLM that is being tested.
Supplementary Material: no
Relation To Broader Scientific Literature: I'm not too familiar with the relevant active learning literature but the authors seemed to have made a decent job in summarising recent work.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: The paper's motivation is great, and the results present the limitations of current LLMs. The paper is clear with detailed experimental designs and comprehensive ablation studies.
That said, it's unfortunate that the NPC role requires a large LLM and yet only achieves about 84% accuracy.
Other Comments Or Suggestions: no further comments
Questions For Authors: Why was the reliability of LLMs in generating correct responses to given questions not tested on the tasks at hand but instead done on Turtlebenchmark?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the valuable feedback. To solve these concerns and questions, we provide point-to-point responses as follows.
> Q1. The reliability and necessity of using LLM as a judge, and evaluating the judge on Turtlebenchmark.
**Reply:** Here, we (1) explain why we use LLM judges instead of human judges, (2) clarify our choice of TurtleBenchmark for evaluation, and (3) present further evidence supporting the reliability of LLM judges on AR-Bench.
**(1) We use LLMs as judges because model-based evaluation is essential for active reasoning.** Human evaluation in multi-turn interactions is prohibitively expensive, so we employ LLMs as a cost-effective alternative. Recent studies [1, 2] support the effectiveness of LLMs as judges, showing that they can reliably simulate human judgment. This approach enables large-scale evaluation, which is crucial for studying active reasoning.
**(2) We use TurtleBenchmark to evaluate LLM judges because it provides human-verified annotations from real-world games.** TurtleBenchmark includes 4,448 high-quality samples, making it a reliable dataset for testing LLM-as-a-judge capabilities. However, it does not support process-level evaluation, such as assessing the quality of the questions asked by LLM players. For this reason, we use AR-Bench to evaluate active reasoning abilities and TurtleBenchmark to assess the accuracy of LLMs as judges.
**(3) We run additional experiments with AR-Bench to further verify the reliability of LLM judges.** Here, we collect 200 questions from the SP tasks of AR-Bench and manually annotate their answers. Then, we evaluate various LLM judges on this collected dataset. The results, shown in the table below, align closely with findings from TurtleBenchmark. Notably, Llama3.1-405B achieved 96% accuracy, demonstrating reliable judgment on AR-Bench.
|Model|Llama3.1-70B|Llama3.1-405B|GPT-4o-mini|GPT-4o|QwQ-32B|DeepSeek-R1|DeepSeek-V3|
|:-|:-|:-|:-|:-|:-|:-|:-|
|Judge Accuracy|89.5|96|83.5|89.5|92|91.5|92.5|
[1] LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods. In arXiv, 2024.
[2] Let’s verify step by step. In ICLR, 2024.
> Q2. The potential of designing a simpler method to provide information to the player.
**Reply:** We would like to (1) discuss the potential of using smaller LLMs as the judges and (2) explore the possibility of using rule-based functions as judges in certain scenarios.
**(1) Smaller LLMs can serve effectively as judges.** As shown in Q1, smaller and more efficient models like QwQ-32B achieve performance comparable to larger models such as Llama3.1-405B, while being significantly more cost-effective. Moreover, based on TogetherAI pricing, the API cost of Llama3.1-405B is 3.50 USD per million tokens, whereas QwQ-32B costs only 1.20 USD per million tokens. These results support the feasibility of using advanced smaller LLMs as judges.
**(2) Rule-based functions can effectively act as judges when the action space is limited.** For example, in tasks like GN, where the possible actions are predefined (e.g., 5,040 unique four-digit numbers), rule-based judgment is feasible. Although this constrains the player’s actions to a fixed set, it eliminates the need for an LLM-based judge. In contrast, tasks like SP and DC encourage open-ended question generation, making it necessary to use an LLM judge to handle the broader, more flexible action space. Overall, rule-based evaluation is a promising direction for assessing active reasoning tasks. We believe it’s worth further exploration, especially to address the challenges posed by open-ended scenarios.
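As an illustration, a minimal rule-based judge for a Bulls-and-Cows-style number-guessing game could look like the sketch below. This is only a sketch: the exact feedback format used in AR-Bench's GN task, and the `judge` helper itself, are assumptions for illustration, not the benchmark's actual protocol.

```python
from itertools import permutations

def judge(secret: str, guess: str) -> tuple[int, int]:
    """Return (bulls, cows): bulls are digits correct in both value
    and position; cows are digits present in the secret but misplaced.
    Assumes both strings consist of four distinct digits."""
    bulls = sum(s == g for s, g in zip(secret, guess))
    cows = len(set(secret) & set(guess)) - bulls
    return bulls, cows

# Four-digit guesses with distinct digits (leading zero allowed)
# form an action space of exactly 5,040 candidates, matching the
# count mentioned above.
candidates = ["".join(p) for p in permutations("0123456789", 4)]
print(len(candidates))        # -> 5040
print(judge("1234", "1243"))  # -> (2, 2)
```

Because the feedback is a pure function of the guess and the hidden answer, no LLM call is needed for this task type, which avoids judge noise entirely.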
> Q3. Discuss the future work on new training strategies.
**Reply:** We would like to answer this question from the following two perspectives.
**(1) Data perspective. It is feasible to create small-scale, high-quality datasets for fine-tuning large language models in active reasoning tasks.** As with s1 and LIMO, these datasets should capture the detailed thinking and interaction process involved in active reasoning—revealing how to ask effective questions and ultimately make a final decision. Because current LLMs often struggle with generating high-quality questions, it may be necessary to curate such data through human annotation, enabling models to learn directly from human question-asking strategies.
**(2) Algorithm perspective. We can leverage reinforcement learning techniques with outcome-based rewards, drawing inspiration from methods like PPO and GRPO.** Instead of assigning a reward after every question, the model receives a significant reward only if it arrives at the correct solution, thereby eliminating the need to manually label the quality of each individual question. This reduces annotation costs while naturally promoting planning, exploration, and self-reflection in the model’s questioning process.
We will include the above clarifications and discussions in the revision. Any further comments and discussions are welcome! Thank you!
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal.
How do you explain the discrepancy between the TurtleBenchmark judge accuracy and the AR-Bench judge accuracy?
How did you select the 200 AR-Bench questions?
GPT-4o was recently updated. How does that affect the reproducibility of your results?
---
Reply to Comment 1.1.1:
Comment: Thanks for the feedback! We provide point-by-point responses to these further questions.
> Q1. How do you explain the discrepancy between the TurtleBenchmark judge accuracy and the AR-Bench judge accuracy?
**Reply**: Thanks for this insightful question! The performance discrepancy between TurtleBenchmark and AR-Bench judges stems primarily from **the different nature of questions posed by humans versus models.**
**In TurtleBenchmark, human-generated questions tend to be more diverse and complex.** Upon careful examination of this benchmark, we find that some questions in TurtleBenchmark are ambiguous and challenging to judge directly based on the facts in the ground truth. Effectively answering these questions requires a comprehensive understanding of the context and careful consideration of how to respond without being misleading.
**In contrast, questions collected from AR-Bench are generated by models**, which tend to be more closely aligned with the facts in the ground truth and can be answered more straightforwardly without concerns about potential misinterpretation.
We provide some examples of the asked questions in these two benchmarks as follows.
Questions in TurtleBenchmark:
> Puzzle: A man's body is found in the desert, holding half a matchstick. His luggage is found nearby. It's known that he died from a fall. Please deduce what happened.
Truth: He was on a hot air balloon tour in the desert with a group. Halfway through, the balloon ran out of fuel. To keep flying, they threw out all their luggage, but it wasn't enough. They decided to draw lots using matchsticks - whoever drew the short one would be thrown off. He drew the short matchstick and was pushed out, falling to his death.
Question 1: The man was once very desperate
Ground truth answer 1: Yes
Question 2: The hot air balloon malfunctioned
Ground truth answer 2: Yes
Question 3: The man was murdered
Ground truth answer 3: Yes
> Puzzle: Thomas visits his wife's best friend's house for the first time with his wife. After returning home, his wife wants a divorce. Why?
Truth: Thomas's wife saw that his phone automatically connected to her best friend's WiFi.
Question 1: Thomas knows where the best friend lives
Ground truth answer 1: Yes
Question 2: Thomas revealed some kind of habit
Ground truth answer 2: Yes
Question 3: Thomas knows which floor the best friend's house is on
Ground truth answer 3: Yes
Questions in AR-Bench:
> Puzzle: A family received a letter from a man who was officially declared dead and buried months ago. How could this be possible?
Truth: The man, once deeply entangled in illegal activities, realized that his life was in constant danger due to the threats from rival groups and law enforcement closing in. To escape this perilous existence, he meticulously devised a plan to fake his own death. Leveraging his extensive knowledge of survival skills, he identified a remote island that could sustain him long-term. With the help of trusted connections, he secured transportation and gathered essential supplies for his new life. After staging a convincing accident, he was declared dead and buried, allowing him to vanish without a trace. Despite the isolation and challenges of starting anew, he found solace in his newfound freedom. From his secluded sanctuary, he sent a letter to his family, revealing his survival and reassuring them of his safety, but ensuring they understood the necessity of maintaining his secret.
Question 1: Was the family in danger because of the man's criminal activities?
Ground truth answer 1: Yes
Question 2: Was the man actually dead when he was declared dead?
Ground truth answer 2: No
Question 3: Did the man fake his death to escape from something or someone?
Ground truth answer 3: Yes
These examples illustrate the key differences between these two benchmarks.
> Q2. How did you select the 200 AR-Bench questions?
**Reply:** Thanks for this question. For the selection process, we randomly sampled 200 representative questions from logs of our previous experiments with GPT-4o. To ensure diversity, we limited the number of questions from any single puzzle. These selected questions are then used to evaluate the reliability of LLM judges on the AR-Bench.
> Q3. GPT-4o was recently updated. How does that affect the reproducibility of your results
**Reply:** We used a fixed version of GPT-4o (gpt-4o-2024-08-06) throughout our entire experimental period to ensure consistency and reproducibility of our results. Note that the updated version of GPT-4o is currently only available through the chat interface and has not yet been released for public API access. We plan to evaluate any potential discrepancies between the old and new versions of GPT-4o once the updated version becomes available through the API interface.
We will incorporate the above discussions into our submission. Please tell us if you have any further questions! | Summary: The paper proposes AR-Bench, a new benchmark to evaluate active reasoning abilities of large language models (LLMs) – i.e. their capacity to solve problems with incomplete initial information by asking questions. This contrasts with passive reasoning, where the model is given all necessary information up front. AR-Bench consists of three interactive task types – Detective Cases, Situation Puzzles, and Guessing Numbers – corresponding to commonsense, logical, and symbolic reasoning scenarios. In each task, the model must engage in a multi-turn conversation to gather clues and gradually uncover the solution.
Using AR-Bench, the authors systematically evaluate several state-of-the-art LLMs (including open models up to 70B and OpenAI’s GPT-4 variants) under different prompting strategies and fine-tuning methods. The main finding, in addition to contributing a new benchmark, is that current LLMs struggle significantly with active reasoning, performing far below perfect accuracy in these tasks.
The paper also shows that advanced prompting or training techniques yield only marginal improvements: methods like Tree-of-Thought (which searches through possible reasoning paths) and instruction fine-tuning or alignment (SFT, DPO) did not consistently or substantially boost accuracy. These results highlight that simply scaling up model size or applying existing reasoning tricks is not enough to reach reliable performance in active reasoning. The authors conclude that there is an urgent need for new techniques to improve LLMs’ ability to ask the right questions and use the answers effectively.
The paper’s contributions are thus: (1) introducing AR-Bench with well-defined tasks and evaluation metrics for active reasoning, (2) an empirical study revealing significant limitations of current LLMs in these scenarios, and (3) analysis tools (a “process score” metric based on key question completion) to quantify the quality of the model’s questioning strategy over time.
## update after rebuttal
No change in scores. I am already at 3 and I think I will maintain my score.
Claims And Evidence: Support for Key Claims: The paper’s central claims are mostly well-supported by empirical evidence
- LLMs struggle at active reasoning: The experiments clearly demonstrate this. For example, GPT-4o achieves only 54% accuracy in Detective Cases, 63 F1 in Situation Puzzles, and 35% in Guessing Numbers. Smaller open models (LLaMA 8B/70B) fare even worse (often near zero on Guessing Numbers).
- Advanced methods give only marginal or inconsistent gains: For example, the Tree-of-Thought (ToT) method improved the 8B model’s accuracy on Detective Cases from 31% to 45%, but for the 70B model it yielded no gain (40% before and after). Likewise, fine-tuning the model on the training puzzles (SFT) boosted detective case accuracy (e.g. LLaMA-8B from 32% to 62%) but completely failed on Guessing Numbers (0% correct). The direct preference optimization (DPO) alignment method had mixed results, even performing worse than zero-shot on some tasks. The paper’s claim that these methods provide only “marginal improvements” is a bit modest – in some cases the improvements were sizable (e.g. +30 points for 8B SFT on Detective Cases), but overall no method achieved high absolute performance or worked reliably across all tasks. Thus, the evidence supports the spirit of the claim: none of the tested state-of-the-art techniques is anywhere close to solving active reasoning.
### Questionable or Unsubstantiated Claims:
- Disparity with real-world tasks: The introduction asserts that passive reasoning alone is insufficient for many real-world problems and gives intuitive examples (travel planning requiring asking the client questions, medical diagnosis requiring patient Q&A). This claim is commonsensical and not really disputed, but the extent to which current models fail in realistic active scenarios isn’t directly proven in the paper since AR-Bench is composed of synthetic puzzles.
Methods And Evaluation Criteria: - The three tasks in AR-Bench are well-chosen to cover a spectrum of reasoning with incomplete information. Each task explicitly requires the model to interact through questions/guesses rather than simply compute an answer from a given prompt, capturing the essence of active reasoning.
- As highlighted above, one possible concern is that these tasks, being synthetic, might not capture all nuances of real-world active reasoning. For instance, real information-seeking often involves unclear or unstructured answers, whereas AR-Bench’s NPC (non-player character) respondents give relatively clean, formatted feedback (yes/no, or narrative statements following a script). However, this is arguably a necessary simplification to start quantifying performance, and perhaps a step in the right direction. We would expect these benchmarks to get more realistic and difficult over time with more deceitful/multifaceted communication between the player and NPC.
- The authors created the dataset via a multi-step generation pipeline using GPT-4 (referred to as GPT-4o) to synthesize scenarios and ground-truth solutions. This pipeline involves generating a core scenario, expanding it into a detailed story tree, extracting key facts as question-answerable bits, and then packaging the puzzle for the model. This is a clever approach to scale up data without manual writing.
- The evaluation is conducted as a multi-turn dialogue between the tested model (the “player”) and the environment (NPCs). They fixed the number of rounds (questions) to 25 for each session. In most cases, if the model hasn’t solved the puzzle in 25 turns, it’s probably going in circles, and indeed the paper’s analysis shows diminishing returns in late rounds.
- The evaluation metrics for final performance are straightforward and appropriate: Accuracy for Detective Cases (did you find the murderer or not), F1 score for Situation Puzzles (how well does the model’s final explanation overlap with the ground-truth explanation), and Exact Match for Guessing Numbers (did the model get the number exactly right). Accuracy and exact match are clear-cut. The use of F1 for the puzzles is reasonable since the solution is a free-form text describing the scenario; F1 (as commonly used in QA tasks) gives partial credit if the model’s answer includes some of the correct information.
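For concreteness, the QA-style token-level F1 this bullet refers to can be sketched as follows. This is our own illustration of the standard metric, not code from the paper; the exact tokenization and normalization AR-Bench uses may differ.

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Standard QA-style F1: harmonic mean of token precision and recall."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Multiset intersection counts each shared token at most min(count) times.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

A prediction that covers only part of the ground-truth explanation thus still earns partial credit, which is what makes F1 suitable for the free-form Situation Puzzle answers.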
Theoretical Claims: This paper is largely empirical and does not present new theoretical formalisms or proofs. Most claims are qualitative or quantitative observations about model performance. Therefore, there aren’t traditional “theorems” or mathematical proofs to verify in the work. The authors’ arguments are logical rather than formally theoretical.
Experimental Designs Or Analyses: - The authors evaluated a wide range of conditions (different models, multiple methods) on a reasonably large test set for each task (100 puzzles per task). This gives the results some statistical significance – they’re not anecdotal one-off examples, but averaged over 100 trials per task. They also use train/validation splits for fine-tuning, which helps avoid overfitting. The inclusion of multiple model sizes (8B vs 70B vs the extremely large 405B and GPT-4) allows analysis of scaling effects.
- A baseline to consider is an optimal strategy (or human expert performance). The paper doesn’t provide any human evaluation or results from an oracle policy. It would be illuminating to see what score a perfect questioner would get - presumably 100% on all tasks by design. Or at least a reasonable heuristic strategy: e.g., for Guessing Numbers, there are known algorithms that could reach 100% within ~7 guesses. For Detective Cases, a simple strategy could be to ask directly about each suspect’s motive, alibi, etc., which should solve it by elimination. The models did not come anywhere near 100% on those tasks, highlighting the gap.
- The analysis doesn’t deeply break down errors beyond broad strokes. For a rigorous critique, one could say the paper could have included more error analysis: e.g., what types of questions did models tend to ask, where did they go wrong?
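To make the heuristic-strategy point above concrete, here is a minimal consistency-elimination solver for the Guessing Numbers task. This is our own sketch, not from the paper; we assume the feedback format is a bulls/cows-style pair (digits correct in position, digits correct but misplaced), which matches how the task is described relative to the bulls-and-cows game.

```python
from itertools import permutations

def score(secret: str, guess: str) -> tuple:
    """Bulls: right digit, right position. Cows: right digit, wrong position."""
    bulls = sum(s == g for s, g in zip(secret, guess))
    cows = len(set(secret) & set(guess)) - bulls
    return bulls, cows

def solve(secret: str, max_rounds: int = 25):
    """Guess any candidate consistent with all feedback seen so far."""
    candidates = [''.join(p) for p in permutations('0123456789', 4)]
    for rnd in range(1, max_rounds + 1):
        guess = candidates[0]
        feedback = score(secret, guess)
        if feedback == (4, 0):
            return rnd  # number of rounds used
        # Keep only candidates that would have produced the same feedback.
        candidates = [c for c in candidates if score(c, guess) == feedback]
    return None
```

Even this naive first-consistent-candidate rule converges far inside the 25-round budget in practice, which underlines how large the gap is between the models’ scores and what a simple systematic strategy achieves.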
Supplementary Material: I read the dataset examples in the supplementary material.
Relation To Broader Scientific Literature: This work fits into a quickly growing body of literature on enabling and evaluating interactive reasoning in language models. The authors do acknowledge many related works:
- Clarification Question Benchmarks: For example, Qulac (Aliannejadi et al., 2019) is a dataset of ambiguous search queries where models propose a question to clarify user intent. Abg-CoQA (Guo et al., 2021) similarly deals with ambiguous questions in a conversational QA setting. These works established the importance of asking the right question when faced with ambiguity, but they typically involve just one question-response exchange before answering. AR-Bench goes beyond by requiring multiple rounds of questioning and complex inference, which is a step up in difficulty.
- Twenty Questions and Guessing Games: Perhaps the closest precursors to AR-Bench are benchmarks inspired by the classic 20 Questions game. Abdulhai et al. (2023) introduced LMRL-Gym, a suite of language-game environments for reinforcement learning, which included a 20 Questions game and a “Guess my City” game. In these, the model must identify a hidden topic or city by asking yes/no questions, very much like AR-Bench’s Situation Puzzles or a constrained version of Guessing Numbers. In fact, AR-Bench’s Guessing Numbers (GN) task can be seen as a variant of a guessing game, but with numeric feedback instead of simple yes/no. The novelty of AR-Bench’s GN is slight in content (number guessing has been around for decades as a game), but novel in being used to evaluate LLMs systematically. It provides a symbolic reasoning test that wasn’t explicitly in LMRL-Gym (which focused on yes/no only). On the other hand, the 20 Questions task from LMRL-Gym covers similar grounds of information-seeking.
- Other concurrent works have targeted active reasoning in specialized contexts. MediQ (Li et al., 2024c) is a benchmark for interactive medical diagnosis dialogues. In MediQ, an LLM plays doctor, asking a patient (simulated) questions to gather symptoms and test results, before making a diagnosis. This is clearly a real-world analog of active reasoning. Similarly, a Troubleshooting environment (mentioned as Hu et al., 2024 in the paper) tackles technical problem solving through Q&A. These domain-specific benchmarks share the same spirit as AR-Bench: the model must proactively query information. The difference is the domain knowledge required – MediQ expects medical knowledge, troubleshooting expects understanding of devices – whereas AR-Bench’s tasks are mostly domain-general or self-contained fiction.
- The novelty is incremental rather than completely groundbreaking. Each individual task has antecedents (e.g., bulls and cows game has existed since the 19th century; it’s novel only that we test an LLM on it). The scientific contribution lies in the empirical findings about LLM capabilities and the thoroughness of the benchmark. Those findings – that even GPT-4 struggles with these puzzles – might not shock those who have informally played such games with ChatGPT.
Essential References Not Discussed: Related work section looks complete.
Other Strengths And Weaknesses: ### Strengths:
- The paper does an excellent job of laying out a new benchmark with clarity. The tasks are well-motivated and the dataset is reasonably large considering the complexity of each example. The generation pipeline demonstrates originality in how to create diverse puzzles systematically. Creating evaluation data for interactive reasoning is non-trivial – the authors provided a solution that future researchers can build on.
- The paper addresses a capability gap that is widely recognized as important for AI – the ability to interact and ask questions. In terms of impact, if the community adopts AR-Bench or is inspired by it, it could steer research toward more agentive AI, which is significant. The results are a bit surprising to those who assume GPT-4 is near human-level on all reasoning; highlighting this gap is an important contribution.
- Aside from the technical content, the paper is generally well-written. The objectives, setup, and findings are presented in a logical order. Key points are illustrated with figures and tables that are easy to read. The authors also took care to include any ethical considerations like fictionalizing violent scenarios.
### Weaknesses
- As discussed, the benchmark, while new, overlaps with prior and concurrent works. The paper could be perceived as incremental in that it packages known puzzle types and tests known models without proposing a new algorithm. Essentially, AR-Bench is a benchmark paper, and such papers are often judged by whether they’ll be widely used. The risk is that if the community decides to focus on a different benchmark, AR-Bench could be bypassed. Its success and relevance will depend on adoption.
- There have been few attempts to explain certain unexpected results, such as SFT achieving 0% accuracy on GN with a 70B model.
Other Comments Or Suggestions: -
Questions For Authors: 1. Did you fine-tune one model on all three tasks jointly, or separate models per task? If jointly, could task interference have hurt the performance on the numeric task? Why do you think the fine-tuned models underperformed on some tasks (especially the 0% accuracy on Guessing Numbers)
2. Did you consider or attempt any of the active reasoning-specific prompting strategies from related work, such as “Proactive CoT” or “Uncertainty of Thoughts (UoT)”, on AR-Bench? It would be interesting to see their results on AR-bench given they specifically target this problem.
3. How do the NPCs handle odd or irrelevant questions from the player model? For example, if the model asks a nonsensical question or something not covered in the story, does the NPC respond with “I don’t know” or does it make something up? Handling such cases consistently can affect the fairness of evaluation – if an NPC accidentally provides a hint or gets confused, it could skew a round’s outcome. Any observations on the NPCs’ behavior would be helpful.
4. Do you think it is possible to characterize any model’s mistakes into categories; if they’re running into identical or closely related issues which stops them from reaching the correct answer? Like asking an unrelated question midway would take it off track and this could be a common error models across scale would commit.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the valuable feedback. To solve these concerns and questions, we provide point-to-point responses as follows.
> Q1. The benchmark’s novelty and consistency with real-world scenarios.
**Reply:** We answer this question from three aspects.
**(1) Reality.** Real-world active reasoning can be messy, but benchmarks initially need simpler, controlled tasks to isolate specific capabilities, i.e., effectively asking questions to obtain missing information. Synthetic puzzles ensure consistency and clear metrics, while the yes/no feedback is more structured than real interactions. We plan to add more nuanced “deceitful” NPCs over time.
**(2) Novelty.** Although our puzzles resemble known games, combining them into a single benchmark systematically tests active reasoning, which is an overlooked dimension. We provide 6,000+ puzzles, curated prompts, automated feedback, and extensive experiments, showing current LLMs struggle primarily due to inadequate questioning. Pinpointing *how* and *why* it fails is our key contribution.
**(3) Influence.** We will release all data and code publicly, enabling easy reproduction and further improvement. AR-Bench fills a gap in current reasoning benchmarks, which largely assume complete information is available. Our controlled environment highlights specific failings and can evolve to include richer, more realistic forms of active reasoning.
> Q2. The error patterns of LLMs on AR-Bench.
**Reply:** We carefully check the 600 running dialogs of Llama3.1-8B and GPT-4o on AR-Bench and categorize typical errors in each task.
In DC:
- Timeline Misinterpretation (TM): Models confuse the timeline of the murderer and ask for information of an irrelevant time.
- Evidence Overlooked (EO): Models ask for details and overlook the key evidence needed to find the murderer.
In SP:
- Evidence Overlooked (EO): Models jump to conclusions without uncovering supporting evidence.
- Unsupported Assumptions (UA): Models fabricate fake details in the conclusion.
In GN:
- Feedback Misunderstanding (FM): Models misinterpret feedback, failing to keep correct digits or eliminate wrong ones.
- Incomplete Testing (IT): After identifying a correct digit, models fail to determine its correct position.
The following tables show error statistics for Llama3.1-8B and GPT-4o.
- In DC, GPT-4o overlooks less key evidence but struggles more with timeline issues.
- In SP, Llama3.1-8B fabricates details more often, while GPT-4o tends to miss details.
- In GN, Llama3.1-8B makes more errors overall, though GPT-4o also has high error rates.
|Task|Error Type|Llama3.1-8B|GPT-4o|
|:-|:-|:-|:-|
|DC|TM|10%|31%|
||EO|61%|15%|
|SP|EO|36%|44%|
||UA|90%|72%|
|GN|FM|78%|61%|
||IT|81%|55%|
> Q3. Explain the technical details and empirical results of SFT.
**Reply:** We train the model separately for each task. Our analysis of the SFT results reveals the following:
**SFT tends to memorize training data rather than develop active reasoning skills.** This is especially problematic in the GN task, where memorizing number patterns doesn’t teach the logical deduction needed for accurate guessing, resulting in 0% accuracy on unseen test cases.
**Moreover, the training data includes only the output questions, without showing the underlying reasoning behind them.** This lack of reasoning traces limits learning, especially in symbolic tasks like GN. In contrast, DC data (with its free-form questions) better supports the development of active reasoning, leading to improved performance.
> Q4. Attempt related work Proactive CoT and UoT on AR-Bench.
**Reply:** We adapt these two methods to AR-Bench:
- **Proactive CoT**: Pre-defined task-specific strategies and actions based on human reasoning patterns.
- **UoT**: In DC, we sample 3 question branches, simulate 1 turn, and estimate uncertainty reduction. In SP, we initialize the potential answers, sample 3 questions/turn, and estimate uncertainty reduction. In GN, we sample 3 guess branches, simulate 1 turn, and use the model to estimate uncertainty reduction.
The results below show that both methods cannot solve AR-Bench well. This demonstrates the necessity of designing new methods (please see Q3 to reviewer Briw).
|Method|Base Model|Proactive CoT|UoT|
|:-|:-|:-|:-|
|DC|31|31|10|
|SP|43|47|33|
|GN|0|0|1|
> Q5. The robustness of the LLM judges against irrelevant questions.
**Reply:** We instruct judge models to withhold useful information when questions are irrelevant. In SP, they respond with “unknown”. In DC, while suspects may reply, they avoid revealing useful information. We illustrate this with [running examples](https://anonymous.4open.science/r/AR-Rebuttal-BC37/1zoX/Q5/examples.png) of irrelevant questions.
> Q6. Human evaluation on AR-Bench.
**Reply:** Due to the high cost of large-scale human evaluation, we conducted a small demo with two undergraduate students on AR-Bench to show human performance. Please see our response to Q4 for Reviewer Vrs4. | Summary: The paper addresses a significant research gap in evaluating Large Language Models (LLMs) for active reasoning, where models must actively query external sources due to incomplete information, rather than passively reasoning from complete data. The authors propose AR-Bench, featuring three active reasoning tasks: detective cases, situation puzzles, and guessing numbers. They show that modern LLMs, including state-of-the-art models like GPT-4o, perform poorly in these tasks, highlighting a major gap between passive and active reasoning abilities. The authors demonstrate that even sophisticated reasoning techniques such as tree-of-thought (ToT) and post-training optimization methods like supervised fine-tuning (SFT) and direct preference optimization (DPO) yield only marginal improvements.
Claims And Evidence: **Claim:** LLMs struggle to consistently generate high-quality, effective questions during multi-turn interactions.
**Evidence**: Quantitative results from experiments conducted on AR-Bench demonstrate low accuracy scores for LLMs, such as GPT-4o achieving only 35% accuracy on the guessing numbers task. Qualitative case studies highlighting examples of vague or ineffective questions posed by models during experiments, reinforcing observed limitations
**Claim:**Inference-time optimizations, such as increasing interaction rounds, have minimal effects in improving LLMs' active reasoning performance.
**Evidence**:Comparative analysis of models (e.g., Llama-3.1-8B vs. GPT-4o) across tasks provides empirical evidence showing marginal improvements with advanced techniques.
Methods And Evaluation Criteria: * The authors introduce AR-Bench, which contains three tasks (detective cases, situation puzzles, guessing numbers) to evaluate the active reasoning of LLMs.
* Active reasoning capabilities are assessed by simulating multi-turn interactions where the model must iteratively formulate questions or guesses and interpret external feedback.
* Performance metrics are accuracy (detective cases), F1-score (situation puzzles), and exact match (guessing numbers).
* Tasks effectively test different reasoning types—commonsense, logical, symbolic—providing comprehensive coverage
* The choice of models for this particular evaluation doesn't really make sense: GPT-4o and GPT-4o-mini are reasoning models, as stated by OpenAI, so comparing them against the zero-shot performance of Llama models is not a meaningful comparison.
* Also, the benchmark consists of synthetic data with no human intervention at any stage. I feel the synthetic data itself has to be evaluated for accuracy as well; if the generated data is wrong, the benchmark is of no use.
* The models selected for this task are also very limited – just two model families. The authors should have considered a wider range of models with more diverse architectures, instead of comparing one open-source and one closed-source family. Moreover, the prompting and training methods were only evaluated on two models from the same family, which doesn't really give much insight.
* The model configurations are also not reported. The authors should state the temperature, top-k, and similar settings, and try different values to see how the creativity differs, since these tasks require more reasoning; high temperatures are generally used for creative tasks.
Theoretical Claims: N/a
Experimental Designs Or Analyses: * Evaluations include various LLMs (Llama-3.1-8B, 70B, 405B, GPT-4o-mini, GPT-4o).
* Reasoning methods evaluated include zero-shot, few-shot, few-shot with instructions, tree-of-thought (ToT), supervised fine-tuning (SFT), and direct preference optimization (DPO).
* Performance metrics include accuracy (detective cases), F1-score (situation puzzles), and exact match rate (guessing numbers).
* Multi-round evaluation measures the progression in active reasoning capabilities over interactions.
Supplementary Material: Yes, it's just the train and test sets.
Relation To Broader Scientific Literature: It's a good benchmark, but needs more robustness.
Essential References Not Discussed: Yes.
Other Strengths And Weaknesses: ## Strengths
* The paper addresses an important yet underexplored aspect of large language models: active reasoning in contexts with incomplete information.
* The paper is very well written and the details are very clear. Clear distinction between passive and active reasoning is well explained, accompanied by illustrative figures, making the work easy to comprehend.
## Weaknesses
* Absence of direct human evaluation in assessing question quality or reasoning effectiveness limits verification of the benchmark’s real-world relevance and reliability.
* Others mentioned in the above Methods section.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for the valuable feedback. To solve these concerns and questions, we provide point-to-point responses as follows.
> Q1. The selection of language models to compare.
**Reply:** We provide a two-fold answer as follows.
**(1) Considering the huge expense of computing and API calls, we evaluate the currently representative LLMs on AR-Bench.** Specifically, we select GPT-4o and GPT-4o-mini as representatives of closed-source LLMs; and Llama3.1 8B, 70B, and 405B as representatives of open-source LLMs. These models are widely used and compared in current leaderboards and arenas, which include both reasoning and non-reasoning models. In our experiments, we guarantee fair evaluations and comparisons by providing the same information to different models under evaluation.
**(2) We conduct extra experiments with more language models and show their general deficiency in active reasoning tasks.** We involve Qwen2.5 (3B, 7B) - another representative model family, and QwQ-32B - a powerful reasoning model. The evaluation result shown below is consistent with our observation. This highlights the widespread challenges that current LLMs face in active reasoning tasks.
|Model|DC|SP|GN|
|:-|:-|:-|:-|
|GPT-4o-mini|62|41|6|
|GPT-4o|54|63|35|
|Llama3.1-8B|31|43|0|
|Llama3.1-70B|55|54|8|
|Llama3.1-405B|43|55|12|
|Qwen2.5-3B|19|34|0|
|Qwen2.5-7B|12|51|4|
|QwQ-32B|57|49|1|
> Q2. Ablation study with various model configurations of temperatures and topk.
**Reply:** In our experiments, we used consistent hyperparameter settings with temperature = 0.7 and top-p = 0.7. We did not modify the top-k parameter, as it is not configurable in the OpenAI API. **Further, we present extra ablation studies and show that varying temperature and top-p values resulted in only marginal differences in performance across active reasoning tasks**. As shown in the tables below, extreme values of temperature and top-p (either too high or too low) lead to suboptimal performance in SP and DC tasks, while having minimal impact on GN.
|temperature|0|0.3|0.5|0.7|1|
|:-|:-|:-|:-|:-|:-|
|DC|39|35|39|31|37|
|SP|35|34|35|43|31|
|GN|2|0|0|0|2|
|top-p|0|0.3|0.5|0.7|1|
|:-|:-|:-|:-|:-|:-|
|DC|32|40|45|31|44|
|SP|32|36|37|43|34|
|GN|1|0|1|0|0|
> Q3. The human intervention in generating the dataset.
**Reply:** We answer this question in three folds:
**(1) We used a rigorous multi-stage process to ensure the reliability and logical consistency of each AR-Bench sample.** Specifically:
- For DC, the core logic remains intact. Each puzzle has a uniquely identifiable murderer based on motive, opportunity, and weapon access. The synthetic process only adds narrative details without affecting the deductive structure.
- For SP, which is based on lateral thinking, any logically consistent explanation is valid. We start by sampling a core counterfactual fact and use tree-based expansion to generate explanatory questions and corresponding facts (see Figure 4).
- For GN, the task is simple and the synthetic process is reliable, as it only involves generating four-digit numbers with unique digits.
**(2) We conducted manual checking of the puzzles and ground truths in AR-Bench to ensure they are reliable, logically consistent, and solvable through reasoning.** Specifically:
- For DC, we verify that the murderer has all key traits (motive, opportunity, weapon access) and that innocent suspects lack at least one.
- For SP, we check that the provided answer logically explains the scenario and maintains internal consistency.
- For GN, we confirm that each number consists of four unique digits.
Besides, as introduced in Appendix A, we provided the source files of AR-Bench in an [anonymous repository](https://anonymous.4open.science/r/AR-Bench-submission-code-591C). The reviewers can directly check the quality of data points in the AR-Bench.
**(3) In addition, synthetic data offers several advantages.** Synthetic data reduces overlap with existing datasets for mitigating data leakage, scales efficiently without the cost of human-annotated examples, and is easily adaptable for creating new active reasoning tasks.
> Q4. The human evaluation of the benchmark.
**Reply:** Considering the unaffordable expense of large-scale human evaluation, we conduct an extra demo-level human evaluation with two undergraduate students on AR-Bench to show human performance (please see the table below). Despite the inherent challenges in human evaluation, the results still reveal a substantial performance gap between humans and LLMs, underscoring the critical need to enhance active reasoning capabilities in current language models.
||DC|SP|GN|
|:-|:-|:-|:-|
|Human Evaluation|80|67|100|
|GPT-4o|54|63|35|
We will include the above clarifications and discussions in the revision. Any further comments and discussions are welcome! Thank you! | null | null | null | null | null | null | null | null |
A Near Linear Query Lower Bound for Submodular Maximization | Accept (poster) | Summary: The authors present query lower bounds for the problem of maximizing a monotone submodular function under a cardinality constraint.
Specifically, they improve the already existing lower bound of $\Omega(n/k)$ to $\tilde{\Omega}(n)$.
The bound holds for even estimating the value of the optimal set.
They also study the special case of additive functions (with non-negative weights).
They show that in this case one can find the optimal value with $\tilde{O}(n/k)$ queries
and that this is optimal up to polylog factors. However, they prove that finding the optimal set still requires
$\Omega(n)$ queries.
The main technique used in the paper is to reduce submodular maximization to distributed set detection.
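For context on the query counts at stake, the natural point of comparison is the classic greedy algorithm for monotone submodular maximization under a cardinality constraint, which achieves a $(1-1/e)$-approximation using $O(nk)$ value-oracle queries. The sketch below is our own illustration of that standard baseline, not an algorithm from the paper under review.

```python
def greedy_max(f, ground_set, k):
    """Classic greedy: k rounds, each picking the element of largest
    marginal gain. Uses O(n * k) calls to the value oracle f."""
    S = set()
    for _ in range(k):
        base = f(S)
        best, best_gain = None, float('-inf')
        for e in ground_set:
            if e in S:
                continue
            gain = f(S | {e}) - base  # marginal gain of adding e
            if gain > best_gain:
                best, best_gain = e, gain
        S.add(best)
    return S
```

Applied to an additive (modular) function – the special case the paper analyzes separately – greedy simply selects the k largest weights, so the interesting question the paper answers is how few queries suffice for the *value* versus the *set* in that regime.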
## update after rebuttal
I keep my score.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I have not checked in detail, but overall the claims and proof sketches make sense.
Experimental Designs Or Analyses: N/A
Supplementary Material: No
Relation To Broader Scientific Literature: The paper is of interest to the submodular community.
Essential References Not Discussed: None that I know of
Other Strengths And Weaknesses: The paper studies a natural problem which is previously studied in the literature
and provides a noticeable improvement. The techniques used are interesting and may be of independent interest.
For the special case of additive functions, the results are somewhat surprising.
The main weakness is that the improvement compared to the previous work is in a somewhat limited setting.
Specifically, previous papers essentially already obtain a nearly-linear query lower bound for both very large $k$ and very small $k$.
The paper's main contribution therefore is to further study what the role of $k$ would be in such lower bounds.
While I agree that $k$ is a natural parameter, one could argue that the existing lower bounds were already "nearly-linear".
Other Comments Or Suggestions: The paper title in open review does not say "nearly-linear" and just says "linear". This should be corrected;
the lower bounds are not linear!
Algorithm 1: I believe this is a reduction from distributed set detection to submodular maximization, not the other way round.
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful feedback, we will incorporate your suggestions into our paper. | Summary: They give a lower bound for monotone submodular maximization under cardinality constraint, showing any algorithm achieving a constant approximation factor requires $\Omega(n)$ query complexity.
They also provide an algorithm which estimates the maximum value when the function is additive using $\tilde{O}(n/k)$ query calls.
Claims And Evidence: They have completely proved their claims.
Methods And Evaluation Criteria: N/A
Theoretical Claims: I only checked the proofs in the main part of the paper and not the appendix.
Experimental Designs Or Analyses: N/A
Supplementary Material: No
Relation To Broader Scientific Literature: Previously, this problem had a general $\Omega(n/k)$ lower bound and also an $\Omega(n)$ lower bound when $k=\Theta(n)$, but they show that this lower bound holds for any $k$.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The lower bound result is interesting: whereas previous works provided an $\Omega(n)$ lower bound only for large $k$, they show that it holds for every $k$, closing the gap between lower and upper bounds for this problem.
However, their algorithm for the additive case does not seem interesting: in the additive case, selecting the $k$ elements with maximum value is enough, and although they achieve sublinear query complexity for this problem, query complexity is not really important in this case.
Other Comments Or Suggestions: I am not sure if the statement in the theorem at the end of the first page matches Theorem 3.9. Instead of 'no O(n)-query complexity,' shouldn't it be 'no o(n)-query complexity'? (using little-o instead of big-O)
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful feedback. We will incorporate your suggestions into our paper and expand the discussion on the motivation behind our algorithm for additive functions. | Summary: The paper studies the submodular maximization problem under cardinality constraint $k$. They prove a new lower bound on the required query complexity for achieving a constant factor approximation ratio, which improves upon established results when $\text{polylog}{(n)} \le k \le \frac{n}{\text{polylog}{(n)}}$.
They also consider a special case of this problem where the function subject to maximization is also additive. For this problem, they prove the same lower bound and additionally propose an $(1 + \epsilon)$ approximation algorithm with only $\tilde{O}(\frac{n}{k})$ query complexity to find the value of the optimal solution.
Claims And Evidence: Fine.
Methods And Evaluation Criteria: Fine.
Theoretical Claims: The high-level ideas seem fine and sound.
Experimental Designs Or Analyses: This theoretical paper focuses on impossibility results, rendering experiments inapplicable.
Supplementary Material: I have not reviewed the proofs provided in the appendices in detail.
Relation To Broader Scientific Literature: .
Essential References Not Discussed: .
Other Strengths And Weaknesses: Strengths:
The paper uses novel and involved techniques.
The lower bound results are strong.
Weakness:
The writing could have had a better flow.
Proposing an algorithm for additive functions is not (well-)motivated.
Proposing an algorithm that finds the value of the optimal subset, rather than the subset itself, in sub-linear query time is motivated, but not convincingly so.
Other Comments Or Suggestions: line 118: vallue -> value
line 134 and 135: wrong parenthesis
Questions For Authors: .
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful feedback and will incorporate your suggestions into our paper. | Summary: This paper studies the query complexity for monotone submodular maximization over cardinality constraint. The authors provide a nearly tight query lower bound $\tilde{\Omega}(n)$ for obtaining any constant factor approximation for any $k = o(n)$, which improves over the existing bound of $\Omega(n / k)$. This bounds holds even in the special case of a monotone modular (additive) function. They also show that this lower bound holds even for estimating the optimal value of a monotone submodular function. But for estimating the optimal value of a monotone modular function, they provide an algorithm which achieves a $( 1 \pm \epsilon)$-approximation in $\tilde{O}(n/k)$ queries, and show that this is tight up to polylog factors.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: I checked all proofs except those of Lemmas 3.5, 4.3, 4.5, 4.6 and Theorems 4.1 and 4.7.
The proofs I checked are correct, but have some typos and steps which are not well explained. In general, more details and intuitive explanations should be provided in the proofs.
- In the proof of Lemma 3.6, $Pr[i_1 \in \mathcal{I}]$ should be $Pr[i_1 \in \hat{\mathcal{I}} \mid V = 1]$ on line 648-649 and $Pr[i_1 \in \hat{\mathcal{I}} \mid V = 0]$ on line 652-653. I think the latter should be equal to $ |\mathcal{I} \cap \hat{\mathcal{I}}| / (n- k + 1)$ instead of $ |\mathcal{I}| / (n- k + 1)$ (the lemma is still correct though as this is a valid upper bound). Also, more explanation should be given for why the equalities on both these lines hold.
- In the proof of Theorem 3.2, it's not clear why the first inequality on TV holds.
- In the proof of Lemma 3.10, the second inequality bounds the probability that $\sum_t X_{t, i}$ deviates from its mean by more than $\epsilon m/2$, while the Lemma requires bounding the probability of $\sum_t X_{t, i}$ not being in $(\epsilon m/2, 2 \epsilon m)$, so the given bound does not actually match that. The lemma is still correct, but the proof needs to be corrected.
- In the proof of Lemma 4.4, give more details on why the first two inequalities hold (the 1st can be proved by induction; for the 2nd, remind the reader of how all these quantities are related). Moreover, why is the Azuma-Hoeffding bound needed? This bound already holds by the Hoeffding inequality.
Experimental Designs Or Analyses: No experiments included.
Supplementary Material: I reviewed the appendices containing the proofs of the results I verified.
Relation To Broader Scientific Literature: Obtaining constant factor approximation algorithms for monotone submodular maximization with low query complexity is important for applications where function evaluations are costly, such as neural networks training and influence maximization. Estimating the optimal value is also valuable in some cases, for example to assess dataset quality before proceeding with further analysis or selection, or as a subroutine in some submodular optimization algorithms.
Existing query bounds for constant factor approximation in monotone submodular maximization are $\Omega(n / k)$ for any $k$ and $\Omega(n / \log n)$ for $k = \Theta(n)$. These bounds rule out sublinear query complexity algorithms for
$k = \Theta(1)$ and $k = \Theta(n)$. This work strengthens these impossibility results, showing that sublinear query complexity is also not possible for any $k \ll n$. This is particularly relevant, as many applications involve $k$ that is not constant but still much smaller than $n$.
The lower bound $\tilde{\Omega}(n)$ provided here holds for the special case of modular functions too. This is also the case for the existing $\Omega(n / \log n)$ bound (see Theorem 4.2 in (Li et al., 2022)) but not for the $\Omega(n / k)$ bound.
The provided lower bound holds also for estimating the optimal value of a monotone submodular function. I am not aware of other existing lower bounds for this.
Existing algorithms for submodular maximization achieve a $(1 - 1/e - \epsilon)$-approximation, using $O(n \log(1 /\epsilon))$ queries with a randomized algorithm, and $O(n/\epsilon)$ queries with a deterministic algorithm. So the $\tilde{\Omega}(n)$ query lower bound in this paper is tight up to log factors.
From a technical perspective, prior lower bounds are based on counting arguments. This paper uses novel techniques. Namely the lower bound for finding an approximate solution is derived via a reduction from the query complexity of submodular maximization to the communication complexity of a problem called distributed set detection. The lower bound for estimating the optimal value of a monotone submodular function is obtained by constructing a challenging submodular instance.
The proposed algorithm for estimating the optimal value of a monotone modular function uses two intuitive subroutines. I am not aware of existing algorithms for this problem that use sublinear queries, so this might be the first.
Essential References Not Discussed: The lower bound $\Omega(n / \log n)$ for $k = \Theta(n)$ provided in (Li et al., 2022) holds for the special case of monotone modular function too (see Theorem 4.2 therein). This should be mentioned.
Other Strengths And Weaknesses: Already highlighted above.
Other weaknesses:
The paper lacks clarity due to numerous typos and insufficient details in the proofs
Other Comments Or Suggestions: I recommend moving the discussion in Section 1.1 to the relevant sections, that is, provide the high-level idea of each result in the same section where you give the formal result, and keep in Section 1.1 only a very high-level description of the main techniques you use.
- Restate theorems/lemmas in the appendix for convenience for the reader.
- In the definition of $f_{yes}$ in Eq. (2), why not simply write this as one sum over $i \in S$?
- Specify the expected input in Algorithm 3 ($g: [m] \to \mathbb{R}^+, t > 0$). Also clarify where do you query $g$ on line 6.
- On line 412, make it clearer that you're discussing here the proof sketch of Lemma 4.3.
Typos:
- References to lines in Algorithms need to be fixed throughout.
- In Definition 3.1, $I$ should be $\mathcal{I}$.
- In Figure 1, $i$ should be $i_1$?
- In Algorithm 3, line 1 the inequality should be $>$.
- LINEARSUM is misspelled a few times on p. 7
Questions For Authors: 1- Are there any existing algorithm for estimating the optimal value of a monotone modular function using sublinear queries?
2- Are there any existing query lower bounds for estimating the optimal value of a monotone submodular function?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and will incorporate your suggestions into our paper.
> Are there any existing algorithms for estimating the optimal value of a monotone modular function using sublinear queries?
To the best of our knowledge, no such algorithms currently exist.
> Are there any existing query lower bounds for estimating the optimal value of a monotone submodular function?
Yes—the $\Omega(n/k)$ lower bound established in prior work also applies to the estimation of the optimal value. | null | null | null | null | null | null |
FreeMesh: Boosting Mesh Generation with Coordinates Merging | Accept (poster) | Summary: This paper propose a plug-and-play tokenize strategy for auto-regressive mesh generation. This strategy learns a BPE tokenzier from discretized mesh coordinate sequences to merge multiple coordinates into one token. This paper also propose the Per-Token-Mesh-Entropy to evaluate how hard it could be to learn to generate the mesh sequence. Results show that the proposed tokenizer helps existing auto-regressive mesh generation methods for better generation results.
Claims And Evidence: The claims are made clear.
Methods And Evaluation Criteria: 1. The introduction of RMC (line 267, left column) is not clear. Instead, the authors provide pseudocode without any accompanying explanation, making it hard for the reader to understand how the authors handle groups with ``less than 9'' coordinates ``specially'' (line 222, right column).
2. The evaluations are somehow limited. As a universal compression method for mesh coordinate sequences, the authors only provide metrics like Chamfer Distance (CD) and Hausdorff Distance (HD) under the point-cloud-to-mesh generation task.
For a fair evaluation, the authors should also evaluate the class conditioned generation task (*i.e.* MeshGPT, MeshXL) and consider other evaluation metrics like FID scores on the rendered images to evaluate the ``connectivity'', *i.e.* to check whether there are holes in the generated meshes.
Metrics like CD and HD for point-cloud-to-mesh generation mainly evaluates how well a model selects point from the provided point cloud input. These metrics alone might not serve as a good metric for mesh quality evaluation.
Also, it requires user studies to see whether the generated meshes align with human preference.
Theoretical Claims: A more straightforward metric is to directly calculate the entropy of a given mesh sequence, *i.e.* $-\sum_{|seq|} p \log p$, as long as the sequence can be decoded into the same mesh, regardless of how the sequence is produced.
What is the difference or relation between the above metric and the PCME / PTME introduced in the paper?
Experimental Designs Or Analyses: See **Methods And Evaluation Criteria** section (2)
Supplementary Material: The supp provides the details on how to calculating PTME and more qualitative results.
Relation To Broader Scientific Literature: The key contributions of this paper is closely related to Byte-Pair Encoding (BPE) for its tokenizer and other auto-regressive mesh generation methods like MeshGPT, MeshXL, MeshAnything, EdgeRunner.
In this paper, the authors adopt the BPE as a token compression strategy to help auto-regressive mesh generation methods learn better mesh generation results with a higher compression ratio.
Essential References Not Discussed: Related works are well discussed.
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: ## 1. RMC Algorithm Clarification
**Reviewer Concern**: Handling groups with fewer than 9 coordinates (e.g., 7) was unclear.
**Response**:
For sequences with length < 9 (e.g., 7):
1. **Truncation**: Reduce to the largest multiple of 3 (e.g., 6).
2. **Permutation**: Reorder coordinates from `xxyyzzz` → `xxyyzz` → `xyzxyz`.
- **Implementation**: Full algorithm provided in [`serialization.py`](https://anonymous.4open.science/r/FreeMesh-1BB5/data/serialization.py).
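A minimal sketch of this truncate-and-interleave step, under one plausible reading of the `xxyyzzz → xxyyzz → xyzxyz` example (the function name is hypothetical and the sketch assumes the truncated x/y/z runs have equal length; the authors' actual handling is in the linked `serialization.py`):

```python
def rearrange(coords):
    """Truncate to the largest multiple of 3, then interleave the
    equal-length x/y/z runs: [x1,x2,y1,y2,z1,z2] -> [x1,y1,z1,x2,y2,z2]."""
    n = len(coords) - len(coords) % 3
    coords = coords[:n]
    k = n // 3
    xs, ys, zs = coords[:k], coords[k:2 * k], coords[2 * k:]
    return [c for triple in zip(xs, ys, zs) for c in triple]

# 7 coordinates: xxyyzzz -> truncated to xxyyzz -> interleaved to xyzxyz
print(rearrange(["x1", "x2", "y1", "y2", "z1", "z2", "z3"]))
# -> ['x1', 'y1', 'z1', 'x2', 'y2', 'z2']
```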
## 2. Per-Token Entropy Design Rationale
**Reviewer Concern**: Why not use standard entropy ($-\sum p\log p$)?
**Response**:
Standard entropy fails to capture compression efficiency. For example:
- String `aaabbb` with vocab `{a,b}`: $H = \log 2$
- Compressed as `XY` (vocab `{X,Y,a,b}`): $H = \log 2$ (unreasonably identical)
- **PTME** yields $\frac{1}{3}\log 2$, correctly reflecting reduced complexity.
- **Example**: Demonstrates PTME's superiority over naïve entropy in [`intuition.py`](https://anonymous.4open.science/r/FreeMesh-1BB5/intuition.py).
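The `aaabbb` example above can be reproduced in a few lines (helper names are illustrative, not the paper's; the authors' own demonstration is in the linked `intuition.py`). Standard entropy is blind to merging, while weighting by the compression ratio recovers the $\frac{1}{3}\log 2$ value:

```python
import math
from collections import Counter

def entropy(tokens):
    # Shannon entropy of the empirical token distribution
    n = len(tokens)
    return -sum(c / n * math.log(c / n) for c in Counter(tokens).values())

def per_coordinate_entropy(tokens, original_length):
    # entropy weighted by the compression ratio (tokens per original symbol)
    return entropy(tokens) * len(tokens) / original_length

raw = list("aaabbb")   # vocab {a, b}
merged = ["X", "Y"]    # X = "aaa", Y = "bbb", covering the same 6 symbols

print(entropy(raw))                        # log 2 ~ 0.693
print(entropy(merged))                     # also log 2: naive entropy unchanged
print(per_coordinate_entropy(merged, 6))   # (1/3) log 2 ~ 0.231
```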
## 3. PCME vs. PTME Differentiation
**Reviewer Concern**: Clarify the relationship between PCME and PTME.
**Response**:
Both PCME and PTME are derived from Equation 6 in Section 3.2.
- **PCME**: Measures entropy at the unmerged coordinate level, where each token represents one geometric coordinate (x/y/z). This granularity enables direct comparisons between geometric tokenizers like RAW/AMT/EDR.
- **PTME**: Aggregates entropy calculation over merged coordinates, with each token encoding multiple coordinates. PTME generalizes PCME by using abstract token representations instead of individual coordinates as the fundamental unit.
## 4. Expanded Evaluation Metrics
**Reviewer Concern**: Limited evaluation metrics for mesh quality.
**Response**: We now report:
| Method | Boundary Edge Rate (↓) | Topology Score (↑) | Human Preference (↑) |
|--------------|-------------------------|---------------------|-----------------------|
| EDR | 2.41 | 52.3 | 2.2 |
| EDR + MC | 2.32 | 51.4 | 2.1 |
| EDR + RMC | **0.85** | **68.2** | **2.8** |
- **Boundary Edge Rate**: This metric reflects the mesh hole rate by detecting boundary edges (edges that are used by only one face). The detection is carried out using the scripts [`detect_boundary.py`](https://anonymous.4open.science/r/FreeMesh-1BB5/metric/detect_boundary.py).
- **Topology Score**: This metric evaluates the topological quality of a mesh by converting triangular faces into quadrilateral faces. A high-quality quadrilateral mesh should be regular, with its surrounding shapes being as close as possible to rectangles or trapezoids, and the ratio of opposite sides should not be excessively large. The evaluation is performed through the scripts [`tri2quad.py`](https://anonymous.4open.science/r/FreeMesh-1BB5/metric/tri2quad.py) and [`topo_score.py`](https://anonymous.4open.science/r/FreeMesh-1BB5/metric/topo_score.py).
- **User Study**: 10 participants rated meshes on a 5-point scale
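The boundary-edge check described above (an edge used by exactly one face) can be sketched as follows; this is a generic illustration, and the released `detect_boundary.py` may differ in detail:

```python
from collections import Counter

def boundary_edge_rate(faces):
    """Fraction of mesh edges used by exactly one face (boundary edges)."""
    edge_counts = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edge_counts[(min(u, v), max(u, v))] += 1  # undirected edge key
    boundary = sum(1 for n in edge_counts.values() if n == 1)
    return boundary / len(edge_counts)

# two triangles sharing the edge (1, 2): 4 of 5 edges are boundary
print(boundary_edge_rate([(0, 1, 2), (1, 2, 3)]))  # -> 0.8
```

A watertight, manifold mesh has rate 0, so higher values indicate holes or dangling faces.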
## 5. Modality Focus Justification
**Reviewer Concern**: Why prioritize point-cloud conditioning? not other modality.
**Response**:
1. **Technical Impact**: Point-cloud conditioning is the _de facto_ standard for industrial remeshing pipelines (e.g., MeshAnythingV2, Meshtron).
2. **Task Complexity**: Class-conditioned generation on ShapeNet is oversimplified compared to real-world application. Cross-modal systems (text/image → 3D) like EdgeRunner universally use point-cloud conditioning as their foundational step. These systems align image/text features to point-cloud latent spaces. Currently, this method is not very effective.
3. **Scope Alignment**: Our work focuses on coordinate merging, which are modality-agnostic. Modality-specific adaptations (e.g., encoders for text/images) would not affect our core contribution. Point-cloud conditioning suffices to validate our method. | Summary: This paper introduces (1) Per-Token-Mesh-Entropy (PTME), a novel metric for evaluating mesh tokenizers without requiring training, and (2) "Rearrange & Merge Coordinates" (RMC) approach that improves existing tokenizers by rearranging and merging frequently occurring coordinate patterns. Experiments conducted with multiple tokenization methods (MeshXL, MeshAnythingV2, and EdgeRunner) show that their approach achieves a state-of-the-art compression ratio and improves the quality of generated meshes.
Claims And Evidence: The paper makes several key claims that are generally well-supported by evidence:
PTME effectively evaluates tokenizers without training: The authors define PTME as a product of entropy and compression rate, demonstrating through experiments that this metric correlates well with the performance of different tokenization methods without requiring training. Table 1 shows a clear relationship between lower PTME values and better mesh generation quality.
Basic merge coordinates (MC) fails to reduce PTME: The authors show that naively applying token merging increases compression ratio but paradoxically increases PTME, which explains why it doesn't improve generation quality. Figure 1(b) and Table 1 provide clear evidence for this claim.
RMC improves tokenizer efficiency and generation quality: The combination of rearrangement and merging achieves significant improvements in compression ratios (up to 21.2% with EdgeRunner) and generation quality, particularly visible in higher face count meshes. Figure 4 provides convincing visual comparisons.
RMC increases usable mesh count: Figure 6 demonstrates that RMC allows the model to process more meshes within context window constraints, expanding the training data available to the model.
Methods And Evaluation Criteria: I think the evaluation is not enough. See **Experimental Designs Or Analyses** and **Other Strengths And Weaknesses**.
Theoretical Claims: The proposed Per-Token-Mesh-Entropy and Coordinates Merging are sound and reasonable.
Experimental Designs Or Analyses: A comparison with BPT should be added for point-cloud-conditioned generation. Also, the test data are not explained in detail: how are they selected, and why is no complex mesh shown in the figure?
BPT: Scaling Mesh Generation via Compressive Tokenization.
Supplementary Material: Yes. Per-Token-Mesh-Entropy and Further Results.
Relation To Broader Scientific Literature: The Coordinates Merging is promising for data compression which can be used in many areas.
Essential References Not Discussed: Most of the related methods are mentioned.
Discussion with [1] in terms of open-surface modelling would be better.
[1] DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation. arXiv 2024.
Other Strengths And Weaknesses: **Weaknesses**: There are no text/image-conditioned mesh generation results to compare with other methods and validate the effectiveness of the proposed method.
**Weaknesses**: Limitations are not discussed.
Other Comments Or Suggestions: More complex generated mesh structures should be included, such as the visualizations in Meshtron: High-Fidelity, Artist-Like 3D Mesh Generation at Scale.
Questions For Authors: How is the data selection done?
How is closeness guaranteed for the watertight meshes?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## 1. Benchmarking with BPT Method
**Reviewer Concern**: Missing comparison with BPT for point-cloud conditioned generation.
**Response**:
We have added quantitative and qualitative comparisons with BPT under identical settings:
| Method | Compress Ratio ↓ | HD ↓ | CD ↓ | Boundary Edge Rate ↓ | Topology Score ↑ | Human Preference ↑ |
|--------------|-------------------|-------|-------|----------------------|------------------|--------------------|
| BPT | 0.260 | 0.275 | 0.121 | 0.88 | 66.7 | 2.7 |
| EDR + RMC | **0.212** | 0.280 | 0.123 | **0.85** | **68.2** | **2.8** |
- RMC achieves better compression ratio (**21.2%** vs 26.0%) and topology quality.
- Visual comparisons of complex cases provided in [`bpt_compare`](https://anonymous.4open.science/r/FreeMesh-1BB5/figures/bpt_compare.png).
## 2. Complex Mesh Generation
**Reviewer Concern**: Lack of high-fidelity complex mesh results.
**Response**:
- **Current Limitation**: Our method uses a plain Transformer with a 9k context window, which restricts high-face mesh generation capability. The comparison above with BPT shows our relatively complex cases.
- **Future Direction**: Integrating RMC into architectures like Meshtron (hourglass Transformer) or DeepMesh (BPT+Meshtron+DPO) could bridge this gap.
## 3. Data Selection Protocol
**Reviewer Concern**: Clarify data selection.
**Response**: Please refer to our response to Reviewer NgVn (Q3).
## 4. Multi-Modality Evaluation
**Reviewer Concern**: Missing text/image-conditioned results.
**Response**:
As highlighted in our response to Reviewer Ri2A (Q5), we prioritize point-cloud conditioning. Currently, direct end-to-end training of text/image conditioning yields poor results. Other methods train with point-cloud conditioning in the first stage and align the text or image with the point cloud in the second stage; since the decoder is frozen in this process, it suffices to evaluate only the point-cloud condition.
## 5. Watertight Meshes Justification
**Reviewer Concern**: How closeness is guaranteed for watertight meshes?
**Response**:
Current explicit mesh representations (including ours) cannot inherently guarantee watertightness. To address this, we implement a post-processing pipeline ([`post_process.py`](https://anonymous.4open.science/r/FreeMesh-1BB5/data/post_process.py)) that ensures watertight meshes through three steps:
1. **Hole filling** using robust mesh completion algorithms,
2. **Small component removal** to eliminate floating artifacts,
3. **Folded face repair** via mesh regularization.
This pipeline leverages established mesh processing libraries to robustly mitigate non-watertight geometry in final outputs.
## 6. DI-PCG Discussion
We will add this paper in our related work. | Summary: Core contributions are a new metric and a new approach to mesh tokenization. The new approach to tokenization uses BPE style compression, compress the tokenized representation of a mesh for the purposes of autoregressive mesh generation, getting additional compression by utilizing the permutation invariance of mesh representations and rearranging before the merging that happens in BPE, a strategy that is compatible with existing tokenization works.
## update after rebuttal
Review unchanged after rebuttal; it was written bearing in mind that the weak explanation of dataset creation can still be fixed and that the underwhelming length and amount of discussion of the many results/graphs shown can still be expanded. The major concerns that might have negatively changed my review were addressed in the rebuttal.
Claims And Evidence: Running baselines with other tokenization methods, they have evidence that these contributions of rearranging with BPE improves the training quality, measuring chamfer and Hausdorff distances in their setup with their given model architecture for autoregressive mesh generation. The evidence they provide for their new metric PTME is a bit less rigorous, and would benefit from some kind of correlation coefficient calculated between PTME and chamfer, and PTME and Hausdorff over a few different hyperparameter settings (vocab size for instance), whereas right now they single out a single example in their table where the compression ratio fails to reflect the improved performance. The evidence they have for their new tokenization strategy improving the size of trainable data is well done and clear and concise.
The paper would benefit from a longer discussion on the results, it feels slightly rushed towards the end, figure 5 for instance isn't really discussed at all. Further discussion on figure 1 and why PTME increases when you increase the vocab size is also very much needed. If we expect vocabulary size to improve the performance monotonically, then this behavior of PTME as a metric needs to be justified as to how PTME is valid given this experiment, and if we expect vocabulary size to have a saturation point on downstream performance, then this also needs to be discussed.
I would like some confirmation (given that you discuss needing to exclude certain meshes with large tokenizations from training) that you use the same set of 10,000 meshes during training across all runs in order to avoid biasing your results towards models with higher compression ratios that could see more complex meshes end up in their training set. I'd also like more information on how you chose the 10,000 meshes in the training set, given that you have over 1million meshes, and 45k meshes that could be trained on with the 9k token limit for even RAW which has the lowest compression ratio.
Methods And Evaluation Criteria: It seems pretty straightforward and makes sense for how they evaluated this. I have some unanswered questions about out of distribution generalization performance, especially given the aforementioned lack of discussion on how they generated their training / test set from the datasets specified
Theoretical Claims: No proofs
Experimental Designs Or Analyses: My issues were enumerated earlier on experimental design, not enough information specified about the generation of the training / test sets from the datasets specified, not enough discussion on the figures provided. The only figure that I didn't see in the paper and wanted to is a figure with correlation coefficients for their new proposed metric with the metrics we care about after training, and this is the only issue that is hard to fix about the paper as is.
Supplementary Material: Yes, just the math and the further result. The further result feels a bit strange to have in the suppmat, since this feels like this is a standalone experiment instead of something that expands upon some existing experiments in the main body of the paper.
Relation To Broader Scientific Literature: Narrow but interesting improvement in a specific flavor of 3D generation via mesh autoregression. Mesh autoregressive methods can suffer in their inability to generate / train on large meshes, so this also improves the utilization of existing datasets by a large amount when using this technique of autoregressive mesh generation.
Essential References Not Discussed: I appreciate the fact that a lot of references provided were recent, I would appreciate further discussion on other mesh tokenization methods, "Scaling Mesh Generation via Compressive Tokenization" in particular was cited as the architecture for the mesh generation method, and is a paper specifically about better tokenization through compression, and no mention of it in the related works other than to introduce autoregressive mesh generation as a whole. More extensive related works on mesh tokenization specifically is warranted, edgerunner and meshanythingv2 were the only ones brought up and were only given a few sentences. This is the bulk of the method section, so it really really needs further related works to better contextualize where this work sits in achieving the stated purpose of improving tokenization itself as a means to improve performance on mesh generation.
Other Strengths And Weaknesses: I think the main weaknesses come from lack of discussion on existing related works on mesh tokenization as a means to improve downstream performance, and the underwhelming discussion of the figures provided. The nitpicks on the lack of explanation on the experimental setup too need to be fixed. The figures themselves are informative and leave a high ceiling for what this paper could be, which is a strength of the paper. The metric seems novel that they introduce as is the method they introduce.
Other Comments Or Suggestions: Some other nitpicks:
"H120 GPU" needs to be fixed
I'm a bit confused what you mean when you say "mapping integer coordinates (0-127) to atomic Chinese character units"
Questions For Authors: Questions / issues were raised above, specifically the issues raised in the "Claims and Evidence" section of the review I'd like answers to, and further information on how this tokenization sits in the research landscape against other tokenization works like BPT.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## 1. PTME-CD Correlation Analysis
**Reviewer Concern**: Need rigorous correlation between PTME and generation metrics.
**Response**:
For EDR+RMC, we calculated the Pearson correlation between PTME and Chamfer Distance (CD) under varying vocabulary sizes:
- **Pearson r = 0.965** , indicating a strong positive linear correlation.
- **Visualization**: [`EDR_RMC_PTME_CD`](https://anonymous.4open.science/r/FreeMesh-1BB5/figures/EDR_RMC_PTME_CD.png).
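For reference, the Pearson coefficient reported here can be computed directly (the data below are hypothetical, not the paper's PTME/CD measurements):

```python
import math

def pearson(xs, ys):
    # sample Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical (PTME, CD) pairs with a near-linear relationship
ptme = [0.40, 0.35, 0.30, 0.25]
cd = [0.20, 0.17, 0.15, 0.12]
print(pearson(ptme, cd))  # a value close to 1 indicates strong linear correlation
```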
## 2. PTME Increase with Naive Merging
**Reviewer Concern**: Why does Basic Merge Coordinates (MC) increase PTME?
**Response**:
PTME balances two factors:
1. **Compression ratio**: Merging increases average token length.
2. **Entropy**: Merging may flatten token distribution (higher entropy).
**Key Insight**:
- **Example**: Naive merging increases PTME; rearranging coordinates before merging lowers it. [`intuition.py`](https://anonymous.4open.science/r/FreeMesh-1BB5/intuition.py)
- **Proof**: High co-occurrence probability between adjacent pairs is critical for PTME reduction. [`proof`](https://anonymous.4open.science/r/FreeMesh-1BB5/figures/proof.png)
## 3. Training Set & Generalization
**Reviewer Concern**: Data selection and generalization analysis.
**Response**:
- **Data Selection**: Please refer to our response to Reviewer NgVn (Q3).
- **Generalization**: While our method demonstrates robustness on simple geometries, architectural models with complex geometries often fail. [`failure_case`](https://anonymous.4open.science/r/FreeMesh-1BB5/figures/failure_case.png)
## 4. Training on Meshes with the Same Face Count
**Reviewer Concern**: Exclusion of high-face meshes.
**Response**:
We conducted experiments on meshes with 500-1k faces selected based on maximum RAW sequence length from the original training set. Results demonstrate consistent improvements:
| Method | CD (w/o RMC) | CD (w/ RMC) |
|----------|--------------|-------------|
| RAW | 0.201 | 0.183 |
| AMT | 0.172 | 0.132 |
| EDR | **0.154** | **0.118** |
## 5. Vocab Size Saturation
**Reviewer Concern**: Does CD plateau with larger vocab?
**Response**:
- PTME and CD are linearly correlated (r=0.965).
- Saturation occurs when PTME stabilizes (vocab ~8k), aligning with CD trends.
## 6. Figure 5 Discussion
**Reviewer Concern**: Insufficient analysis of compression ratio trends.
**Response**:
We discuss it in Line 263: Larger vocab sizes reduce compression ratio.
## 7. GPU Specification Correction
**Typo Fix**:
- Original: "H120 GPU" → Revised: "NVIDIA H20 GPU".
## 8. Chinese Character Mapping
**Reviewer Concern**: I'm confused about "mapping integer coordinates (0-127) to atomic Chinese character units."
**Response**:
This mapping ensures compatibility with SentencePiece’s tokenization workflow while preserving coordinate integrity:
1. **Purpose**: SentencePiece requires string inputs, but naive coordinate-to-string conversion (e.g., `(12, 34)` → `"12,34"`) splits digits into separate tokens (`["1", "2", ",", "3", "4"]`).
2. **Solution**: We create a **bijective mapping** where each integer (0-127) corresponds to a unique Chinese character (e.g., `124` → "亜"), ensuring each coordinate occupies **one atomic token**.
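A minimal sketch of such a bijective mapping; the character table here is hypothetical, using an arbitrary offset into the CJK Unified Ideographs block rather than the authors' actual table:

```python
# Hypothetical base codepoint; the paper's real character table may differ.
CJK_BASE = 0x4E00  # start of the CJK Unified Ideographs block

def coord_to_char(v: int) -> str:
    """Map one quantized coordinate (0-127) to a single atomic character."""
    assert 0 <= v <= 127
    return chr(CJK_BASE + v)

def char_to_coord(c: str) -> int:
    """Inverse mapping: recover the integer coordinate from its character."""
    return ord(c) - CJK_BASE

def encode_coords(coords):
    """Render a coordinate tuple as a string for SentencePiece training,
    one character per coordinate, so no digit-level fragmentation occurs."""
    return "".join(coord_to_char(v) for v in coords)

def decode_coords(s):
    return tuple(char_to_coord(c) for c in s)
```

Under this scheme `(12, 34)` becomes a two-character string, so the tokenizer sees exactly two atomic units instead of five digit/punctuation tokens.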
Implementation:
- Tokenizer training: [`train_vocab.py`](https://anonymous.4open.science/r/FreeMesh-1BB5/bpe/train_vocab.py)
- Bidirectional conversion: [`decimal_to_chinese.py`](https://anonymous.4open.science/r/FreeMesh-1BB5/data/data_utils.py)
This prevents token fragmentation and maintains structural consistency for model training.
Summary: The manuscript adapts subword tokenization techniques from natural language processing to compress mesh coordinate sequences, proposing the Rearrange & Merge Coordinates method to achieve higher mesh encoding efficiency while being easily integrated into existing mesh generation frameworks. Additionally, the manuscript introduces an entropy-based theoretical framework to evaluate mesh tokenizers. The manuscript conducted experiments on a dataset and achieved a token compression ratio of 21.2% compared to EdgeRunner.
Claims And Evidence: The PTME is relatively straightforward in design and lacks in-depth theoretical discussion (or experimental validation) to prove the effectiveness of this indicator (compared with indicators such as perplexity).
Methods And Evaluation Criteria: The experimental design, particularly Table 1, is too simple, and there is a lack of evaluation effects under different numbers of faces. It would be valuable to assess whether the method outperforms baselines on Chamfer distance for meshes with lower face counts (~200 to ~500).
Theoretical Claims: This paper doesn't use very deep theoretical knowledge and mathematical proofs. Based on the current formulas, the claims in the paper are correct.
Experimental Designs Or Analyses: 1. The experimental design, particularly Table 1, is too simple, and there is a lack of evaluation effects under different numbers of faces. It would be valuable to assess whether the method outperforms baselines on Chamfer distance for meshes with lower face counts (~200 to ~500).
2. The specific filtering process of the training data (filtered Objaverse) is not detailed, and the selection criteria for 10K mesh samples are also unclear, which limits the reproducibility of the results. More detailed statistics for the training data and validation data are needed.
Supplementary Material: Yes, I reviewed all parts of the supplementary material.
Relation To Broader Scientific Literature: Compared to the existing methods, this paper compress mesh coordinate sequences, proposing the Rearrange & Merge Coordinates method to achieve higher mesh encoding efficiency while being easily integrable into existing mesh generation frameworks.
Essential References Not Discussed: In the research area of mesh generation using autoregressive models, there are currently few relevant papers. I believe the authors have done a good job of discussing the current research status and related work.
Other Strengths And Weaknesses: Strengths:
1. The paper adapts subword tokenization technology from natural language processing to compress mesh coordinate sequences, achieving higher efficiency, conforming to common sense, and is easy to integrate into the existing mesh generation framework.
2. The paper introduces an entropy-based theoretical framework to evaluate mesh tokenizers.
3. The comparison between Per-Token-Mesh-Entropy and Vocab Size is intuitive.
Weaknesses:
1. The PTME is relatively straightforward in design and lacks in-depth theoretical discussion (or experimental validation) to prove the effectiveness of this indicator (compared with indicators such as perplexity).
2. The experimental design, particularly Table 1, is too simple, and there is a lack of evaluation effects under different numbers of faces. It would be valuable to assess whether the method outperforms baselines on Chamfer distance for meshes with lower face counts (~200 to ~500).
3. The specific filtering process of the training data (filtered Objaverse) is not detailed, and the selection criteria for 10K mesh samples are also unclear, which limits the reproducibility of the results. More detailed statistics for the training data and validation data are needed.
Other Comments Or Suggestions: Please refer to "Other Strengths And Weaknesses"
Questions For Authors: 1. Why do RAW and AMT perform poorly in the ~500 face column of the RAW row in Figure 4? According to their paper, both methods should have been trained on a mesh dataset consisting of ~500 faces and should be able to handle meshes of this size.
2. The proposed method achieves compression of mesh coordinate sequences compared to baselines. Does this reduction in sequence length lead to a measurable decrease in fine-tuning time?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: ## 1. PTME vs Perplexity (PPL)
**Reviewer Concern**: Lack of theoretical validation for PTME compared to metrics like PPL.
**Response**:
1. **Fundamental Difference**:
- PPL requires model training and correlates poorly with final generation quality in our task.
- Empirical observation: loss plateaus early (e.g., 0.2 for vocab=8k, 0.1 for vocab=256), while generation quality keeps improving past 100k steps.
- Weak correlation: we calculated Pearson's $r = -0.407$ ($p = 0.423$) between $PPL = e^{\text{Loss}}$ and CD:
| Method | Loss (w/o RMC) | CD (w/o RMC) | Loss (w/ RMC) | CD (w/ RMC) |
|----------|---------------|--------------|--------------|-------------|
| RAW | 0.103 | 0.326 | 0.202 | 0.282 |
| AMT | 0.105 | 0.219 | 0.205 | 0.164 |
| EDR | 0.099 | 0.198 | 0.198 | 0.123 |
2. **PTME Advantage**:
- Training-free evaluation of tokenizers.
- Strong correlation with downstream CD (r=0.965, p=0.0004) as shown in [`EDR_RMC_PTME_CD`](https://anonymous.4open.science/r/FreeMesh-1BB5/figures/EDR_RMC_PTME_CD.png).
## 2. Evaluation on Low-Face Meshes (~200-500 faces)
**Reviewer Concern**: Missing analysis on meshes with fewer faces.
**Response**:
- **Performance Consistency**:
| Method | CD (w/o RMC) | CD (w/ RMC) |
|----------|--------------|-------------|
| RAW | 0.171 | 0.163 |
| AMT | 0.152 | 0.121 |
| EDR | **0.144** | **0.106** |
Most simple cases (~200-500 faces) already show lower Chamfer distance, but they still benefit from the RMC method.
## 3. Data Selection
**Reviewer Concern**: Unclear Data Selection.
**Response**:
We filter out low-poly CAD meshes and retain human-crafted meshes with complex geometries. Training data predominantly contains meshes with <5k faces. During training, sequences exceeding 9k tokens after tokenization are discarded (see [`data_distribution`](https://anonymous.4open.science/r/FreeMesh-1BB5/figures/data_distribution.png)), showing a long-tailed distribution where most meshes have moderate face counts.
Our RMC method enables training on more high-face meshes while reducing average sequence length. Verification scripts: [`get_data_distribution.py`](https://anonymous.4open.science/r/FreeMesh-1BB5/data/get_data_distribution.py). For the number of usable meshes under each method, please refer to Section 4.3.
The tokenizer is trained on the first 10k meshes (by UID) from 1M filtered Objaverse samples. Evaluation data strictly follows the same distribution as training.
## 4. Training Efficiency
**Reviewer Concern**: Does sequence compression reduce training time?
**Response**:
- **Empirical Evidence**:
Initially, to ensure fairness in our experiments, all the methods were trained for the same duration, and we directly compared the performance based on the last checkpoint. From the perspective of training steps, a shorter sequence length undoubtedly leads to a shorter time per iteration. However, this isolated time metric doesn't provide a comprehensive picture of the training efficiency.
As observed in **DeepMesh** (DeepMesh: Auto-Regressive Artist-mesh Creation with Reinforcement Learning; see their Figure 10), shorter sequences can accelerate the convergence process.
## 5. Baseline Performance on ~500 Faces
**Reviewer Concern**: Why do RAW/AMT perform poorly compared to their original papers?
**Response**:
- **Key Limitation**:
- RAW representation: our decoder (512M) is smaller than the MeshXL (1.3B) architecture.
- AMT representation: to isolate the contribution of merged tokens, we omitted the face embeddings used in the MeshAnythingV2 baseline.
- **Case Selection**: While some easy examples were well-generated by all methods, we intentionally selected challenging examples with high surface detail from the 500-face dataset. | null | null | null | null | null | null |
A Novel Characterization of the Population Area Under the Risk Coverage Curve (AURC) and Rates of Finite Sample Estimators | Accept (poster)
Summary: This paper concerns evaluating the performance of *selective classifiers*, where the classifier has the option of abstaining from making a prediction when its confidence is low. For such classifiers, the Area Under the Risk Coverage curve (AURC) has been a commonly used (population) metric for evaluating their performance. This paper finds an equivalent representation of the AURC, based on which an estimator is provided. Asymptotic properties of the estimator are analyzed, and the proposed method is evaluated in simulations and real-data examples.
Claims And Evidence: The major claim of the paper is that it provides a new estimator for AURC that is consistent asymptotically and computationally efficient. The claim is supported by proofs and numerical evidence.
Methods And Evaluation Criteria: The estimator is constructed by first establishing an equivalent formulation of the population quantity and then plugging in sample-version estimates. I wonder, what if one uses the estimator in Equation (10) or (11) directly (for example, in the numerical studies)? What is the advantage of the proposed method over it?
Theoretical Claims: I briefly went over the outline of the proof for the consistency.
Experimental Designs Or Analyses: Yes, I have checked the experimental design. My questions are:
1. What if one uses the estimator in Equation (10) or (11) directly (for example, in the numerical studies)? What is the advantage of the proposed method over it?
2. In some of the plots, the confidence intervals are quite wide and the performance of the candidate methods is not quite distinguishable. I wonder if more accurate results can be presented.
3. What is the take-away of Table 1? There do not seem to be significant differences across different methods.
Supplementary Material: I briefly went over the outline of the proof for the consistency.
Relation To Broader Scientific Literature: This paper finds an equivalent representation of AURC, based on which an estimator is provided. Asymptotic properties of the estimator are established.
Essential References Not Discussed: NA.
Other Strengths And Weaknesses: NA.
Other Comments Or Suggestions: NA.
Questions For Authors: 1. What if one uses the estimator in Equation (10) or (11) directly (for example, in the numerical studies)? What is the advantage of the proposed method over it?
2. In some of the plots, the confidence intervals are quite wide and the performance of the candidate methods is not quite distinguishable. I wonder if more accurate results can be presented.
3. What is the take-away of Table 1? There do not seem to be significant differences across different methods.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: ***Response to Reviewer BrTV:***
**Q1:** See general response from "Notably, the Monte Carlo estimator using $\hat{\alpha}_{i}$...from a theoretical perspective."
**Q2:** Take Fig. 3 as an example. When the confidence intervals are wide, it is typically due to the estimator being evaluated with a small sample size. This is expected, as small $n$ naturally leads to higher variability. In our analysis, we show that the convergence rate for our estimators is $\mathcal{O}(\sqrt{\ln(n)/n})$, which explains the wider intervals at smaller sample sizes. Nevertheless, as the sample size increases, the variance—and hence the confidence intervals—of the estimators converge as expected. With a sufficiently large sample size, both Monte Carlo estimators demonstrate consistency.
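As a rough, standalone illustration of this behaviour (a toy simulation, not the paper's experiments), one can evaluate the standard plug-in empirical AURC under 0/1 loss on synthetic data whose error probability decreases with the confidence score, and observe the estimator's spread shrink as $n$ grows:

```python
import math
import random

def empirical_aurc(losses, scores):
    """Plug-in empirical AURC: mean selective risk over all coverage levels."""
    order = sorted(range(len(losses)), key=lambda i: -scores[i])
    total, acc = 0.0, 0.0
    for k, i in enumerate(order, start=1):
        total += losses[i]   # cumulative loss of the k most confident points
        acc += total / k     # selective risk at coverage k/n
    return acc / len(losses)

def estimator_std(n, trials=200, seed=0):
    """Standard deviation of the estimator across repeated samples of size n."""
    rng = random.Random(seed)
    vals = []
    for _ in range(trials):
        scores = [rng.random() for _ in range(n)]
        # 0/1 loss with error probability 1 - score: confident points err less.
        losses = [1.0 if rng.random() > s else 0.0 for s in scores]
        vals.append(empirical_aurc(losses, scores))
    m = sum(vals) / trials
    return math.sqrt(sum((v - m) ** 2 for v in vals) / trials)

# The spread at small n (e.g. 50) is several times larger than at larger n
# (e.g. 800), consistent with a rate on the order of sqrt(ln(n)/n).
```

This mirrors the point in the response: wide confidence intervals at small sample sizes are expected, and the variance contracts as n increases.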
**Q3:** In Table 1, we show that our estimators perform comparably to SELE and outperform standard cross-entropy risk minimization. While [1] describes the SELE score as a “computationally manageable proxy of AuRC”, we demonstrate that it is possible to directly optimize consistent estimators of AURC using our formulations—rather than relying on a lower bound, as SELE does. In general, optimizing a lower bound is often less effective than optimizing the true objective or an upper bound. We are not concerned by the fact that SELE achieves similar results in terms of optimization—indeed, optimization is not the primary contribution of this paper. Rather, our findings illustrate that population AURC can, in fact, be optimized via batch training using our estimators as loss functions. The main focus of this work lies in the estimation of AURC and the theoretical foundation that supports it—not in advocating for AURC optimization. Regarding SELE, although [1] claims that $2\times$SELE serves as an upper bound for the empirical AURC (Theorem 8), we show in Appendix Section A.2 that this is not the case. Furthermore, our empirical results indicate that SELE is not a consistent estimator of the population AURC.
[1] Franc, Vojtech, Daniel Prusa, and Vaclav Voracek. "Optimal strategies for reject option classifiers." *Journal of Machine Learning Research* 24.11 (2023): 1-49.
Summary: This work addresses estimation of the AURC for risk-aware classification by selective classifiers (SC).
It considers the theoretical properties of pre-existing AURC estimators, specifically SELE, and proposes several new population-based estimators with improved convergence properties.
Furthermore, they conduct empirical analysis of their findings to confirm the properties predicted by theory.
Claims And Evidence: The claims are that the two proposed estimators have improved convergence properties compared to established approaches.
The claims are backed theoretically and largely check out empirically.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense.
Theoretical Claims: The theoretical section is large, the proofs were superficially checked and appear to be in good order.
Experimental Designs Or Analyses: The experiments were set up reasonably. Several datasets were used, effect of 0/1 as well as CE loss were investigated.
The proposed population estimators were also used to tune the models successfully.
Supplementary Material: The appendix was not provided as a part of the submitted PDF and was not carefully checked.
Relation To Broader Scientific Literature: This paper falls into the broader work on risk aware machine learning.
Essential References Not Discussed: None to extent of my knowledge.
Other Strengths And Weaknesses: The strength of the paper is in depth analysis of statistical properties of AURC estimators.
Other Comments Or Suggestions: Minor point: formulas in Proof 3.4 overflow into the page margins.
Questions For Authors: 1. The theoretical results derived apply to a fixed classifier. Can this approach be generalized to the epistemic risk with respect to multiple modes of the posterior distribution $p(\mathbf{w} \mid \mathcal{D})$ ?
2. How would introducing a cost function $c$ affect the properties of the estimators?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: ***Response to Reviewer 3mMn:***
**Claims and Evidence:**
We largely agree that our analysis focuses on the two proposed Monte Carlo estimators. In fact, one of them is equivalent to the widely used plug-in estimator, and our results confirm the effectiveness of this plug-in estimator from a theoretical perspective. Moreover, our analysis provides practical guidelines on which estimator to use and demonstrates that the novel estimator based on $\hat{\alpha}^{\prime}_i$ achieves the same convergence rate as the plug-in estimator.
**Comments.**
Thanks and we'll correct this format issue in the final version.
**Question.**
**Q1:** A straightforward way to address this issue is to consider the following expression:
$$
\mathbb{E}\_{f\sim \mathbb{P}(f|\mathcal{D})}[\text{AURC}(f)] = \mathbb{E}\_{f\sim \mathbb{P}(f|\mathcal{D})}\left[ \mathbb{E}\_{x} \left[ \mathcal{R}(f,g) \mid \tau = g(x)\right] \right]
$$
where $\tau$ is the threshold value. This can be computed directly using our method with Monte Carlo sampling. By applying Fubini’s Theorem, we can exchange the order of the two expectations, leading to:
$$
\mathbb{E}\_{f\sim \mathbb{P}(f|\mathcal{D})}[\text{AURC}(f)] = \mathbb{E}\_{x}\left[ \mathbb{E}\_{f\sim \mathbb{P}(f|\mathcal{D})} \left[ \mathcal{R}(f,g) \mid \tau = g(x)\right] \right].
$$
Since $g(x)$ depends on $f(x)$ under the posterior distribution $\mathbb{P}(f|\mathcal{D})$, and the predictions are made through a full Bayesian framework, this formulation allows the evaluation of AURC in a way analogous to the standard AURC for a fixed model. We can envision several ways to define potential quantities of interest based on model uncertainty. Many of these quantities could be potentially connected to AURC and epistemic risk, and exploring these relationships could open up valuable avenues for further investigation.
**Q2:** Given an explicit cost function, it can naturally be incorporated into a loss function. If a cost is explicitly assigned to abstention, then evaluating AURC becomes unnecessary. Instead, with appropriate analysis—such as that in [2]—one can identify the optimal operating point that determines a threshold on the confidence score function (CSF). This is a well-studied area in the literature. However, it is outside the scope of this paper, as our focus is on analyzing the entire risk-coverage curve. Unlike approaches that rely on a fixed cost, AURC evaluates performance across all possible cost trade-offs, providing a broader view of model behavior under selective prediction.
[2] Cortes, Corinna, Giulia DeSalvo, and Mehryar Mohri. "Learning with rejection." Algorithmic Learning Theory: 27th International Conference, ALT 2016, Bari, Italy, October 19-21, 2016, Proceedings 27. Springer International Publishing, 2016. | Summary: Introduces population Area Under the Risk-Coverage Curve (AURC) for selective classifiers. Statistical properties of these estimators are analyzed and evaluated on CIFAR datasets with VGG models.
Claims And Evidence: Needs more evidence evaluating AURC optimization (convergence, computational difficulty, etc).
While mathematically sound, the practical benefits of the proposed approach over existing methods are not convincingly demonstrated through compelling real-world examples.
Methods And Evaluation Criteria: Only one baseline (SELE) is compared. Other selective classification methods should also be evaluated as baselines.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Improvements over SELE baselines are marginal. Analysis or discussion of computational overhead of the proposed estimators should be compared to baselines.
Supplementary Material: N/A
Relation To Broader Scientific Literature: Connection to broader literature on uncertainty estimation in deep learning seems superficial
Essential References Not Discussed: Missing discussion and comparison of previous work on learning with rejection:
* https://cs.nyu.edu/~mohri/pub/rej.pdf
* https://arxiv.org/html/2107.11277v3
Other Strengths And Weaknesses: Limited practical applicability and novelty. The novelty compared to existing work on AURC seems incremental.
Other Comments Or Suggestions: How to generalize beyond classification accuracy. What about calibration, robustness to distribution shift, or performance on minority classes?
Questions For Authors: * What are the real-world implications of these marginal improvements?
* How do your estimators perform under distribution shift or with imbalanced datasets?
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1:
Rebuttal: ***Response to Reviewer cuHV: (continued): Please refer to the initial part of this response above.***
**Essential Reference Not Discussed:**
We agree that learning with rejection is related to AURC through selective classifiers, where the model includes a reject option. However, this line of work is not directly relevant to the literature on AURC estimation, which is the primary focus of our study. As highlighted in [3], in the learning-with-rejection setting, the rejection threshold is learned by optimizing an explicit cost function (e.g., Eq. (9) in [3], Eq. (1) in [2]). This work primarily focuses on learning the rejector and evaluating models equipped with a rejector. In contrast, the objective of AURC is fundamentally different—it is an evaluation metric that measures the expected selective risk across all possible threshold values induced by the data. Optimizing AURC is equivalent to finding a model that achieves consistently better accuracy across the entire range of rejection thresholds. Although [3] uses the Area Under the Accuracy-Reject Curve (referred to as AURC in their Sec 3.2) to evaluate the trade-off between model accuracy and rejection, the intuition behind their Fig. 7 aligns with the Area Under the Risk-Coverage Curve (AURC) used in our paper. However, they treat AURC purely as an evaluation metric and do not attempt to estimate or optimize it, which is the central goal of our work. While these papers provide interesting background on selective classification, they are not directly relevant to AURC estimation and analysis. Nevertheless, we will cite them in the final version in the introduction section as related work.
**Weakness:**
As noted in [3], AURC is an important evaluation metric for analyzing the risk-coverage trade-off. We believe it is valuable to understand the characteristics of this metric more deeply. In our work, we provide a novel definition of the population AURC (see Definition 3.2), framing it as a reweighted risk function, where the weighting term admits a closed-form expression (see Eq. (7)). To estimate the population AURC, we introduce two plug-in estimators derived via Monte Carlo methods. One of these corresponds to the widely used empirical AURC found in existing literature, while the other—based on the novel weight estimator $\hat{\alpha}^{\prime}_i$—is proposed in this paper. As highlighted in this paper, this novel formula achieves a computation cost $\mathcal{O}(n\log(n))$, offering a significant improvement over the naïve empirical AURC implementation. Importantly, we also establish convergence rates for both estimators, which, to our knowledge, have not been addressed in prior work. While [1] claims that “SELE loss is a close approximation of the AURC and, at the same time, amenable to optimization,” our results show that SELE is not a reliable estimator for the population AURC when compared with our estimators. Furthermore, our findings show that AURC can be effectively optimized via batch training using our estimators as loss functions, yielding significant improvements over standard cross-entropy risk minimization. We believe these results are important for the application of AURC estimation and optimization.
**Other comments:**
Thanks! These are indeed the directions we're considering for future work. However, our current goal is to first publish this paper, which provides a solid foundation for AURC estimation, independent of those other directions. AURC is an interesting metric in its own right and deserves a proper analysis—both in terms of its characteristics and how it should be estimated.
**Q1:** These are not marginal improvements—they provide precise guidance on how AURC should be estimated and optimized. While SELE may achieve results comparable to our proposed estimator in terms of optimization, it is still not a reliable estimator of the population AURC. As we have emphasized, optimization is only a minor aspect of this paper, and we are comfortable with the fact that SELE performs similarly in that regard. Our primary focus is on the estimation side, for which we provide extensive theoretical and empirical analysis. In particular, our paper shows what convergence rates can be expected for these plug-in estimators, providing a theoretical underpinning for further analysis of the AURC objective.
**Q2:** Estimation under distribution shift is an crucial and interesting direction, but it is beyond the scope of this paper. Our theoretical characterization of AURC and the convergence rates we establish provide a solid foundation for future work on understanding the effects of distribution shift and related challenges.
**Reference**
[1] Franc, V., Prusa, D., & Voracek, V. (2023). *Optimal strategies for reject option classifiers*. JMLR.
[2] Cortes, C., DeSalvo, G., & Mohri, M. (2016). *Learning with rejection*. In ALT 2016.
[3] Hendrickx, K., et al. (2024). *Machine learning with a reject option: A survey*. Machine Learning.
Summary: This paper proposes a new approach for learning classifiers for selective classification tasks, where a classifier must not only make predictions but also decide whether to predict or to abstain. It reformulates the area under the risk coverage curve (AURC) as an optimization objective during training of selective classifiers. Accompanying this is an approach for estimating these classifiers using Monte Carlo methods. The results show that the proposed approach outperforms the existing approach of performing uncertainty estimation on top of pre-trained models for selective classification.
Claims And Evidence: The submission claims that Selective Classifier Learning (SELE) has some drawbacks – specifically, it learns to predict uncertainty scores based on outputs of pretrained models, and that the proposed approach of joint selective classification learning and uncertainty estimation can alleviate this issue. The results show that their framework outperforms SELE in terms of MAE.
Methods And Evaluation Criteria: The risk-coverage curve is a standard tool for selective classification. When we need to abstain from predicting, coverage becomes an important metric of interest. Pairing coverage with performance on the test set in the form of risk, gives us the risk-coverage curve, the area under which is a suitable metric for evaluating selective classifiers.
Theoretical Claims: I took a high-level look at the proofs, but they are beyond the area of my expertise. It would be best if someone with more expertise would kindly check them.
Experimental Designs Or Analyses: The paper formulates a selective classifier based on the area under the risk coverage curve (AURC), and estimation via Monte Carlo methods. Instead of treating the AURC as a metric, the proposed approach suggests directly optimizing this metric during training.
Supplementary Material: This paper does not have a supplementary material.
Relation To Broader Scientific Literature: Seamless integration between safety-critical systems and machine learning models depends on frameworks such as selective classification. Understanding (and estimating) the metrics that govern these frameworks is a step forward toward deploying machine learning models in real-world safety-critical systems.
Essential References Not Discussed: The related works section details evaluation metrics used for selective classification, especially focusing on metrics related to the area under the risk coverage curve. This section also discusses approaches to uncertainty estimation and why the logit scaling over ensembling is chosen.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: ***General response:***
We thank all reviewers for the valuable feedback. We hope our response has addressed all the concerns.
**Supplementary Material:**
We indeed included the appendix as supplementary material; however, while two reviewers did not see it, the other two did and were able to check it. The appendix contains extensive additional empirical results as well as detailed theoretical proofs.
**Main Contribution:**
The primary goal of this work is to characterize the population AURC metric, with an emphasis on its interpretation as a redistribution of the risk function. To study this, we employ Monte Carlo methods and introduce two estimators based on weights $\hat{\alpha}\_{i}$ and $\hat{\alpha}\_{i}^{\prime}$. Notably, the Monte Carlo estimator using $\hat{\alpha}\_{i}$ corresponds exactly to the commonly used empirical AURC. This is to say, the weight estimators in Eqs. (12), (13), and (17) are equivalent, leading to the empirical AURC formulation given in Eq. (10) or Eq. (11). Our statistical analysis of this estimator—including its bias, MSE, and convergence rate—shows that this estimator is, in fact, principled from a theoretical perspective. The estimator with $\hat{\alpha}_{i}^{\prime}$ is a novel contribution of this paper, and we establish a connection between the two estimators in Eq. (20). The statistical analysis of both estimators is new and, to our knowledge, has not been addressed in the existing literature.
***Response to Reviewer eX5V:***
Thanks very much for your comments. We direct you to our general reviewer response, in particular the section about supplementary material.
***Response to Reviewer cuHV:***
**About Claims and Evidence:**
Although we show that the population AURC metric can be optimized using batch training, this is only a minor aspect of the paper and not its central focus. The core contribution of this work lies in characterizing the statistical properties of these two estimators. We not only establish their convergence rates theoretically in Proposition 3.6, but also empirically validate these results across multiple models (i.e. VGG13/16, ResNet 20/56/110/164, Bert, RoBERTa, ViT and Swin models) and diverse datasets (i.e. CIFAR10/100, ImageNet, Amazon).
The reviewer states that "Needs more evidence evaluating AURC optimization (convergence, computational difficulty, etc.)" We wholeheartedly disagree. As shown in Figures 3–6 and Figures S9–S28 (in total **152** Figures), we have provided an overwhelming amount of evidence of the optimization and convergence of the estimators. The computational complexity is $\mathcal{O}(n\log n)$ due to the need to sort the points by the CSF, and requires only a linear number of forward passes. Theoretical convergence rates are rigorously proved and demonstrated empirically.
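To make the $\mathcal{O}(n\log n)$ claim concrete, here is a minimal sketch of the standard empirical AURC computation (a generic illustration, not the authors' specific plug-in estimators; the function name is ours): sort the points by the confidence score function, then average the selective risk over all coverage levels.

```python
def empirical_aurc(losses, confidences):
    # O(n log n): sort points by confidence score (CSF), most confident first.
    ranked = sorted(zip(confidences, losses), key=lambda p: -p[0])
    running_loss, area = 0.0, 0.0
    for i, (_, loss) in enumerate(ranked, start=1):
        running_loss += loss          # total loss over the i most confident points
        area += running_loss / i      # selective risk at coverage i/n
    return area / len(ranked)         # average over all coverage levels
```

For example, with losses `[0, 1, 0, 1]` and confidences `[0.9, 0.8, 0.7, 0.6]`, the selective risks are 0, 1/2, 1/3, 1/2, and the empirical AURC is their mean, roughly 0.333.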
**Experimental Designs Or Analyses:**
In terms of AURC estimation, [1] proposed the SELE and claimed $2\times$SELE serves as an upper bound for the empirical AURC. Both SELE and its upper bound are used as baselines in our comparison. In terms of estimation performance, our two proposed plug-in estimators significantly outperform the SELE, which we show to be inconsistent. We demonstrate that they possess strong statistical properties—such as asymptotic unbiasedness and provable convergence rates—and validate these findings empirically through extensive experiments. These results provide strong evidence that our plug-in estimators are reliable for estimating the population AURC, whereas SELE and $2\times$SELE are not. In terms of AURC optimization, our estimators achieve performance comparable to the SELE score and both outperform models trained with the standard cross-entropy loss. While this optimization result is not the main focus of the paper, it serves to demonstrate that the population AURC can, in fact, be optimized using our estimators as loss functions within a batch training framework.
**Relation To Broader Scientific Literature:**
Our paper shows that AURC can be interpreted as a redistribution of the risk function, where the redistribution is guided by the confidence score function used to rank the dataset. This interpretation is the key characteristic of the population AURC. While we acknowledge that many other uncertainty estimation methods could potentially be integrated with AURC, this is not the main focus of our study. Instead, our work centers on AURC estimation and the statistical properties associated with it. Moreover, we empirically examine the AURC combined with uncertainty estimation methods such as MSP, Negative Entropy, MaxLogit, Softmax Margin, MaxLogit-$\ell_2$ norm, and Negative Gini Score. Our results, presented in Figure 6 and Figures S26–S28, show that the plug-in estimators consistently outperform SELE and $2\times$SELE, regardless of the chosen uncertainty estimation method. This is sufficient to demonstrate that SELE and $2\times$SELE are not reliable AURC estimators. | null | null | null | null | null | null |
LangDAug: Langevin Data Augmentation for Multi-Source Domain Generalization in Medical Image Segmentation | Accept (poster) | Summary: This paper presents LangDAug, a novel data augmentation method designed to address multi-source domain generalization challenges in medical image segmentation. Leveraging Langevin Dynamics (LD) and Energy-Based Models (EBMs), LangDAug generates intermediate samples bridging source domains to enhance model generalization across unseen target domains. The approach involves training EBMs to capture energy distributions between source domains, using LD to create bridging samples, and incorporating these samples into the segmentation model's training process. Theoretical analysis demonstrates that LangDAug induces regularization, reducing Rademacher complexity and improving generalization. Experiments on retinal fundus segmentation and 2D MRI prostate segmentation datasets show that LangDAug outperforms existing domain generalization methods and effectively complements domain randomization techniques like FedDG and RAM.
## update after rebuttal
While this work is application-oriented, after thoroughly reviewing all the responses, I remain concerned that the high computational cost and storage requirement could significantly limit the practical applicability of the proposed method. Therefore, I maintain my original score as 2.
Claims And Evidence: N/A
Methods And Evaluation Criteria: N/A
Theoretical Claims: N/A
Experimental Designs Or Analyses: N/A
Supplementary Material: Yes, all.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths:**
1. LangDAug introduces a new data augmentation method combining Langevin Dynamics and Energy-Based Models, offering a new perspective for solving multi-source domain generalization problems in medical image segmentation.
2. The method is supported by theoretical analysis, demonstrating its regularization effect and its ability to reduce Rademacher complexity, enhancing the reliability and scientific foundation of the approach.
3. This method can be used independently or integrated with existing domain randomization techniques (such as FedDG and RAM), showing its flexibility and practical applicability.
**Weaknesses:**
1. Training an EBM for every pair of source domains results in significant computational and storage requirements, especially when the number of source domains increases. Generating a large number of intermediate samples substantially increases training time, which may hinder the method's practicality for large-scale datasets.
2. The effectiveness of LangDAug heavily relies on the quality of the trained EBMs, but the paper lacks an in-depth discussion of EBM training stability and its potential impact on results.
3. Generating inter-domain samples to improve generalization is a reliable solution. However, this requires that the samples in the training set cover the sample space uniformly to obtain good interpolation results. What happens to the performance of the model if the distribution of samples in the training set is restricted?
Other Comments Or Suggestions: N/A
Questions For Authors: Pls refer to Weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for your comments. We have tried our best to answer your concerns. We will be happy to engage in follow-up discussion if you have further questions.
## Computational and Storage Requirements
Thanks for this comment. We acknowledged this in our limitations: our method increases storage and computational costs.
- Additional storage is needed for Langevin samples - specifically $(k/f)×n$ samples per domain pair (where $f$ is saving frequency, $n$ is total data points).
- Even though we train EBMs for each pair, our EBMs are lightweight and require only 0.357 hr/pair for training. Further, this can be parallelized to reduce this time.
- EBM training is performed offline, and models can be discarded after generating samples. This doesn't affect the main segmentation network's training or inference requirements.
- As shown in response to reviewer 1jQ9, LangDAug's training time (3.14 hrs) are comparable to existing methods like RAM (2.75 hrs) and less than FedDG (4.60 hrs).
- Increased training cost is an inherent limitation of DA methods that use synthetic samples. Solutions could include selective sampling (e.g., coresets) and shared architectures to jointly train EBMs with conditioning on source/target domains. We leave these techniques to be explored in future work.
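The $(k/f)\times n$ sample bookkeeping above can be sketched with a generic toy Langevin chain; the update rule, `grad_energy`, and the quadratic energy in the usage line are illustrative assumptions, not the paper's actual EBM:

```python
import math
import random

def langevin_chain(grad_energy, x0, k=40, beta=0.01, save_every=4):
    """Run k Langevin steps  x <- x - (beta/2) * dE/dx + sqrt(beta) * noise,
    keeping every `save_every`-th state, so one chain stores ~k/save_every samples."""
    x, saved = x0, []
    for t in range(1, k + 1):
        x = x - 0.5 * beta * grad_energy(x) + math.sqrt(beta) * random.gauss(0.0, 1.0)
        if t % save_every == 0:
            saved.append(x)
    return saved

# Toy energy E(x) = x^2 / 2, so grad E(x) = x; a 40-step chain stores 10 samples.
samples = langevin_chain(lambda x: x, x0=0.0)
```

With saving frequency $f$ = `save_every`, each chain contributes $k/f$ samples, which is where the $(k/f)\times n$ storage count per domain pair comes from.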
## Stability of EBMs
We appreciate the reviewer’s comment regarding the importance of EBM training stability. While our submission mentioned training details, we did not observe any instability. To show this, we conduct additional ablations to assess the stability of the EBMs used in our work. Specifically, we monitor the segmentation performance of our method under different EBM hyperparameter settings. We observe the effect of the number of Langevin steps ($k$), the step size ($\beta$), and the EBM complexity (by varying the number of conv blocks):
| | Domain 1 | | Domain 2 | | Domain 3 | | Domain 4 | |
|-------------------------|----------|-------|----------|-------|----------|-------|----------|-------|
| | mIoU | mDSC | mIoU | mDSC | mIoU | mDSC | mIoU | mDSC |
| $k$ | | | | | | | | |
| 20 | 66.18 | 71.07 | 70.39 | 80.29 | 77.67 | 82.45 | 75.39 | 84.31 |
| 40 | 78.79 | 87.00 | 75.05 | 85.87 | 81.00 | 88.91 | 80.51 | 88.68 |
| 60 | 78.59 | 87.39 | 74.11 | 85.85 | 81.46 | 88.06 | 80.94 | 87.72 |
| 80 | 75.92 | 83.82 | 72.35 | 82.84 | 79.03 | 85.22 | 78.95 | 85.59 |
| $\beta$ | | | | | | | | |
| 0.1 | 77.32 | 85.71 | 74.17 | 85.44 | 79.56 | 88.07 | 79.99 | 86.95 |
| 1 | 78.79 | 87.00 | 75.05 | 85.87 | 81.00 | 88.91 | 80.51 | 88.68 |
| 10 | 71.97 | 79.05 | 69.07 | 78.11 | 76.10 | 83.62 | 73.72 | 81.24 |
| #Conv blocks/resolution | | | | | | | | |
| 1 | 78.79 | 87.00 | 75.05 | 85.87 | 81.00 | 88.91 | 80.51 | 88.68 |
| 4 | 79.81 | 88.08 | 74.80 | 85.28 | 81.56 | 88.67 | 80.59 | 87.19 |
| 7 | 78.725 | 86.80 | 73.48 | 84.70 | 79.29 | 86.69 | 79.30 | 86.59 |
We see that under reasonable hyperparameter values, the performance of LangDAug is stable. In particular, a smaller $\beta$ and a moderate $k$ lead to good performance, whereas the EBM complexity has very minimal effect, indicating that lightweight EBMs are sufficient. From this, we conclude that LangDAug performs stably under reasonable hyperparameter settings.
## Restricted Training distribution
Thanks for this comment. In general, all DG methods rely on diversity in source domains, as noted in Sec. 1, 2.
This aligns with a fundamental assumption in DG - diverse source domains allow the model to learn representations for better generalization.
When the original distribution is restricted or sparse in certain regions, LangDAug helps fill these gaps, enriching the training distribution. While extremely limited diversity would challenge any DG method, our experiments show competitive performance even in difficult cases. We refer to Tab. 1 & 2 where Domain B (Tab. 1) & Domain C (Tab. 2) are the most difficult domains because of significantly different instruments used for image acquisition. In this case, the source domains are ineffective in capturing the corresponding target domain distribution. Despite this challenge, LangDAug outperforms other methods in several metrics for these domains. This demonstrates that while limited diversity in source domains is a fundamental challenge for all DG methods, our approach remains effective even under these constrained conditions. | Summary: The authors propose LangDAug, a Langevin Data Augmentation technique to improve multi-source domain generalization in medical image segmentation tasks. The core idea is to train Energy-Based Models (EBMs) via contrastive divergence to model the transitions between pairs of source domains and use Langevin dynamics (LD) to generate intermediate samples that serve as augmented data bridging domain gaps. Theoretically, the authors demonstrate that this augmentation method induces a regularization effect, leading to smoother optimization landscapes and better generalization by bounding model complexity based on intrinsic data manifold dimensionality. Experimental evaluations conducted on retinal fundus and prostate MRI segmentation datasets show that LangDAug outperforms previous methods across unseen domains.
Claims And Evidence: Overall, the paper’s main claims regarding the effectiveness and robustness of the proposed LangDAug approach are convincingly supported by empirical evidence (quantitative benchmarks and visualizations) and theoretical analysis. Some limitations exist concerning scalability and detailed validation of anatomical fidelity, but these do not substantially undermine the validity or clarity of the core contributions.
Methods And Evaluation Criteria: The proposed methodology and evaluation criteria used in the submission are clearly appropriate and well-justified for the stated objectives of the paper; however, I would also want to know if the proposed LangDAug could be used for 3D medical image segmentation tasks and the efficiency of doing so.
Theoretical Claims: Overall, the theoretical claims made by the authors are clearly articulated, logically consistent, and mathematically rigorous. The authors provide a coherent theoretical framework that convincingly justifies why LangDAug would improve domain generalization. There are no substantial mathematical errors or inconsistencies evident from the provided derivations.
Experimental Designs Or Analyses: The experimental design and analyses used in this submission are sound, valid, and appropriately rigorous for assessing domain generalization performance.
However, there are still some limitations:
- A detailed computational cost analysis is missing, as the proposed LangDAug could increase training time.
- It would be better to analyze the contribution of individual components of LangDAug (e.g., varying Langevin steps, EBM complexity, or the number of augmented samples).
- How about applying LangDAug to 3D medical image segmentation tasks?
Supplementary Material: Yes. Implementation details and the proof.
Relation To Broader Scientific Literature: - The key contributions (LangDAug) of this paper closely align with and extend several ideas from the broader scientific literature, specifically related to domain generalization, energy-based models, Langevin dynamics, and data augmentation strategies in machine learning, particularly in medical imaging.
- LangDAug directly extends prior work on domain generalization, especially the notion that models trained under the Empirical Risk Minimization (ERM) paradigm often fail to generalize under domain shifts (Gulrajani & Lopez-Paz, 2021; Ganin et al., 2016). It is positioned within data manipulation approaches, specifically leveraging the concept of intermediate-domain traversal via Langevin dynamics, distinguishing itself from prior DG methods that primarily relied on style-mixing (MixStyle), random convolutions (RandConv), or frequency domain augmentation (FedDG, RAM, TriD).
Essential References Not Discussed: The manuscript thoroughly references relevant literature on domain generalization (DG), energy-based models (EBMs), Langevin dynamics, and medical image segmentation.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: - Could the authors provide a detailed computational cost analysis (e.g., GPU hours, memory requirements, inference and training times) of the proposed LangDAug method compared to the baseline domain generalization approaches?
- The current implementation and validation of LangDAug focus exclusively on 2D medical image segmentation. Could the authors explain any specific technical or methodological reasons why you have not extended LangDAug directly to 3D medical image segmentation tasks? Are there fundamental challenges, such as increased complexity, computational demands, or theoretical limitations, that prevented immediate extension?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your appreciation and in-depth review. We have tried our best to answer your concerns. We will include these in final version of the paper. Please let us know if you have any other questions.
### Computational Cost Analyses
We acknowledged the increased computational cost of LangDAug in our limitations. Below, we provide details of the training time and memory usage across methods on the RFS data (averaged across domains on a single 48GB A6000 card):
| Metric | ERM | Ours | FedDG | FedDG+Ours | RAM | RAM+Ours | TriD | TriD+Ours |
|--------|---------|------|-------|------------|-----|----------|------|-----------|
| GPU hrs/train. time | 1.508 | 3.141 | 4.604 | 6.129 | 2.746 | 3.772 | 5.528 | 7.494 |
| Peak Memory (GB) | 10.36 | 19.41 | 16.77 | 23.16 | 12.58 | 20.24 | 24.87 | 30.11 |
While LangDAug increases training time, it's important to note that LangDAug's time (3.14 hrs) is comparable to existing methods like RAM (2.75 hrs) and less than FedDG (4.60 hrs). Moreover, LangDAug performs better than these methods.
In addition to the above, we note an average EBM training time of 0.357 hr/pair. Further, the inference cost to run one LD chain is minimal (~2 sec).
Increased training cost is an inherent limitation of DA methods that use synthetic samples. Future optimizations could include selective sampling (e.g., coresets) and shared architectures to jointly train EBMs with conditioning on source/target domains.
## Effect of each component
Thanks for this suggestion. We provide ablations for several components of our method in the following tables:
| | Domain 1 | | Domain 2 | | Domain 3 | | Domain 4 | |
|-------------------------|----------|-------|----------|-------|----------|-------|----------|-------|
| | mIoU | mDSC | mIoU | mDSC | mIoU | mDSC | mIoU | mDSC |
| $k$ | | | | | | | | |
| 20 | 66.18 | 71.07 | 70.39 | 80.29 | 77.67 | 82.45 | 75.39 | 84.31 |
| 40 | 78.79 | 87.00 | 75.05 | 85.87 | 81.00 | 88.91 | 80.51 | 88.68 |
| 60 | 78.59 | 87.39 | 74.11 | 85.85 | 81.46 | 88.06 | 80.94 | 87.72 |
| 80 | 75.92 | 83.82 | 72.35 | 82.84 | 79.03 | 85.22 | 78.95 | 85.59 |
| $\beta$ | | | | | | | | |
| 0.1 | 77.32 | 85.71 | 74.17 | 85.44 | 79.56 | 88.07 | 79.99 | 86.95 |
| 1 | 78.79 | 87.00 | 75.05 | 85.87 | 81.00 | 88.91 | 80.51 | 88.68 |
| 10 | 71.97 | 79.05 | 69.07 | 78.11 | 76.10 | 83.62 | 73.72 | 81.24 |
| #Conv blocks/resolution | | | | | | | | |
| 1 | 78.79 | 87.00 | 75.05 | 85.87 | 81.00 | 88.91 | 80.51 | 88.68 |
| 4 | 79.81 | 88.08 | 74.80 | 85.28 | 81.56 | 88.67 | 80.59 | 87.19 |
| 7 | 78.725 | 86.80 | 73.48 | 84.70 | 79.29 | 86.69 | 79.30 | 86.59 |
| #Augmented Samples | | | | | | | | |
| 2/chain | 74.78 | 80.09 | 69.23 | 79.33 | 75.70 | 84.63 | 73.90 | 84.01 |
| 13/chain | 78.79 | 87.00 | 75.05 | 85.87 | 81.00 | 88.91 | 80.51 | 88.68 |
| 40/chain | 71.32 | 79.12 | 66.09 | 77.28 | 74.09 | 81.20 | 72.05 | 82.28 |
We make the following observations:
- Under reasonable EBM hyperparameters (smaller $\beta$, moderate $k$), the performance of LangDAug is stable.
- The performance remains stable with varying EBM complexity.
- A smaller number of Langevin samples is not effective, as it does not capture the vicinal distributions.
- A large number of Langevin samples also harms performance due to high auto-correlation between consecutive samples, leading to biased learning. Hence, a well-spaced selection of Langevin samples is desired.
## Application to 3D Medical Images
The primary challenge in extending LangDAug to 3D medical images is the current lack of mature methodologies for generating 3D volumes using EBMs. Current approaches [1] are limited to primitive 3D shapes and don't scale well to complex anatomical structures.
As a partial evaluation, we applied LangDAug to the 3D Prostate MRI dataset by processing and augmenting 2D axial slices. While this doesn't capture full volumetric continuity, it provides evidence of LangDAug's effectiveness in 3D imaging contexts.
We view full 3D EBM generation as an important direction for future work, which could be enabled by recent advances in score-based modeling and memory-efficient architectures.
[1] Xie, Jianwen et al. "Learning descriptor networks for 3D shape synthesis and analysis." CVPR 2018. | Summary: This paper proposes a data augmentation method via Langevin dynamics with theoretical analyses. They also conduct experiments to show its usefulness.
Claims And Evidence: In Line 019-023, "DA methods, which enrich model representations through synthetic samples, have shown comparable or superior performance to representation learning approaches." Is there empirical evidence for this?
Methods And Evaluation Criteria: N/A
Theoretical Claims: N/A
Experimental Designs Or Analyses: There seem to be too few baselines considering the abundant literature of domain generalization. To name a few: CORAL [1], RSC [2], SWAD [3], SagNet [4], etc.
[1] Sun B, Saenko K. Deep coral: Correlation alignment for deep domain adaptation[C]//Computer vision–ECCV 2016 workshops: Amsterdam, the Netherlands, October 8-10 and 15-16, 2016, proceedings, part III 14. Springer International Publishing, 2016: 443-450.
[2] Huang Z, Wang H, Xing E P, et al. Self-challenging improves cross-domain generalization[C]//Computer vision–ECCV 2020: 16th European conference, Glasgow, UK, August 23–28, 2020, proceedings, part II 16. Springer International Publishing, 2020: 124-140.
[3] Cha J, Chun S, Lee K, et al. Swad: Domain generalization by seeking flat minima[J]. Advances in Neural Information Processing Systems, 2021, 34: 22405-22418.
[4] Nam H, Lee H J, Park J, et al. Reducing domain gap via style-agnostic networks[J]. arXiv preprint arXiv:1910.11645, 2019, 2(7): 8.
Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: In this paper, "DA" is short for data augmentation. However, in the OOD literature, DA usually stands for domain adaptation. It would be better to use a different abbreviation for data augmentation to avoid confusion.
Other Comments Or Suggestions: N/A
Questions For Authors: What is the benefit of the proposed method in domain generalization specifically in medical image segmentation? It seems that it can be applied to normal domain generalization tasks as well.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your comments. We have tried our best to address your concerns by adding comparisons with your suggested baseline methods. We will include these in the final version of the manuscript. We will be happy to answer any follow-up questions you might have.
## Comparison with more Baselines
Thanks for this suggestion. The mentioned works are somewhat older DomainBed methods primarily designed for classification tasks. We have already compared our method against methods like Fish, Fishr, and Hutchinson, which fall into the same category. Due to the character limit, we provide the comparison with the suggested methods on the RFS dataset in the table below:
| Baselines | Domain 1 | | Domain 2 | | Domain 3 | | Domain 4 | |
|------------|----------|-------|----------|-------|----------|-------|----------|-------|
| | mIoU | mDSC | mIoU | mDSC | mIoU | mDSC | mIoU | mDSC |
| CORAL |78.37 |86.55 |63.94 |74.20 |78.89 |87.71 |75.85 |85.65 |
| RSC |76.82 |86.11 |62.54 |72.42 |79.73 |87.92 |74.75 |85.04 |
| SagNet |75.42 |84.99 |54.51 |63.69 |75.52 |85.29 |67.31 |79.20 |
| SWAD |76.19 |84.99 | 65.32 |75.07 |76.91 |86.30 |72.09 |82.82 |
| LangDAug |**78.79** |**87.00** | **75.05** |**85.87** |**81.00** |**88.91** |**80.51** |**88.68** |
We observe that our method outperforms these methods which is consistent to our observation with methods like Fish, Fishr and Hutchinson.
## Confusion for DA as Domain Adaptation
Thanks for this suggestion. We will change this abbreviation from DA to DAug to avoid any such confusion.
## Benefit Specific to Medical Image Segmentation
Thanks for this comment. While LangDAug can be potentially used for normal DG scenarios, we note that there are certain factors that guided our application design:
- The effectiveness of LangDAug depends on the ability of EBMs to effectively traverse and interpolate between source domain distributions.
- Such traversal becomes especially natural when source domains share structured similarities or consistent underlying factors of variation. Medical imaging data typically demonstrates structured variations, predominantly reflected through differences in amplitude spectra across domains [1,2].
- EBMs have been shown to excel at capturing and modeling variations in amplitude spectra [3,4], making them suited for the domain variations characteristic of medical imaging data. Thus, LangDAug’s capability aligns well with the domain shift challenges encountered specifically in medical image segmentation.
For these reasons, we use LangDAug in the context of medical image segmentation.
[1] Zhou et al. "Generalizable medical image segmentation via random amplitude mixup and domain-specific image restoration." ECCV, 2022.
[2] Liu et al. "FedDG: Federated domain generalization on medical image segmentation via episodic learning in continuous frequency space." CVPR, 2021.
[3] Du et al. "Compositional visual generation with energy based models." NeurIPS, 2020.
[4] Tancik et al. "Fourier features let networks learn high frequency functions in low dimensional domains." NeurIPS, 2020. | null | null | null | null | null | null | null | null |
On the Diversity of Adversarial Ensemble Learning | Accept (poster) | Summary: This paper investigates the role of diversity in adversarial ensemble learning, addressing two key questions: how to define diversity formally in adversarial scenario and how diversity correlates. To address this questions, the paper introduces a first-order approximation and proposes a novel diversity decomposition within four components: average of individual adversarial losses, prediction diversity, gradient diversity, and cross diversity. Empirical evaluations validate the effectiveness of proposed AdvEOAP method.
## Update After Rebuttal
Dear Author,
Thank you for your detailed response and clarification. My concerns have mostly been addressed, and I will raise my score.
Please remember to organize the rebuttal materials mentioned above in your next revision, as this will greatly enhance the clarity of the paper.
Best regards,\
Reviewer ybcr
Claims And Evidence: The major claims are well supported by empirical or theoretical evidences.
Methods And Evaluation Criteria: The methods and evaluation criteria make sense to me.
Theoretical Claims: I only reviewed the general formula derivation without verifying the detailed proofs.
Experimental Designs Or Analyses: The experimental designs make sense to me.
Supplementary Material: Not applicable.
Relation To Broader Scientific Literature: Not applicable.
Essential References Not Discussed: Not applicable.
Other Strengths And Weaknesses: Strengths:
1. The NP-Hardness proof is well-motivated and distinguishes adversarial ensemble learning from traditional ensemble theory.
2. The paper is technically sound, with rigorous mathematical formulations and proofs provided for key theorems that underpin the main claims.
3. This paper is well-written, making it easy to follow.
Weaknesses:
1. It would benefit to examine the performance of the authors’ proposed method with Expectation over Transformation (EOT) and Backward Pass Differentiable Approximation (BPDA).
2. The practical applicability of the method in real-world scenarios is not fully explored, such as large-scale Imagenet-1K and Vit architecture.
3. Understanding the computational overhead introduced by the proposed method is essential, especially when compared to other baseline methods.
Other Comments Or Suggestions: None
Questions For Authors: Refer to strengths and weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: [Q1] It would benefit to examine the performance of the authors’ proposed method with Expectation over Transformation (EOT) and Backward Pass Differentiable Approximation (BPDA).
[A1] We will clarify that this work does not consider the EOT and BPDA attacks, because the EOT attack is designed for randomized networks [Athalye et al., 2017], while the BPDA attack targets networks with non-differentiable operations [Athalye et al., 2018]. In contrast, our model is deterministic and contains no non-differentiable operations.
[Q2] The practical applicability of the method in real-world scenarios is not fully explored, such as large-scale Imagenet-1K and Vit architecture.
[A2] We will clarify that we use ResNet20 on the MNIST, F-MNIST, and CIFAR-10 datasets for fair comparison with previous ensemble methods, and we have been conducting experiments on larger datasets with more complex network architectures. Here are some preliminary results:
Adversarial accuracy for CIFAR100 with 5 ResNet18
| Methods | PGD20 | AutoAttack |
|-|-|-|
| Our Method | 25.16 | 24.39 |
| AdvADP | 23.690 | 21.22 |
We could present more experimental results in the following days, subject to computational resources. For example, it takes about 4 days to train a deep neural network on Tiny-ImageNet with four A6000 GPUs, while we only have a single NVIDIA GeForce RTX 4090 GPU to train multiple deep neural networks for diversity.
[Q3] Understanding the computational overhead introduced by the proposed method is essential, especially when compared to other baseline methods.
[A3] We will clarify that, for AdvEOAP, the training time is about $m$ times that of training a single neural network ($m$ is the number of neural networks in the ensemble). In addition, the regularization and its gradients take $O(m^3)$ computational cost.
---
Rebuttal Comment 1.1:
Comment: Dear Author,
Thank you for your response.
Regarding Q1, the EOT and BPDA techniques are designed to handle potential pseudo-robustness in more challenging scenarios. Therefore, they are definitely applicable to the proposed AdvEOAP, which is differentiable.
For Q2, considering the limited rebuttal time, my concern has been partially addressed.
Regarding Q3, considering the $O(m^3)$ computational cost associated with the regularization, I am concerned about the fairness and scalability of the proposed method. For a fair comparison, what would the performance look like if AdvEOAP and the baseline methods (e.g., AdvADP) were trained for exactly the same GPU time? Regarding scalability, if we need to ensemble many models, such as more than 20 neural networks, is the proposed method still practicable in an affordable time?
Since the authors did not address my major concerns well, I will lower my score.
Best regards,\
Reviewer ybcr
---
Reply to Comment 1.1.1:
Comment: [Q1] Regarding Q1, the EOT and BPDA techniques … definitely applicable to the proposed AdvEOAP...
[A1] We will clarify that EOT is applicable to AdvEOAP by adding adversarial perturbations that are insensitive to transformations, as in [Athalye et al. ICML2018a]; we implement 20 random rotations within -30 to +30 degrees. Empirical results of adversarial accuracy (%) are shown as follows:
| Datasets | Our method | AdvADP | TRS |
|-|-|-|-|
| MNIST | 88.41 | 85.84 | 77.12 |
| F-MNIST | 62.43 | 61.75 | 55.06 |
| CIFAR10 | 42.51 | 41.21 | 31.22 |
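The EOT evaluation described above amounts to averaging loss gradients over randomly sampled input transformations. A toy sketch of that averaging step (the `grad_fn` and transformation sampler are placeholders, not the actual attack code):

```python
def eot_gradient(grad_fn, x, sample_transform, n_samples=20):
    """Expectation over Transformation: average the loss gradient over
    n_samples randomly drawn input transformations t(x)."""
    total = 0.0
    for _ in range(n_samples):
        t = sample_transform()        # e.g., a random rotation in [-30, +30] degrees
        total += grad_fn(t(x))
    return total / n_samples

# Sanity check: with only the identity transformation available,
# EOT reduces to the plain gradient.
g = eot_gradient(lambda x: 2.0 * x, 3.0, lambda: (lambda x: x), n_samples=5)
```

In the rotation setting above, `sample_transform` would draw one of the 20 random rotations and the averaged gradient would drive the PGD update.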
We will also clarify that BPDA is applicable to the AdvEOAP by considering potential risks and designing attacks as in [Athalye et al. ICML2018b], and we will add empirical results as follows:
i) For the potential gradient-vanishing risk (i.e., a small ensemble gradient arising from differing base-learner gradients), we instead consider an attack that scales the gradients by $m$ rather than using the original gradients, and empirical results of adversarial accuracy (%) are shown as follows:
| Datasets | Our method | AdvADP | TRS |
|-|-|-|-|
| MNIST | 95.40 | 92.70 | 85.86 |
| F-MNIST | 80.32 | 79.07 | 66.35 |
| CIFAR10 | 49.68 | 46.82 | 33.05 |
We could also consider an attack that randomly selects a single base learner at each step, and empirical results of adversarial accuracy (%) are shown as follows:
| Datasets | Our method | AdvADP | TRS |
|-|-|-|-|
| MNIST | 96.21 | 94.98 | 89.41 |
| F-MNIST | 81.68 | 81.60 | 68.82 |
| CIFAR10 | 55.77 | 51.97 | 38.93 |
ii) For potential incorrect gradients caused by random base-learner gradients, we consider an attack that deletes a single base learner and uses the gradients of the remaining ensemble, and empirical results of adversarial accuracy (%) are shown as follows:
| Datasets | Our method | AdvADP | TRS |
|-|-|-|-|
| MNIST | 95.79 | 94.03 | 87.58 |
| F-MNIST | 81.30 | 80.67 | 68.08 |
| CIFAR10 | 53.46 | 47.84 | 37.05 |
iii) For other potential risks, we could consider an attack that uses an adversarially trained model [Madry et al. 2018] as a surrogate model, and empirical results of adversarial accuracy (%) are shown as follows:
| Datasets | Our method | AdvADP | TRS |
|-|-|-|-|
| MNIST | 96.12 | 95.36 | 93.42 |
| F-MNIST | 82.79 | 82.40 | 73.13 |
| CIFAR10 | 53.58 | 50.41 | 47.90 |
We could also consider the black-box attack of [Andriushchenko et al. 2020], and empirical results of adversarial accuracy (%) are shown as follows:
| Datasets | Our method | AdvADP | TRS |
|-|-|-|-|
| MNIST | 94.49 | 92.71 | 83.85 |
| F-MNIST | 80.33 | 79.92 | 66.84 |
| CIFAR10 | 53.72 | 53.17 | 40.22 |
Our AdvEOAP always achieves better performance than other adversarial ensemble methods under EOT and BPDA attacks.
[Q2] …the $O(m^3)$ computational cost associated with the regularization…the fairness and scalability of the proposed method…what would the performance look like if AdvEOAP and baseline methods (e.g., AdvADP) are trained for exactly the same GPU time…is the proposed method still practicable in an affordable time?
[A2] We will clarify that
$$\text{total computational cost} = \text{computational cost for neural networks} + O(m^3) \text{ computational cost for regularization}$$
where the regularization takes a much smaller computational cost than training the neural networks. We will add running-time (seconds per epoch) comparisons as follows:
| Datasets | without regularization | with regularization |
|-|-|-|
| MNIST | 40.08 | 41.45 |
| F-MNIST | 40.11 | 41.32 |
| CIFAR10 | 123.84 | 125.48 |
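To make the $O(m^3)$ term concrete, here is a minimal sketch (our own code, under the assumption that the regularizer is determinant-based on the $m$ adversarial prediction vectors, not the paper's implementation): the determinant of the $m \times m$ Gram matrix is the cubic-cost step.

```python
import numpy as np

def orthogonality_regularizer(preds):
    """preds: (m, k) array of adversarial prediction vectors, one per base
    learner.  Returns -log det of the normalized Gram matrix: 0 when the
    predictions are mutually orthogonal, larger when they are correlated.
    Computing the determinant of the m x m Gram matrix costs O(m^3)."""
    P = preds / np.linalg.norm(preds, axis=1, keepdims=True)
    gram = P @ P.T                          # m x m similarity matrix
    sign, logdet = np.linalg.slogdet(gram)  # numerically stable log-det
    return -logdet

orthogonal = orthogonality_regularizer(np.eye(3))   # mutually orthogonal rows
correlated = orthogonality_regularizer(
    np.array([[1.0, 0.1, 0.0], [1.0, 0.0, 0.1], [0.0, 0.1, 1.0]]))
```

For the ensemble sizes considered ($m \le 20$ or so), this cubic term is negligible next to the forward/backward passes of the networks, which is consistent with the per-epoch timings in the table above.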
We will also clarify that our method takes running time comparable to ensemble methods with adversarial training [Madry et al. 2018], but more running time than ensemble methods without adversarial training, which obviously achieve much lower adversarial accuracy. We will add the running-time (seconds per epoch) comparisons as follows.
| Datasets | GAL (non-adv.) | ADP (non-adv.) | AdvADP | DVERGE | PDD | TRS | $\text{iGAT}_{\text{ADP}}$ | Our $\text{AdvE}_{\text{OAP}}$ |
|-|-|-|-|-|-|-|-|-|
| MNIST | 23.45 | 7.14 | 45.36 | 40.65 | 70.72 | 241.8 | 32.93 | 41.45 |
| F-MNIST | 23.45 | 7.15 | 45.50 | 39.52 | 69.48 | 219.32 | 33.94 | 41.32 |
| CIFAR10 | 69.47 | 15.18 | 134.54 | 172.93 | 291.07 | 626.33 | 110.51 | 125.48 | | Summary: This work explores the role of diversity in adversarial ensemble learning, focusing on its definition and impact on algorithmic performance. The authors demonstrate that precisely calculating diversity is NP-Hard, distinguishing it from traditional diversity analysis. They introduce the first diversity decomposition in adversarial ensemble learning, decomposing adversarial ensemble loss into average of individual adversarial losses, prediction diversity, gradient diversity, and cross diversity—challenging previous methods that considered only gradient diversity. Extending this decomposition to classification with cross-entropy loss, they propose a novel ensemble method based on orthogonal adversarial predictions to enhance both gradient and cross diversity. Empirical results confirm the effectiveness of their approach.
## Update after rebuttal
I have carefully read the authors' rebuttal, and the feedback has addressed my questions and concerns well. Thus, I would like to maintain my score of accept. Thanks.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
1. The authors establish that calculating diversity precisely is NP-hard.
2. They introduce the first diversity decomposition in adversarial ensemble learning.
3. They propose a novel ensemble method based on orthogonal adversarial predictions to enhance the proposed diversity.
Evidences
1. The NP-hardness of the problem is demonstrated through a reduction from the 3-SAT problem, as proven in Theorem 3.1.
2. Theorems 3.2 and 3.3 present the diversity decomposition with respect to squared loss and cross-entropy loss, respectively. Experimental results further validate the correctness of the decomposition, as shown in Figures 1 and 3.
3. Empirical results confirm that the proposed method enhances diversity (Figure 7) and improves robustness (Table 1).
Methods And Evaluation Criteria: The proposed diversities and ensemble method make sense for adversarial ensemble learning.
The proposed diversity measures are derived through decomposition, clearly illustrating their impact on adversarial ensemble loss. As demonstrated in Theorems 3.2 and 3.3, greater diversity leads to a lower adversarial ensemble loss.
Building on this insight, the authors emphasize the orthogonality of adversarial predictions. Empirical results further validate the effectiveness of the proposed method, showing enhanced diversity (Figure 7) and improved robustness (Table 1).
Theoretical Claims: I have reviewed all the proofs, including the NP-hardness in Theorem 3.1, the diversity decompositions in Theorems 3.2 and 3.4, and others.
The NP-hardness is demonstrated through a reduction from the 3-SAT problem. The authors build two neural networks given a 3-SAT problem and show the value of the diversity of the neural networks can determine the satisfiability of the 3-SAT problem. The proof of the decomposition is relatively simple, mainly involving some equation transformation techniques and solving for adversarial perturbations.
I found no errors.
Experimental Designs Or Analyses: I have checked the experiments and found them to be sound and valid.
The experiments in Section 3 validate the correctness of the proposed diversity measures and highlight the limitations of previous approaches. Furthermore, the experiments in Section 5 confirm the effectiveness of the proposed method, demonstrating increased diversity (Figure 7) and enhanced robustness (Table 1).
Supplementary Material: I have reviewed the supplementary materials, including the proofs and other discussions. I found no errors.
Relation To Broader Scientific Literature: This work defines the diversity of an adversarial ensemble by decomposing the adversarial ensemble loss into the average of individual losses and diversities. While such decomposition has been explored in non-adversarial settings [1,2], this work introduces the first decomposition in the adversarial setting.
Previous works define the diversity via gradient [3,4]. The proposed decomposition shows that it is not sufficient to only consider gradient to characterize the diversity.
[1] Zhou, Z.-H. Ensemble Methods: Foundations and Algorithms. CRC Press, 2012.
[2] Wood, D., Mu, T., Webb, A. M., Reeve, H. W., Lujan, M., and Brown, G. A unified theory of diversity in ensemble learning. Journal of Machine Learning Research, 24 (359):1–49, 2023.
[3] Dabouei, A., Soleymani, S., Taherkhani, F., Dawson, J., and Nasrabadi, N. M. Exploiting joint robustness to adversarial perturbations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1122–1131, Seattle, WA, 2020.
[4] Huang, B., Ke, Z., Wang, Y., Wang, W., Shen, L., and Liu, F. Adversarial defence by diversified simultaneous training of deep ensembles. In Proceedings of the 35th AAAI Conference on Artificial Intelligence, pp. 7823–7831, Virtual Event, 2021.
Essential References Not Discussed: To the best of my knowledge, this paper cites all essential works related to traditional diversity or adversarial diversity.
Other Strengths And Weaknesses: Strengths:
1. This work is well-structured and easy to follow, with clear proofs and informative supplementary materials. The proof of NP-hardness in Theorem 1 constructs two neural networks by introducing an auxiliary variable, which is skillful.
2. The authors demonstrate that the proposed diversities can answer what the diversity is in adversarial ensemble learning, both theoretically and empirically. They also show that previous diversity measures may fail to capture true diversity, as illustrated in Example 1 and Figure 2.
3. The analysis of diversity in cross-entropy loss suggests that attention should be given to the predictions of adversarial examples of base learners. This insight is both natural and valuable, and experimental results further validate it.
4. Extensive experiments are conducted, demonstrating the effectiveness of the proposed method. As shown in Figure 7, the proposed method enhances the proposed diversities, leading to improved adversarial accuracy.
Weaknesses:
1. This work derives the diversity using a first-order approximation, which is somewhat limited: neural networks may not exhibit strong linearity in local regions.
2. The proof of the decompositions is relatively simple and does not involve any special techniques.
Other Comments Or Suggestions: 1. The authors have scattered the introduction of related work throughout the main text. It would be better to add a "Related Work" section in the appendix to summarize previous studies.
2. It would be better to investigate how the hyper-parameters affect the performance of the proposed method.
3. It would be better to conduct experiments on other l_p norms.
Questions For Authors: 1. What is the time complexity of Algorithm 1?
2. Can the decomposition be applied to other types of loss functions?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: [Q1] This work proposes the diversity with the first-order approximation, which is somewhat limited. The neural network may not exhibit strong linearity in local regions.
[A1] We will clarify that the first-order approximation is motivated by previous first-order approximation methods for robust learning [Kariyappa & Qureshi 2019; Dabouei et al. 2020; Huang et al. 2021; Bogun et al., 2022], and that it is difficult to carry out a similar analysis for higher-order (even second-order) approximations without closed-form solutions for the adversarial examples and loss functions, which involves the roots of polynomials of degree $2d$ ($d$ is the input dimensionality) [More & Sorensen, 1983; Fortin & Wolkowicz, 2004].
[Q2] What is the time complexity of Algorithm 1?
[A2] We will clarify that the time complexity of Algorithm 1 is $m$ times that of training a single neural network ($m$ is the number of neural networks in the ensemble). In addition, the regularization and its gradients take $O(m^3)$ computational cost.
[Q3] Can the decomposition be applied to other types of loss functions?
[A3] We will clarify that this work makes the decompositions for two commonly used loss functions, the squared loss for regression and the cross-entropy loss for neural networks in classification, and that the decomposition can be applied to other loss functions such as the Poisson loss, as in [Wood et al., 2023].
---
Rebuttal Comment 1.1:
Comment: I have carefully read the authors' rebuttal, and the feedback has addressed my questions and concerns well. Thus, I would like to maintain my score of 4 (accept). Thanks.
Claims And Evidence: The claims in the paper are supported by rigorous theoretical and empirical evidence. The NP-hardness of calculating adversarial ensemble diversity is proven via a reduction from the 3-SAT problem, demonstrating that precise diversity computation requires solving an NP-hard optimization due to structural and predictive interdependencies. The proposed adversarial loss decomposition (into average individual losses, prediction/gradient/cross diversity) is validated theoretically through first-order Taylor approximations (Theorems 3.2 and 3.4) and empirically via experiments on MNIST, F-MNIST, and CIFAR10, showing gradient/cross diversity dominate under large perturbations. The effectiveness of AdvEOAP is confirmed by ablation studies (e.g., 1.1–1.5% accuracy drop without orthogonality regularization) and superior robustness over baselines (e.g., +40% accuracy vs. DVERGE under AutoAttack). Critiques of prior gradient-centric methods are supported by constructed counterexamples (Example 1) and empirical trends (Figure 2), where adversarial loss decreases despite unchanged gradient alignment. While the analysis relies on first-order approximations and scalability to larger models remains unexplored, the core claims are substantiated by theoretical proofs, controlled experiments, and statistically significant results across 50 runs.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are aligned with the problem of adversarial ensemble learning. The AdvEOAP method, which enforces orthogonal adversarial predictions via KL divergence and determinant-based regularization, directly addresses the need for diverse vulnerability patterns among base learners, a critical requirement for robustness against adaptive attacks. By optimizing both gradient diversity and cross diversity, the method systematically tackles the limitations of prior gradient-centric approaches. The evaluation criteria are appropriate and comprehensive.
Theoretical Claims: The theoretical claims in the paper are correct under their stated assumptions but face limitations in generality. The NP-hardness proof for diversity calculation (Theorem 3.1) is valid for ReLU networks under specific perturbation bounds but does not generalize to all activation functions or norms. The loss decomposition theorems (3.2 and 3.4) rigorously hold under first-order Taylor approximations but neglect higher-order effects, limiting applicability to small perturbations (ϵ) or highly nonlinear regimes. Overall, the core proofs are logically consistent but rely on simplifications (e.g., linearized adversarial perturbations) that may not fully capture real-world model behaviors, particularly for deep networks with complex interactions.
Experimental Designs Or Analyses: The experimental designs and analyses are methodologically sound but have minor limitations. The authors evaluate AdvEOAP on MNIST, F-MNIST, and CIFAR-10—standard benchmarks for adversarial robustness—using diverse attacks (FGSM, PGD, AutoAttack) to cover gradient-based, iterative, and adaptive threat models. The inclusion of ablation studies (Table 2) effectively isolates the impact of orthogonality regularization, showing a 1.1–1.5% accuracy drop when removed. Statistical rigor is ensured via 50 runs with reported mean ± std. deviations, and Figure 7’s training dynamics validate the theoretical link between diversity and robustness. However:
1. Scalability gaps: Experiments are limited to ResNet20 on small/medium datasets; larger architectures (e.g., Vision Transformers) and high-resolution datasets (e.g., ImageNet) are untested, leaving scalability in doubt.
2. Attack parameter details: While AutoAttack is included, hyperparameters for weaker attacks (e.g., FGSM step size) are not fully disclosed, limiting reproducibility.
3. Computational costs: Training time and resource requirements for AdvEOAP (e.g., orthogonalization overhead) are unquantified, which is critical for real-world deployment.
Overall, the experiments robustly validate the method’s core claims but leave practical scalability and efficiency as open questions.
Supplementary Material: The supplementary material was reviewed, focusing on Appendix A (Proofs), Appendix B (Additional Experiments), and Appendix C (Implementation Details). Appendix A provides complete proofs for Theorems 3.1–3.4 and Lemma 3.3, including the 3-SAT reduction and loss decomposition derivations, which are rigorous and align with the main text’s claims. Appendix B includes additional experiments:
1. Sensitivity analysis for hyperparameters (e.g., λ, α) showing stable performance across reasonable ranges.
2. Extended diversity metrics (e.g., gradient cosine similarity distributions across layers), reinforcing the link between orthogonality and robustness.
3. Failure cases under extreme perturbations (ϵ > 0.1), highlighting method limitations.
Appendix C details training protocols (optimizer settings, attack parameters) and computational resources, ensuring reproducibility. However, two gaps remain:
Pseudocode for AdvEOAP: While high-level steps are described, a formal algorithm listing is missing, leaving ambiguity in orthogonalization implementation.
Broader architectural tests: All experiments use CNNs; tests on non-CNN architectures (e.g., ViTs) are absent.
Overall, the supplement robustly supports the main paper but could improve clarity with broader architecture validation.
Relation To Broader Scientific Literature: The paper’s contributions significantly advance the adversarial robustness literature by addressing critical gaps in ensemble diversity theory. Prior works (e.g., Kariyappa & Qureshi, 2019; Dabouei et al., 2020) focused on gradient misalignment as the primary diversity metric but lacked theoretical grounding for its sufficiency. This work bridges that gap by proving that diversity in adversarial settings is inherently NP-Hard (extending Katz et al.’s (2017) NP-Hardness results for neural network verification) and introducing a multi-component diversity decomposition—building on classical ensemble theories (Krogh & Vedelsby, 1994; Zhou, 2012) but adapting them to adversarial perturbations. The AdvEOAP method’s orthogonality regularization draws inspiration from Pang et al. (2019), who used prediction disagreements, but innovates by combining KL divergence and determinant-based constraints to enforce structural diversity. These contributions align with broader trends in robustness research (e.g., Yang et al.) that emphasize diversifying vulnerability patterns but provide the first formal framework linking diversity components to adversarial loss. By unifying theoretical rigor (NP-Hardness, decomposition) with practical algorithm design, the paper addresses a longstanding challenge in adversarial ensemble learning: how to systematically define and optimize diversity for robustness.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: Q1. Theoretical Generality: The NP-hardness proof (Theorem 3.1) assumes ReLU activations and specific perturbation bounds. How generalizable are these results to other activation functions (e.g., sigmoid) or arbitrary perturbation norms?
Q2. First-Order Approximation Limits: The loss decomposition (Theorems 3.2/3.4) relies on first-order Taylor expansions. How might higher-order terms affect the validity of the decomposition under large perturbations (e.g., ϵ>0.1) or highly nonlinear regimes?
Q3. Scalability: Experiments are limited to ResNet20 on small/medium datasets (MNIST, CIFAR-10). Can AdvEOAP scale to larger architectures (e.g., Vision Transformers) or high-resolution datasets (e.g., ImageNet) without prohibitive computational costs?
Q4. Computational Overhead: Training time and resource requirements for AdvEOAP need to be quantified in theory and experiments.
Q5. Attack Parameter Transparency: Hyperparameters for comparison methods are not fully disclosed. This might affect reproducibility and comparative analysis.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: [Q1] …The NP-hardness proof (Theorem 3.1) assumes ReLU activations and specific perturbation bounds. How generalizable are these results to other activation functions (e.g., sigmoid) or arbitrary perturbation norms?
[A1] We will clarify that our results hold for $l_p$ norms with $p = 1, 2, \cdots, \infty$ in Theorem 3.1, and it is feasible to generalize to other activation functions (e.g., sigmoid) based on universal approximation [Kidger et al. 2020], i.e., a 3-SAT problem can be reduced to a diversity problem of ReLU neural networks as in Theorem 3.1, and network employing other activation functions (e.g., sigmoid) can approximate any ReLU network with arbitrary precision.
[Q2] First-Order Approximation Limits: The loss decomposition (Theorems 3.2/3.4) relies on first-order Taylor expansions. How might higher-order terms affect the validity of the decomposition…
[A2] We will clarify that the first-order approximation is motivated by previous first-order approximation methods for robust learning [Kariyappa & Qureshi 2019; Dabouei et al. 2020; Huang et al. 2021; Bogun et al., 2022], and that it is difficult to carry out a similar analysis for higher-order (even second-order) approximations without closed-form solutions for the adversarial examples and loss functions, which involves the roots of polynomials of degree $2d$ ($d$ is the input dimensionality) [More & Sorensen, 1983; Fortin & Wolkowicz, 2004].
[Q3] Scalability: Experiments are limited to ResNet20 on small/medium datasets (MNIST, CIFAR-10). Can AdvEOAP scale to larger architectures (e.g., Vision Transformers) or high-resolution datasets (e.g., ImageNet) without prohibitive computational costs?
[A3] We will clarify that we consider ResNet20 on the MNIST, F-MNIST, and CIFAR10 datasets for fair comparisons with previous ensemble methods, and we have been conducting experiments on larger datasets with more complex network architectures. Here are some preliminary results:
Adversarial accuracy for CIFAR100 with 5 ResNet18
| Methods | PGD20 | AutoAttack |
|-|-|-|
| Our Method | 25.16 | 24.39 |
| AdvADP | 23.69 | 21.22 |
We could present more experimental results in the following days, subject to computational resources. For example, it takes about 4 days to train a deep neural network on Tiny-ImageNet with four A6000 GPUs, while we only have a single NVIDIA GeForce RTX 4090 GPU to train the multiple deep neural networks required for diversity.
[Q4] Computational Overhead: Training time and resource requirements for AdvEOAP need to be quantified in theory and experiments.
[A4] We will clarify that, for AdvEOAP, the training time is $m$ times that of training a single neural network ($m$ is the number of neural networks in the ensemble). In addition, the regularization and its gradients take $O(m^3)$ computational cost. For resource requirements, AdvEOAP needs $m$ times the space complexity of a single neural network.
Here are some experiments as follows:
For MNIST dataset:
| Methods | GAL | ADP | AdvADP | DVERGE | PDD | TRS | $\text{iGAT}_{\text{ADP}}$ | Our $\text{AdvE}_{\text{OAP}}$ |
|-|-|-|-|-|-|-|-|-|
| Time per Epoch (s) | 23.45 | 7.14 | 45.36 | 40.65 | 70.72 | 241.8 | 22.93 | 41.45 |
For F-MNIST dataset:
| Methods | GAL | ADP | AdvADP | DVERGE | PDD | TRS | $\text{iGAT}_{\text{ADP}}$ | Our $\text{AdvE}_{\text{OAP}}$ |
|-|-|-|-|-|-|-|-|-|
| Time per Epoch (s) | 23.45 | 7.15 | 45.50 | 39.52 | 69.48 | 219.32 | 23.94 | 41.32 |
For CIFAR10 dataset:
| Methods | GAL | ADP | AdvADP | DVERGE | PDD | TRS | $\text{iGAT}_{\text{ADP}}$ | Our $\text{AdvE}_{\text{OAP}}$ |
|-|-|-|-|-|-|-|-|-|
| Time per Epoch (s) | 69.47 | 15.18 | 134.54 | 172.93 | 291.07 | 626.33 | 100.51 | 125.48 |
All methods require approximately 4.4GB memory except TRS with 10.3GB memory due to storing the Hessian matrix.
[Q5] Parameter details: Hyperparameters for comparison methods are not fully disclosed. While AutoAttack is included, hyperparameters for weaker attacks (e.g., FGSM step size) are not fully disclosed, limiting reproducibility.
[A5] We will clarify more parameter details in Appendix: For the FGSM, PGD10, and PGD20 attack methods, the hyperparameters are set as
| Method | Steps | Step Size |
|-|-|-|
| FGSM | 1 | $\epsilon$ |
| PGD10 | 10 | $\epsilon/3$ |
| PGD20 | 20 | $\epsilon/6$ |
Here, $\epsilon$ is the perturbation size.
For the AutoPGD, MORA, and AutoAttack attack methods, we set the number of steps to 20, and keep all other parameters in the original reference or the provided code defaults.
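The settings in the table above can be sketched as follows: a minimal $L_\infty$ PGD implementation in numpy (our own toy code, using a quadratic loss in place of a trained network; FGSM is the single-step special case with step size $\epsilon$).

```python
import numpy as np

def pgd_attack(x, grad_fn, eps, steps, step_size):
    """L_inf PGD: take signed gradient-ascent steps on the loss and
    project back into the eps-ball around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + step_size * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay within the eps-ball
    return x_adv

eps = 0.1
grad_fn = lambda z: z                 # gradient of the toy loss 0.5*||z||^2
x = np.ones(3)

fgsm  = pgd_attack(x, grad_fn, eps, steps=1,  step_size=eps)      # FGSM row
pgd10 = pgd_attack(x, grad_fn, eps, steps=10, step_size=eps / 3)  # PGD10 row
pgd20 = pgd_attack(x, grad_fn, eps, steps=20, step_size=eps / 6)  # PGD20 row
```

For this toy increasing loss, all three settings end up on the boundary of the $\epsilon$-ball; the extra steps of PGD10/PGD20 matter only for non-linear losses.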
We will add a formal description for our algorithm in Appendix to clarify orthogonalization implementation, and release the code to ensure reproducibility. | Summary: This paper considers the problem of defining and correlating diversity with algorithmic performance in adversarial ensemble learning. The authors first prove that precisely calculating the diversity in adversarial ensemble learning is an NP-Hard problem. They then propose a new diversity decomposition for adversarial ensemble learning, breaking down the adversarial ensemble loss into four components: the average of individual adversarial losses, prediction diversity, gradient diversity, and cross diversity. This decomposition highlights the insufficiency of considering only gradient diversity, as done in prior works. The authors also extend this decomposition to classification tasks using cross-entropy loss. Based on their theoretical analysis, they develop a new ensemble method called AdvE$_{oap}$, which improves gradient and cross diversity simultaneously by orthogonalizing adversarial predictions of base learners.
After rebuttal:
The authors have addressed my questions. I will keep my score.
Claims And Evidence: In general the Claims are clear. The claims made in the paper are well-supported by both theoretical analysis and empirical evidence.
Yet I think the following discussions could be improved.
1) There is a gap between the NP-hardness result and the first-order approximation. Arguing that only the first-order approximation should be considered because the complete analysis is NP-hard is weak; there are still many things one can do in between.
2) In the decomposition theorems (Theorems 3.2 and 3.4), it is odd to see exact equality, since these are only first-order approximations.
Methods And Evaluation Criteria: The proposed methods are well-suited for the problem of adversarial ensemble learning. The authors use first-order Taylor approximation to decompose the adversarial ensemble loss. The evaluation criteria include classification accuracy under various adversarial attacks (PGD, AutoPGD, AutoAttack etc.), which are standard benchmarks in adversarial robustness research. The experiments are conducted on widely used datasets, and the results are compared against several baseline methods, ensuring a comprehensive evaluation.
It would be better if the authors could provide experiments on larger dataset, such as TinyImageNet.
Theoretical Claims: The theoretical claims are sound and well-justified. The authors provide detailed proofs for the NP-Hardness of calculating diversity (Theorem 3.1) and the diversity decomposition for both squared loss (Theorem 3.2) and cross-entropy loss (Theorem 3.4). The proofs are rigorous.
Again, as I mentioned before, it is odd to see exact equality when only a first-order approximation is used.
Experimental Designs Or Analyses: The experimental design is appropriate for validating the theoretical claims.
After the authors proposed the decomposition, they provide experiments to see which components are more relevant. This helps to justify their propose method AdvE$_{oap}$.
Then, the authors conduct extensive experiments on three datasets (MNIST, F-MNIST, and CIFAR10) and compare their method against several state-of-the-art adversarial ensemble methods. The use of multiple adversarial attacks (PGD, AutoPGD, Autoattack etc.) ensures that the evaluation is comprehensive.
Supplementary Material: I go through the proof of Theorem 3.1, 3.2 and 3.4. The proof is clear and rigorous.
Relation To Broader Scientific Literature: To my knowledge, no.
Essential References Not Discussed: To my knowledge, no.
Other Strengths And Weaknesses: Weaknesses:
The paper could benefit from a more detailed discussion of the limitations of the proposed method, particularly in scenarios where the assumptions of first-order approximation may not hold.
Other Comments Or Suggestions: no
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: [Q1] … a gap between NP-hard problem to first-order approximation …. only consider first order approximation since the complete analysis is a NP-hard problem.
[A1] We will clarify that the first-order approximation is motivated by previous first-order approximation methods for robust learning [Kariyappa & Qureshi 2019; Dabouei et al. 2020; Huang et al. 2021; Bogun et al., 2022], and that it is difficult to carry out a similar analysis for higher-order (even second-order) approximations without closed-form solutions for the adversarial examples and loss functions, which involves the roots of polynomials of degree $2d$ ($d$ is the input dimensionality) [More & Sorensen, 1983; Fortin & Wolkowicz, 2004].
[Q2] In the theorem of decomposition (Theorem 3.2 and 3.4), It is weird to me to you equally, since those are only first order approximation.
[A2] We will clarify that we consider the first-order approximation in the decomposition theorems, and we will use approximate equality rather than exact equality.
[Q3] It would be better if the authors could provide experiments on larger dataset, such as TinyImageNet.
[A3] We will clarify that we consider the MNIST, F-MNIST, and CIFAR10 datasets for fair comparisons with previous ensemble methods, and we have been conducting experiments on larger datasets with more complex network architectures. Here are some preliminary results:
Adversarial accuracy for CIFAR100 with 5 ResNet18
| Methods | PGD20 | AutoAttack |
|-|-|-|
| Our Method | 25.16 | 24.39 |
| AdvADP | 23.69 | 21.22 |
We could present more experimental results in the following days, subject to computational resources. For example, it takes about 4 days to train a deep neural network on Tiny-ImageNet with four A6000 GPUs, while we only have a single NVIDIA GeForce RTX 4090 GPU to train the multiple deep neural networks required for diversity.
[Q4] The paper could benefit from a more detailed discussion of the limitations of the proposed method, particularly in scenarios where the assumptions of first-order approximation may not hold.
[A4] We will add a figure to discuss the limitations of the proposed method with and without the first-order approximation; the basic idea is to measure the residual terms of the first-order approximation in experiments and analyze their influence.
Policy Filtration for RLHF to Mitigate Noise in Reward Models | Accept (poster) | Summary: This paper finds that the reliability of the reward model varies across responses assigned with different rewards. Motivated by this fact, this paper considers filtering the samples whose rewards may be unreliable to improve the signal-to-noise ratio during policy learning, resulting in Policy Filtration for Proximal Policy Optimization (PF-PPO). To choose a proper policy filtering strategy, the authors use the coefficient of determination between the rewards and actual scores on filtered samples as the metrics to help us find promising strategies since it measures how well the rewards filtered by PF-PPO indicate real performance. The authors provide extensive experiments to validate the effectiveness of PF-PPO in code generation and math reasoning tasks. In code generation, PF-PPO achieves the state-of-the-art performance of 7-billion-parameter models on HumanEval (+7.9%), MBPP (+0.7%), and LeetCode Contest (+10.0%) which is a newly-created and more challenging benchmark. In math reasoning, PF-PPO yields performance increase using different reward models and benchmarks (Ape210K and CMATH).
Claims And Evidence: The claims made in this paper are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria (e.g., benchmark datasets) make sense.
Theoretical Claims: There is no theoretical result in this paper.
Experimental Designs Or Analyses: The experiments look reasonable.
Supplementary Material: I didn’t read the supplementary material.
Relation To Broader Scientific Literature: This paper is relevant to the literature.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
1. This paper studies a well-motivated and important problem in the RLHF and LLM community, i.e., mitigating noise in reward models for RLHF. This topic and the experiments on math reasoning and code generation is timely.
2. The authors conduct experiments on multiple LLM benchmarks, including HumanEval, MBPP, LeetCode Contest, Ape210K and CMATH, to evaluate their approach, and show an improved performance compared to the baselines.
3. This paper is well-written and clearly organized.
Weaknesses:
1. The proposed Policy Filtration PPO approach seems very straightforward. The main idea of this approach is to generate multiple responses, use a weight vector (assigning more weight to responses with higher rewards) to combine them into a new response, and adopt this combined response in LLM fine-tuning. Is there any other novelty in this approach?
2. This Policy Filtration PPO approach seems very heuristic. There is no theoretical result or theoretical derivation that guides the design of this approach.
3. The authors should explain further why this Policy Filtration PPO approach achieves better performance in experiments, beyond the simple idea of “generating multiple candidate responses and choosing the best one”. For example, why is the quality of samples (i.e., responses $y$) important for PPO? Why does improving the quality of samples in PPO achieve good performance?
Other Comments Or Suggestions: Please see the weaknesses above.
Questions For Authors: Please see the weaknesses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: # Author Response for Reviewer kkSM
We thank the reviewer for highlighting these points.
## Novelty of this paper
While simple, PF-PPO’s contribution lies in its *universal effectiveness* for RLHF noise mitigation. Key innovations:
1. **Empirical Validation:** Extensive experiments across domains (see extended results) demonstrate consistent improvements.
2. **Practical Impact:** Addressing reward noise—a critical but underexplored RLHF challenge—via a task-agnostic approach.
We also refer the reviewer to check the new experiment results on various tasks (in “Response to All Reviewers”) and take our effort to demonstrate universal effectiveness of our method into account.
## Lack of theoretical derivation
Though heuristic, PF-PPO is motivated by empirical evidence that mid-reward samples harm convergence (likely due to conflicting gradients). Future work will formalize this via gradient variance analysis.
## Why does sample quality matter in PPO?
PPO relies on accurate advantage estimates. Noisy rewards distort advantage estimates, leading to suboptimal updates. Ablations (e.g., PF-PPO vs PPO-M) show that filtering unreliable samples stabilizes training by ensuring gradients derive from high-confidence data.
# Response to All Reviewers
We sincerely thank the reviewers for their constructive feedback and insightful suggestions.
One major advantage of our method is its simplicity, universality, and effectiveness. To further validate the broader effectiveness of our method, we conducted experiments across diverse domains using Doubao-25k (policy and reward model backbone). Tasks included logic reasoning, math, code generation, STEM problems, complex tasks, instruction following, knowledge QA, and language understanding. Each task has distinct evaluation sets and verifiers to assess response correctness. Results (accuracy improvement over vanilla PPO) are shown below. Statistically significant changes (exceeding $\pm 0.5$, based on test case counts) are **bolded**. These results demonstrate PF-PPO’s consistent effectiveness across tasks.
| Task (Evaluation Set Size) | BO1 Accuracy (%) | BO5 Accuracy (%) |
|----------------------------------|------------------|------------------|
| Logic Reasoning (1203) | 48.9 (**+2.3**) | 63.8 (**+2.8**) |
| Math (1759) | 69.7 (**+1.1**) | 79.9 (**+2.3**) |
| Code (3933) | 55.8 (-0.2) | 67.4 (+0.1) |
| STEM (4466) | 54.7 (-0.1) | 63.1 (+0.1) |
| Complex Tasks (2990) | 9.5 (**+1.0**) | 14.9 (**+0.6**) |
| Instruction Following (1525) | 49.6 (**+1.7**) | 59.8 (**+1.8**) |
| Knowledge (775) | 47.3 (**+1.9**) | 58.3 (**+1.8**) |
| Language Understanding (680) | 63.8 (**+1.6**) | 68.4 (**+3.8**) |
We thank the reviewers for their rigorous feedback, which has strengthened our analysis. The revised manuscript will incorporate these responses, clarify methodological details, and emphasize PF-PPO’s broader applicability. | Summary: This paper introduces Policy Filtration for Proximal Policy Optimization (PF-PPO), a reinforcement learning from human feedback (RLHF) method that addresses reward model noise by selectively training on samples where rewards are most reliable. Observing that reward models are more accurate for extreme (high/low) rewards than moderate ones, PF-PPO filters responses using strategies like Best-Random (BR) and Best-Worst (BW), guided by the coefficient of determination to optimize reward-actual score alignment. Evaluated on code generation (HumanEval, MBPP, LeetCode) and math reasoning (Ape210K, CMATH) tasks, PF-PPO is claimed to achieve state-of-the-art performance for 7B models, outperforming standard PPO and other baselines by reducing reward over-optimization and enhancing training efficiency through noise mitigation.
Claims And Evidence: It is unclear to me why BW performs the best.
Is reward model $r_{\phi}$ mentioned in line 144 the same as the reward model $R_{\phi}$?
Methods And Evaluation Criteria: While $R^2$ correlates with performance, the causal link between it and policy improvement is assumed rather than proven. It seems to be calculated using SFT policy responses, and its generalization to RL-trained policies is not rigorously validated.
Theoretical Claims: No theoretical claims. Providing some fundamental theoretical results would be better.
Experimental Designs Or Analyses: More detailed discussion on experiment results would be better for me to understand the insights of the results. For example, why the current design works and what are the implications.
Supplementary Material: No
Relation To Broader Scientific Literature: Limited new contributions
Essential References Not Discussed: No
Other Strengths And Weaknesses: It is unclear why the coefficient of determination ($R^2$) is used, how it is calculated in the experiments, and why it measures reliability.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: # Author Response for Reviewer 2Fwc
We appreciate the reviewer’s questions.
## Why does BW perform the best?
BW selects extreme samples with high/low rewards, which are most reliably aligned with actual scores (Fig. 1). Mid-reward samples often mix correct/incorrect elements (e.g., non-standard solutions with partial correctness), confusing the policy. BW maximizes the proportion of high-confidence samples, and thus improves learning efficiency.
## Why use the coefficient of determination ($R^2$), why does it measure reliability, and how is it calculated in the experiments?
R² measures how well the reward model ranks samples by their true scores/quality. A filtration strategy with high R² ensures that the remaining samples provide reliable learning signals (i.e., their rewards indicate the true quality of the responses well).
Calculation steps:
- Generate multiple responses per prompt using the SFT policy.
- Filter responses using the given strategy (e.g., BW).
- Compute R² between reward scores and ground-truth scores.
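As a concrete illustration, these steps can be sketched as follows (an illustrative sketch only; the univariate linear fit for R² and helper names such as `best_worst_indices` and `keep_frac` are our assumptions here, not the actual implementation):

```python
import numpy as np

def r_squared(rewards, true_scores):
    """Coefficient of determination between predicted rewards and
    ground-truth scores, via a univariate linear fit."""
    rewards = np.asarray(rewards, dtype=float)
    true_scores = np.asarray(true_scores, dtype=float)
    slope, intercept = np.polyfit(rewards, true_scores, 1)
    fitted = slope * rewards + intercept
    ss_res = np.sum((true_scores - fitted) ** 2)
    ss_tot = np.sum((true_scores - true_scores.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def best_worst_indices(rewards, keep_frac=0.5):
    """Hypothetical BW-style filter: keep the lowest- and highest-reward
    samples, discarding the mid-reward region."""
    order = np.argsort(rewards)
    k = max(1, int(len(rewards) * keep_frac / 2))
    return np.concatenate([order[:k], order[-k:]])
```

R² is then computed on the filtered subset only, so a strategy scores well exactly when the rewards of the samples it keeps track the ground-truth scores.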
## Is the reward model $r_\phi$ mentioned in line 144 the same as the reward model $R_\phi$?
This was a typo: the reward model in line 144 should be $R_\phi$ (not $r_\phi$). We will correct this in the revised manuscript.
# Response to All Reviewers
We sincerely thank the reviewers for their constructive feedback and insightful suggestions.
One major advantage of our method is its simplicity, universality, and effectiveness. To further validate the broader effectiveness of our method, we conducted experiments across diverse domains using Doubao-25k (policy and reward model backbone). Tasks included logic reasoning, math, code generation, STEM problems, complex tasks, instruction following, knowledge QA, and language understanding. Each task has distinct evaluation sets and verifiers to assess response correctness. Results (accuracy improvement over vanilla PPO) are shown below. Statistically significant changes (exceeding $\pm 0.5$, based on test case counts) are **bolded**. These results demonstrate PF-PPO’s consistent effectiveness across tasks.
| Task (Evaluation Set Size) | BO1 Accuracy (%) | BO5 Accuracy (%) |
|----------------------------------|------------------|------------------|
| Logic Reasoning (1203) | 48.9 (**+2.3**) | 63.8 (**+2.8**) |
| Math (1759) | 69.7 (**+1.1**) | 79.9 (**+2.3**) |
| Code (3933) | 55.8 (-0.2) | 67.4 (+0.1) |
| STEM (4466) | 54.7 (-0.1) | 63.1 (+0.1) |
| Complex Tasks (2990) | 9.5 (**+1.0**) | 14.9 (**+0.6**) |
| Instruction Following (1525) | 49.6 (**+1.7**) | 59.8 (**+1.8**) |
| Knowledge (775) | 47.3 (**+1.9**) | 58.3 (**+1.8**) |
| Language Understanding (680) | 63.8 (**+1.6**) | 68.4 (**+3.8**) |
We thank the reviewers for their rigorous feedback, which has strengthened our analysis. The revised manuscript will incorporate these responses, clarify methodological details, and emphasize PF-PPO’s broader applicability. | Summary: This paper introduces a novel method, Policy Filtration for Proximal Policy Optimization (PF-PPO), to address the challenge of reward model inaccuracy in Reinforcement Learning from Human Feedback (RLHF). The authors propose a filtering mechanism to select samples with more reliable rewards, thereby improving the signal-to-noise ratio during policy learning. Motivated by empirical observations on the reward models, PF-PPO achieves this goal by excluding samples with moderate rewards. The method is validated on code generation and math reasoning tasks, demonstrating significant performance improvements.
Claims And Evidence: The claims are supported by provided evidence.
1 The authors claim that the reward model is less reliable for responses with moderate rewards compared to those with high or low rewards. This claim is supported by empirical evidence from experiments on code and math tasks where the reward model's reliability is analyzed across different reward regions.
2 The authors also claim that the filtering strategies can be selected by measuring the R2 between the true rewards and the predicted rewards on the filtered samples. This claim is supported by monitoring this measure and the final performance across different strategies.
Methods And Evaluation Criteria: The proposed methods (PPO plus a sample filter) make sense in general, and the evaluation criteria (pass@1 and accuracy) follow the standard for the code and math tasks.
Theoretical Claims: The paper does not present explicit theoretical claims.
Experimental Designs Or Analyses: I have checked the experimental designs and gone through the analysis on the experimental results and computational costs.
Supplementary Material: I have gone through the supplementary material.
Relation To Broader Scientific Literature: The paper builds on the foundation of RLHF and addresses the critical issue of reward model inaccuracy. It relates to prior work on improving reward model accuracy and methods to mitigate reward over-optimization. The proposed method follows a different path to address the issue of reward model inaccuracy, i.e., to analyze which samples are associated with inaccurate rewards and filter them out. This path is interesting but seems too empirical.
Essential References Not Discussed: The paper includes a comprehensive survey of related literature. Although Deepseek-math and Qwen-math employ a combination of a dense reward model and a rule-based reward model, claiming advantages for this configuration, I want to highlight the ongoing debate spurred by the success of DeepSeek-R1. Specifically, the debate centers on whether code or math tasks truly require a learned reward model. By leveraging long-chain-of-thought (CoT) reasoning, the DeepSeek team employs only the sparse true reward for these tasks. It would greatly benefit the authors to include a discussion on this debate in their paper.
Other Strengths And Weaknesses: S1 The idea of filtering samples based on reward reliability is easy to implement to address the reward inaccuracy issue in RLHF.
S2 The authors provide extensive experiments on multiple benchmarks and different tasks to demonstrate the effectiveness of their method.
S3 The findings are applicable to a wide range of tasks, including code generation and math reasoning.
W1 This paper is mostly motivated by empirical observations and lacks theoretical analysis on why it works.
W2 The effectiveness of PF-PPO is highly dependent on the quality or the properties of the reward model. Although the paper provides experimental results demonstrating that the property applies to reward models trained in different domains, there is no strong evidence for the effectiveness of this method in other scenarios, e.g., where the reward model is poorly trained or biased.
Other Comments Or Suggestions: See the questions below.
Questions For Authors: 1 The method is proposed to address the inaccurate reward issue. Why this method also improves the results when the reward is given by the oracle (cf. Table 4)?
2 For the computational costs, the authors state that PF-PPO and standard PPO are evaluated in a fair manner using approximately the same number of samples (including the filtered samples). Since the performance is reported as the best score during the whole training process, how does PF-PPO converge compared to standard PPO? It is possible PF-PPO converges slowly and therefore corresponds to poorer computational efficiency.
3 Are there other metrics besides R2 that could be used to select filtering strategies? How do these metrics compare in terms of predicting performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: # Author Response to Reviewer DFFh
We thank the reviewer for the positive and constructive feedback.
## W1: Theoretical Motivation
While PF-PPO is empirically driven, we ground our approach in the signal-to-noise ratio principle: ambiguous samples (mid-reward) introduce conflicting gradients during PPO updates, destabilizing training. By filtering these, PF-PPO reduces variance in policy updates. We will formalize this intuition in future work by analyzing gradient variance under different filtration strategies.
## W2: Dependency on Reward Model Quality
We believe that our algorithm will yield improvement over different rewards (either generated by reward models or provided by oracles) since filtering out less reliable samples should help across different settings. One example is that, even with perfect rewards (Table 4), PF-PPO improves performance by filtering suboptimal but correct samples (e.g., verbose code), allowing the policy to prioritize concise, high-quality solutions (see Appendix B). Please also refer to the new set of our experiment results shown in the post “Response to All Reviewers” which indicate that our method is robust to rewards for different fields.
# Response to All Reviewers
We sincerely thank the reviewers for their constructive feedback and insightful suggestions.
One major advantage of our method is its simplicity, universality, and effectiveness. To further validate the broader effectiveness of our method, we conducted experiments across diverse domains using Doubao-25k (policy and reward model backbone). Tasks included logic reasoning, math, code generation, STEM problems, complex tasks, instruction following, knowledge QA, and language understanding. Each task has distinct evaluation sets and verifiers to assess response correctness. Results (accuracy improvement over vanilla PPO) are shown below. Statistically significant changes (exceeding $\pm 0.5$, based on test case counts) are **bolded**. These results demonstrate PF-PPO’s consistent effectiveness across tasks.
| Task (Evaluation Set Size) | BO1 Accuracy (%) | BO5 Accuracy (%) |
|----------------------------------|------------------|------------------|
| Logic Reasoning (1203) | 48.9 (**+2.3**) | 63.8 (**+2.8**) |
| Math (1759) | 69.7 (**+1.1**) | 79.9 (**+2.3**) |
| Code (3933) | 55.8 (-0.2) | 67.4 (+0.1) |
| STEM (4466) | 54.7 (-0.1) | 63.1 (+0.1) |
| Complex Tasks (2990) | 9.5 (**+1.0**) | 14.9 (**+0.6**) |
| Instruction Following (1525) | 49.6 (**+1.7**) | 59.8 (**+1.8**) |
| Knowledge (775) | 47.3 (**+1.9**) | 58.3 (**+1.8**) |
| Language Understanding (680) | 63.8 (**+1.6**) | 68.4 (**+3.8**) |
We thank the reviewers for their rigorous feedback, which has strengthened our analysis. The revised manuscript will incorporate these responses, clarify methodological details, and emphasize PF-PPO’s broader applicability. | Summary: The authors introduce Policy Filtration for Proximal Policy Optimization (PF-PPO). Their key insight is that the reward signal is more useful in cases of high or low reward, and they design an algorithm around exploiting this by filtering the samples used in PPO based on their quality as measured by some heuristic (R-squared). They also construct the LeetCode Contest benchmark, consisting of 160 weekly LeetCode problems from July 2022 to January 2024
They evaluate their algorithm on the HumanEval, MBPP, Ape210K and CMATH benchmarks as well as a LeetCode benchmark, testing against a range of SFT, DPO and PPO methods, demonstrating strong results over their chosen baselines.
Claims And Evidence: The primary claims of the paper are the following:
-The reward model used in RLHF is primarily more reliable when it comes to exemplar samples (samples given a high score/reward).
-Because of the former, a model trained with RLHF with filtering such that the primary samples used are exemplar positive or negative samples should yield higher performance.
These claims are supposed by the following:
-across a large number of experiments (10 outputs for 160 prompts across 10 trials), the authors show that the actual reward aligns with the predicted reward most closely as the predicted score approaches its extremes.
-The authors demonstrate PF-PPO outperforms the other training methods.
Methods And Evaluation Criteria: The authors apply three different filtration methods: 'best-of-N' (BoN), always selecting the top response; 'best-of-random' (BR), a 50% chance to select the best response and a 50% chance to select any other response; and 'best-worst' (BW), a coin flip on whether the best or the worst response is selected. They use the R-squared between the predicted and the actual score and show a high linear correlation for BR and BW, even higher than the R-squared with no filter.
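A minimal sketch of the three selection rules as described (my own illustration; `select_response` and its signature are hypothetical, not the paper's code):

```python
import random

def select_response(scored, strategy, rng=None):
    """scored: list of (response, reward) pairs for one prompt.
    Returns the pair kept by the given filtration strategy."""
    rng = rng or random.Random()
    ranked = sorted(scored, key=lambda p: p[1], reverse=True)
    if strategy == "BoN":   # always the top-reward response
        return ranked[0]
    if strategy == "BR":    # 50%: best; 50%: any other response
        return ranked[0] if rng.random() < 0.5 else rng.choice(ranked[1:])
    if strategy == "BW":    # 50%: best; 50%: worst
        return ranked[0] if rng.random() < 0.5 else ranked[-1]
    raise ValueError(f"unknown strategy: {strategy}")
```

Under this reading, BW discards every mid-reward response, which matches the paper's motivation that those rewards are the least reliable.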
The benchmarks evaluated on consisted of a range of Math and Coding benchmarks. These are fitting evaluation benchmarks as both Math and Coding reward models tend to be easier to train due to how verifiable the rewards are. This attribute makes these benchmarks ideal for the authors method, as a messy reward signal could call into question the source of a decreased performance.
Theoretical Claims: No proofs were presented in the paper; support for the claims was provided empirically by the authors (see the R-squared verification above).
Experimental Designs Or Analyses: I believe the experimental design with respect to the baselines and evaluation benchmarks is sound (Please see 'Methods And Evaluation Criteria')
I agree with the range of methods covered as baselines. They all focus around training an LLM to be better at a specific domain, and I do not believe other zero-shot or in-context learning examples are relevant. The authors primary contribution is an improvement over PPO and this is compared against. They also demonstrate that their approach is able to raise vanilla PPO over other methods such as DPO or BOND-SFT in, for example, HumanEval.
These results also correlate with their R-squared results with BR and BW improving performance while BoN harms performance.
Supplementary Material: I read through the appendix, but did not examine it as closely as the main paper.
Relation To Broader Scientific Literature: The key contributions of this paper are in their insight of the tendency of reward models to be most accurate in the extrema of the reward distribution and filtration method they developed for applying this to PPO along with their LeetCode benchmark.
Both are very relevant to RLHF and the latter will be a good, new environment for evaluating the new reasoning models.
Essential References Not Discussed: None that I am aware of.
Other Strengths And Weaknesses: Strengths:
-The method is relatively straight-forward conceptually but the authors both verify its correctness and show that leveraging it can lead to increases in performance in relevant benchmarks.
-The authors verifiably demonstrate the correlation between the reward model predictions and the actual rewards in cases of high/low reward.
-A good range of benchmarks are selected and the author's method demonstrates gains over all of them when expected
-The authors also introduce a new benchmark for evaluating their method
Weaknesses:
-While testing was done primarily in benchmarks with easily verifiable rewards, it was not studied what happens when this is not the case
Other Comments Or Suggestions: N/A
Questions For Authors: Did the authors consider using a benchmark with less verifiable rewards as an ablation? In such a case, where the policy filtration may be less accurate, how does the method perform?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Author Response to Reviewer Rkmi
We thank the reviewer for the careful investigation and fruitful discussion.
## Generalization to Messy Reward Scenarios
While our experiments focus on code and math tasks with verifiable rewards, we acknowledge the importance of evaluating PF-PPO in scenarios with noisier or subjective rewards (e.g., safety, helpfulness). Our core insight—filtering ambiguous samples to retain high/low-reward extremes—is task-agnostic and relies only on the relative reliability of reward signals. Moreover, our additional experiment results shown in the post “Response to All Reviewers” should also support this claim since the tasks such as language understanding and instruction following are associated with different levels of reward accuracy (e.g., involving subjective rewards).
# Response to All Reviewers
We sincerely thank the reviewers for their constructive feedback and insightful suggestions.
One major advantage of our method is its simplicity, universality, and effectiveness. To further validate the broader effectiveness of our method, we conducted experiments across diverse domains using Doubao-25k (policy and reward model backbone). Tasks included logic reasoning, math, code generation, STEM problems, complex tasks, instruction following, knowledge QA, and language understanding. Each task has distinct evaluation sets and verifiers to assess response correctness. Results (accuracy improvement over vanilla PPO) are shown below. Statistically significant changes (exceeding $\pm 0.5$, based on test case counts) are **bolded**. These results demonstrate PF-PPO’s consistent effectiveness across tasks.
| Task (Evaluation Set Size) | BO1 Accuracy (%) | BO5 Accuracy (%) |
|----------------------------------|------------------|------------------|
| Logic Reasoning (1203) | 48.9 (**+2.3**) | 63.8 (**+2.8**) |
| Math (1759) | 69.7 (**+1.1**) | 79.9 (**+2.3**) |
| Code (3933) | 55.8 (-0.2) | 67.4 (+0.1) |
| STEM (4466) | 54.7 (-0.1) | 63.1 (+0.1) |
| Complex Tasks (2990) | 9.5 (**+1.0**) | 14.9 (**+0.6**) |
| Instruction Following (1525) | 49.6 (**+1.7**) | 59.8 (**+1.8**) |
| Knowledge (775) | 47.3 (**+1.9**) | 58.3 (**+1.8**) |
| Language Understanding (680) | 63.8 (**+1.6**) | 68.4 (**+3.8**) |
We thank the reviewers for their rigorous feedback, which has strengthened our analysis. The revised manuscript will incorporate these responses, clarify methodological details, and emphasize PF-PPO’s broader applicability. | null | null | null | null | null | null |
On the Importance of Embedding Norms in Self-Supervised Learning | Accept (poster) | Summary: This paper explores the role of embedding norms in self-supervised learning (SSL). It shows that embedding norms are crucial for SSL models in two key aspects: they influence convergence rates and encode model confidence. The study demonstrates that smaller embedding norms are associated with unexpected samples, and manipulating embedding norms can affect training efficiency. The paper includes theoretical analysis alongside simulations and experiments to establish how embedding norms influence the dynamics of SSL training. The results reveal that embedding norms govern how SSL models train and evolve, offering insights into the optimization process and network behavior. The study highlights the importance of controlling these norms for better performance in SSL tasks.
Claims And Evidence: The claims in this paper are generally supported by evidence, but the study's limited scope—focused on just a few SSL methods (SimCLR and SimSiam)—means that these findings may not universally apply to other SSL methods. This is not explicitly stated in the paper, which could lead to questions about the generalizability of the results. Therefore, while the empirical evidence is convincing, a broader range of methods and further analysis could provide stronger and more universally applicable results.
Methods And Evaluation Criteria: The paper uses both theoretical analysis and empirical experiments, which are appropriate for exploring the relationship between embedding norms and SSL dynamics. The theoretical component provides mathematical bounds and insights into how norms influence convergence rates and gradient magnitudes, while the empirical experiments validate these claims under practical conditions.
Theoretical Claims: I have checked the correctness of proofs in this paper. However, most theoretical analyses in this paper build on prior work.
In addition, Theorem 3.4 is simple but, in my opinion, unrealistic, as it only involves the cosine similarity of positive pairs and neglects interactions with negative samples, which are essential to avoiding feature collapse.
As a result, the proofs lack theoretical novelty.
Experimental Designs Or Analyses: 1. The experiments are conducted on CIFAR-10, CIFAR-100, and ImageNet-100, which are widely accepted SSL benchmarks. The models are trained for 128–256 epochs, with SimCLR and SimSiam used as the primary SSL models. However, standard benchmarks[1] require 1000 epochs on CIFAR-10/100 and 400 epochs on ImageNet-100 for proper convergence. The limited training time raises concerns about whether the observed effects persist in fully trained models.
2. The paper only evaluates a few SSL methods, specifically SimCLR and SimSiam, while omitting other prominent frameworks like Barlow Twins, VICReg, SwAV, and DINO. Since Barlow Twins and VICReg do not use embedding norms and actually experience collapse when norms are introduced, it is crucial to analyze their behavior to confirm whether the findings hold across SSL paradigms. The lack of diversity in SSL models weakens the generalizability of the study.
[1] solo-learn: A library of self-supervised methods for visual representation learning.
Supplementary Material: I have reviewed all parts of the appendixes as well as the codes provided in supplementary material.
Relation To Broader Scientific Literature: The contributions of this paper are rooted in and extend the broader literature on embedding norms in SSL. By mainly offering new empirical results, and practical techniques for manipulating embedding norms, the paper provides contributions to the understanding of SSL dynamics and introduces methods to improve the training and performance of SSL models. The work builds on and extends prior findings while addressing gaps in the literature related to convergence speed, confidence encoding, and embedding norm manipulation.
Essential References Not Discussed: The author emphasizes the importance of Embedding Norm in self-supervised learning. However, it is worth noting that methods like Barlow Twins[1] and VICReg[2] do not adopt Embedding Norm; rather, incorporating it leads to collapse in these methods. The author should delve deeper into analyzing these phenomena.
[1] Self-supervised learning via redundancy reduction. In ICML, 2021.
[2] Vicreg: Varianceinvariance-covariance regularization for self-supervised learning. In ICLR, 2022.
Other Strengths And Weaknesses: Significance:
The insights into embedding norm manipulation could have important implications for optimizing SSL models, especially in terms of convergence efficiency. By providing methods like cut-initialization and weight decay to control norms, the paper offers practical techniques that could be applied to a wide range of self-supervised methods to improve training speed and stability. This is a significant contribution to the SSL field, as improving training dynamics remains a crucial challenge in modern machine learning.
Weaknesses:
1. Limited Scope of Methods: while the paper makes a significant contribution to understanding the role of embedding norms in SSL, the study is limited to just two SSL methods: SimCLR and SimSiam. These models represent only a small subset of the diverse family of SSL techniques. The paper does not discuss or evaluate methods like Barlow Twins, VICReg, or DINO, which could have provided a more comprehensive analysis. Including these methods would have made the findings more generalizable and would have strengthened the argument that the observed effects of embedding norms are consistent across different SSL paradigms.
2. Theoretical Depth: the theoretical contributions, while useful, lack novel insights and mostly build on prior work, especially in terms of embedding norm behavior. The mathematical analysis of how norms affect convergence rates and gradient scaling is solid, but the lack of original theoretical contributions (such as new theorems or proofs) limits the novelty of the paper’s theoretical aspect. The paper could have been more impactful if it had introduced new theoretical findings that directly contribute to advancing the understanding of norm manipulation in SSL.
3. Training Epochs: the training duration used in the experiments (128–256 epochs) is relatively short, especially for datasets like CIFAR-10/100 and ImageNet-100, which typically require much longer training times (e.g., 1000 epochs for CIFAR-10/100). This raises concerns about whether the effects of embedding norm manipulation hold during the full training process or if they only manifest after longer periods of training. A more extended evaluation would provide more robust evidence of the claims, particularly regarding long-term stability and model performance.
Other Comments Or Suggestions: See Other Strengths And Weaknesses.
Questions For Authors: I am curious whether the same phenomenon would occur with embedding norm in clustering and whitening-based methods, such as SwAV and W-MSE.
I would be happy to raise the score if authors' rebuttal can solve my concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the helpful suggestions and the in-depth review! We include experiments and discussion in response to your questions on the SSL methods used in the paper, the training epochs and the applicability/novelty of our analysis. Responses to individual questions are below:
> "while the paper makes a significant contribution to understanding the role of embedding norms in SSL, the study is limited to just two SSL methods"
We agree with the reviewer that adding more methods would be beneficial. To address this concern, we have run additional experiments on BYOL (resnet backbone; optimizes the cosine similarity), Mocov3 (ViT backbone; optimizes InfoNCE) and Dino (ViT backbone; not cos.sim.-based) on ImageNet-100. The results can be found in the response to Reviewer FWxF and are in accordance with the results in the paper. Please let us know if this has helped to address the concern.
> "I am curious whether the same phenomenon would occur with embedding norm in clustering and whitening-based methods, such as SwAV and W-MSE."
Our key theoretical contribution (norms grow, gradients decrease by norm) holds generally for loss functions based on normalized embeddings. As a result, it applies to SwAV, as well as the similar ProtoNCE, and to W-MSE. The latter two directly use the cosine similarity, so Prop. 3.1 and Thm. 3.4 apply directly. Additional methods which use normalized embeddings, and to which our results therefore apply, include NN-CLR, BYOL, CLIP, DCL, and BEITv2, to name a few. We will make this clearer when revising the paper and will include a formalization of the general statement (that our findings hold for loss functions which depend on normalized embeddings).
> "Methods like Barlow Twins[1] and VICReg[2] do not adopt Embedding Norm; incorporating it leads to collapse in these methods."
We agree that this is an interesting direction for future work. As the reviewer points out, several SSL models perform worse when the embeddings are normalized. However, our analysis does not apply to these methods and we are not able to test why Barlow Twins and VICREG fail in the normalized setting in the limited rebuttal period. We will clarify this during the revision.
> "theorem 3.4 is unrealistic in my opinion, as it only involves the cosine similarity of positive pairs and neglects the interactions of negative samples"; "the theoretical contributions, while useful, lack novel insights"
We maintain that theorem 3.4 is a novel theoretical contribution and note that it holds immediately for the non-contrastive models SimSiam and BYOL, both of which only optimize the cosine similarity between positive samples. While it does not directly apply to the InfoNCE loss, we believe that our extensive analysis in Section 6 describes how embedding norms and convergence interact in contrastive settings. We also refer to our rebuttal to reviewer Jmax, where we extended the simulations in Section 4.1 to further analyze how weight-decay affects convergence.
> "The lack of original theoretical contributions (such as new theorems or proofs) limits the novelty of the paper’s theoretical aspect."
We recognize that our theoretical contributions build upon existing work in deep metric learning. However, our paper's novelty lies not in creating new mathematical frameworks, but in bringing theory and observations together and showing that they are causally related. The theory about gradient norms had not previously been experimentally verified and the experimental observations about SSL confidence did not have a theoretic underpinning. Our theory, simulations and experiments were designed to validate and explore these connections.
> "the training duration used in the experiments (128–256 epochs) is relatively short"
To address this concern, we have run on Cifar10/100 for 1000 epochs and report the kNN accuracies below:
|SimCLR|Default|Cut|GradScale|
|---|---|---|---|
|Cifar10|87.7|88.2|88.2|
|Cifar100|56.6|60.1|58.2|
|SimSiam|Default|Cut|
|---|---|---|
|Cifar10|88.1|88.4|
|Cifar100|61.8|62.6|
The performance improvements of cut-initialization and GradScale persist after 1000 epoch training and even grow in the case of Cifar100. We thank the reviewer for the suggestion.
The ImageNet experiments in Tables 2 and 3 were already run for the requested 500 epochs.
------
We hope we've addressed your concerns but please let us know if there is something else we can do to convince you further. | Summary: The paper proposes that the norm of the embeddings play an important role that may affect both optimization (convergence) and generalization properties of self-supervised learning methods. The paper makes an analytical observation about how embedding norm can slow down convergence. The paper argues about a model's confidence on seen/unseen data can be linked to embedding norm empirically. The paper uses SimCLR as an example of contrastive method and SimSiam as an example of non-constrastive method in its (empirical) analysis.
Claims And Evidence: Convergence: The paper makes a claim on convergence and supports it via analysis (P3.1, P3.2, C3.3, T3.4) and uses a toy setting for empirical support. Note that P3.1 is a result from prior work (clearly noted in the paper) while the other propositions/theorems are new work proposed in the paper (IMO)
Seen/Unseen Data: The paper uses an empirical approach to argue about embedding norm and seen/unseen data. A problem that I see with the analysis is that the paper uses norm values that aren't clearly interpretable. I find it hard to figure out what constitutes a small norm vs. a large norm value.
Methods And Evaluation Criteria: The paper uses a mix of analysis, toy models and small-scale experiments. Datasets include CIFAR-10, CIFAR-100 and ImageNet-100 (a subset of ImageNet). The paper uses SimSiam and SimCLR as stated earlier for as model prototypes.
While the above is good, I found the lack of analysis/experiments on ImageNet a big concern. This is especially concerning as the newly proposed methods/interventions need to be tested at least at ImageNet-1K scale as is standard practice in current SSL literature
Theoretical Claims: The theoretical claims appears to be fine. I checked P3.1, P3.2, C3.3 and T3.4 and read/skimmed proofs in the appendix
Experimental Designs Or Analyses: - I found the lack of analysis/experiments on ImageNet a big concern. This is especially concerning as the newly proposed methods/interventions need to be tested at least at ImageNet-1K scale as is standard practice in current SSL literature
- Additionally, the use of embedding norm makes it harder for this reader to interpret (why is 0.74 considered a good indicator of "OOD" in Figure 3 on the right)
- While kNN classifier is a good probe, I have seen linear probe and more recently attentive probe being favored in SSL literature. This omission is a concern for this reader/reviewer
- While SimCLR and SimSiam are good prototypes for SSL, there are other recent methods like I-JEPA and DinoV2 that need to be considered if the authors are interested in testing non-contrastive methods. While I understand that the jumping-off point is the InfoNCE loss analysis, the authors already consider SimSiam, so using popular SSL methods would make the paper interesting to readers as well as the authors.
Supplementary Material: Read/skimmed the material. I appreciate the Pytorch-like implementation of GradScale very nice in addition to the analytical results and experimental results and details
Relation To Broader Scientific Literature: One area that's a significant concern is the lack of discussion on the uses of embedding rank and relationship to downstream performance. See the following works:
- \alpha-Req: https://openreview.net/forum?id=ii9X4vtZGTZ
- RankMe: https://arxiv.org/abs/2210.02885
- LiDAR: https://arxiv.org/abs/2312.04000
- CLID: https://openreview.net/forum?id=BxdrpnRHNh
(see section on essential references on suggested changes to manuscript)
Essential References Not Discussed: One area that's a significant concern is the lack of discussion on the uses of embedding rank and relationship to downstream performance. See the following works:
- \alpha-Req: https://openreview.net/forum?id=ii9X4vtZGTZ
- RankMe: https://arxiv.org/abs/2210.02885
- LiDAR: https://arxiv.org/abs/2312.04000
- CLID: https://openreview.net/forum?id=BxdrpnRHNh
Specifically, I believe the authors should check the conclusions the CLID paper makes w.r.t. kNN classifiers. Ideally, all of the above papers should be discussed by the authors as they appear to be related work. Rank is different from embedding norm, but the fact that rank has been shown to correlate well with downstream performance appears to be a related aspect w.r.t. generalization.
Other Strengths And Weaknesses: - The paper is well written. I really mean this as this has been an enjoyable read
- A weakness I see is that the new interventions (Cut-Initialization and GradScale) need thorough testing at a larger scale than what the authors used, to ensure the observations may be useful to a broader group of readers in SSL
Other Comments Or Suggestions: - Figure 3 could be improved
- (a) the axes would read better if transposed
- (b) the colors make it hard to distinguish classes
Questions For Authors: Please see my comments and observations made above. I look forward to authors' response as that may help clarify my questions and/or confusion about the work reported in the paper.
## Updated score to indicate my support for the paper
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the extensive analysis and suggestions for how to improve our work! Among other things, our rebuttal includes experiments on the Tiny-ImageNet dataset, on additional models, additional probes and on how the embedding's rank corresponds to the embedding norm. We respond to individual points below:
> "I found the lack of analysis/experiments on **ImageNet** a concern."
Unfortunately, due to computational constraints, this is infeasible for us. However, to address this concern, we trained SimCLR for 500 epochs on the Tiny-Imagenet dataset, which contains 200 classes, each with 500 training samples. The results are consistent with our other experiments:
|Probe|Default|Cut|GradScale|
|---|---|---|---|
|kNN probe|36|37.7|38.1|
|Lin. probe|41.9|42.8|43.2|
We will include these when updating the paper.
> "I have seen linear probe... favored in SSL literature"
We focused on kNN as it is known to lower-bound the other probes [1] and is a good indicator of model performance [2]. However, to address this concern, we have also run linear probes on several datasets:
**Tiny-ImageNet** linear probe results are presented in the table above.
**CIFAR-100** linear probe results can be found below:
|SimCLR|Default|Cut|GradScale|
|---|---|---|---|
|Cifar100|59.8|63.2|62.2|
|SimSiam|Default|Cut|
|---|---|---|
|Cifar100|63.7|64.9|
In both cases, the linear probe results are in line with the kNN ones. We will include linear probe evaluations in the revision.
> "using popular SSL methods would make the paper interesting to readers as well as the authors"
We agree that adding more methods would be beneficial. To this end, we have run additional experiments on BYOL (resnet backbone; optimizes the cosine similarity), Mocov3 (ViT backbone; optimizes InfoNCE) and Dinov2 (ViT backbone; not cos.sim.-based) on ImageNet-100. The results can be found in the response to Reviewer FWxF and are consistent with our other experiments.
> "the use of embedding norm makes it harder to interpret (why is 0.74 considered a good indicator of "OOD" in Figure 3)"
The absolute embedding norms will differ between models and would be difficult to use directly as a confidence metric. Instead, we propose using the *relative* embedding norms. Put simply, if an embedding norm is smaller than most norms seen on the training set, then the sample is likely OOD. Thus, in Figure 3, we have normalized the values by the training set's mean embedding norm, implying that the expected in-distribution norm is 1. The closer to 0 the value is, the more likely the sample is to be OOD. We will make this more clear in the revision.
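The relative-norm heuristic described here is straightforward to state in code. Below is a minimal NumPy sketch of the idea (our own illustration with synthetic data, not the paper's code; the threshold and variable names are assumptions):

```python
import numpy as np

def relative_norms(train_emb, test_emb):
    """Normalize test embedding norms by the training-set mean norm.

    Values near 1 suggest in-distribution samples; values much
    closer to 0 suggest likely-OOD samples.
    """
    train_mean = np.linalg.norm(train_emb, axis=1).mean()
    return np.linalg.norm(test_emb, axis=1) / train_mean

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 64)) * 10.0   # large-norm "seen" embeddings
in_dist = rng.normal(size=(5, 64)) * 10.0
ood = rng.normal(size=(5, 64)) * 2.0         # smaller-norm "unseen" embeddings

print(relative_norms(train, in_dist).round(2))  # each close to 1.0
print(relative_norms(train, ood).round(2))      # each well below 1.0
```

With this normalization, the expected in-distribution value is 1, and smaller values indicate lower model confidence, matching the interpretation of Figure 3.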
> "lack of discussion on the uses of embedding rank and relationship to downstream performance"
We thank the reviewer for these references, we will include them in the related work section. Although the references address a different aspect of SSL embeddings than we do (they discuss the rank of all the embeddings whereas we discuss the norm of individual embeddings), we agree that these topics are related. Specifically, our work shows that the embedding norms grow in regions of the latent space which have high density (Section 4.2). This implies that these regions may induce large eigenvalues on the covariance matrix or be clusterable as in CLID.
As a preliminary test, we include below a table which shows the rank of the Cifar10 embedding space over training epochs and the corresponding mean embedding norm. We calculate rank as the number of principal components required to capture 99 percent of the latent space's variance. Interestingly, we find that the rank starts growing at roughly the same epoch where the norms stop growing, implying a potential correlation. We will include this in the discussion section at the end of the paper.
|Epoch|1|2|4|8|16|32|64|128|
|---|---|---|---|---|---|---|---|---|
|Rank|23|13|10|8|9|12|17|21|
|Norm|1.4|2.5|5.5|12.9|24.1|22.7|23.5|22.1|
We also note that this is likely related to the study on dimensional collapse in SSL methods, such as [3].
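The rank measure used in the table above (the number of principal components required to capture 99 percent of the latent space's variance) can be computed directly from singular values. A minimal sketch of one way to do this (not the authors' code):

```python
import numpy as np

def rank_99(embeddings, threshold=0.99):
    """Number of principal components capturing `threshold` of variance."""
    X = embeddings - embeddings.mean(axis=0)   # center before PCA
    s = np.linalg.svd(X, compute_uv=False)     # singular values, descending
    var = s ** 2
    explained = np.cumsum(var) / var.sum()
    # First index where cumulative explained variance reaches the threshold.
    return int(np.searchsorted(explained, threshold) + 1)

rng = np.random.default_rng(0)
# Exactly rank-3 data embedded in 32 dimensions.
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 32))
print(rank_99(X))  # 3
```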
> "the new interventions need testing at a larger scale"
We emphasize that these interventions were included to study the behavior of the embedding-norm effect in practice and were not intended to obtain SOTA results. However, we agree that further testing would be helpful. To this end, we hope the analysis on additional models (BYOL, MoCov3 and Dino on ImageNet-100) and the additional experiments on Tiny-Imagenet have addressed this concern.
> Figure 3
We will amend the figure.
### References
[1]: Oquab, Maxime, et al. "Dinov2: Learning robust visual features without supervision." TMLR 2023.
[2]: Marks, Markus, et al. "A closer look at benchmarking self-supervised pre-training with image classification." arXiv preprint 2024.
[3]: Tian, Yuandong, et al. "Understanding self-supervised learning dynamics without contrastive pairs." ICML 2021.
------
Please let us know if these have addressed your concerns.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal. The rebuttal adds good support to existing content in the paper but concerns with empirical setup/analysis remain. I will consider the authors rebuttal in my discussion with the other reviewers and AC in the next phase. I appreciate your hard work. | Summary: The article examines the structure of the gradient expression for the InfoNCE SSL objective. Building on a previous result, this gradient expression is reformulated to emphasize that:
- The gradient involves a projection onto a subspace orthogonal to the embedding vector with respect to which the gradient is computed.
- The gradient is inversely proportional to the norm of this embedding vector.
These properties are utilized to derive learning characterizations, such as the continuous growth of embedding vector norms and an upper bound on the improvement of the cosine similarity between positive pairs. These characterizations are then applied to analyses, including the use of embedding norms for addressing class imbalance and assessing network confidence, as demonstrated through numerical examples.
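The two gradient properties summarized above can be checked numerically: for the cosine similarity $\cos(z_i, z_j)$, the gradient with respect to $z_i$ is $\frac{1}{\|z_i\|}(I - \hat{z}_i \hat{z}_i^\top)\hat{z}_j$, so it lies in the tangent space (orthogonal to $z_i$) and shrinks as $\|z_i\|$ grows. A minimal NumPy sketch verifying both properties (our own illustration, not the paper's code):

```python
import numpy as np

def grad_zi(zi, zj):
    # Analytic gradient of cos(zi, zj) w.r.t. zi:
    # (1/||zi||) * (I - zi_hat zi_hat^T) zj_hat  -- a projection onto the
    # subspace orthogonal to zi, scaled inversely by ||zi||.
    ni = np.linalg.norm(zi)
    zi_hat, zj_hat = zi / ni, zj / np.linalg.norm(zj)
    return (zj_hat - (zi_hat @ zj_hat) * zi_hat) / ni

rng = np.random.default_rng(0)
zi, zj = rng.normal(size=8), rng.normal(size=8)
g = grad_zi(zi, zj)

# Property 1: the gradient is orthogonal to the embedding itself.
print(abs(zi @ g))  # ~0 (up to float precision)

# Property 2: scaling zi by c shrinks the gradient norm by 1/c.
g_scaled = grad_zi(5.0 * zi, zj)
print(np.linalg.norm(g) / np.linalg.norm(g_scaled))  # ~5
```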
Claims And Evidence: - The claims regarding the special structure of the cosine distance gradient—specifically, its projection onto the tangent space and its inverse dependence on the embedding norm—are extensions of previous work and are well-supported by straightforward derivations.
- The claim about the increase in embedding norm and the dependence of cosine angle improvement is analytically grounded in the gradient structure.
- The claims concerning class imbalance and confidence are motivated by the earlier theoretical findings but are primarily supported by numerical experiments.
These claims are specific to SSL loss functions that involve cosine distance and optimization schemes that use a constant or non-adaptive learning rate.
Methods And Evaluation Criteria: The setups for numerical experiments based on synthetic data and classical ML datasets mainly make sense as well as the conclusions drawn from these examples.
Theoretical Claims: The analytical results are mainly in Section 3 (and their proofs are in Appendix A). These are extensions of the cosine angle gradient result by Zhang et al (2020) and appear to be correct. The probability bound (Proposition C.1) in Appendix C appears to be correct.
Experimental Designs Or Analyses: The experimental designs seem appropriate, although the presentation clarity for Figure 2b (Section 4.2 ) and Figure 3 (Section 5) needs improvement.
Supplementary Material: I reviewed Appendix A, which contains the main derivations and proofs, and found it to be sound. I briefly skimmed Appendix B, which provides some experimental details. I did not examine Appendices C and D in detail, except for Proposition C.1, which I checked.
Relation To Broader Scientific Literature: The article analytically extends the structural interpretation of the cosine distance gradient and provides simulation-supported insights into SSL training.
Essential References Not Discussed: I believe the article offers essential references including Zhang et al. (2020) which provides the structural form of the cosine distance gradient, which is extended and further elaborated in this article.
Other Strengths And Weaknesses: Strengths:
The article provides useful insights into the learning process of cosine-distance-based self-supervised learning (SSL), particularly focusing on embedding-norm normalization and projection onto the tangent space of the embedding vector.
Weaknesses:
The main results build upon prior work on the structure of the cosine-distance gradient. Extending this analysis to the entire InfoNCE objective is relatively trivial, as it naturally follows from the use of cosine distance for negative samples.
The analysis is specific to the use of SGD-based updates for cosine-distance-based SSL with a constant or non-adaptive learning rate. However, in practice, learning rates can be chosen adaptively—for instance, using learning rate rules that scale with the norm of the gradient—thereby eliminating the quadratic slowdown claimed in Theorem 3.4. Moreover, SSL algorithms often incorporate $\ell_2$ -normalization or regularization on embeddings or projector outputs during training, which fundamentally alters the dynamics of embedding norms.
Despite the fact that the article provides interesting insights about the impact of embedding-norm and tangent-space projection components of the cosine-distance based SSL gradient, the overall contribution is weakened by multiple factors. Most notably, the core analysis essentially reaffirms known results about the structure of the cosine-distance gradient, making its extension to the broader InfoNCE objective feel rather trivial. Furthermore, the reliance on a strictly SGD-based framework with a fixed learning rate neglects the practical reality of common SSL training, where adaptive learning rate schedules—and often $\ell_2$-normalization or related regularization strategies—impact the claimed quadratic slowdown feature. Therefore, while there is value in the article’s focus on embedding-norm dynamics, the limited novelty in analytical extension and practical applicability diminish the strength of the overall contribution.
Other Comments Or Suggestions: Presentation improvement suggestions: There appears to be various typos:
- Line 802: R^20->R^{20}?
- Line 895: variables->vectors
- Line 926: Table ??
Questions For Authors: - Based on Equation (3), the authors claim that embedding norms cause a slowdown proportional to their magnitude. In this expression, the learning rate $\gamma$ is a tunable algorithm parameter that can be adjusted to compensate for this effect. Could we not simply choose $\gamma \propto \rho^2$, or use adaptive learning rate rules to prevent the slowdown? Alternatively, would it not be possible to constrain embeddings to $S^{d-1}$, for example?
- How do regularization terms in the overall loss (such as weight decay) impact the conclusions drawn about the embedding norms based on the special structure of the cosine-distance metric?
Ethical Review Concerns: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thorough review. Regarding Theorem 3.4's applicability, we appreciate your insights about potential mitigations - weight decay, scaling gradients by embedding norms, and adaptive optimizers. We note that our paper systematically analyzes how the first two affect SSL training in practice. For completeness, we include adaptive optimizer experiments below.
We now respond to individual comments:
> "The extension to the broader InfoNCE objective is trivial"
We agree that the extension from cosine similarity to the InfoNCE objective is straightforward. However, we believe that this is a strength of our paper.
The InfoNCE objective function is used in countless settings (see reply for reviewer FWxF) and our work shows that, in each of these, it has a dependence on the embedding norms. Although a few of these ideas were known in the deep metric learning literature, they had not been experimentally validated or extended to standard contrastive losses. In short, it is relevant for the SSL community to be aware of how the embedding norms interact with the InfoNCE loss function.
> "the core analysis reaffirms known results about the cosine-distance gradient"
Quick clarification: we believe that our core analysis lies in studying the relationship between SSL training and the embedding norms with (i) new theory (Thm 3.2 and 3.4), (ii) new experiments and (iii) new mitigation strategies. Such a comprehensive analysis was previously absent from the literature.
> "Could we not... use adaptive learning rate rules to prevent the slow down"
We believe the reviewer is implying that the results in Theorem 3.4 do not apply under, for example, the Adam optimizer. If so, we disagree. The gradient under Adam optimization still inversely depends on the embedding norm and, consequently, the embedding norms slow down the training process. To test this, we have trained SimCLR and SimSiam on the Cifar datasets using the Adam optimizer for 100 epochs and find consistent results:
|SimCLR w/ Adam|Default|Cut|GradScale|
|---|---|---|---|
|Cifar10|79.6|79.9|80.5|
|Cifar100|44.3|45.3|46.6|
|SimSiam w/ Adam|Default|Cut|
|---|---|---|
|Cifar10|73.7|79.8|
|Cifar100|36.4|44.9|
We also note that the Adam optimizer is not common in cos.sim.-based SSL models.
> "Could we not choose the learning rate proportional to the embedding norm?"
We agree. In fact, this is *precisely* what our GradScale does: it multiplies the gradient on each sample by its embedding's norm. We find this improves training but can lead to instability due to a positive feedback loop which is introduced: the embedding norm grows with the magnitude of the gradient and the gradient is now multiplied by the embedding norm. We include the embedding norms from the SimCLR w/ Adam Cifar10 training runs as an example of how the norms are affected by our mitigation strategies:
|Default|Cut|GradScale|
|---|---|---|
|81.0|2.1|174.8|
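As described above, GradScale leaves the forward pass unchanged and multiplies each sample's gradient by its embedding norm in the backward pass, cancelling the $1/\|z\|$ factor in the cosine-similarity gradient. A minimal NumPy sketch of that gradient transformation (illustrative only; the paper's supplementary material provides the actual PyTorch implementation):

```python
import numpy as np

def gradscale_backward(z, grad_z):
    """Backward-pass rule of a GradScale-style layer: rescale each
    sample's incoming gradient by that sample's embedding norm.
    The forward pass would simply return z unchanged."""
    norms = np.linalg.norm(z, axis=1, keepdims=True)
    return norms * grad_z

z = np.array([[3.0, 4.0],        # norm 5
              [0.6, 0.8]])       # norm 1
grad = np.array([[0.2, -0.1],
                 [0.2, -0.1]])

print(gradscale_backward(z, grad))
# first row scaled by 5, second row unchanged
```

This also makes the feedback loop mentioned above visible: larger-norm embeddings receive larger updates, which in turn grow their norms further.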
> "How do regularization terms in the overall loss (such as weight decay) impact the conclusions drawn?"
We politely note that we extensively analyzed how weight-decay improves SSL convergence in Section 6 and the appendix, particularly Table 1 and Figures 5, S3.
Additionally, weight decay would not change the correctness of Theorem 3.4: each individual step still depends on the square of the embedding norm. Nonetheless, we see the reviewer's point that weight decay will interact with the bound in Theorem 3.4, possibly leading to faster convergence. Practically, however, it is difficult to find a regularization strength large enough to mitigate the slow-down in Theorem 3.4 without training diverging.
To evaluate this, we have extended the simulation from Section 4.1 by augmenting the objective with a regularization term on the embedding norm to simulate how weight-decay affects convergence. Thus, we now optimize $\mathcal{L} = -\hat{z}_i^\top \hat{z}_j + \gamma \cdot ||z_i||_2$. The results are listed in the tables below. We find that while weight-decay can speed up convergence, the number of required steps still depends quadratically on the initial embedding norm. However, if the weight decay is made too large, convergence never occurs. This is in line with our experiments in Section 6.
For $\gamma = 0.5$:
|Initial Norm|Steps to Converge|
|---|---|
|1|64|
|4|526|
|7|1080|
For $\gamma = 1$:
|Initial Norm|Steps to Converge|
|---|---|
|1|49|
|4|318|
|7|610|
For $\gamma = 10$, the simulation diverges.
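A simulation of this kind can be reproduced in a few lines: gradient ascent on the cosine similarity of a pair, counting steps until alignment, with larger initial norms requiring many more steps. The sketch below (without the weight-decay term, and with illustrative hyperparameters, not the authors' exact setup) shows the underlying slowdown:

```python
import numpy as np

def steps_to_align(init_norm, lr=0.1, tol=0.99, max_steps=100_000):
    """Steps of gradient ascent on cos(zi, zj) until cos >= tol."""
    rng = np.random.default_rng(0)
    zj = rng.normal(size=8)
    zi = rng.normal(size=8)
    zi = init_norm * zi / np.linalg.norm(zi)   # fix the initial norm
    zj_hat = zj / np.linalg.norm(zj)
    for step in range(max_steps):
        ni = np.linalg.norm(zi)
        zi_hat = zi / ni
        if zi_hat @ zj_hat >= tol:
            return step
        # Gradient of cos(zi, zj) w.r.t. zi: the tangent-space
        # projection of zj_hat, scaled by 1/||zi||.
        grad = (zj_hat - (zi_hat @ zj_hat) * zi_hat) / ni
        zi = zi + lr * grad
    return max_steps

for n in (1, 4, 7):
    print(n, steps_to_align(n))
```

The step counts grow steeply with the initial norm, consistent with the quadratic dependence in Theorem 3.4 and with the tables above.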
> "Would it be possible to constrain embeddings to the sphere?"
This is something we considered: it would mean forcing the outputs onto $\mathcal{S}^{d-1}$ and may work well with the loss from Koishekenov et al. [1]. We chose to implement the GradScale layer as an alternative as it accomplishes a similar effect in principle.
### References
[1]: Koishekenov, et al. "Geometric contrastive learning." VIPriors Workshop, CVPR 2023.
------
Please let us know if we have addressed your concerns. | Summary: This paper investigated the relatively overlooked area of embedding normalization in self-supervised learning (SSL), as most prior works default to using cosine similarity between embedding vectors -- which normalizes by the product of magnitude of both vectors -- and effectively projects data onto a unit hypersphere. Inspired by some empirical evidence that the pre-normalization embedding norms contain meaningful information, the authors aimed to systematically establish the embedding norm’s rule in SSL. The authors claimed that embedding norms (1) govern SSL convergence rates, and (2) encode network confidence, and validated the claim with theoretical analyses, simulations and empirical results.
The first main result is on how embedding norms affect convergence. The authors first proved theoretical bounds showing that embedding norms impose a quadratic slowdown on SSL convergence, and further validated in simulation and real experiments, demonstrating the benefit of small embedding norms. Then they showed embedding norms grow during training when cosine similarity is optimized. These two results indicated that effective and efficient training of SSL requires managing the embedding norms, and they provided methods for doing this.
The second main result is on how embedding norms encode confidence. The authors argue that since the embeddings grow with each gradient update, their norms naturally correspond to the frequency of observed latent features, which in turn corresponds to model confidence. They also provided methods for studying this and validated this claim.
Claims And Evidence: Yes. The claims are clearly organized and supported by evidence.
Methods And Evaluation Criteria: The proposed methods (weight decay, cut-initialization, and GradScale) are reasonable and directly address the identified embedding norm effect. Even though the cut-initialization is a little too simple and brutal, in my opinion the main contribution is the insight rather than the specific novelty of the method itself.
Evaluation criteria, such as k-NN classification accuracy on standard datasets (Cifar-10, Cifar-100, ImageNet-100, Flowers102), appropriately measure SSL representation quality. The methods and evaluation criteria used in the experiments are sensible for studying the stated problems and phenomena.
Theoretical Claims: I reviewed the correctness of the main theoretical claims (Proposition 3.1, Proposition 3.2 and Theorem 3.4). The proofs seem correct to me, or at least I did not identify issues. But I am not particularly good at proofs so I wouldn't rely on this analysis.
Experimental Designs Or Analyses: The experimental designs and analyses conducted are sound and valid. I reviewed the experimental setup for results shown in Figures 2-5 and Tables 1-3. The approach is thorough and methodologically sound.
Supplementary Material: I did not review the supplementary material except for the proofs.
Relation To Broader Scientific Literature: The paper builds effectively upon existing literature regarding embedding norms and SSL. It clearly positions itself relative to foundational works such as SimCLR, SimSiam, and prior studies on embedding norms (Wang et al., 2017; Zhang et al., 2020). It provides deeper empirical insights compared to earlier works. The paper is well situated within the broader SSL literature.
Essential References Not Discussed: The paper appears comprehensive in its referencing of relevant literature. All critical prior works directly related to embedding norms in SSL seem adequately discussed, and I did not identify any essential missing references.
Other Strengths And Weaknesses: Overall, I find this paper to be insightful. In my opinion, the most interesting contribution is that it highlighted how embedding norms affect convergence. Specifically, the authors demonstrated both theoretically and empirically that smaller norms are preferred, and meanwhile related that to the observation that embedding norms generally increase while optimizing for cosine similarity. The conclusion that effective and efficient SSL training relies on managing the embedding norms is very insightful.
The weakness would be the proposed methods, especially the cut-initialization, are relatively brutal and less innovative. With that said, I find the knowledge and insight brought to the community by this paper to outweigh this weakness.
Other Comments Or Suggestions: Nothing at the moment.
Questions For Authors: Great work. One minor question is, besides k-NN accuracy, have you considered other measures of latent space quality (linear probing and/or finetuning performance for classification)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the kind words regarding our paper. We respond to your questions below.
> "The weakness would be the proposed methods, especially the cut-initialization, are relatively brutal and less innovative"
Regarding the cut-initialization, we agree with this point and it is something we deliberated on for quite some time. However, we could not come up with an alternative method for ensuring that the embedding norms are small at initialization. An alternative option is to apply cut-initialization at only the final layer, but this performs roughly equivalently. We also note that cut-initialization seems to accomplish the desired task very well, as evidenced by Tables 1-3 in the paper.
If the reviewer has suggestions as to what we may do as an alternative to cut-initialization, we would be happy to try them out.
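One way to see why scaling down the initial weights shrinks all embedding norms uniformly (assuming, as the name and the final-layer variant above suggest, that cut-initialization divides the initial weights by a constant): a bias-free ReLU network is positively homogeneous in each layer's weights, so scaling every one of $L$ layers by $c^{-1/L}$ scales the output, and hence every embedding norm, by exactly $1/c$. A minimal sketch of this property (our own illustration, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

def forward(weights, x):
    # Bias-free MLP: ReLU on all layers except the last.
    for W in weights[:-1]:
        x = relu(x @ W)
    return x @ weights[-1]

L, c = 3, 8.0
weights = [rng.normal(size=(16, 16)) for _ in range(L)]
x = rng.normal(size=(4, 16))

out = forward(weights, x)
# Scale each layer by c^(-1/L): positive homogeneity of ReLU
# means the output shrinks by exactly 1/c.
cut = [W * c ** (-1.0 / L) for W in weights]
out_cut = forward(cut, x)

print(np.linalg.norm(out_cut) / np.linalg.norm(out))  # 1/8 = 0.125
```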
> "Besides k-NN accuracy, have you considered other measures of latent space quality?"
We recognize that there are several probes that can be used. We focused on kNN as it is known to lower-bound the other probes [1] and is a good indicator of model performance [2]. To address the reviewer's question, we have run the linear probe on various models at the 500 epochs mark and see the same performance improvements:
|SimCLR|Default|Cut|GradScale|
|---|---|---|---|
|Cifar100|59.8|63.2|62.2|
|Tiny Imagenet|41.9|42.8|43.2|
|SimSiam|Default|Cut|
|---|---|---|
|Cifar100|63.7|64.9|
We will include linear probe evaluations in the revision of the paper.
[1]: Oquab, Maxime, et al. "Dinov2: Learning robust visual features without supervision." TMLR 2023.
[2]: Marks, Markus, et al. "A closer look at benchmarking self-supervised pre-training with image classification." arXiv preprint 2024.
------
Please let us know if there is anything else which we can address.
---
Rebuttal Comment 1.1:
Comment: Many thanks to the authors for providing the additional evidence. I do not have other questions at the moment. | Summary: This submission explores how embedding norms interact with SSL training dynamics, where cosine similarity is commonly used to map the data to a hypersphere. This paper studies the gradients of the cosine similarity loss (and the InfoNCE loss), revealing that while gradients are inversely scaled by the embedding norm (large norms cause vanishing gradients), the norms grow after each gradient step (causing a vicious cycle). This also yields an expression for the change in cosine similarity after one gradient step, showing a quadratic slowdown in convergence with respect to the norm and the angle between the pair of embeddings.
These findings are then validated via a set of simulation experiments controlling the angle and the norm. Conversely, the final embedding norms are themselves influenced by the optimization of cosine similarity: dense regions get higher norms. This leads the authors to conclude (empirically demonstrated, though not formally treated) that the embedding norm is descriptive of the network's confidence in a data point, potentially encoding novelty (OOD samples), downstream performance (classification accuracy), and human labelers' confidence and agreement.
Manipulating the embedding norm is then studied via three methods (weight decay, cut-initialization, and a GradScale layer), showing gains over default settings and on imbalanced datasets.
Claims And Evidence: For the most part the evidence is convincing. However, the 'Embedding Norm as Network Confidence' section is mainly an empirical study (although theoretically grounded via the dependence of norm size on sample frequency during training for the cosine similarity loss); it could benefit a lot from experiments on larger datasets like ImageNet, since the CIFAR datasets are relatively simple and small. While ImageNet has 1K classes, it is possible to take subsets of classes for the purposes of the study (e.g. Figure 3). The paper uses one such subset (ImageNet-100) in Figure 4, so this should not be a problem. Adapting different subsets of ImageNet for experiments on network confidence would help make the results more convincing.
For the InfoNCE loss proof, the authors argue that the negative contribution is averaged across many samples and should be smaller than the positive contribution (cosine similarity loss). I'm not sure this is always true, especially at the start of training, for complex datasets and network architectures, and without assuming the network is smooth enough (i.e., that similar augmented images land closer in latent space than negatives).
The authors also discuss 3 ways in which this submission differs from the related work on embedding normalization in the deep metric learning literature, which is very commendable. I agree with the first; however, the rest do not seem to be faithfully validated. The second - evaluating how large embedding norms affect real-world training - brings us back to the choice of datasets raised earlier. The third - while weight decay and cut-init are already known methods, it is not entirely clear whether GradScale is a novel contribution of this paper or not. If yes, there should have been better empirical confirmation of its effectiveness (currently it states that the models are sensitive and GradScale fails to demonstrate gains on larger datasets). In addition, it is not verified/discussed whether addressing the embedding norm effect is more beneficial than other techniques from DML (e.g., the regularization term for variance from [1]) or whether they are complementary.
Methods And Evaluation Criteria: The formulated experimental setups do make sense, but it would be more convincing if Imagenet-like datasets were used (see Claims and Evidence).
Theoretical Claims: I looked through the proofs in Appendix they seem alright, but I haven't checked everything very carefully (especially since the results are obtained in previous work [1]).
Experimental Designs Or Analyses: Yes, the synthetic experiments and confidence experiments description seem valid.
Supplementary Material: I looked through proofs and all experiments setups.
Relation To Broader Scientific Literature: The submission addresses current gaps in understanding how self-supervised representation learning (SSRL) methods work, specifically those based on contrastive learning. While there has been a track of works connecting modern SSRL methods, especially SimCLR/Contrastive Predictive Coding to deep metric learning (DML), this work seem to use results obtained in earlier DML literature to describe the training dynamics of common contrastive learning-based methods based on data augmentation and provide ground for connecting embedding norms and network confidence in cosine-similarity-based methods.
While prior discussion of network confidence via the norm is covered in the related work section, it seems unfair to say there has been no explanation for this phenomenon, especially when referencing work that treats the norm as a concentration parameter in von Mises-Fisher-based models.
Essential References Not Discussed: -
Other Strengths And Weaknesses: Theoretical findings largely depend on (and restate) the results from previous work (especially [1] which they acknowledge). This is not per se a serious weakness, since the authors formalized the quadratic inverse dependence on the norm in Theorem 3.4. However, this diminishes the conceptual contribution of the work.
[1] Zhang, Dingyi, Yingming Li, and Zhongfei Zhang. "Deep metric learning with spherical embedding." Advances in Neural Information Processing Systems 33 (2020): 18772-18783.
See also earlier sections (Claims and Evidence, Relation to Broader Scientific Literature)
Other Comments Or Suggestions: When describing GradScale, it seems p=0 and p=1 are switched. Raising the scalar to the power 0 eliminates the scaling effect, while p=1 leaves the norm scaling intact; currently it is written the other way around, if I'm not confused.
There is also a broken reference to a table in the supplementary material.
Questions For Authors: While I understand that SimCLR, SimSiam and BYOL are picked based on their use of cosine similarity, I wonder if other MSE-based methods also exhibit similar behaviour. Have you tried similar experiments with VICReg/W-MSE?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and in-depth review! Below, we address your concerns by adding experiments on Tiny-ImageNet and testing additional models (BYOL, MoCo v3, Dino), which confirm the generality of our theoretical and experimental results. Furthermore, we clarify the novelty of our analysis: while some of our methods and ideas have precedent, we are the first to consolidate them and show unambiguously how the embedding norms interact with self-supervised learning. We believe that knowledge of these interactions is valuable to the ICML community.
Detailed responses are below:
> "[The paper] could benefit from experiments on larger dataset like Imagenet"
We agree that it would be nice to run on ImageNet. However, due to computational constraints, this is infeasible for us. We therefore refer to our response to Reviewer wGR1, where we include experiments on Tiny-Imagenet. The results are consistent with what we see on other datasets.
> "I wonder if **other methods** also exhibit similar behaviour."
To address both this and the broader question of our results' generalizability, we have trained BYOL (which optimizes the cos. sim.), MoCo v3 (which optimizes InfoNCE) and Dino (which is not cos.sim.-based) on ImageNet-100 for 100 epochs with and without cut-init. We find that cut-init improves BYOL's accuracy at 100 epochs from 26% to 54% and MoCo v3's accuracy from 54% to 58%. These are in accordance with the results in our paper (in particular, Figure S4 from the paper). Our theory only applies to losses with normalized embeddings, thus not to Dino, VicReg or Barlow Twins. Indeed, we find that cut-init drops Dino's accuracy from 44% to 29%. This implies that trying to resolve the embedding norm effect in settings where it does not occur hurts performance.
Nevertheless, many other prominent SSL methods do rely on normalized embeddings (e.g., W-MSE, NNCLR, ProtoNCE, SWAV, CLIP, DCL, and BEiTv2), highlighting how widely applicable our analysis is. We will discuss this in the revision.
> "it is unfair to say there has been no explanation for network confidence via norm"
While we agree Kirchhof et al. (2022) have discussed the embedding norm as a concentration parameter for the vMF distribution, they only *stated* that it behaves this way and gave brief empirical evidence of this. In fact, Kirchhof et al. reference Scott et al. (2021), who explicitly raise why this holds as an open question: "Why does the COSINE embedding convey a confidence signal in the norm?" Our work therefore gives a clear *explanation* for this phenomenon for the first time. Furthermore, Scott et al. worked in the supervised setting.
> "...weight decay and cut-init are already known methods..."
While weight-decay is a known method, its standard discussion in the literature is entirely different from ours and focuses on overfitting. We only know of one reference, Wang et al. (2017), which suggests that weight decay may help with the embedding norm effect but this was not tested.
Similarly, cut-init only exists in transformer architectures and was proposed to manage the attention. We are not aware of it in any convolutional architectures or for managing embedding norms. Thus, the idea of dividing the weights at initialization has not been discussed with regards to the convergence rate and in this sense is a novel contribution of our paper.
Both of these are complementary to the regularization term for the variance from Zhang et al. (2020).
> "It is not clear whether GradScale is a novel contribution of this paper or not."
GradScale is a novel contribution, but our goal with it was not to propose a new SOTA method. Instead, we use it as a way to analyze how embedding norms affect SSL training. That is, by using GradScale we can show that removing the gradient's relationship to the embedding norm can improve generalization, especially on imbalanced datasets.
> "When describing GradScale, p=0 and p=1 are switched."
The $p$ parameter in GradScale is the power of the embedding norm that gets additionally multiplied to the original gradient. Thus, for $p=0$, the gradient is multiplied by $\|z\|^0 = 1$, leaving the gradient untouched.
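A minimal NumPy sketch of this parameterization (the helper names are ours, not the paper's code); it also shows why p = 1 is the interesting case, since the cosine-similarity gradient carries an inherent 1/||z|| factor:

```python
import numpy as np

def cos_sim_grad(z, w):
    """Analytic gradient of cos(z, w) with respect to z."""
    z_hat = z / np.linalg.norm(z)
    w_hat = w / np.linalg.norm(w)
    return (w_hat - (z_hat @ w_hat) * z_hat) / np.linalg.norm(z)

def gradscale(grad, z, p):
    """Multiply the incoming gradient by ||z||**p.

    p = 0 multiplies by 1 and leaves the gradient untouched; p = 1
    cancels the 1/||z|| factor in the cosine-similarity gradient.
    """
    return np.linalg.norm(z) ** p * grad

rng = np.random.default_rng(0)
z, w = rng.standard_normal(8), rng.standard_normal(8)

g = cos_sim_grad(z, w)
g_big = cos_sim_grad(10 * z, w)  # same embedding direction, 10x the norm
```

With the vanilla gradient, scaling z by 10 shrinks the gradient by 10; after GradScale with p = 1, the two gradients coincide.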
> "the authors argue that negative contribution is averaged across many samples and should be smaller than the positive contribution. I'm not sure this is true"
We agree with this point and will remove this from the paper. However, we note that this has no effect on the correctness of Prop 3.2. The main point is that *all* gradients of the InfoNCE loss are orthogonal to the points they act on, thus increasing the embedding points' norm, independent of the sizes of the attractive and repulsive contributions.
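This orthogonality is easy to verify numerically. A minimal sketch using the plain cosine-similarity gradient (the individual InfoNCE attraction/repulsion terms have the same structure):

```python
import numpy as np

def cos_sim_grad(z, w):
    """Gradient of cos(z, w) w.r.t. z; its projection onto z is zero."""
    z_hat = z / np.linalg.norm(z)
    w_hat = w / np.linalg.norm(w)
    return (w_hat - (z_hat @ w_hat) * z_hat) / np.linalg.norm(z)

rng = np.random.default_rng(0)
z, w = rng.standard_normal(16), rng.standard_normal(16)
g = cos_sim_grad(z, w)

# Since g . z = 0, a gradient step in either direction only adds a
# tangential component: ||z + eta*g||^2 = ||z||^2 + eta^2 * ||g||^2,
# so the embedding norm can only grow.
```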
-------
We hope we have addressed your concerns. Please let us know if there is anything we can do to convince you further. | null | null | null | null |
Should Decision-Makers Reveal Classifiers in Online Strategic Classification? | Accept (poster) | Summary: This paper mainly proposes regret bounds for online strategic classification under two novel settings: (i) Revealed-Adv, where agents can still manipulate their features arbitrarily even when the feature vector after manipulation is still labeled negative. In this case the decision-maker makes $\Omega(k_{in})$ times more mistakes; (ii) agents do not know the current policy $h_t$. Instead, they best respond to an exponential average of previous classifiers. In this case, an additional $\frac{1}{\log(1/\gamma)}$ fraction of mistakes is expected.
Claims And Evidence: 1. The contribution is very clear and the proposed settings mostly make sense in strategic classification settings;
2. The theorems are clear, and the explanations linking setting (ii) and (iii) are helpful. The intuitions introduced in Figure 1 is clear;
3. The theoretical results seem to be intuitively correct.
Methods And Evaluation Criteria: This is a purely theoretical work, so there are no evaluations. For methods, Algorithm 1-2 are plausible.
Theoretical Claims: I only read through proof of Theorem 3.1 - 3.3 which mostly make sense.
Experimental Designs Or Analyses: There is no experiment.
Supplementary Material: I only read through Appendix A - B.1.
Relation To Broader Scientific Literature: This paper can make a meaningful contribution to the learning bounds of online strategic classification. Specifically, the idea that decision-maker does not reveal classifier and the agents respond to the previous classifiers was proposed recently and the learning bounds are worthwhile researching.
Essential References Not Discussed: The paper already discusses related work. In (II) of Section 1.1, there is other closely related literature not mentioned. For example, [1] studies a setting where agents arrive sequentially and also best respond to the previous decision policy instead of the current one, although the authors mainly focus on welfare and fairness; [2] focuses on online strategic classification in a continuous setting where the features lie in $R^d$.
[1] Xie, T., & Zhang, X. (2024). Automating data annotation under strategic human agents: Risks and potential solutions. arXiv preprint arXiv:2405.08027.
[2] Shen, L., Ho-Nguyen, N., Giang-Tran, K. H., & Kılınç-Karzan, F. (2024). Mistake, Manipulation and Margin Guarantees in Online Strategic Classification. arXiv preprint arXiv:2403.18176.
Other Strengths And Weaknesses: 1. I am unsure whether the "adversarial tie-breaking" is actually practical. In your setting, each feature vector has an $N_{out}$ and there seems to be no cost. However, in the classic SC setting, manipulating the feature should incur a cost. That is to say, if there is no utility gain, the agents will never move, and this makes sense.
2. The paper assumes a finite and realizable setting. Specifically, it seems that the feature vectors are countable. This may limit practical usage.
Other Comments Or Suggestions: I think overall the paper is well-written, and my questions are listed above.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments. We address them below.
> adversarial tie-breaking
We appreciate the reviewer’s concern and want to clarify that we do *not* intend to make adversarial tie-breaking a realistic behavioral assumption. Instead, we use it as a conceptual and technical intermediate step toward analyzing the more realistic setting (\gamma-Weighted) where agents *cannot* observe the currently implemented classifier and instead best respond to an *estimate* of it. As we discussed in Lines 235-251 of our manuscript, in such settings, estimation error can cause the agents to perceive one node as slightly better than the other node, even if the true classifier might assign them the same label. For example, if $\tilde h(v) = h(v)$ for all $v \in N_{out}(x)$, but $\tilde h(x) = h(x) - \epsilon$, the agent would not remain at $x$, behaving as if they are breaking ties adversarially.
The purpose of Theorem 3 is precisely to highlight the mistake bound gap between the case where agents only perceive a perturbed version of the true hypothesis $h$ versus the case where they perceive $h$ perfectly. We treat adversarial tie-breaking as a proxy for incorporating such estimation errors into the mistake bound analysis, which serves as a building block towards the gamma-Weighted setting.
Second, our upper bound in Theorem 4.1 holds for both tie-breaking rules. We note that our lower bound in Theorem 4.4 consists of two parts: $\min(d/\log(1/\gamma), |H|)$ and $d \cdot k_{\text{in}} \cdot k_{\text{out}}$*, where the first part still holds for the standard tie-breaking case with a slight refinement of the proof. The second part reduces to $d\cdot k_{\text{out}}$ in the standard tie-breaking case. The dependency on $k_{\text{in}}$ in the unrevealed-classifier, standard tie-breaking setting is left as an open problem.
*There is a typo in the original statement of Theorem 4.4: $k_{\text{in}}$ should be replaced by $k_{\text{in}} \cdot k_{\text{out}}$.
> finite feature space
First, we would like to clarify that our model does not require the feature or action space to be finite: one can define a generic manipulation graph structure over any (potentially infinite or continuous) feature space by treating features as vertices and defining edges as possible manipulations. However, the *learnability* of an instance — in both our work and prior works [Ahmadi et al, 2023, 2024; Cohen et al, 2024] — relies on the finiteness of the *maximum degree* of the manipulation graph. Without the finite degree assumption, learning in infinite graphs is generally intractable. This has also been noticed by prior work eg [Shao et al, NeurIPS 2023, “Strategic Classification under Unknown Personalized Manipulation”], which shows that when the feature space is continuous and agents can manipulate to any point within a ball around the initial features, mistake bounds are usually $\Omega(|H|)$.
Second, we would like to remark that when the feature space is continuous but satisfies certain smoothness conditions, it is often possible to discretize the space and then construct a finite manipulation graph on a covering net. Our results then apply to this discretized finite graph.
> realizable setting
Our goal is to study the gap between the setting where the currently implemented hypothesis is revealed and the setting where it is not. The agnostic case remains open in the literature even in the revealed classifiers setting —as also noted by reviewer MqF7. However, we believe that similar ideas as used in Algorithms 1 and 3 can be applied to the agnostic setting too. | Summary: This work considers a strategic online classification setting. At each round $t$, the learner selects classifier $h_t : \mathcal{X} \in \{0,1\}$ and an agent with true feature $x_t \in \mathcal{X}$ and label $y_t \in \{0,1\}$ selects a manipulated feature $v_t \in \mathcal{X}$. After the round, $v_t$ and $y_t$ are revealed to the learner. This work supposes that the agent selects $v_t$ among the out-neighbors of $x_t$ in a directed graph, in order to maximize $\tilde{h}_t(v_t)$, where $\tilde{h}_t : \mathcal{X} \to [0,1]$ is a guess for $h_t$. Previous work examined this problem thoroughly when $\tilde{h}_t = h_t$ and the agent breaks ties favorably. This work considers the setting where $h_t$ is not revealed until after round $t$, and the learner uses a weighted sum of previous classifiers, according to a discount factor $\gamma$. Under a standard realizability assumption, they design new algorithms and upper bound the number of mistakes made by the learner, accompanied with constructions lower bounding the number of mistakes by any learning algorithm. When $\tilde{h}_t = h_t$ but tie-breaking is adversarial, they show that their is already a significant overhead in the mistake bound based on the in-degree of the manipulation graph. In the weighted sum case, there is further overhead according to the agents' time horizon $1/\log(1/\gamma)$. This provides a simple strategic learning setting where withholding access to the classifier degrades performance.
## Update after Rebuttal
I have increased my score to accept based on their clarifications and promised corrections.
Claims And Evidence: Yes, this is a theoretical paper and (almost) all of the results have proofs which appear sound. The only piece I slightly struggled with was the proof of Theorem 4.4. There is one piece of their claim which I don't think is properly treated by the proof and one typo in the statement (I think). For the rest of the proof, I can follow the line-by-line arguments but I think a better proof overview would be helpful, since this result is key to the paper's thesis. The bound is somewhat natural, since $1/\log(1/\gamma)$ is roughly the effective memory length of the agents, so I would be surprised if there was any major issue.
I feel confident vouching for correctness of the other results, whose proofs I found pretty clear.
I am willing to raise my score if the authors address my concerns with Theorem 4.4.
Methods And Evaluation Criteria: n/a
Theoretical Claims: Yes, I read through all of the proofs, as mentioned above. For Theorem 4.4, I had two comments. First, shouldn't $k_\mathrm{out}$ appear in the LB? I think this is just a typo. Second, I don't see why taking $\gamma \to 0$ is okay for the first part of the proof. The theorem was claimed to hold for each fixed $\gamma$ and it seems non-trivial to extend the LB in Theorem 3.3 when $\gamma$ is bounded away from 0.
Experimental Designs Or Analyses: n/a
Supplementary Material: Yes, I read through all of the proofs.
Relation To Broader Scientific Literature: There is a fair amount of literature on strategic classification, which is discussed in the related work. Their results shine some light on the importance of tie-breaking assumptions made in previous works for the online setting where $\tilde{h}_t = h_t$. The extension they consider to the weighted average case is a pretty natural one. I don't think it is that surprising that their agents with memory are harder to learn against, but it is good to establish.
Essential References Not Discussed: Nothing comes to mind.
Other Strengths And Weaknesses: I actually think the technical contributions with respect to tie-breaking are more interesting than that of revealing the classifier, which the paper is framed around. In particular, the $k_\mathrm{in}$ dependence seems to be due to the tie-breaking choice rather than not revealing the classifier. The dependence on $1/\log(1/\gamma)$ is much less surprising to me.
The realizability assumption is rather strong, and these results would definitely be more compelling without it. This is an open problem even when classifiers are revealed immediately though, so I don't hold that against the authors.
Other Comments Or Suggestions: For me, it feels a bit more intuitive to imagine the agent maintaining a distribution over classifiers and manipulating their feature to maximize the probability of a positive label. This of course coincides with taking a weighted sum over classifiers though, so there is no issue.
I didn't understand the phrase "With probability at least 1/3, the agent will stay at $x_{i,B}$ because it is one of the best candidates." on page 15. Isn't the agent's choice up to us in this LB construction? I.e., we can have this occur all of the time. In any case, I don't think this point affects the result, but I want to make sure I'm not misunderstanding something.
This is a small nit, but I prefer using $(1-\gamma)^{-1}$ instead of $1 + 1/\log(1/\gamma)$. These are equivalent up to constant factors and the former is commonly used for effective time horizons in RL. I think it's worth noting somewhere.
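The claimed equivalence is easy to check numerically: since $1-\gamma \le \log(1/\gamma) \le 2(1-\gamma)$ for $\gamma \in [1/2, 1)$, the two effective horizons agree up to a factor of 2. A quick pure-Python check (the $\gamma$ values are arbitrary):

```python
import math

# 1/log(1/gamma) (used in the paper's bounds) versus 1/(1-gamma)
# (the RL-style effective horizon).
ratios = []
for gamma in (0.5, 0.7, 0.9, 0.99, 0.999):
    horizon_log = 1.0 / math.log(1.0 / gamma)
    horizon_geo = 1.0 / (1.0 - gamma)
    # ratio = log(1/gamma) / (1-gamma), which lies in [1, 2] on [1/2, 1)
    ratios.append(horizon_geo / horizon_log)
```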
Double parentheses in last citation on page 2
Questions For Authors: What does this setting look like if the agents arrive stochastically? Can the Littlestone dimension be reduced to a VC dimension?
Algorithm 1 seems pretty generic and I could imagine the same idea being useful in other settings. Have you thought about any other applications / has any similar idea appeared in the literature?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the thoughtful comments. We address them below.
> lower bound statement in Theorem 4.4
Thanks for catching the issue in the theorem statement. The lower bound in Theorem 4.4 consists of two parts: $\min( d/\log(1/\gamma), |H|)$ and $d \cdot k_{\text{in}} \cdot k_{\text{out}}$. The first bound holds for all $\gamma\in(0,1)$, and its proof is provided in Appendix B.3. The second bound emphasizes the dependency on maximum in-degree and is established only when $\gamma\to0$. Its proof follows from a modified version of that of Theorem 3.3.
We agree that the theorem statement should be more precise, and will update in revisions to clarify that the second bound is only established in the $\gamma\to0$ regime. In the following, we provide more details for this modified proof and will add them to future versions of this paper.
Consider the graph that is the same as that in Figure 2 but all nodes in the first layer (namely $x_1,x_2,\ldots,x_{k_1}$) are connected by a clique. This modification gives $k_{\text{out}}=k_1+k_2$ and $k_{\text{in}}=k_2$.
For $\gamma\to0$, best responding to $\tilde{h}^\gamma_t$ is equivalent to best responding to $h_{t-1}$. Now, we perform a case discussion of both $h_{t-1}$ — which the agent $x_t$ best responds to — and the current classifier $h_t$. We will construct an adversary that induces at least one mistake in every two rounds in the first $2k_1k_2$ rounds. Now we assume that the learner does not make a mistake at round $t-1$ and show how to induce a mistake in round $t$.
* Case 1: If $h_{t-1}$ labels $x_0$ as positive. Then if $h_t(x_0)=1$, the adversary picks $(x_t,y_t)=(x_0,0)$ and induces a false positive mistake. If $h_t(x_0)=0$, then the adversary picks $(x_t,y_t)=(x_{i^*},1)$, which manipulates to $x_0$ and induces a false negative mistake while revealing no information. These correspond to cases 1 and 2 of the proof of Theorem 3.3.
* Case 2: If $h_t$ labels any leaf node $x_{i,j}$ as positive, then the adversary picks $(x_t,y_t)=(x_{i,j},0)$. This induces a false positive mistake and removes one hypothesis. Since the learner made no mistake in round $t-1$, we can assume that $h_{t-1}$ labels all leaf nodes as negative. This case corresponds to case 4 of Theorem 3.3.
* Case 3: We are left with the case where $h_{t-1}$ labels $x_0$ and all leaf nodes as negative. Due to adversarial tie-breaking and the added clique, we can assume that all nodes in $\{x_0,x_1,\ldots,x_{k_1}\}$ manipulates to the same node $x_i$ in response to $h_{t-1}$. If $h_t(x_i)=1$, then the adversary can choose $(x_t,y_t)=(x_0,0)$ and induce a false positive mistake. If $h_t(x_i)=0$, then the adversary chooses $(x_t,y_t)=(x_{i^*},1)$ and induces a false negative mistake while revealing no information about $i^*$. This case is a modification of case 3 of Theorem 3.3 that accounts for historical best response.
We will correct this in the paper.
> the phrase "With probability at least 1/3”
We allow agents to best respond to the weighted average classifier with adversarial tie-breaking in this setting, and we focus on adversarially choosing the specific $x_t$ at each round. Instead of finding the worst instance among all possible cases, it is enough to show that our desired instance occurs with $\Theta(1)$ probability.
> agents arrive stochastically
Since we focus on mistake bounds in the realizable setting, they cannot be reduced to the VC dimension in the stochastic setting, even in standard online learning without strategic behavior. However, for regret bounds in the agnostic setting, such a reduction is possible. We believe it would be interesting to explore regret bounds in the agnostic setting, both when the classifier is revealed and when it is not.
> application of Algorithm 1
Algorithm 1 has two main components: the weighted voting of experts and the choice of threshold/updating mechanism of experts. The high-level idea of weighted expert voting is fundamental and has been widely applied in many settings. Choosing a biased threshold that favors false positives is useful in scenarios where false positives and false negatives carry different amounts of information about the true hypothesis.
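As a generic illustration of these two components (weighted expert voting plus a biased threshold), here is a standard halving-style sketch; it is not the paper's actual Algorithm 1, and the threshold value, the hypotheses, and the update rule are all illustrative:

```python
def predict(experts, weights, v, threshold=0.3):
    """Weighted vote over hypotheses: predict positive when the weighted
    fraction of experts labeling v positive exceeds `threshold`.

    A threshold below 1/2 biases the learner toward positive predictions,
    trading false negatives for the more informative false positives.
    """
    total = sum(weights)
    pos = sum(wt for h, wt in zip(experts, weights) if h[v] == 1)
    return 1 if pos > threshold * total else 0

def update(experts, weights, v, y_true):
    """Halving-style update: discard experts that mislabeled v."""
    return [wt if h[v] == y_true else 0.0 for h, wt in zip(experts, weights)]

# Three toy hypotheses over features a and b, equally weighted.
experts = [{"a": 1, "b": 0}, {"a": 0, "b": 0}, {"a": 0, "b": 1}]
weights = [1.0, 1.0, 1.0]
```

With one of three experts voting positive on `a`, the biased threshold of 0.3 yields a positive prediction while a majority threshold of 0.5 would not.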
> Novelty of our results
We would like to re-emphasize the technical novelty of our results.
First, as discussed in our response to reviewers WPjm and T9UQ regarding adversarial tie-breaking, we show that when agents only observe a perturbed version of the current predictor $h_t$—rather than the exact $h_t$—the mistake bound increases by a factor of $k_\text{in}$ (see Corollary 3.2 and Theorem 3.3).
Second, in the weighted sum of history case, as the reviewer pointed out, $1/\log(1/\gamma)$ can be seen as the effective memory length of the agent. One interesting implication of this lower bound is that, in the worst case, the agent may need to “forget” its entire history—incurring $\Omega(1/\log(1/\gamma))$ mistakes—before it can make correct decisions again.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough response - I am raising my score to accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising the score! We will update the manuscript accordingly. | Summary: This paper studies how hiding the classifier from the agents affects the performance of strategic classification. The result shows that hiding the classifier from the agents significantly increase the number of mistakes the decision maker would make, in proportion to the maximum in-degree of the manipulation graph. The paper proposed two algorithms for strategic learning with adversarial and $\gamma$-weighted agents and calculated the upper bound on the number of mistakes each algorithm makes. The authors also provided special examples where the upper bound is attained.
Claims And Evidence: The central claim is that hiding classifiers would increase the number of mistakes of the decision maker. This is well supported by the reduction from the $\gamma$-weighted version to the adversarial agents. All claims are soundly supported by theoretical constructions and proofs.
Methods And Evaluation Criteria: The key methodology is to reduce the target situation (the $\gamma$-weighted case) to the adversarial setting and derive bounds for the latter based on Littlestone dimensions. The method is innovative and appropriate. The evaluation metric is the number of mistakes the decision maker makes throughout the interaction process, which is reasonable and has well-documented benchmarks.
Theoretical Claims: I checked all theorems and lemmas except Observation 5.2. The theoretical results are correct as far as I can tell.
Experimental Designs Or Analyses: The paper is purely theoretical, relying on algorithms and constructed examples (e.g., Figure 1 and 2).
Supplementary Material: The supplementary matrials consists of proofs and algorithms. I examined all proofs in appendix A and B (I browsed C but did not dive into details as the setting is simple).
Relation To Broader Scientific Literature: The work extends the strategic classification framework to non-transparent settings. While it’s commonly assumed and argued that revealing classifiers to agents is reasonable and practical, rigorous theoretical studies on its benefits are scarce. This work fills this gap in the literature.
The model introduced in this work, which incorporates discounted agent memory and adversarial tie-breaking, is novel in my understanding.
Section 5 connects the work to repeated Stackelberg games with learning agents. The authors thoroughly discuss the challenges of adapting the current model to the learning agent problem and provide insights into the implications of the findings in this context.
Essential References Not Discussed: Not recognized.
Other Strengths And Weaknesses: The most important innovation of this paper lies in the idea of connecting the target setting ($\gamma$-weighted) agents to the adversarial agents. The paper contributes to the strategic classification community by theoretically supporting the commonly adopted assumption that hiding classifiers to the agents does not benefits the decision maker.
The weakness could be the limitation of the action space: in many practical cases, agents may have access to an infinite or even continuous action space.
Other Comments Or Suggestions: No.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the thoughtful comments and for examining our proofs. We address them below.
> infinite/continuous feature space
First, we would like to clarify that our model does not require the feature or action space to be finite: one can define a generic manipulation graph structure over any (potentially infinite or continuous) feature space by treating features as vertices and defining edges as possible manipulations. However, the *learnability* of an instance — in both our work and prior works [Ahmadi et al, 2023, 2024; Cohen et al, 2024] — relies on the finiteness of the *maximum degree* of the manipulation graph. Without the finite degree assumption, learning in infinite graphs is generally intractable. This has also been noticed by prior work, e.g., [Shao et al, NeurIPS 2023, “Strategic Classification under Unknown Personalized Manipulation”], which shows that when the feature space is continuous and agents can manipulate to any point within a ball around the initial features, mistake bounds are usually $\Omega(|H|)$.
Second, we would like to remark that when the feature space is continuous but satisfies certain smoothness conditions, it is often possible to discretize the space and then construct a finite manipulation graph on a covering net. Our results then apply to this discretized finite graph. | Summary: This work studies an online classification setting, where arriving points (agents) can manipulate their features according to a manipulation graph, where the rationale to to obtain more favorable predictions. Under a realizability assumption, the authors study a scenario where the decision maker does not reveal its currently deployed predictor to agents, who instead manipulate their features as a response to the weighted average of past predictors in the interaction. Under an additional assumption of adversarial manipulations by agents, the authors prove upper and lower mistake bounds for the problem, and show a gap for the full information analogue (though without allowing manipulations to be adversarial), studied in previous work. The paper concludes with examining situations where agents run no regret algorithms instead of best responding, and demonstrate the difficulty of learnability in their model.
Claims And Evidence: The paper is fully theoretical. I did not read the appendix with the proofs, but from glancing, it seems that the results are well established.
Methods And Evaluation Criteria: N/A
Theoretical Claims: I did not.
Experimental Designs Or Analyses: N/A.
Supplementary Material: I did not.
Relation To Broader Scientific Literature: The paper contributes to the line of work on strategic classification, and particularly to studying the question regarding the implications of not revealing the deployed predictors to agents. The paper follows a line of work modelling possible manipulations as graphs.
Essential References Not Discussed: In the context of revealing predictors to agents, one line of work that I think is relevant to mention is on providing explanations to agents instead of revealing the predictor (e.g. Counterfactual Explanations, Tsirtsis and Rodriguez 2020), where the agents are given a suggested point to manipulate to, and the prediction at that new point. It is important to point out that there are alternative actionable signals that can be provided to agents to induce strategic modification, and the choice is not only between revealing a predictor or obscuring it (and even if a predictor is revealed and is of high complexity, a common agent may find it difficult to even calculate their best response). I believe the authors should include such a discussion in relevant previous work.
Other Strengths And Weaknesses: Strengths:
1. The paper is very well written, I enjoyed reading it.
2. The approaches for solving the main problem in this work are original, creative, and I found the reduction approach in the paper interesting.
3. The problem of whether classifiers should be revealed in strategic settings is central, and clearly motivated.
Weakness:
I am slightly concerned regarding the significance of obtained results, due to the following points:
1. In the scenario of the paper, there is strong reason to believe that a decision maker who does not want to reveal their current model, would not want to reveal past deployed policies (e.g. there might be many similarities, and the developed predictors may be proprietary and allow the decision maker to maintain competitive edge).
2. The scenario of adversarial manipulation, in which agents may still manipulate to different points even if they do not benefit from such action seems to be rather unnatural (especially that usually such manipulations have cost/effort for agents). To my understanding, the lower bounds in Theorems 3.3, 4.4 strongly rely on this assumption.
3. The realizability assumption in the basis of this work (while maybe necessary to allow for the analysis) is rather restrictive.
4. There are alternatives to revealing the model in inducing agent responses, which may aid to circumvent the tensions between the two extremes (see more in essential references section).
Other Comments Or Suggestions: Algorithm 3 should appear in the main text. I understand that there are space limitations, however it is central to this work and should be included.
Questions For Authors: 1. Why did you opt to study the problem under the additional (and largely unnatural in a strategic agents context) assumption of adversarial manipulations? Have you considered the non-adversarial + unrevealed classifier case?
2. In line 340, you state "a false negative mistake can occur only if a false positive mistake happened in the previous round". Can you explain why this is the case?
3. Regarding Algorithm 3 (described in the paragraph starting in line 361) --- why can't there be instability in the classification where there are recurring mistakes but still inability to make progress according to the described scheme? The argument is probably formalized in the appendix, but I am wondering about the main ideas in the proof.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the thoughtful comments. We address them below.
> revealing the past deployed policies
We agree that the decision-maker might not want to reveal their past models explicitly. However, historical outcomes often reveal past classifiers implicitly. For example, we can observe which students were admitted by a college in previous years (based on SAT scores, GPA, extracurriculars) or which loan applicants were approved—and use this information to estimate the predictors that were likely implemented in the past. In some specific cases, the model is effectively made public: in college admissions in countries with unified entrance exams, universities often reveal the threshold classifier by publishing the minimum admission scores after each admissions cycle.
> adversarial tie-breaking
We appreciate the reviewer’s concern and want to clarify that we do *not* intend to make adversarial tie-breaking a realistic behavioral assumption. Instead, we use it as a conceptual and technical intermediate step toward analyzing the more realistic setting (gamma-Weighted) where agents *cannot* observe the currently implemented classifier and instead best respond to an *estimate* of it. As we discussed in Lines 235-251 of our manuscript, in such settings, estimation error can cause the agents to perceive one node as slightly better than the other node, even if the true classifier might assign them the same label. For example, if $\tilde h(v) = h(v)$ for all $v \in N_{out}(x)$, but $\tilde h(x) = h(x) - \epsilon$, the agent would not remain at $x$, behaving as if they are breaking ties adversarially.
The purpose of Theorem 3 is precisely to highlight the mistake bound gap between the case where agents only perceive a perturbed version of the true hypothesis $h$ versus the case where they perceive $h$ perfectly. We treat adversarial tie-breaking as a proxy for incorporating such estimation errors into the mistake bound analysis, which serves as a building block towards the gamma-Weighted setting.
Second, our upper bound in Theorem 4.1 holds for both tie-breaking rules. We note that our lower bound in Theorem 4.4 consists of two parts: $\min(d/\log(1/\gamma), |H|)$ and $d \cdot k_{in} \cdot k_{out}$*, where the first part still holds for the standard tie-breaking case with a slight refinement of the proof. The second part will be reduced to $d\cdot k_{out}$ in the standard tie-breaking case. The dependency on $k_{\text{in}}$ in the unrevealed classifier + standard tie-breaking setting is left as an open problem.
*There is a typo in the original statement of Theorem 4.4, $ k_{in}$ should be replaced by $k_{in} \cdot k_{out}$.
> realizability assumption
Please see the response to reviewer T9UQ.
> alternatives to revealing the model in inducing agent responses
We completely agree that there are interesting settings between the two extremes, and we see it as an exciting open direction to study the spectrum of information that may be revealed. We will add [Tsirtsis and Rodriguez 2020] to the related work section and include a discussion along these lines.
> explanation for line 340
The complete proof for Lemma 4.2 is in Appendix B.2, and we’d like to provide more explanation here. At first sight, it seems that there won’t be false negative mistakes because our algorithm is very conservative. Every positive sample $x$ will always have a neighbor $u$ that is labeled as positive by $h^\star$ and $h_t$. Such $u$ is always in $BR_{\tilde{h}_t^\gamma}(x)$ and we won’t make a false negative mistake on $x$ if $x$ manipulates to this $u$. However, we argue there still can be false negative errors for one very specific scenario. That is, $x$ manipulates to another neighbor $v$ in $BR\_{\tilde{h}\_t^\gamma}(x)$ but $h\_t(v)=0$. For such a case to happen, $\tilde{h}\_t^\gamma(v)$ must be 1 to ensure that $x$ will choose to manipulate to it with non-negative probability. That means there is a $h’$ that is kept in $E$ until time $t-1$ such that $h’(v)=1$. However, $h_t(v)=0$ means that this $h’$ is not in $E$ at time $t$ so it must be removed at time $t-1$. Since we only remove a classifier when a false positive mistake occurs, the number of false negative errors is no larger than the number of false positive errors.
> main idea of Algorithm 3
Algorithm 3 is a reduction to the Revealed-Adv setting, so its progress is tightly coupled with the progress of the algorithm used in the Revealed-Adv setting (Let A’ be such an algorithm, e.g., Algorithm 1). Specifically, Algorithm 3 calls A’ once every $\Phi$ mistakes it makes, so every $\Phi$ mistakes of Algorithm 3 corresponds to a single mistake made by A’. Therefore, if algorithm 3 makes recurring mistakes, then A’ must also be making recurring mistakes in the Revealed-Adv setting. However, this contradicts with the mistake bound for A’ (e.g., Corollary 3.2). Therefore, such “instability” cannot persist, and Algorithm 3 ends up having a finite mistake bound. | null | null | null | null | null | null |
Provably Improving Generalization of Few-shot models with Synthetic Data | Accept (poster) | Summary: The paper presents a theoretical framework and corresponding algorithm to enhance the generalization of few-shot learning models by leveraging synthetic data. Guided by their theoretical generalization error bounds, the authors introduce a novel loss function and training paradigm designed to jointly optimize data partitioning and model training. Experiment results demonstrate superior performance on the few-shot classification task.
Claims And Evidence: The main claims regarding the benefits of minimizing distributional discrepancy and enhancing local robustness to improve generalization are convincingly supported through theoretical analysis and empirical evaluations.
Methods And Evaluation Criteria: The proposed ProtoAug method is suitable for few-shot learning and the experimental setup is fair.
Theoretical Claims: I thoroughly examined the theoretical results, especially Theorems 3.3 and 3.4.
Experimental Designs Or Analyses: The experimental design is reasonable, using public datasets. However, I have some questions in the **Weaknesses**.
Supplementary Material: I reviewed the supplementary material, including detailed proofs of theoretical claims and additional algorithmic clarifications.
Relation To Broader Scientific Literature: Synthetic data and prototype learning have advanced the broader development of computer vision, providing valuable insights.
Essential References Not Discussed: Although the paper compares several related methods, many recent studies are still not discussed or compared, such as [r1], [r2], and [r3]; thus, the related work section remains relatively weak. [r2] also generates synthetic images.
[r1] Large Language Models are Good Prompt Learners for Low-Shot Image Classification. CVPR 2024.
[r2] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners . CVPR 2023.
[r3] Self-regulating Prompts: Foundational Model Adaptation without Forgetting. ICCV 2023.
Other Strengths And Weaknesses: **Strengths:**
1. Synthetic data benefits few-shot learning by improving generalization.
2. The theoretical proofs provided are clear.
**Weaknesses:**
1. The experiments are insufficient, including the selection of comparative methods and additional experiments that require further validation. See the "Questions" section.
2. The authors do not provide a detailed hyperparameter analysis, such as the selection of weighting factors for the loss terms. How these hyperparameters were set is not clearly described, and the supplementary material does not seem to clarify this sufficiently.
3. The layouts of Figures 1 and 2 are not visually appealing.
4. The rationale for the choice of Stable Diffusion (SD) is missing. Why was it selected specifically?
Other Comments Or Suggestions: The sentence "From the theoretical perspective, existing studies typically have preliminary answers to simple models only: E.g., Yuan et al. (2024) proposed a framework that generates synthetic samples by training/finetuning a generator to minimize the distribution gap, addressing the second question." sounds awkward, particularly the use of "E.g." Consider revising the sentence structure.
Questions For Authors: 1. Why didn't the authors report results for more extreme few-shot conditions (e.g., 1-shot, 4-shot, 8-shot)? These settings are commonly evaluated in few-shot learning literature.
2. What would the results be if only synthetic data were used?
3. How was the number of synthetic samples per class set to 500? Would using more or fewer samples have an impact? This seems to have a significant effect. Additionally, what would be the results if only the 16-shot real data were used?
4. Regarding the generation of synthetic data for the FGVC Aircraft dataset, is there a better solution?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for your positive evaluation and valuable feedback to our work. We would like to address your remaining concerns as follows:
*1. Results for more extreme few-shot conditions*
Thanks for your insightful suggestion. We conduct experiments on 3 datasets that were also used in the DataDream (DD) paper. The results are shown in the following table:
|No. of real shots|AirC||Cars||FLO||
|-|-|-|-|-|-|-|
||DD|Ours|DD|Ours|DD|Ours|
|1|**31.1**|25.3|**72.9**|72.1|**88.7**|86.1|
|4|38.3|**51.6**|82.6|**86.9**|96.0|**96.9**|
|8|54.6|**63.9**|87.4|**91.3**|98.4|**98.7**|
Overall, our method underperforms the baseline in the extreme 1-shot scenario. With only one real sample, the regularization terms in our loss function become small, reducing model robustness and possibly causing performance drops. However, our method significantly outperforms the baseline in the 4-shot and 8-shot settings. Thus, the extremely limited real-data case remains a limitation of our approach.
*2. Results if only synthetic data were used*
We want to gently point out that our method is designed to take into account both real and synthetic data. Our loss function was designed to match predictions between real and synthetic samples, and regularization is computed on regions containing real data only. In order to adapt our method to the case of only synthetic data, one can remove the discrepancy term and the loss on real data from the loss function and compute the robustness loss on all regions that contain at least 2 synthetic samples. Overall, the loss function can be rewritten as: $F(\mathbf{G},\boldsymbol{h})+\lambda_2\frac{1}{g}\sum_{i}\sum_{\boldsymbol{g_1},\boldsymbol{g_2}\in \mathbf{G_i}}\frac{1}{g_i}\\|\boldsymbol{h}(\boldsymbol{g_1})-\boldsymbol{h}(\boldsymbol{g_2})\\|$.
We conduct experiments to test the effectiveness of this loss function in some small and medium-sized datasets. The results are shown below:
|Dataset|DD|Ours
|-|-|-
|EuSAT|80.3|**80.6**
|Pets|**94.0**|**94.0**
|AirC|**71.2**|70.6
|CAL|96.2|**96.8**
|Food|86.7|**89.2**
Our method outperforms the baseline on 3 out of 5 datasets, comparable in 1 and worse in 1 dataset. On average, our methods still perform better than the baseline, showcasing the necessity of the robustness regularization. However, we admit that these increases are marginal, and much less significant compared to our full method, which takes into account both discrepancy and robustness terms.
*3. The impact of the number of synthetic samples per class. Results of only the 16-shot real data*
We chose 500 to be consistent with the main experiment in the DataDream paper, ensuring a fair comparison in Tables 1 and 3.
To further investigate the effect of the number of synthetic samples, we conduct more experiments summarized below:
|No. synth. samples|EuSAT||DTD||AirC||
|-|-|-|-|-|-|-|
||DD|Ours|DD|Ours|DD|Ours|
|100|93.4|**94.2**|73.4|**73.9**|68.5|**69.6**|
|200|93.5|**94.5**|73.1|**74.0**|69.3|**71.9**|
|300|93.7|**94.4**|73.5|**73.8**|70.9|**73.0**|
|400|93.8|**94.4**|74.1|**74.2**|70.8|**73.3**|
|500|93.5|**94.7**|74.1|**74.5**|72.3|**74.3**|
The results confirmed that our method consistently outperforms baselines when varying synthetic data sizes.
The results of using only the 16-shot real data were reported in the "Real-finetune" row of Table 1 in our paper.
*4. Better solution for generation of synthetic data for the FGVC Aircraft datasets*
Stable Diffusion is infamously bad on aircraft data. In practice, we can choose a better generative model on aircraft data. A future direction for our method involves using the theoretical framework for optimizing synthetic data by fine-tuning generator or filtering synthetic data by using our loss function as a criterion.
*5. Detailed hyperparameter analysis*
We want to gently redirect the reviewer's attention to our response to Reviewer hvE3 (part 4) to see our detailed discussion about the hyperparameter selection. [https://openreview.net/forum?id=L6U7nYc4ah&noteId=sANtK8HtxS]
*6. Essential References Not Discussed*
We thank the reviewer for highlighting missing references. We will include the following discussion of the mentioned papers and add further references on prompt and adapter tuning in the revised version:
Recently, methods such as prompt and adapter tuning have become prominent for enhancing few-shot classification performance, including [r1], [r3], etc. Moreover, [r2] uses synthetic images to improve performance but relies on combining four large pretrained models, significantly increasing implementation complexity.
*7. Not visually appealing figures and awkward sentence*
Thanks for suggesting these issues, we will improve the presentation in the new version.
*8. Stable Diffusion selection*
We chose it to be consistent with the baseline for fair comparison. DataDream also uses Stable Diffusion of the same version as the generator. | Summary: The paper tackles the problem of few-shot image classification, where limited labeled data restricts model generalization, by proposing a theoretically grounded approach that augments real data with synthetic data, such as that produced by Stable Diffusion. It introduces a novel test error bound for models trained on mixed real and synthetic datasets, highlighting the need to minimize the distribution gap between data types and ensure local robustness around training samples. The authors develop a new loss function and training method that incorporates prototype learning via K-means clustering for data partitioning and regularization to align predictions and boost robustness. Experiments across 10 diverse datasets demonstrate that this method surpasses state-of-the-art baselines like DataDream, DISEF, and IsSynth, achieving an average performance boost of 0.6% and exceeding 2% on challenging datasets like Food101 and FGVC Aircraft, with ablation studies validating the effectiveness of the proposed components.
Claims And Evidence: Supported Claim1 : The method outperforms state-of-the-art baselines across multiple datasets
Supported Claim2 : The lightweight version of the algorithm provides competitive performance with reduced computational costs
Supported Claim3 : The proposed algorithm, guided by the theoretical bound, effectively minimizes generalization errors by optimizing data partitioning and model training
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem of few-shot image classification with synthetic data. The theoretical framework tackles the critical issues of distribution gaps and data scarcity, and the algorithm builds on this with practical clustering and loss design. The implementation offers flexible options for performance and efficiency. Meanwhile, the evaluation uses diverse datasets, strong baselines, a suitable metric, and thorough ablation studies to demonstrate the method’s effectiveness. While minor enhancements—like more data on computational savings—could strengthen the case, the overall approach is well-tailored to the problem at hand.
Theoretical Claims: Did not check
Experimental Designs Or Analyses: The experimental designs and analyses in the paper are largely sound and valid, supported by diverse datasets, strong baselines, and thorough evaluations.
Supplementary Material: Did not check
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for praising our work and your positive evaluation. We would like to reiterate the key novelties of our work:
1. We introduced a novel generalization bound for a model trained with synthetic data augmentation. It suggests an effective way to generate synthetic samples, and to train a model by minimizing the distribution gap between real and synthetic data and ensuring local robustness around real samples.
2. Guided by this theory, we designed an algorithm to jointly optimize models and data partitioning to enhance models' performance.
3. Experimental results show that our method outperforms state-of-the-art baselines across multiple datasets. Our more efficient lightweight version also shows comparable performance with the baselines.
Should you have any more questions or concerns, we would be very happy to address them. | Summary: The paper addresses the challenge of improving few-shot image classification by augmenting real data with synthetic data. It identifies the distribution gap between real and synthetic data as a key obstacle. The paper presents a theoretical framework to quantify the impact of this distribution discrepancy on supervised learning and proposes a theoretically-driven algorithm that integrates prototype learning to optimize data partitioning and model training. The algorithm aims to bridge the gap between real and synthetic data. The authors claim that their method outperforms state-of-the-art approaches across multiple datasets. The key idea is to minimize a generalization error bound that accounts for both the discrepancy between the real and synthetic distributions and the robustness of the predictor around training samples. They achieve this through a novel loss function and a training paradigm that jointly optimizes data partitioning and model training.
Claims And Evidence: The overall claim that the proposed method improves few-shot learning with synthetic data is supported by the experimental results on several datasets. The tables presented show superior performance compared to several baselines, including DataDream, DISEF, and IsSynth.
The individual contributions of different components (data partitioning, discrepancy minimization, robustness) are somewhat validated through ablation studies. However, the ablation study in Table 2 only shows the effect of regularization on a few datasets. A more comprehensive ablation study on more datasets might be better to support the effectiveness of each component.
Methods And Evaluation Criteria: The proposed method combines theoretical analysis with a practical algorithm. The K-means clustering for partitioning and the prototype learning aspect appear reasonable for the problem. The evaluation uses standard datasets for few-shot image classification (e.g., FGVC Aircraft, Caltech101, Food101), which is appropriate. Comparing against DataDream, DISEF, and IsSynth seems adequate given the focus on synthetic data augmentation.
Theoretical Claims: While the authors derive generalization error bounds, the practical implications of these bounds depend on the tightness of the bounds and the validity of the assumptions made in their derivation.
What are the key assumptions made in deriving the generalization bounds (e.g., concerning the loss function, the data distributions, the model class)? Are these assumptions realistic for the problem of few-shot image classification with synthetic data?
Generalization bounds are often loose. How tight are the derived bounds in practice? Is there any discussion of the practical implications of the bounds (e.g., how the different terms in the bound influence the performance of the algorithm)?
Experimental Designs Or Analyses: The experimental designs seem generally sound.
Ablation Studies: The ablation studies in Table 2 are helpful for understanding the impact of different components of the algorithm. However, the ablation study only focus on three datasets.
Hyperparameter Sensitivity: The paper mentions tuning hyperparameters. A more detailed discussion of the hyperparameter selection process would be useful. It would be beneficial to see a sensitivity analysis that shows how the performance of the algorithm varies with different hyperparameter values. A plot of each hyperparameter in relation to the accuracy/generalization bound should be shown.
Supplementary Material: I only briefly reviewed the supplement.
Relation To Broader Scientific Literature: The paper relates to several areas of research, namely, Few-Shot Learning, Synthetic Data Augmentation, Domain Adaptation and Dataset Distillation
The paper makes explicit connections to DataDream, DISEF, and IsSynth. It would be useful to discuss how the proposed theoretical framework relates to existing theoretical work on domain adaptation or generalization with synthetic data.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
-The combination of theoretical analysis and a practical algorithm is a strength.
-The experimental results demonstrate promising performance.
Weaknesses:
-The tightness and practical implications of the theoretical bounds need further clarity.
Other Comments Or Suggestions: The paper could benefit from a clearer statement of the assumptions made in the theoretical analysis.
A more detailed discussion of the hyperparameter selection process would be helpful.
Questions For Authors: What are the key assumptions made in deriving the generalization bounds in Theorems 3.3 and 3.4?
Why do you choose the parameters in DataDream, DISEF and IsSynth as your baseline? Have you tuned the hyper-parameters to have a fair comparison?
In ablation studies, why not include results on the four datasets used in Table 3?
Why does lightweight version give you better performance for some datasets?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for your positive evaluation and valuable suggestions. We address your concerns below.
*1. Key assumptions made in deriving the generalization bounds, and whether they are realistic*
The only assumptions are (1) data samples are i.i.d. and (2) the loss function is bounded and Lipschitz w.r.t. the model. Note that for classification problems, many common losses satisfy our assumptions, such as absolute loss, square loss, and Hinge loss. Therefore, those assumptions are realistic and reasonable, and widely used in the literature.
*2. Tightness and practical implications of the bounds*
*Tightness:* We consider several factors regarding the tightness of our bounds:
- Our bound scales with $O(\sqrt{K})$, which can be sub-optimal for large $K$ (partition size). Therefore, we should not choose a large $K$. Luckily, for few-shot problems, our bound does not allow a large $K$ which goes beyond the number of real samples.
- For fixed $K$ and number $n$ of real samples, our bound scales with $O(g^{-0.5})+const$. This indicates *our bound is optimal w.r.t. the number of synthetic samples.* The reason is that the error of the best model is at least $O(g^{-0.5})+const$, for a hard learning problem, according to Theorem 8 in [Than et al. (2024). Gentle local robustness implies Generalization].
- Finally, since $n$ can be very small, our bound can be far from the true error of a model. This is a common limitation of any theoretical error bound for limited data. To the best of our knowledge, *limited data poses a big (open) challenge* for estimating the model's true error.
*Practical implications:* We have discussed various implications after Theorem 3.3 in the paper. We point out some useful roles of the main terms. We also have Sections 5.3 and 5.4 to investigate empirically the influence of robustness and discrepancy terms.
*3. Ablation study limited to 3 datasets*
We conducted ablation study on 4 datasets of varying sizes, which we believe sufficiently illustrate our method's effectiveness. We will add more results on the remaining datasets in the Appendix.
*4. Hyperparameter selection process*
First, we balance the discrepancy ($\lambda_1$) and the robustness ($\lambda_2$), using coordinate descent to determine this optimal ratio. We then chose a suitable value of $\lambda$ associated with the loss on real data. The results are reported below:
|||DTD||EuSAT||Pets||
|-|-|-|-|-|-|-|-
|||light|full|light|full|light|full
|**dis/rob ratio** |1/1|69.2|72.6|89.9|**93.9**|93.1|94.3
||1/5|72.8|72.8|93.0|93.8|**94.5**|94.3
||1/10|**73.7**|**73.1**|**93.3**|**93.9**|**94.5**|94.3
||1/20|73.2|72.8|92.6|**93.9**|94.3|**94.4**
|$\lambda$|1|73.7|73.1|93.3|93.9|94.5|94.3
||2|74.9|73.4|93.3|94.1|94.3|94.5
||4|**75.2**|**74.5**|**93.8**|**94.7**|**94.8**|94.6
||6|75.1|74.4|92.7|94.3|94.5|**94.8**
We then selected the actual values of $\lambda_1$ and $\lambda_2$ from a set of values following the 1/10 ratio, choosing them to maintain a good balance between the regularization terms and the loss.
For the lightweight version, we directly adopted the learning rate and weight decay from the DISEF paper, since our synthetic sample sizes matched theirs. For the full version, we chose them via grid search with early stopping.
These hyperparameter value sets are presented in Appendix B. All other hyperparameters follow the baselines exactly.
Note that hyperparameter selection was performed only during classifier tuning, separately from generator tuning and synthetic data generation, minimizing additional computational overhead.
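The two-stage selection described above can be sketched as follows. This is an illustrative toy, not the authors' actual pipeline: `validate` is a hypothetical stand-in for "train a classifier and report validation accuracy", with its optimum placed at ratio 1/10 and $\lambda = 4$ purely to mirror the table.

```python
# Toy sketch of the two-stage hyperparameter search described above.
# `validate` is a hypothetical stand-in for training a classifier and
# measuring validation accuracy; its optimum is placed at ratio=0.1
# and lam=4 for illustration only.
def validate(ratio, lam):
    return -((ratio - 0.1) ** 2) - 0.01 * (lam - 4) ** 2

# Stage 1: coordinate search over the dis/rob ratio with lambda fixed.
ratios = [1.0, 1 / 5, 1 / 10, 1 / 20]
best_ratio = max(ratios, key=lambda r: validate(r, lam=1))

# Stage 2: sweep lambda with the chosen ratio fixed.
lams = [1, 2, 4, 6]
best_lam = max(lams, key=lambda l: validate(best_ratio, l))
print(best_ratio, best_lam)  # prints: 0.1 4
```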
*5. Relation to existing theoretical work on domain adaptation or generalization with synthetic data*
We thank the reviewer for this useful suggestion. We have discussed existing theoretical work on generalization with synthetic data in the last paragraph of Section 2.1. We will add more related references.
*6. Why select parameters from DataDream (DD), DISEF, and IsSynth as baselines? Were hyperparameters tuned for fair comparison?*
We chose hyperparameters to be consistent with DD. Since DD's authors reran DISEF and IsSynth with the same number of synthetic samples and the same generative models, DD serves as a good baseline for us. Additionally, experiments confirmed that directly using the baselines' learning rate and weight decay yielded optimal results across datasets.
*7. In ablation studies, why not include results on the 4 datasets used in Table 3?*
We followed the ResNet50 setup from DD in Table 3 for a fair comparison. However, in the ablation study we aim to evaluate our method's performance across datasets of varying scales, whereas the datasets in Table 3 are similar in size.
*8. Lightweight version gives better performance for some datasets*
The lightweight version performs better on two datasets: Caltech and DTD. For these two datasets, we observed that the methods with generator fine-tuning (DD and our full version) perform equally to or worse than those without it (DISEF, IsSynth, and our lightweight version).
Flexible and Efficient Grammar-Constrained Decoding | Accept (poster) | Summary: - The paper presents a new algorithm for grammar-constrained decoding (GCD) that significantly improves the efficiency of both offline preprocessing and online token masking. The key innovation is a combined analysis of the LLM token vocabulary and set of CFG terminals, which precomputes a lexer-state-dependent mapping between sequences of CFG tokens and individual LLM tokens.
- The proposed method, implemented in a tool called GREATGRAMMA, achieves an average 17.71x speedup in offline preprocessing compared to related approaches while maintaining state-of-the-art online masking efficiency (5-32ms per token).
- The approach is flexible, handling diverse grammars without prohibitive offline preprocessing costs, and efficient, maintaining low online token-masking overhead.
Claims And Evidence: - The claims made in the paper are generally supported by clear and convincing evidence. The authors provide detailed algorithm descriptions, theoretical analysis, and experimental results.
- They demonstrate the effectiveness of their approach through comparisons with existing GCD tools (OUTLINES, SYNCODE, and XGRAMMAR) on several benchmarks.
- For instance, Table 1 shows that GREATGRAMMA's offline preprocessing is significantly faster than SYNCODE and its online masking is faster than both SYNCODE and OUTLINES.
Methods And Evaluation Criteria: - The proposed methods make sense for the problem at hand. The approach addresses the key challenge of token misalignment between LLMs and CFGs by preprocessing the lexer and parser in tandem.
- The evaluation criteria, including offline preprocessing time and online per-token overhead, are appropriate for assessing the performance of GCD tools.
Theoretical Claims: - The paper does not present detailed proofs for their theoretical claims, but the algorithms and concepts are well-founded.
- The authors provide references to existing literature for some of the theoretical foundations, such as finite-state automata and pushdown automata.
- The correctness of the algorithms is mostly supported by the experimental results.
Experimental Designs Or Analyses: - The experimental designs are sound and valid. The authors conducted experiments using three different tokenizers and several programming language grammars.
- They measured both offline preprocessing time and online per-token overhead, which are critical metrics for evaluating GCD tools.
- The results are presented in a clear and organized manner, with tables and figures that effectively communicate the findings.
Supplementary Material: Yes, supplementary material is the code of the paper.
Relation To Broader Scientific Literature: - The key contributions of the paper are related to several prior works in the field of grammar-constrained decoding. The authors build upon existing approaches such as SYNCODE and DOMINO.
- Compared to SYNCODE, GREATGRAMMA offers faster offline preprocessing and similar or better online masking efficiency.
- The approach also integrates ideas from XGRAMMAR, such as preprocessing context-independent tokens, but with improved efficiency and scalability.
Essential References Not Discussed: - The paper references several related works on grammar-constrained decoding (GCD), including SYNCODE, DOMINO, and XGRAMMAR. However, it does not discuss other relevant works such as Grammar-Aligned Decoding (Park et al., 2024) or Synchromesh (Poesia et al., 2022), which also address the problem of aligning LLM outputs with grammatical constraints. These works provide additional context and comparisons that could enhance the understanding of GREATGRAMMA's contributions.
- The paper also does not mention the work on constrained decoding using finite-state transducers (FSTs) by Koo et al. (2024), which is directly relevant to the preprocessing and token masking techniques used in GREATGRAMMA.
Other Strengths And Weaknesses: - **Strengths:**
- **Originality:** The approach proposed in the paper, particularly the combined analysis of the LLM token vocabulary and CFG terminals, as well as the introduction of the token spanner table, is novel. This innovation addresses a significant challenge in the field of grammar-constrained decoding.
- **Significance:** The results are important for improving the efficiency of constrained decoding in large language models, which has practical implications for applications like code generation and structured data formatting.
- **Clarity:** The paper is well-written and well-organized. The authors provide a clear explanation of the problem, their approach, algorithms, and experimental results. The use of figures to illustrate the lexing transducer, detokenizing transducer, and token spanner table helps in understanding the complex concepts.
- **Weaknesses:**
- **Limitations:** As mentioned in the paper, the current implementation has some limitations. For example, it works under the maximal munch principle and 1-lookahead lexing, which may not support all language constructs. Additionally, the approach may not handle cases where the same input sequence must be lexed differently depending on the parsing context.
- **Theoretical Depth:** While the paper provides an algorithmic and conceptual description of the proposed method, some theoretical aspects could be explored further. For example, a more in-depth analysis of the properties of the realizable terminal sequences and their impact on the overall decoding process could be beneficial.
- **Application Scope:** The paper primarily focuses on grammar-constrained decoding for structured outputs like code snippets. However, the approach could be extended to other domains, such as natural language generation with specific syntax requirements, which is not discussed in detail.
Other Comments Or Suggestions: No obvious typographical errors were observed in the provided text.
Questions For Authors: 1. Regarding the limitations in handling lexing ambiguities, could you elaborate on how you plan to extend the approach to support more complex language constructs that require more than 1-lookahead or context-dependent lexing? How would a successful extension impact the performance and applicability of GREATGRAMMA?
- If the authors have a clear plan and potential solutions for handling these cases, it would strengthen the paper's future work section and indicate a better understanding of the problem domain.
2. In the experimental evaluation, you mentioned that XGRAMMAR encountered errors during preprocessing or decoding for all the grammars considered. Could you provide more details about these errors and how they might affect the comparison with GREATGRAMMA?
- Understanding the specific issues with XGRAMMAR would help in assessing the fairness of the comparison and determining whether GREATGRAMMA's advantages are as significant as reported.
3. You mentioned that the current implementation works under certain lexing assumptions. How would you evaluate the potential benefits and challenges of relaxing these assumptions to handle a broader range of language grammars?
- The authors' perspective on this could provide insight into the scalability and generalizability of their approach.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful feedback. Below, we provide a detailed response to each comment and question.
> The paper does not present detailed proofs for their theoretical claims
> a more in-depth analysis of the properties of the realizable terminal sequences and their impact on the overall decoding process could be beneficial
We will provide a more rigorous formalization, correctness proofs, and time/space analysis in the appendix of the revised paper. For proof sketches regarding correctness, please refer to our response addressing a similar point raised by Reviewer apti and Reviewer nQQ5.
>could you elaborate on how you plan to extend the approach to support more complex language constructs that require more than 1 - lookahead or context-dependent lexing?
> You mentioned that the current implementation works under certain lexing assumptions. How would you evaluate the potential benefits and challenges of relaxing these assumptions to handle a broader range of language grammars?
To be clear, maximal munch and 1-lookahead are specifications about how a lexer is expected to behave, not limitations of our system. Lexers for real-world languages usually have these properties (including the ones we evaluated on: Python, Java, and Go). That being said, our lexing transducer can also be adapted to accommodate alternative lexer specifications. For instance, the lexing transducer can nondeterministically produce all possible tokenizations, enabling contextual lexing. However, this nondeterminism may increase the size of $Re_{A \circ V}$ and the inverse token spanner table.
If the lexer requires more than one-token lookahead, a k-token lookahead can be implemented by encoding each pair (state, lookahead) as a single combined state in the lexing transducer, similar to an LR(k) parser [NewRef3]. However, this approach can significantly increase the size of the lexing transducer.
We will include this discussion in the revised version of the paper.
[NewRef3] Knuth, Donald E. "On the translation of languages from left to right." Information and control 8.6 (1965): 607-639.
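As a rough illustration of this combined-state construction (the representation and names here are our own, not GREATGRAMMA's actual code), one extra symbol of lookahead can be folded into the state space of a transition table:

```python
# Illustrative sketch: fold one buffered lookahead symbol into each state,
# producing combined states of the form (q, pending). Reading a new symbol
# b buffers it and applies the delayed transition on the previously
# buffered symbol a. A toy construction, not the GREATGRAMMA implementation.
def add_lookahead(delta, alphabet):
    new_delta = {}
    for (q, a), q2 in delta.items():
        for b in alphabet:
            # From combined state (q, a): symbol 'a' was buffered earlier;
            # reading 'b' applies q --a--> q2 and buffers 'b'.
            new_delta[((q, a), b)] = (q2, b)
    return new_delta

delta = {("s", "x"): "s", ("s", "y"): "t"}
paired = add_lookahead(delta, ["x", "y"])
print(paired[(("s", "x"), "y")])  # prints: ('s', 'y')
```

Note how the number of states multiplies by the alphabet size per lookahead symbol, which is why k-token lookahead can significantly increase the transducer's size.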
> In the experimental evaluation, you mentioned that XGRAMMAR encountered errors during preprocessing or decoding for all the grammars considered. Could you provide more details about these errors and how they might affect the comparison with GREATGRAMMA?
XGrammar encountered a segmentation fault when decoding with the Go grammar, aborted without a detailed error message after compiling the Java grammar for several minutes, and entered an infinite loop during the initial token generation step for the Python grammar. While we have not investigated the exact cause of these errors in XGrammar, we conjecture that they arise because XGrammar operates on byte-level pushdown automata, which are significantly more complex than terminal-level pushdown automata.
Similar problems in XGrammar have also been observed with other grammars: https://github.com/mlc-ai/xgrammar/issues/127. According to the GitHub issue, the runtime overhead increases significantly due to nondeterminism caused by the overlap between identifier and keyword terminals. Note that in GreatGramma the terminal-level PDA in this example is still deterministic: the nondeterminism is confined to the lexing level, where it is resolved by the separate lexer.
> However, it does not discuss other relevant works such as Grammar-Aligned Decoding (Park et al., 2024) or Synchromesh (Poesia et al., 2022)
The paper briefly mentions GAD in the context of evaluating the effectiveness of GCD (lines 330–333). While existing studies (Geng et al., 2023; Beurer-Kellner et al., 2024) demonstrate that GCD can improve downstream task performance, other recent work (Tam et al., 2024 [NewRef4]; Park et al., 2024) highlights that GCD can also negatively impact the quality of generated outputs. Our approach is orthogonal to these methods and can be integrated with techniques proposed by Park et al. (2024) or Banerjee et al. (2025) [NewRef5] to mitigate such negative effects. We will include this discussion on the effectiveness of GCD in Section 6.
Synchromesh emphasizes checking semantic constraints beyond grammatical correctness; we will briefly discuss this extension as well.
[NewRef4] Tam, Zhi Rui, et al. "Let Me Speak Freely? A Study On The Impact Of Format Restrictions On Large Language Model Performance." EMNLP 2024: Industry Track.
[NewRef5] Banerjee, Debangshu, et al. "CRANE: Reasoning with constrained LLM generation." arXiv (2025).
> The paper also does not mention the work on constrained decoding using FSTs by Koo et al. (2024)
The paper briefly introduces the work of Koo et al. when presenting the detokenizing transducer in Section 3.1. However, we agree that this relevant work deserves further discussion. We extended their work to accommodate a structure consisting of a separate lexer and parser and will include this discussion in Section 6.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's careful reply.
According to the GitHub issue you provided, I found that there might be a problem in your reproduction process: the author did not modify the code and pointed out that the latest version might be needed (https://github.com/mlc-ai/xgrammar/issues/127#issuecomment-2777313764).
In addition, I admit that I am not an expert in this field. What is the difference between online and offline use in actual scenarios? GREATGRAMMA is roughly 10x to over 100x slower than Outlines in the offline stage. Is this acceptable? I have used Outlines in actual scenarios, and it is not very slow. (If time permits, can you provide a speed comparison in actual scenarios?)
By the way, I have seen that many GCD-related papers restrict generation to JSON or other formats.
Finally, does this field only need one experiment to prove the effectiveness of a method, without any ablation or analysis experiments? If so, please ignore this comment.
---
Reply to Comment 1.1.1:
Comment: Thanks for pointing out that XGrammar has recently updated their code to *attempt* to fix this issue (we note that this update happened only a few days ago). While the update has removed the problem from the grammar used in the GitHub issue we linked (https://github.com/mlc-ai/xgrammar/issues/127) , this new version of XGrammar still causes errors or infinite loops for all the grammars used in our experiments. The reviewer can test this if they want by using the latest pip version (released on March 25, 2025) on the grammars we submitted as supplementary materials.
Regarding offline versus online cost: offline costs are incurred once per grammar, while online costs apply to every inference.
If an application uses a fixed grammar, a grammar can be precompiled by developers, making offline costs generally less critical than online costs for end users of the tool.
However, offline costs become significant if the grammar must be changed frequently. For example, one might want to specialize a language grammar using specific program variables appearing in a concrete environment. In such scenarios, it is important to reduce offline preprocessing while retaining low online cost, since high online overhead can prevent large-scale batching. We do believe our grammars are practically useful, as they can help reduce syntactic errors for real programming languages, as shown in the SynCode paper. Even the shortest example in our experiments, a 17-line Python program (prime.py) containing around 100 tokens, takes more than 1,000 seconds with Outlines, which already exceeds the offline cost of GREATGRAMMA.
The reviewer is right that many recent works on constrained generation focus on JSON schemas. While today’s JSON schemas are often fairly simple, supporting larger, programming language–level grammars can provide enhanced reliability and flexibility for various applications, including tool integration mentioned earlier.
Regarding evaluation, besides the quality of outputs generated via constrained decoding, which was addressed in our previous rebuttal, existing works—including SynCode and XGrammar—are evaluated based on offline cost and per-token online cost. As we use the same metrics as existing work, we believe that our evaluation setting is sufficient to demonstrate the efficiency of GREATGRAMMA. We can’t think of a potential ablation for our evaluation since we already compare GREATGRAMMA to existing tools (e.g., as discussed in our related work, SynCode is effectively GREATGRAMMA without the inverse spanner table). | Summary: The authors provide a more computationally efficient method, called GREATGRAMMA, for computing token masks for CFG-constrained decoding from LLMs. In particular, they address the misalignment between CFG terminals and the LLM token vocabulary, and tricks for speeding up online mask computation during decoding using relatively cheap preprocessing. Their method assumes that the constraint is expressed as a CFG whose terminal symbols are *chunks* of characters; these chunks are referred to as the "terminals." Their preprocessing step builds a FST that maps LLM token sequences to character strings and another FST that maps character strings to terminal sequences. They compose them to form a single FST that maps LLM token sequences to terminal sequences. They precompute a table that maps FST states and terminal sequences to valid LLM tokens. Their method is limited to grammars with certain properties (for one, they must be deterministic). They show through experiments that GREATGRAMMA's preprocessing is a little slower than the competing OUTLINES, but its online mask computation is much, much faster.
Claims And Evidence: The empirical claims about runtime are well-supported by experiments.
On the other hand, the authors do not prove the correctness of their algorithms and do not provide asymptotic time/space complexity analysis, which I think is sorely needed.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: This paper does not provide proofs for its theoretical claims, but I think it should prove the correctness of its algorithms and give a time/space complexity analysis.
Experimental Designs Or Analyses: To an extent, yes, but I would like the authors to follow up with proofs of correctness.
Supplementary Material: Most of the appendix.
Relation To Broader Scientific Literature: They provide a faster, more practical method for CFG-constrained decoding than previous work, namely OUTLINES and SYNCODE. See their Section 6 for details.
Essential References Not Discussed: Not to my knowledge.
Although light on detail, I believe this concurrent work uses a similar approach for token-character alignment by composing FSTs and CFGs: https://openreview.net/forum?id=xoXn62FzD0
Other Strengths And Weaknesses: **Strengths**
1. The authors clearly state the limitations of their method.
2. The paper is generally well-written, although I have issues with aspects of the exposition.
3. The empirical results are very good.
4. Good benchmarks.
**Weaknesses**
1. As the authors state, the method is limited to *deterministic* CFLs that satisfy certain properties.
2. There are no proofs of correctness or time/space complexity analyses.
3. The exposition of the method is a bit unclear at times. In particular, their method hinges upon their concept of the set of realizable tokens, but the reason why this approach is correct and necessary is a particularly weak part of the exposition. See also Comments or Suggestions.
Other Comments Or Suggestions: 1. The correctness of token alignment is a major issue. Can you formally express what the mathematically correct solution to the token alignment problem is?
2. 047 right: Interesting!
3. 087: Does this do things like strip whitespace and comments, as is done in a typical compiler? Or is the transformation bijective?
4. 511: This should check for t_next = EOS. Also, I don't think you want to append EOS to x.
5. 130: From this description, it's not clear what the meaning of a cell in the token spanner table is, much less how to construct it. It would be helpful to give a formal definition of what the value of each cell is supposed to be.
6. 204: Is $q_0$ a typo?
7. 520: What does it mean for state $q$ to recognize $T$? How do you determine what $T$ is?
8. 250: When you say $T$ can be generated along some path, do you mean a path ending in an accept state?
9. For some reason, the inputs and outputs of $\mathcal{T}\_{\mathcal{A}}$, $\mathcal{T}\_\mathcal{V}$, and $\mathcal{T}\_{\mathcal{A} \circ \mathcal{V}}$ are a bit unclear from reading the text, partly because $\mathcal{A} \circ \mathcal{V}$ is written in the reverse order of the inputs and outputs. It would be useful to spell this out in the text more explicitly. It took a while to figure out that $\mathcal{T}\_{\mathcal{A} \circ \mathcal{V}}$ maps sequences of LLM tokens to sequences of CFG terminals.
10. 249: There are some $\exists$ quantifiers missing.
11. 227: How is this loop implemented?
12. 251 right: Is $Re\_{\mathcal{A} \circ \mathcal{V}}$ a typo?
13. The wording of Proposition 3.4 is unclear. Do you mean if the PDA *starts* in state $q$ with stack $\gamma$?
14. The concepts of push and pop computations in PDAs from this paper may be useful to you: https://arxiv.org/abs/2210.06884
15. It's confusing that Fig 1(b) does not show an *inverse* table, but a table that maps states and *single tokens* to *sequences of terminals*. This causes a lot of confusion later on.
16. It wasn't clear until later in the paper that there are bounds on the size of the set of realizable tokens. This is important because otherwise it seems like the method requires exponential time/space.
Questions For Authors: 1. Why is there a separation between characters and CFG terminals that is mediated by the lexer? It's always possible to implement the lexer directly within the CFG by modifying its rules and making its terminal symbols single characters. Is there some computational efficiency advantage to treating them separately?
2. 058: Shouldn't it also include a mask for EOS? How is EOS masking handled in your method?
3. What are the inputs and outputs of $\mathcal{T}\_{Re\_{\mathcal{A} \circ \mathcal{V}}}$?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful feedback. In the revised paper, we will correct all typos, clarify the notation pointed out by the reviewer, and provide rigorous definitions and proofs in the appendix. Below, we provide a detailed response to each comment and question.
> the method is limited to deterministic CFLs that satisfy certain properties
Although we assumed a deterministic PDA for simplicity of implementation, determinism is not strictly necessary for our method. For more detail, please see our response to a similar question by reviewer q1Dz.
> 227:How is this loop implemented?
This line is precomputed by a reachability algorithm (e.g., Floyd-Warshall Algorithm [NewRef2]) in the lexing transducer.
[NewRef2] Floyd, Robert W. "Algorithm 97: shortest path." Communications of the ACM 5.6 (1962)
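A minimal sketch of this precomputation, assuming the transducer's state graph is given as a list of states and directed edges (both illustrative): Warshall-style all-pairs reachability in $O(|Q|^3)$ time and $O(|Q|^2)$ space.

```python
# Warshall-style all-pairs reachability over an automaton's state graph;
# the states and edges below are illustrative stand-ins for the lexing
# transducer. O(|Q|^3) time, O(|Q|^2) space.
def reachability(states, edges):
    reach = {(u, v): False for u in states for v in states}
    for u in states:
        reach[(u, u)] = True          # every state reaches itself
    for u, v in edges:
        reach[(u, v)] = True          # direct transitions
    for k in states:                  # close under composition via k
        for i in states:
            for j in states:
                if reach[(i, k)] and reach[(k, j)]:
                    reach[(i, j)] = True
    return reach

r = reachability(["q0", "q1", "q2"], [("q0", "q1"), ("q1", "q2")])
print(r[("q0", "q2")], r[("q2", "q0")])  # prints: True False
```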
> no proofs of correctness or time/space complexity analyses
> the reason why this approach is correct and necessary is a particularly weak part of the exposition
Computing the set of realizable terminal sequences allows us to do exactly the amount of necessary work during parser preprocessing. We will improve the presentation to make this point clearer.
For proof sketches regarding correctness, please refer to our response addressing a similar point raised by Reviewer apti. Regarding complexity, the most computationally expensive procedures are Algorithms 4 and 5.
Algorithm 4 runs in time $O(|\delta||\Gamma|)$, in addition to the precomputation for line 227 which runs in $O(|Q_A|^3)$ time and $O(|Q_A|^2)$ space. Both $|Re_{A \circ V}|$ and $|T_{inv}|$ are at most $|Q_A||V||\Gamma|$, though empirically they tend to be much smaller since most combinations are infeasible.
CFG parsing has time complexity $O(n^3)$ for general CFGs and $O(n)$ for deterministic CFGs. Consequently, computing $\bar{A}$ and $\bar{D}$ in Algorithm 5 takes $O(|Re_{A \circ V}| \cdot n^3)$ time for general CFGs and $O(|Re_{A \circ V}| \cdot n)$ time for deterministic CFGs, where $n$ denotes the length of the longest terminal sequence plus length of the current prefix.
The computation of sets $A$ and $D$ in Algorithm 5 runs in $O(|Q_A||Q_P||V||\Gamma|)$ time and requires $O(|Q_A||Q_P||V|)$ space. In practice, time/space usage is often significantly smaller due to the sparsity in $T_{inv}$.
In practice, we observe that $|Q_A|, |Q_P| < 1000$, $|\Gamma| < 100$ even for large grammars, including the Go, Java, and Python grammars used in our experiments.
> Can you formally express what the mathematically correct solution to the token alignment problem is?
Formally, given a context-free grammar $G$ and a lexer $Lex$, the token-aligned language $L^{Lex}(G)$ is defined as the set of $w \in V^\ast$ s.t. $\exists T_1 \dots T_k. Lex(w) = T_1 \dots T_k \wedge T_1 \dots T_k \in L(G)$. Correct token-aligned constrained decoding thus requires that any partial output is always a prefix of some sentence in $L^{Lex}(G)$.
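As a toy instance of this definition (the terminal patterns and the grammar, which accepts the terminal language `NUM (PLUS NUM)*`, are our own illustrative choices, not the paper's), membership in the token-aligned language amounts to lexing the string and checking the resulting terminal sequence against the grammar:

```python
import re

# Toy membership check for a token-aligned language: w is accepted iff
# lexing w (maximal munch over the listed terminals) yields a terminal
# sequence in L(G). Both the lexer and the grammar are illustrative
# stand-ins chosen for this sketch.
TERMINALS = [("NUM", r"\d+"), ("PLUS", r"\+")]

def lex(w):
    out, i = [], 0
    while i < len(w):
        for name, pat in TERMINALS:
            m = re.match(pat, w[i:])  # anchored, greedy: maximal munch
            if m:
                out.append(name)
                i += m.end()
                break
        else:
            return None  # lexing failed
    return out

def in_grammar(ts):
    # Accepts the terminal language NUM (PLUS NUM)*.
    return (ts is not None and len(ts) % 2 == 1
            and ts[::2] == ["NUM"] * ((len(ts) + 1) // 2)
            and ts[1::2] == ["PLUS"] * (len(ts) // 2))

print(in_grammar(lex("1+23")), in_grammar(lex("1++2")))  # prints: True False
```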
> it's not clear what the meaning of a cell in the token spanner table is
> It's confusing that Fig 1(b) does not show an inverse table, but a table that maps states and single tokens to sequences of terminals.
Please see our response to a similar question by Reviewer apti.
> 520: What does it mean for state q to recognize T? How do you determine what T is?
The lexing automaton recognizes the union of all the languages for each terminal. It is built via a product construction where each component automaton recognizes a specific terminal T. Thus, each final state of the product automaton corresponds to a terminal T. If multiple terminals T correspond to a single final state, one can either assign priorities or nondeterministically produce all possible tokens to support contextual lexing.
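A hedged sketch of this product construction, under our own toy representation (not the actual implementation): each component DFA recognizes one terminal, and a product state is labeled with every terminal whose component sits in a final state.

```python
# Product construction sketch: each component DFA recognizes one terminal;
# a product state is final for terminal T iff T's component is in a final
# state. The representation {terminal: (start, transitions, finals)} and
# the toy automata below are ours, chosen for illustration.
def product_dfa(dfas, alphabet):
    start = tuple(d[0] for d in dfas.values())
    delta, finals = {}, {}
    worklist, seen = [start], {start}
    while worklist:
        q = worklist.pop()
        # Terminals accepted in this product state.
        finals[q] = [t for (t, (_, _, fin)), qi in zip(dfas.items(), q)
                     if qi in fin]
        for a in alphabet:
            q2 = tuple(trans[(qi, a)]
                       for qi, (_, trans, _) in zip(q, dfas.values()))
            delta[(q, a)] = q2
            if q2 not in seen:
                seen.add(q2)
                worklist.append(q2)
    return start, delta, finals

# Two toy terminals: A matches "a", B matches "b"; 'D' is a dead state.
dfa_a = (0, {(0, "a"): 1, (0, "b"): "D", (1, "a"): "D", (1, "b"): "D",
             ("D", "a"): "D", ("D", "b"): "D"}, {1})
dfa_b = (0, {(0, "b"): 1, (0, "a"): "D", (1, "a"): "D", (1, "b"): "D",
             ("D", "a"): "D", ("D", "b"): "D"}, {1})
start, delta, finals = product_dfa({"A": dfa_a, "B": dfa_b}, "ab")
print(finals[delta[(start, "a")]])  # prints: ['A']
```

If two terminal patterns overlapped, a product state could be labeled with both terminals, which is exactly where priorities or nondeterministic output come into play.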
> 250: When you say T can be generated along some path, do you mean a path ending in an accept state?
We say T can be generated along some path from q, when $q \xrightarrow{w:T T_1 \ldots T_k}^\ast q’ \in \delta^\ast$ for some $q’$ and $T_1 \ldots T_k$.
> Why is there a separation between characters and CFG terminals that is mediated by the lexer?
Lexers usually operate via finite automata, which efficiently match tokens at linear time with minimal lookahead. In contrast, CFG parsers incur higher overhead. Moreover, lexers effectively handle non-grammatical constructs, such as whitespace and comments. This separation ensures that the parser grammar remains concise.
> How is EOS masking handled in your method?
The EOS symbol is treated as a token in the vocabulary, which generates a special end-of-parsing terminal. It behaves similarly to other tokens, except that it has transitions leading back to state $q_0$, which produce the end-of-parsing terminal in the lexing transducer.
> What are the inputs/outputs of $T_{Re_{A \circ V}}$?
The input alphabet is $Re_{A \circ V}$, and the output alphabet is $\Gamma$. This is similar to Figure 3, except that here the input alphabet consists of a sequence of terminals rather than characters.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing proofs and time/space complexity analyses as requested. I will raise my score accordingly. Unfortunately I have not had time to check all the details carefully during the rebuttal period. In the final draft of the paper, I really do recommend working on the clarity issues I raised in my review.
> We assumed a deterministic PDA for simplicity of implementation, but determinism is not strictly necessary. To handle non-deterministic PDAs, one can track multiple non-deterministic states simultaneously (similar to Thompson’s construction for NFA determinization), compute masks separately for each state, and then take the union of these individual masks. This extension allows the algorithm to accommodate non-deterministic PDAs without affecting performance in deterministic cases. We will clarify this point in the revised version of the paper.
I would like to push back on this claim and point out that the extension to nondeterministic PDAs would not be as easy as you say. Unlike NFAs, nondeterministic PDAs can have an infinite number of stack configurations simultaneously, and even in normal form, they can have an exponential number. You would need to use one of the dynamic programming algorithms that can simulate them in cubic time (see Butoi et al. (2022)). Wouldn't you also need to alter the structure of the (inverse) token spanner table in a significant way, similarly to the Generalized LR table in Tomita's algorithm?
---
Reply to Comment 1.1.1:
Comment: We will be sure to address the clarity issues raised. It is indeed true that simulating an NPDA is somewhat involved: the comparison to Thompson's algorithm was not the right analogy to convey our intent. Nevertheless, the proposed method can handle nondeterminism, though the reviewer is right that the number of possible states may, in the worst case, grow exponentially with the length of the prefix.
The inverse token spanner table does not need to change to support nondeterminism, since it does not depend on the PDA at all, but only on the lexer specification and the LLM's vocabulary.
The reviewer is right that achieving cubic complexity would require a dynamic-programming approach, but we believe such an approach is not suitable for efficient online processing as it would require expensive computations for each individual token (we are not aware of any GCD tool using such an approach). | Summary: Given a language model and a certain grammar, grammar-constrained decoding (GCD) aims to ensure the output of the language model adheres to the grammar. However, this is challenging because, typically, the token vocabulary for a language model does not align with that of the grammar. In this paper, the authors aim to overcome this challenge by suggesting a novel algorithm for GCD which involves an offline stage and an online stage, the latter being more efficient than current GCD algorithms. Through pre-processing, the online token masking process becomes much more efficient. The authors also provide GreatGramma, a tool for implementing their method.
Claims And Evidence: It appears the claims are supported by clear and convincing evidence. The majority of the paper describes the new algorithm. For the online procedure, there is a bit of confusion, and a rigorous definition of the functions $A$ and $D$ would help the reader understand. For the reviewer's understanding, is this method masking out other tokens that might also be allowed? The experiment section is strong and shows significant improvement over other methods. For the sake of completeness, it would also be useful to report some metric regarding the quality of the outputs produced with GreatGramma as well as with the other methods.
Methods And Evaluation Criteria: The methods appear to make sense for the problem at hand. The method mostly consists of an offline processing procedure in order to create an inverse token spanner table, which is then used at inference time.
Theoretical Claims: Neither of Propositions 3.4 and 3.5 has an accompanying proof. Proofs would be useful in order to verify the claims, or the reader could be pointed to another source proving the same results.
Experimental Designs Or Analyses: The experimental design and analysis appears to be complete. As mentioned earlier though, the method would be even further motivated by reporting on the quality of the outputs via GreatGramma.
Supplementary Material: Yes. The supplementary material consists of algorithm descriptions and other formal definitions.
Relation To Broader Scientific Literature: Methods like verifiers and grammars are in general becoming more popular for language models. It appears as though some sort of supervision may be useful. Hence, methods to improve efficiency for such methods should allow the literature to continue to explore these directions with a more scalable approach.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: This paper appears to have a strong result. Yet, the paper is not particularly straightforward to follow. The reviewer suggests that, in order to improve clarity, the authors provide clear definitions in the beginning of sections and attempt to make Section 3 more concise and clear. For example, one point of confusion is what the “token spanner table” is, as mentioned in Figure 1. Later on, there is mention of an “inverse token spanner table.” What is the precise difference between these two, and if they are not the same, what is the point of Figure 1(d)? Furthermore, in places where definitions are provided, they are not rigorous. For example, Definition 3.2 is lacking in clarity: what are $q$ and $q’$? What is the point of the second paragraph regarding “Maximal Munch Principle” (lines 145-156)? It is unclear if the strict interpretation is being assumed or not.
Other Comments Or Suggestions: See above
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful feedback. Below, we provide a detailed response to each comment and question.
> is this method masking out other tokens that might also be allowed?
The masking algorithm of GreatGramma is sound and complete under the assumptions on the lexer stated in the paper. We will include a proof sketch in the body of the revised paper based on the following key lemmas, and include detailed proofs in the appendix.
Lemma 1. Let $T_{A \circ V} = (Q, V, \Gamma, q_0, \delta, F)$ be a token-level lexing transducer for the vocabulary $V \subseteq \Sigma^+$. Then $q_0 \xrightarrow{w:T_1 \ldots T_k}^{\ast} q' \in \delta^\ast$
if and only if $Lex(w) = (T_1 \ldots T_k, w_r)$ for some $w_r \in \Sigma^\ast$ and $q' \in Q$ such that $q_0 \xrightarrow{w_r:\epsilon}^\ast q' \in \delta^\ast$.
The proof proceeds by induction on $|w|$. The base case holds since each terminal is not nullable, and the inductive step follows directly from the construction of the lexing transducer.
Lemma 2. If $q \xrightarrow{w: T_1 \ldots T_k}^\ast q'$ via the token-level lexing transducer for some incomplete token prefix $w$ of $L(G)$, then there exists a terminal $T$ producible along some path from $q'$ such that $T_1 \ldots T_k T$ is a terminal prefix of $L(G)$.
Proof sketch. By assumption, there exists a string $v$ such that $wv$ is in the language. Let $T_1 \ldots T_k T_{k+1} \ldots T_m $ be the lexed terminal sequence corresponding to $wv$, which is a sentence in the CFG $G$.
Since $T_1 \ldots T_k$ itself is not a complete sentence, the sequence $T_{k+1} \ldots T_m$ must be non-empty, and in particular, terminal $T_{k+1}$ can be generated.
Thus, we conclude that $T_1 \ldots T_k T_{k+1}$ is a terminal prefix of $L(G)$.
Theorem 1. (Completeness) Given a prefix $w$ of $L(G)$, a token $v$ is not masked by Algorithm 6 (i.e., $v \in V_{allowed}$) if the concatenation $wv$ is a prefix of $L(G)$.
Proof sketch. By Lemma 1, the lexing transducer produces $T_1 \ldots T_k$ and reaches a state $q'$ such that $q_0 \xrightarrow{w_r: \epsilon}^\ast q'$.
Since $wv$ is a prefix of $L(G)$, again by Lemma 1, $q_0 \xrightarrow{wv:T_1 \ldots T_k T_{k+1} \ldots T_m} q''$ for some state $q''$, which implies $q' \xrightarrow{T_{k+1} \ldots T_m} q''$.
Then, by Lemma 2, there exists a terminal $T$ producible along some path from $q’$ such that $T_1 \ldots T_k T_{k+1} \ldots T_m T$ is a prefix of $L(G)$.
Therefore, we have $v \in T_{inv}(q', T_{k+1} \ldots T_m T)$, which implies $v \in V_{allowed}$.
Theorem 2. (Soundness) Given a prefix $w$ of $L(G)$, a token $v$ is masked by Algorithm 6 (i.e., $v \notin V_{allowed}$) if the concatenation $wv$ is not a prefix of $L(G)$.
The detailed soundness proof (which we do not include here due to space limits) is similar in overall structure to the proof of Theorem 1. The proof also requires the additional fact that every sequence of terminals corresponds to the lex of a sequence of tokens in $V^\ast$.
> Neither one of Propositions 3.4 and 3.5 have accompanying proofs.
Both propositions follow directly from the definition of PDAs and FSMs. We will provide formal proofs in the appendix. As a quick reference, a proof can also be found in Theorem 6.5 of [NewRef1]. Additionally, Theorem 6.17 provides intuition about how to construct an FSM that over-approximates the given PDA.
[NewRef1] Hopcroft, John E. et al., "Introduction to automata theory, languages, and computation."
> … What is the precise difference between these two, and if they are not the same, what is the point of Figure 1(d)?
While the token spanner table maps pairs of (state, token) to terminal sequences, the inverse token spanner table reverses this mapping for faster lookup in Algorithms 5–6, mapping each (state, terminal sequence) pair to the set of tokens capable of generating that sequence in that state. The token spanner table is used to compute $Re_{A \circ V}$ and the inverse token spanner table. We will clarify this relationship in the revised paper.
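The inversion described above can be sketched as a dictionary flip: where the token spanner table maps a (state, token) pair to a terminal sequence, the inverse table groups, for each (state, terminal sequence) pair, all tokens producing that sequence. The table contents below are made up for illustration; the paper's own construction is the authoritative version.

```python
def invert_spanner(table):
    """Invert a token spanner table.

    table:   {(state, token): terminal_sequence (a tuple of terminals)}
    returns: {(state, terminal_sequence): set of tokens producing it}
    """
    inv = {}
    for (state, token), terms in table.items():
        inv.setdefault((state, terms), set()).add(token)
    return inv

# Hypothetical entries: two tokens lexing to the same terminal sequence from state 0.
spanner = {(0, "ab"): ("A", "B"), (0, "cd"): ("A", "B"), (1, "a"): ("A",)}
inverse = invert_spanner(spanner)
```

The inverted direction is what makes the runtime lookup in Algorithms 5–6 a single dictionary access rather than a scan over tokens.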
> Furthermore, in places where definitions are provided, they are not rigorous. For example, Definition 3.2 is lacking in clarity: what are q and q’?
q and q’ are (existentially quantified) source and destination states, respectively, in the transition set of the token-level lexing transducer. We will clarify the meaning of the notation and provide more rigorous definitions in the revised paper.
> What is the point of the second paragraph regarding “Maximal Munch Principle” (lines 145-156)?
This paragraph introduces the informal idea behind 1-lookahead lexing. This is an additional specification (beyond just maximal munch) on how the lexer should behave. Our lexing transducer is designed to satisfy both specifications. In the revised paper, we will clarify explicitly that these specifications are our design choice. Additionally, we will provide a formal proof in the appendix to demonstrate that the transducer indeed satisfies both specifications.

Summary: This paper describes a flexible & efficient implementation of grammar-constrained decoding for LLMs. Grammar-constrained decoding uses a (usually deterministic) CFG to constrain the output space of LLM decoding. A main challenge in applying such a constraint is the mismatch between the alphabet of the constraint (usually characters for the lexer and terminals for the parser) and the LM (subword tokens). Existing implementations either require expensive offline grammar preprocessing ([Ugare et al, 2024]) or incur a large overhead during decoding ([Willard and Louf, 2023]).
The paper proposes to use a token spanner table (Figure 1(d)) to map each subword token to possible terminal sequences with 1 lookahead from each lexer state. After converting the CFG into a PDA, this table is then further partitioned into $A(q^{\mathcal{A}}, q^{\mathcal{D}})$ (the set of terminal sequences allowed by the PDA starting at lexer state $q^{\mathcal{A}}$ and PDA state $q^{\mathcal{D}}$ under any stack configuration) and $D(q^{\mathcal{A}}, q^{\mathcal{D}})$ (the set of terminal sequences allowed by the PDA starting at lexer state $q^{\mathcal{A}}$ and PDA state $q^{\mathcal{D}}$ only under certain stack configurations), which enables faster computation of token masks at decoding time (Algorithm 6).
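As a rough illustration of how the $A$/$D$ split speeds up online masking, the mask for a given (lexer state, PDA state) pair can be assembled from the precomputed tables, with only the stack-dependent entries requiring a runtime check. All names below are hypothetical; Algorithm 6 in the paper is the authoritative version.

```python
def allowed_tokens(q_lex, q_pda, stack, A, D, stack_check):
    """Sketch of online mask computation from precomputed tables.

    A[(q_lex, q_pda)]: tokens always allowed regardless of the stack.
    D[(q_lex, q_pda)]: tokens allowed only under certain stack configurations,
                       tested at runtime via stack_check(token, stack).
    """
    key = (q_lex, q_pda)
    allowed = set(A.get(key, ()))
    for tok in D.get(key, ()):
        if stack_check(tok, stack):
            allowed.add(tok)
    return allowed

# Toy tables: ")" is only legal when an unmatched "(" sits on top of the stack.
A = {(0, 0): {"if", "x"}}
D = {(0, 0): {")"}}
check = lambda tok, stack: bool(stack) and stack[-1] == "("
```

The point of the split is that the loop only runs over the (typically small) context-dependent set $D$, while the bulk of the mask comes from a constant-time lookup in $A$.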
Benchmarks show that the proposed method is 17.71x the speed of SynCode ([Ugare et al, 2024]) during offline preprocessing and 2.73x the speed during decoding (Outline ([Willard and Louf, 2023]), while faster during offline preprocessing, is impractically slow during decoding).
[Ugare et al, 2024]: https://arxiv.org/pdf/2403.01632
[Willard and Louf, 2023]: https://arxiv.org/pdf/2307.09702
Claims And Evidence: The main claims of this paper is the correctness and speed of the proposed method. The high level algorithms are correct although proofs were not given in the paper. The claim on speed is well supported by the experimental results.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A.
Experimental Designs Or Analyses: The main experiment involves speed benchmarking for offline preprocessing and decoding. Grammars for 3 programming languages are used (Go, Python, and Java). For decoding benchmarks, a set of 5 programs for each language (367, left column) is used for computing token masks. The programs are available in the supplementary material, and they are small programs (< 100 lines). Grammar complexity is the main factor that determines the decoding speed (the grammars are all deterministic PDAs if I am not mistaken), thus the small size of the programs should not be a main problem.
Supplementary Material: I reviewed the benchmark grammars and programs.
Relation To Broader Scientific Literature: Most recent work on constrained decoding using CFGs is cited in this paper, in particular, Outline ([Willard and Louf, 2023]), XGrammar ([Dong et al, 2024]), and SynCode ([Ugare et al, 2024]). Formally, these and this paper itself are all incremental parsing methods under the hood that also involve a composition with the subword-character detokenizing FST. However, the exact techniques employed in implementing the parsing & composition have significant impacts on speed. Benchmarks show that the proposed solution in this paper is faster than SynCode and Outline during decoding (XGrammar appears to have issues that prevented the benchmarks from being conducted). Additionally, this paper in my opinion presents the more elegant solution compared to the existing ones.
[Dong et al, 2024]: https://arxiv.org/pdf/2411.15100
[Ugare et al, 2024]: https://arxiv.org/pdf/2403.01632
[Willard and Louf, 2023]: https://arxiv.org/pdf/2307.09702
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths
- Most parts of this paper are well written (except Section 3.3, which took me quite a while to fully understand)
- The proposed solution is both simple and effective
Weaknesses
- ICML may not be the best venue for this paper given there is nothing about machine learning in this paper. NLP conferences such as ACL are definitely more appropriate where a larger body of conference attendees can appreciate this paper. **However, I did not take this into consideration in my "Overall Recommendation" (i.e. I gave the rating that I would give if I were reviewing for ACL)**. The area chairs should be the judge of this.
Other Comments Or Suggestions: - Figure 2 does not appear to mark all the final states. With $q_0$ being the only final state, this FST only accepts $\epsilon$. $q_2^B$ and $q_3^C$ should also be final each with a final output arc.
- The machine in Figure 4 is called $\mathcal{T}_{\mathcal{A} \circ \mathcal{V}}$ in most places, but sometimes also $\mathcal{T}_{\mathcal{A}} \circ\mathcal{T}_V$ (Algorithm 4; 245-246, left column).
- Algorithm 4 operates on $\mathcal{T}_{\mathcal{A} \circ \mathcal{V}}$ but in lines 257-259, left column, the text says $\mathcal{T}_{\mathcal{A}}$. Indeed the table can be built on $\mathcal{T}_{\mathcal{A}}$ alone, but the description should be more consistent.
- (267, left column) $T_{\mathrm{inv}}(q, \alpha)$: it appears that $\alpha$ should really be $T_1 \ldots T_k T$ instead.
- (251, right column) "we directly compose the detokenizing transducer $Re_{\mathcal{A}\circ\mathcal{V}}$": I think starting from here this subsection is trying to motivate Algorithm 5, but $Re_{\mathcal{A}\circ\mathcal{V}}$ is not the detokenizing transducer, and the composition is done implicitly in Algorithm 5. Later, in (256, left column), $\mathcal{T}_{Re_{\mathcal{A}\circ\mathcal{V}}}$ occurred without being defined. Overall, I had a really hard time understanding Section 3.3 before seeing Algorithm 5. Perhaps there is a better way to present the ideas?
Questions For Authors: Does the PDA $\mathcal{P}$ need to be deterministic? If so, could you mention this explicitly early in the paper?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback, which will significantly improve the paper. We believe ICML is an appropriate venue as it has previously published similar work in constrained decoding (e.g., https://openreview.net/forum?id=pXaEYzrFae last year).
Below, we provide a detailed response to each comment and question.
> Figure 2 does not appear to mark all the final states. With $q_0$ being the only final state, this FST only accepts $\epsilon$. $q_2^B$ and $q_3^C$ should also be final, each with a final output arc.
Thanks for bringing this to our attention. $q_0$ is the only final state, but the transducer appears incorrect because we omitted the transitions for EOS (from $q_2$ and $q_3$ back to $q_0$). We had initially omitted them to simplify the lexing transducer, but we now realize this omission makes the figure imprecise. We will add the necessary EOS transitions.
> The machine in Figure 4 is called $T_{\mathcal{A} \circ \mathcal{V}}$ in most places, but sometimes also $T_{\mathcal{A}} \circ T_{\mathcal{V}}$ (Algorithm 4; 245-246, left column).
> (267, left column) $T_{inv}(q, \alpha)$: it appears that $\alpha$ should really be $T_1 \ldots T_k T$ instead.
Thanks, we will resolve the conflicting notation in the revised version of the paper.
> Overall, I had a really hard time understanding Section 3.3 before seeing Algorithm 5. Perhaps there is a better way to present the ideas?
Thanks for the feedback. The primary role of Section 3.3 is to explain how to compute the always-accepted token table $A$ and context-dependent sequence table $D$. The detokenizing transducer is mostly an implementation detail used to avoid having to iterate over terminal sequences at runtime.
We will update the structure of Section 3.3 to be clearer: we will first introduce the definitions of always-accepted/rejected tokens and context-dependent terminal sequences, and then briefly mention the detokenizing transducer as an implementation detail.
> Does the PDA need to be deterministic? If so, could you mention this explicitly early in the paper?
We assumed a deterministic PDA for simplicity of implementation, but determinism is *not* strictly necessary. To handle non-deterministic PDAs, one can track multiple non-deterministic states simultaneously (similar to Thompson’s construction for NFA determinization), compute masks separately for each state, and then take the union of these individual masks. This extension allows the algorithm to accommodate non-deterministic PDAs without affecting performance in deterministic cases. We will clarify this point in the revised version of the paper. | null | null | null | null | null | null |
Decoding Rewards in Competitive Games: Inverse Game Theory with Entropy Regularization | Accept (poster)

Summary: This paper studies inverse game theory for two-player zero-sum Markov games under entropy regularization (quantal response equilibrium): instead of best responding, each player plays a mixed strategy with softmax probability $\frac{e^{\eta u(a)}}{\sum_{a\in\mathcal A} e^{\eta u(a)}}$. Given observed actions sampled from the quantal response equilibrium, this paper aims to estimate the utility functions of the two players. The paper derives the sample complexity of this problem, using a Maximum Likelihood Estimation approach.
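The quantal response strategy in the summary above (each player playing a softmax of entropy-regularized utilities) can be sketched as follows; the utility values and the choice of `eta` below are hypothetical:

```python
import numpy as np

def quantal_response(utilities, eta):
    """Entropy-regularized (quantal response) mixed strategy: a softmax of
    the utilities scaled by the regularization parameter eta."""
    z = eta * np.asarray(utilities, dtype=float)
    z -= z.max()  # shift for numerical stability; the softmax is unchanged
    p = np.exp(z)
    return p / p.sum()

# Large eta concentrates play on the best response; eta -> 0 approaches uniform play.
probs = quantal_response([1.0, 0.5, 0.0], eta=2.0)
```

Inverse game theory under this model asks the reverse question: given samples of `probs`, recover the underlying utilities.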
Claims And Evidence: All the theoretical and empirical results are solid as far as I can tell.
Methods And Evaluation Criteria: Using Maximum Likelihood Estimation to recover the utility functions in inverse reinforcement learning / inverse game theory under entropy regularization is a standard technique in the literature. It makes sense for the problem at hand.
Theoretical Claims: I checked the proofs for the theorems in Section 2, which I believe are correct.
However, I have some concerns about the presentation of the theoretical results: in particular, the asymptotic notation $\lesssim$ hides some important quantities that are not necessarily constants:
(1) The $\lesssim$ notation in Theorem 2.3 hides the quantities $\eta, \epsilon_1, \epsilon_2$. In particular, the $\frac{1}{\eta^2 (\min_{i\in[m]} \mu_i - \epsilon_1)^2}$ term in equation (25) (Page 15) is hidden. These quantities are important to the sample complexity when they are close to $0$. They are not necessarily constants and should not be hidden by the $\lesssim$ notation in my opinion.
(2) Similarly, a quantity of $\sum_{a, b \in \mathcal A \times \mathcal B} \| \phi(a, b) \|^2$ is hidden by the $\lesssim$ notation in Theorem 2.4. This quantity depends on the sizes of $\mathcal A$ and $\mathcal B$, $m$ and $n$, which are not constants, and depends on $d$ which is not a constant, either.
(3) Similarly, the $\lesssim$ notation in Theorem 2.5 hides $|| \Phi_1 ||$ and $|| \Phi_2 ||$, which seem to depend on $m, n, d$, which are not constants.
Hiding too many such important quantities hurts the clarity and soundness of the theoretical results in my opinion.
Experimental Designs Or Analyses: I don't see any issues with the experimental results.
Supplementary Material: I checked the proofs for the theorems in section 2, which I believe are correct.
Relation To Broader Scientific Literature: The key contribution of the paper to the broader literature is unclear. The idea of inverse reinforcement learning under entropy regularization is well known and dates back to [Ziebart et al 2008](https://cdn.aaai.org/AAAI/2008/AAAI08-227.pdf). More recent works about inverse game theory under entropy regularization include, for example, [Wu et al, 2024](https://arxiv.org/abs/2210.01380) and [Chen et al, 2023](https://arxiv.org/abs/2307.14085). The idea of using MLE to estimate the unknown reward functions for such problems is also standard. The authors mentioned several related works in Section 1.2, but didn't mention the main difference between previous works and their work. This makes it difficult to judge the contribution of this work.
Essential References Not Discussed: None to my knowledge.
Other Strengths And Weaknesses: ### Strength:
(S1) Considering two-player games is a strength of this paper. As far as I know, previous works on inverse reinforcement learning under entropy regularization mostly focus on single-agent problems.
### Weakness:
(W1) However, the two-player games considered by this paper are restricted to zero-sum games only. Are there fundamental reasons that the approach in this work can only be applied to zero-sum games, or it can actually be applied to general-sum games?
(W2) Another big weakness is that the entropy-regularization parameter $\eta$ is assumed to be known by the learner. In practice, different agents may use different regularization parameters, and the parameters can be unknown. When $\eta$ is known, as shown by the authors, the problem of estimating the utility function becomes a simple linear regression problem, which can be solved by standard MLE techniques. This technique doesn't seem to be applicable to unknown $\eta$.
Other Comments Or Suggestions: (1) The $\|\theta\| \le M$ in Assumption 2.1 should be $\|\theta^*\|\le M$.
(2) I would suggest not hiding essential parameters using the $\lesssim$ notation in the theorem statements (see "Theoretical Claims").
Questions For Authors: (Q1) What's the main difference between this work and previous works in IRL and inverse game theory with entropy regularization? See "Relation to Broader Scientific Literature".
(Q2) Are there fundamental reasons that the approach in this work can only be applied to zero-sum games, or it can actually be applied to general-sum games? (See Weakness 1)
(Q3) What if $\eta$ is unknown? (See Weakness 2.)
(Q4) How do you derive the first equation in the proof of Theorem 2.3 (Line 665 - 670)? In particular, how do you get $(A(\hat \mu)^\top A(\hat \mu) + B(\hat \mu)^\top B(\hat \mu))^{-1}$ from equation (5) ?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your valuable feedback and for carefully checking the proofs of our theorems.
## Concerns about the Asymptotic Notation
We appreciate your observation regarding the use of asymptotic notation in Theorems 2.3, 2.4, and 2.5. To streamline presentation, we omitted terms that implicitly depend on the problem size—specifically, action space sizes $m,n$, feature dimension $d$, and the regularization parameter $\eta$. We agree that these quantities affect the sample complexity and convergence rates and are not truly constant. In the revised version, we will explicitly include these dependencies in the theorem statements and discuss their implications.
## Questions
### (Q1) Contribution to the Broader Litterature
```
The idea of inverse reinforcement learning under entropy regularization is well known and dates back to Ziebart et al.
```
Ziebart et al., 2008 focuses on imitation learning in **single-agent Markov Decision Processes**. In comparison, our work addresses IRL in two-player zero-sum Markov games, where the objective is to recover the reward functions given observed equilibrium strategies (QRE). Additionally, we deal with both strong and partial identifiability, which introduces new theoretical challenges not covered in Ziebart et al.
```
More recent works about inverse game theory under entropy regularization include, for example, Wu et al and Chen et al.
```
These studies focus on inverse game theory for **Stackelberg games** and **quantal Stackelberg equilibrium** (QSE). These methods primarily deal with leader-follower interactions, where one player commits to a strategy first, and the follower responds optimally. The difference between QRE and QSE is crucial:
- QRE models simultaneous decision-making under entropy regularization, where both players make noisy best responses.
- QSE models sequential decision-making, where the leader’s strategy is announced, and the follower best responds.
Our theoretical framework leverages QRE constraint, which fundamentally differs from the QSE setting explored by Wu et al. and Chen et al.
```
The idea of using MLE to estimate the unknown reward functions for such problems is also standard.
```
We are sorry for causing this confusion. To clarify, we do not directly apply MLE for reward estimation. Instead, the key to our identification method is using the linearity assumption to transform the QRE constraints into linear systems such as (2) and (11). We then employ least squares to construct confidence sets. MLE is only used to estimate the QRE from data (e.g., Appendix E), not to estimate rewards directly.
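A minimal sketch of the "linear system plus least squares" step described above, on synthetic data: the matrix `Phi` and vector `y` stand in for the stacked linear constraints like (2) and (11), and the ground-truth parameter is drawn at random for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 3, 20                      # feature dimension, number of linear constraints
theta_true = rng.normal(size=d)   # synthetic ground-truth reward parameter
Phi = rng.normal(size=(k, d))     # stacked feature rows from the QRE constraints
y = Phi @ theta_true              # right-hand side induced by the observed QRE

# Over-determined but consistent system: least squares recovers theta exactly
# when Phi has full column rank (the rank condition in the paper).
theta_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

With noisy, estimated strategies in place of the exact QRE, `theta_hat` is only approximate, which is where the confidence-set construction comes in.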
To summarize, our contribution includes:
- We introduce a **complete framework** that addresses reward identification and estimation in two-player zero-sum games and Markov games. To the best of our knowledge, no existing work addresses these specific problems.
- We establish necessary and sufficient conditions for strong and partial identification under linear parameterization.
- We provide theoretical guarantees for the sample complexity and convergence of the constructed confidence sets.
- Through experiments, we verify the behavioral consistency between the recovered reward functions and the observed QRE.
### (Q2) Zero-Sum Game Restriction
Our framework is currently specific to zero-sum games because of their **minimax structure**, which enables QRE to yield tractable linear constraints on the reward parameter. General-sum games lack this structure and may admit **multiple equilibria**, complicating both identification and estimation.
Though extending our methods to general-sum settings is nontrivial, we view this as an exciting direction for future work and appreciate your suggestion to explore this aspect.
### (Q3) Assumption of Known Regularization Parameter $\eta$
We acknowledge that our method assumes a known entropy regularization parameter $\eta$, which reflects our modeling of the agents. In practice, agents may use different or unknown $\eta$'s. We address this in two parts:
- If players use different $\eta_1$ and $\eta_2$, our linear formulation and results still hold by adjusting the vectors $c(\mu)$ and $d(\nu)$ accordingly.
- If $\eta$ is unknown, one could treat it as a hyperparameter and select it via cross-validation. In estimation, the choice of $\eta$ is not a problem since we can scale the reward functions, so the identifiability and consistency of strategies remain unaffected.
We will discuss this in the revised version as an important direction for future research.
### (Q4) Derivation in the Proof of Theorem 2.3
Thank you for pointing out this issue. We apologize for the confusion caused by the notation. There was a minor typo in the proof where the inequality sign $\leq$ should have been $\lesssim$, which absorbs a constant factor of $2$. The estimate is a consequence of the Cauchy-Schwarz inequality. We appreciate your careful attention to detail, and we will correct this in the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the response! All of my concerns are resolved. I would really appreciate it if the authors could clarify the asymptotic notations, highlight the contribution to the broader literature, acknowledge the limitations of zero-sum games and known $\eta$ assumptions, and improve the experimental results (as suggested by reviewers h55M and PH73) in the revised version. I believe these improvements are easily doable, so I raised my score to 4.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you very much for raising the score. Your thoughtful comments and insightful feedback are helpful in improving the quality of our work. We appreciate your suggestions regarding clarifying the asymptotic notations, highlighting our contribution to the broader literature, acknowledging the limitations of zero-sum games and known assumptions, and improving the experimental results as noted by reviewers h55M and PH73. We will incorporate these improvements in the final version.
Thank you again for your kind support. | Summary: Under the assumption that agents are playing a QRE (a relaxation of Nash equilibrium that includes an entropy regularization term), this paper presents a method for learning the rewards of each agent based on a dataset of interactions in a finite-horizon zero-sum Markov game (or the special case of a single normal form game). In the general case, the transition kernel must be learned in addition to the agent rewards.
The proposed method exploits a linearity assumption: that every action profile can be mapped to a finite-dimensional feature vector whose inner product with a weight vector yields the payoff of the maximizer agent. Based on this assumption and an empirical estimate of the agents' strategies from the observed interaction data, it produces an estimate of the weight vector $\theta$.
## Update after rebuttal
Thanks for your response! In light of the rebuttal my opinion of the paper remains positive.
Claims And Evidence: The theoretical claims appear well supported. I do not find the empirical evaluation convincing.
Methods And Evaluation Criteria: The empirical evaluations are on a single seemingly-arbitrary scenario, do not include checks of statistical significance, and do not appear to check the theoretical claims. Answers to my "questions to authors" will provide insight into how serious these issues are.
Theoretical Claims: No
Experimental Designs Or Analyses: See "Methods"
Supplementary Material: No
Relation To Broader Scientific Literature: The related work is a solid survey of relevant literature. I'm aware of work that aims to estimate payoffs in Stackelberg settings using the assumption of quantal response (some of which is cited).
Essential References Not Discussed: [Chui et al. AAMAS 2023] perform empirical estimation of payoffs under a QRE assumption in the normal form setting. I am not aware of any work which performs similar estimation in the Markov game setting.
Other Strengths And Weaknesses: See "Questions"
Other Comments Or Suggestions: * p.2 "with inner produce": inner product
* p.3: $d \le m + n - 2$ will not be true in general (in general there will be up to $mn$ unique utility pairs). This is actually a pretty restrictive assumption. The traveller's dilemma is a very structured game, and yet it has $3m$ unique utility profiles. (It's not zero-sum though)
* p.4 "Let Assumption 2.1 and the rank condition (4) hold.": Do you mean the rank condition (3)?
* p.7 def.3.6 "between any pair of rewards $r,r' \in \mathcal{R}$": later on you define $\mathcal{R}$ as the set of feasible rewards corresponding to the QRE $\mu,\nu$; this is probably not what you mean here, because in Theorem 3.9 you want to bound distance away from $\mathcal{R}$, which doesn't make sense if the metric is only defined on $\mathcal{R}$.
Questions For Authors: 1. Theorem 3.9 "We assume that the following $d \times d$ matrix is nonsingular": What are the substantive implications of this assumption? One is presumably that no feature is a linear transformation of another feature, is that sufficient, or does more need to be true? What happens if this assumption fails to hold, does your whole method fail? Can a user tell in advance that it fails to hold?
2. How did you choose the parameters of your test environment ($m=n=5$, $H=6$, etc.).
3. How many replications do the figures represent? What are the confidence bounds on your plots? Are the differences between $N=30000,N=100000,N=300000$ significant? (Especially in Figure 1)
4. Beyond the bare fact that more data means lower error, what should I take away from figures 1 and 2? Are the quantitative differences in error in line with the theoretical bounds?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for thoroughly evaluating our work and for your valuable comments. We hope our reply will address your concerns and questions.
### (Q1) Substantive Implications of the Non-singularity Assumption
The non-singularity assumption on the feature covariance matrix $\Psi_h$ ensures that the feature vectors are linearly independent, meaning that no feature can be expressed as a linear combination of the others, so we have sufficient information for parameter recovery. This is a standard coverage assumption in the related literature [1, Corollary 4.2; 2, Assumption 2.3]. One practical way for users to check this assumption is to examine the condition number of the matrix $\Psi_h$. If the condition number is extremely large, it indicates that the matrix is nearly singular. Additionally, principal component analysis (PCA) can help detect high collinearity among features.
If the assumption does not hold, the problem becomes ill-posed, meaning that there exist multiple solutions for the estimated reward function. In such cases, our method may produce arbitrarily large or unstable estimates, as the problem becomes underdetermined. Nevertheless, in practice, we can take proactive measures to avoid rank deficiency:
- Reducing feature dimension to avoid over-parameterization.
- Removing redundant features to ensure numerical stability and identifiability.
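To make this diagnostic concrete, here is a minimal NumPy sketch (hypothetical code; `check_feature_rank` and the condition-number threshold are illustrative choices, not part of the paper):

```python
import numpy as np

def check_feature_rank(features, cond_threshold=1e8):
    """Diagnose near-singularity of the empirical feature covariance.

    features: (num_samples, d) array of feature vectors.
    Returns the condition number of the empirical covariance
    Psi = (1/n) * features^T @ features and whether it is acceptable.
    """
    psi = features.T @ features / features.shape[0]
    cond = np.linalg.cond(psi)
    return cond, cond < cond_threshold

rng = np.random.default_rng(0)

# Well-conditioned case: independent feature directions.
good = rng.standard_normal((1000, 5))
cond_good, _ = check_feature_rank(good)

# Nearly collinear case: the last feature almost duplicates the first,
# so Psi is close to singular and the condition number blows up.
bad = good.copy()
bad[:, -1] = bad[:, 0] + 1e-6 * rng.standard_normal(1000)
cond_bad, _ = check_feature_rank(bad)
```

In the collinear case, removing the redundant feature (or projecting onto the leading principal components) restores a well-conditioned $\Psi_h$.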
### (Q2) Parameter Choice
We chose $m=n=5$, $H=6$ to strike a balance between model complexity and computational feasibility. The setup provides sufficient structure to validate both static and dynamic cases. We will add this explanation in the revised version.
### (Q3) Number of Replications and Confidence Bounds
In the revised version, we repeat experiments 100 times and report 95% confidence intervals. Below we list the results on the QRE error:
|Sample Size (T)|Mean Error and 95% Confidence Interval $(10^{-3})$|
|-|-|
|10,000|$7.55\pm 2.71$|
|20,000|$5.36\pm 2.11$|
|50,000|$3.40\pm 1.86$|
|100,000|$2.27\pm 1.22$|
We will include more results in our revised paper. This will ensure that the results are statistically robust and interpretable.
In Figure 1, we compare reconstructed and assigned rewards. Since the true reward is not uniquely identifiable, different reward functions may induce the same QRE. Thus, even with large $N$, we may observe limited improvement in raw reward error. Our true goal is behavioral consistency, which we assess in Figure 2 via the QRE induced by the recovered reward. This metric better reflects algorithm performance and convergence.
### (Q4) Quantitative Differences and Theoretical Validation:
Thank you for raising this important point. We acknowledge that the empirical results presented in the main text may not fully demonstrate consistency with the theoretical convergence rates.
In fact, we did compare the empirical convergence rate with the theoretical result in the matrix game setting and verified their consistency (both $\mathcal{O}(N^{-1/2})$); this comparison and analysis are detailed in **Appendix E**. While we had only limited Markov game replications before submission, we have since conducted 100-run experiments and observed consistent convergence rates. We will add these additional results, with confidence intervals, to the revised version of the paper and explicitly discuss the consistency between the empirical and theoretical results.
Thank you once again for pointing out this important aspect. We believe that this additional empirical evidence will significantly strengthen the experimental validation of our theoretical findings.
### About Undiscussed Essential References
Thank you for pointing out the recent work. While Chui et al. (2023) also explore preference estimation in games, their focus is fundamentally different from ours. They investigate non-strategic or **imperfectly rational players**, proposing models that better fit human behavior by relaxing equilibrium assumptions like Nash equilibrium and QRE. In contrast, our work assumes strategic agents playing QRE and studies the identification and estimation of reward functions under this equilibrium assumption. Moreover, Chui et al. do not address the identifiability or sample complexity of their estimation procedure, while a key contribution of our work lies in formally characterizing when the reward function is identifiable and how to estimate it reliably from finite data. Finally, their analysis is limited to static games, whereas our framework extends to finite-horizon Markov games.
We will include a discussion of this work in the Related Work section.
### References
[1] Tu, S. and Recht, B. (2017). Least-Squares Temporal Difference Learning for the Linear Quadratic Regulator.
[2] Min, Y., Wang, T., Zhou, D. and Gu, Q. (2022). Variance-aware off-policy evaluation with linear function approximation. | Summary: The paper analyzes the problem of identifying the utility in a zero-sum game from observations of a quantal response equilibrium policy. The authors provide an algorithm and theoretical guarantees on the recovered utilities. Moreover, they extend their analysis to zero-sum Markov games under a linear MDP assumption and provide experimental validation in a tabular setting.
Claims And Evidence: The theoretical results seem sound, and they are presented clearly and convincingly.
Methods And Evaluation Criteria: The paper proposes first estimating the expert policies, followed by estimating a confidence set for the payoff or Q function parameter via least-squares. Additionally, in the linear MDP setting, the transition kernel is estimated, and a confidence set for the reward is recovered via the Bellman equation. This methodology is sound and consistent with approaches from prior work in the single-agent setting.
Theoretical Claims: While I didn't have the time to check all the proofs, the claims made seem reasonable and in line with what we would expect from the single-agent theory.
Experimental Designs Or Analyses: The authors provide an experimental validation in the tabular case. They show that both rewards and the corresponding QRE are close to the expert's reward and QRE. I have the following suggestions:
1) Clarify whether the rank condition (12) is satisfied in the current experimental setup, as only then could we expect reward identifiability.
2) Discuss whether closeness of QREs is theoretically expected. Although your current analysis seems to lack a formal continuity argument, I'd expect that it would hold in the entropy-regularized setting.
3) Repeat experiments and add confidence bars to your plots.
Supplementary Material: I have reviewed the extension to maximum likelihood estimation (MLE) and found it very insightful.
Relation To Broader Scientific Literature: The paper builds on ideas from both single-agent reinforcement learning and inverse game theory. In single-agent IRL, similar identifiability conditions have been established for learning from multiple experts with the same objective but different transition kernels (see references below).
Essential References Not Discussed: While the authors briefly discuss single-agent IRL, they don't discuss the very much related results on identifiability in single-agent IRL. For instance, [1,2,3] discusses identifiability when learning from multiple experts with the same reward but individual transition laws. In particular, [1,2] frame reward identifiability problems as a subspace intersection problem and establish rank conditions similar to (12). Moreover, [3] establishes finite sample guarantees for both the feasible reward set and the suboptimality of $\pi_{\hat r}^*$ under the ground truth reward $r^*$. Given the similarities between your zero-sum formulation and learning from two distinct transition dynamics, providing a more detailed discussion that explicitly connects your findings to existing single-agent IRL results would enhance the theoretical clarity and positioning of your work.
1) Cao, Haoyang, Samuel Cohen, and Lukasz Szpruch. "Identifiability in inverse reinforcement learning." Advances in Neural Information Processing Systems 34 (2021)
2) Rolland, Paul, et al. "Identifiability and generalizability from multiple experts in inverse reinforcement learning." Advances in Neural Information Processing Systems 35 (2022)
3) Schlaginhaufen, Andreas, and Maryam Kamgarpour. "Towards the transferability of rewards recovered via regularized inverse reinforcement learning." Advances in Neural Information Processing Systems 35 (2024)
Other Strengths And Weaknesses: The theoretical results are interesting, sound, and clearly presented. However, the paper would benefit from some more explanations of these theoretical results to help build intuition, e.g. when is the rank condition satisfied, what does $D(\mathcal{R},\hat{\mathcal{R}})<\varepsilon$ imply in practice?
Other Comments Or Suggestions: Minor issues & typos:
- line 27, right column: ", as and multiple ..."
- line 192, right column: "approximating feasible set..."
- line 207, left column: "frequency estimator" not defined explicitly.
- line 330 & 343, left column: "estimating the transition kernel" is mentioned twice.
Questions For Authors: 1. Can you give any theoretical insights on when the rank condition (12) is satisfied? Say in a tabular setting.
2. Can you say something about the QREs corresponding to your recovered reward?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your thoughtful and constructive feedback! We greatly appreciate your positive assessment of the theoretical soundness and methodological consistency of our work. Below, we address your specific concerns and suggestions.
### Relation to Single-Agent IRL Identification
We appreciate the suggestion to connect our results with identifiability in single-agent IRL. In particular, works such as Cao et al. [1], Rolland et al. [2], and Schlaginhaufen & Kamgarpour [3] explore how observing behavior under varying transition dynamics or discount factors can lead to identifiability, typically through rank or subspace conditions. These insights align with our own setting, where we identify necessary and sufficient rank conditions on linear systems derived from QRE constraints in zero-sum Markov games.
Unlike the single-agent case, we address multi-agent, strategic interactions where agents follow a quantal response equilibrium. Nevertheless, the underlying intuition is similar: entropy regularization and structured assumptions (e.g., linear parameterization) enable us to derive strong identifiability and sample-efficient estimation. We will incorporate this discussion in the Related Work section to better position our contributions.
### Experimental Design
We have now conducted additional experiments in the Markov game setting with 100 replications per sample size and computed 95% confidence intervals. These results align closely with our theoretical rates. Below are some results on the QRE error $\mathrm{TV}(\widehat{\mu},\mu)+\mathrm{TV}(\widehat{\nu},\nu)$:
|Sample Size (T)|Mean Error and 95% Confidence Interval $(10^{-3})$|
|-|-|
|10,000|$7.55\pm 2.71$|
|20,000|$5.36\pm 2.11$|
|50,000|$3.40\pm 1.86$|
|100,000|$2.27\pm 1.22$|
We will update our paper to include these results and clearly explain the statistical significance of the results.
### (Q1) Theoretical Insights on When the Rank Condition (12) Is Satisfied
The rank condition (12) ensures the linear system (11) has a unique solution, which is the strongly identifiable case. Here the matrices $A_h(s,\nu_h^*)$ and $B_h(s,\mu_h^*)$ are derived from differences between feature vectors weighted by the corresponding equilibrium strategies, which means that distinct action pairs should lead to linearly independent differences. The rank condition also implicitly requires that the QRE policies $(\mu_h^*,\nu_h^*)$ do not degenerate and effectively capture the differences between action pairs.
In the tabular case, where $\phi$ is the canonical map and $d=Smn$, this condition does not hold in general, since the system (11) has only $S(m+n-2)$ equations. Nevertheless, our theory for the partially identifiable case ensures that we can still learn an acceptable estimate for the reward function even when the rank condition is not satisfied.
In practice, we recommend using low-dimensional, non-redundant feature maps and applying tools like PCA to remove collinearity, which helps maintain full rank and improves sample efficiency.
### (Q2) QREs Corresponding to the Recovered Reward
Theoretically, our algorithm aims to recover a confidence set of feasible reward functions rather than a single one, due to the partial identifiability inherent in inverse game learning. In practice, our focus is on behavioral consistency rather than exact reward matching.
In the entropy-regularized setting, small variations in rewards result in proportionally small variations in QRE, according to the Lipschitzness of softmax. Consequently, when the error between the recovered reward and the true reward is small, the QRE induced by the recovered reward function is guaranteed to be close to the true QRE.
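As a small numerical illustration of this Lipschitz argument (a hypothetical NumPy sketch; the payoff matrix, $\eta$, and perturbation scale are arbitrary choices), a perturbation $\delta$ of the payoff shifts the quantal response to a fixed opponent strategy $\nu$ by at most $(\eta/2)\,\lVert\delta\nu\rVert_2$, since softmax is $(1/2)$-Lipschitz in $\ell_2$:

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

rng = np.random.default_rng(0)
eta = 1.0
Q = rng.standard_normal((5, 5))        # payoff matrix
nu = np.ones(5) / 5                    # fixed opponent strategy

delta = 1e-3 * rng.standard_normal((5, 5))   # small reward perturbation

mu = softmax(eta * Q @ nu)                   # quantal response under Q
mu_pert = softmax(eta * (Q + delta) @ nu)    # quantal response under Q + delta

# Lipschitz bound: ||mu_pert - mu||_2 <= (eta / 2) * ||delta @ nu||_2,
# so a small reward change yields a proportionally small policy change.
shift = np.linalg.norm(mu_pert - mu)
bound = 0.5 * eta * np.linalg.norm(delta @ nu)
```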
Our experimental results consistently show that even when the recovered reward function differs from the assigned one (due to partial identifiability), the QRE derived from the recovered reward still closely matches the observed QRE. This indicates that **the estimated reward is close to another feasible reward**.
### Other Strengths And Weaknesses
Thank you for your positive feedback on the theoretical results and for suggesting ways to build more intuition. Below, we address the specific point you raised.
The distance measure quantifies the Hausdorff distance between the estimated feasible set $\widehat{\mathcal{R}}$ and the true feasible set $\mathcal{R}$. In practice, when it is sufficiently small, we know that
- The estimated reward set $\widehat{\mathcal{R}}$ almost captures all feasible rewards $r\in\mathcal{R}$;
- For any estimated reward $\widehat{r}$, there exists a feasible reward $r$ close to it, so the confidence set $\widehat{\mathcal{R}}$ is not too conservative.
If the rank condition (12) holds, our confidence set $\widehat{\mathcal{R}}$ converges to a single point, which is the uniquely feasible reward $r$. Indeed, even if the rank condition does not hold, our algorithm still efficiently recovers all feasible reward functions. | Summary: The submission considers inverse reinforcement learning in games. Specifically, the authors first study the conditions for the problem to identifiable, and the propose a methodology to estimate the reward function. Both theoretical analysis and empirical results are provided to justify the proposed method.
Claims And Evidence: The claim that the proposed condition is necessary is not well discussed (Proposition 2.2). First of all, Proposition 2.2 does not explicitly claim the condition to be necessary, which itself contradicts the previous claims and the subtitle of the proposition. Second, to my understanding, the proposed condition is necessary only under certain assumptions, such as the model setup. Specifically, there are similar works (IRL, though not in the game setting) that do not assume a linear condition, e.g. [1,2,3].
The discussions regarding why the condition is necessary and on the comparisons with the mentioned works are missing.
[1] Reward Identification in Inverse Reinforcement Learning
[2] Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning
[3] Deep PQR: Solving Inverse Reinforcement Learning using Anchor Actions
Methods And Evaluation Criteria: I am not familiar enough with the direction of IRL for games to suggest any competing methods in this line. However, the submission does not consider any competing methods in the experiment section. As a result, the advantage of the proposed method is not well justified.
Theoretical Claims: I did not check the proof of the results. I tried to find the proof for proposition 2.2 especially on why the proposed condition is necessary, but did not find the proof.
In general the proof makes sense to me: by posing the linear assumption, the reward identification is indeed an MLE problem, whose theoretical guarantees are well established even given not perfectly IID data.
Experimental Designs Or Analyses: I did not pay very much attention to this. The design makes sense to me but lacks competing methods.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: The submission contributes to the area of inverse reinforcement learning for games. For such problems, a key challenge is identifiablity. The submission uses a linear-like condition to solve this issue.
Essential References Not Discussed: Already mentioned above.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: Already mentioned above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your valuable feedback and for recognizing the relevance and significance of our contributions. Below, we address your concerns regarding the necessity of the rank condition (Proposition 2.2), comparisons with related works, and competing methods in experiments.
#### Necessity of the Rank Condition (Proposition 2.2)
We apologize for our unclear statement of the proposition. The submitted version of the paper may not clarify the necessary conditions and their assumptions. In fact, the reward parameter is uniquely solvable **if and only if** the rank condition holds, and the logic, derived in the preceding content, is as follows:
- $(\mu^*,\nu^*)$ is the quantal response equilibrium (QRE) corresponding to $Q$ if and only if the following QRE constraint is satisfied:
$$
\mu^*(a)= \frac{e^{\eta Q(a,\cdot)\nu^*}}{\sum_{a'\in\mathcal{A}}e^{\eta Q(a',\cdot)\nu^*}}\ \text{for all}\ a\in\mathcal{A},\quad
\nu^*(b)= \frac{e^{-\eta Q(\cdot,b)^\top\mu^*}}{\sum_{b'\in\mathcal{B}}e^{-\eta Q(\cdot,b')^\top\mu^*}}\ \text{for all}\ b\in\mathcal{B}.
$$
- Under the linear assumption, the above non-linear system is equivalent to the linear system (2).
- According to our model assumption, this linear system has at least one solution $\theta=\theta^*$. The solution is unique if and only if the rank condition (3) is satisfied.
We will correct our statement in a later version. We hope this clarification resolves your confusion.
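To make the QRE constraint above concrete, its fixed point can be computed by direct iteration. The following is a minimal sketch (hypothetical NumPy code, not from the paper; `solve_qre`, the random payoff matrix, and $\eta = 0.2$ are illustrative, and plain iteration is only guaranteed to converge when $\eta\lVert Q\rVert$ is small enough for the map to be a contraction):

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def solve_qre(Q, eta, iters=500):
    """Fixed-point iteration for the QRE of a zero-sum matrix game.

    Q: (m, n) payoff matrix for the max player; eta: inverse temperature.
    Iterates the quantal-response map until (mu, nu) satisfy
      mu ∝ exp(eta * Q @ nu),  nu ∝ exp(-eta * Q.T @ mu).
    """
    m, n = Q.shape
    mu, nu = np.ones(m) / m, np.ones(n) / n
    for _ in range(iters):
        mu, nu = softmax(eta * Q @ nu), softmax(-eta * Q.T @ mu)
    return mu, nu

rng = np.random.default_rng(1)
Q = rng.standard_normal((5, 5))
eta = 0.2
mu, nu = solve_qre(Q, eta)

# Check the QRE constraint: each strategy is the quantal response to the other.
residual = max(np.abs(mu - softmax(eta * Q @ nu)).max(),
               np.abs(nu - softmax(-eta * Q.T @ mu)).max())
```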
#### Comparisons with Prior Works
We understand your concern regarding related works that do not impose a linear condition. The works [1,2,3] you listed explore nonlinear settings and adopt different **model assumptions**. In particular:
[1] (Reward Identification in Inverse Reinforcement Learning) studies the problem of reward identifiability in IRL from a **graph** perspective, focusing on single-agent MDPs. It primarily addresses identifiability conditions **without discussing the problem of reward estimation in identifiable cases.**
[2] (Reward-Consistent Dynamics Models for Offline Reinforcement Learning) focuses on **model-based offline RL**, where the goal is to learn a dynamics model that generalizes well to unseen transitions. It is applied to single-agent settings, and **the problem of reward identification is not addressed.**
[3] (Deep PQR: Solving Inverse Reinforcement Learning using Anchor Actions) works on inverse reinforcement learning (IRL) with **deep energy-based policies**. The key innovation is the introduction of an anchor action, which is a known, zero-reward action (e.g., doing nothing), to facilitate reward identification. Compared with our linear assumption, this method is based on an Anchor-Action Assumption (See Assumption 1), which helps to uniquely determine the reward function.
These works differ significantly from our approach and share limited similarities with our problem setting. In our work, the linearity assumption is crucial to transform the non-linear QRE constraint into a tractable linear problem. This enables us to derive rigorous theoretical guarantees on both identification and estimation of reward functions. Moreover, we provide a thorough theoretical analysis including convergence guarantees and confidence set construction, which is not covered in above works. We will include a more detailed discussion of these related works in the final version. Thank you for pointing out these references and helping us improve the clarity of our presentation.
#### Lack of Competing Methods
We would like to clarify that our paper introduces a **totally novel framework** for inverse game theory, covering both identification and estimation of reward functions in competitive game settings.
To the best of our knowledge, **there are no existing methods specifically designed for inverse reinforcement learning (IRL) in two-player zero-sum Markov games that are directly comparable to our approach**. Most prior works in IRL focus on single-agent settings or non-competitive scenarios. Furthermore, the entropy-regularized QRE setting that we address is particularly unique, as it requires handling the complexity of equilibrium strategies and partial identifiability.
While there are some works that address inverse RL with deep policies or anchor actions (e.g., [3]) or model-based RL for offline settings (e.g., [2]), these methods do not address the competitive, game-theoretic setting we focus on. Our approach leverages linear parametrization and QRE modeling, which makes it fundamentally different from existing IRL techniques that do not incorporate game-theoretic interactions or entropy regularization.
We would greatly appreciate it if the reviewer could suggest any closely related methods that might have been overlooked or not covered in our paper. We are committed to enhancing our work by incorporating relevant comparisons, if available, and we value your input in guiding us toward potential improvements. | null | null | null | null | null | null |
The Butterfly Effect: Neural Network Training Trajectories Are Highly Sensitive to Initial Conditions | Accept (poster) | Summary: The submission investigates the sensitivity of neural network training outcome to initial conditions. Building upon related work in optimization and training dynamics, it investigates the conditions for stability and identifies a chaotic early stage in which small perturbations cause trajectory divergence. Experimental evaluations demonstrate that effect in vision models, but find different effects in language models.
### Summary after Rebuttal
In the rebuttal, the authors have added behavioral comparisons and removed claims with weak support, which I believe improve the submission overall. I encourage the authors to continue improving the experimental support and will keep my score as is for now.
Claims And Evidence: Before I go into this, I generally like the paper (see strengths and weaknesses below). However, I find the claims made far too general for the experimental support.
Experiments are performed mostly on relatively small vision models with relatively simple tasks. I understand the need for tractable experimental setups, but the evaluation would be strengthened significantly if it went beyond ResNet20/50 and CIFAR10/CIFAR100. This particular combination has shown specific effects in previous work, which unfortunately do not always generalize. I am therefore somewhat skeptical about the degree to which the results generalize and would encourage the authors to systematically assess different architectures, sizes, and tasks. As is, the insights rest on relatively narrow experimental support. For example, the claim that going from a hard to a simple task improves stability is evaluated only on CIFAR100 and CIFAR10, which differ in far more ways than 'hard' vs. 'simple'. A claim that general requires more experimental evaluation.
The same goes for the comparison between language and vision models. Conventional vision models (like ResNet20s on CIFAR) and even relatively small and outdated BERT models operate in very different regimes (scale, parameter-to-data ratio, training task, etc.). The conclusion that language and vision models behave differently is not supported by a single skewed experiment like this. If anything, there may be insights into the different behavior of different regimes. To compare different domains, one would have to compare models in the same regime while varying the domain, e.g. large ViTs pre-trained with data augmentation against recent LLMs, taking the respective scaling laws into account. In fact, the authors admit this in Section 5, line 434.
Methods And Evaluation Criteria: Generally yes (modulo the narrow dataset+architecture combinations discussed before). The framework is very well motivated and introduced. The only question I had is why the authors chose to use only loss barriers, $L^2$ distance, or permutation distance. Why not evaluate behavioral similarity (CKA, SVCCA, agreement, etc.)? As the authors note, the same/different basins may be relevant for merging/ensembling models, for which it is essential to know whether or not models actually learn different behavior. I encourage the authors to include that in their framework as well, or to explain why they chose not to include it.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental design is well done, except for the issues discussed above.
Supplementary Material: Yes, Appendix A-E
Relation To Broader Scientific Literature: The authors discuss the relation to previous work well throughout the paper, but explicitly in section 2.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: ## Strengths
- The paper is very well written, has a tight narrative and well motivated introduction.
- Relevant problem, understanding the sensitivity is fundamentally important, but also highly relevant for understanding the genesis of distinct phases, model merging, ensembling, pruning and also adversarial vulnerability of models.
- Well-written related work section that clarifies the submission's relation to existing work and identifies the contribution.
- Extending Frankle et al. parent-child spawning experiments, the submission presents a more general framework to study the impact of some perturbation on a training trajectory.
- There are some very interesting results, particularly the attribution of stability to the weights (rather than the direction), and the finding that naturally occurring permutations are unlikely to explain trajectory divergence.
## Weaknesses
My main issues with the paper are already discussed above: the generality of claims that are not well supported experimentally, and the omission of behavioral similarity metrics. Beyond that:
- There are several alignment methods that improve over git re-basin, e.g. Sinkhorn based [1] or learned [2].
- Figure 2 and 3 titles and captions could be improved. The left and middle panels have the same title. Why are Gaussian perturbation barriers evaluated without accounting for permutations? Figure 3: same as which panel of Figure 2?
[1] Pena et al., Re-basin via implicit Sinkhorn differentiation, CVPR 2023.
[2] Navon et al., Equivariant Deep Weight Space Alignment, ICML 2024
Other Comments Or Suggestions: Typo in the paragraph heading in Section 5: "with with"
Questions For Authors: I would be curious to learn why the authors chose their experimental setup as they have. Are these mostly computational constraints, or based on previous work?
Also, the submission motivates the work via two streams: optimization and work on early training phases. To me, that duality is lost in the course of the submission, and it is more of a generalization of existing work in the latter. I am curious what the authors see as the optimization perspective on their work.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your substantive review, to which we have replied below.
## F. Comparing Dissimilar Tasks and Scales
> Experiments are performed mostly on relatively small vision models with relatively simple tasks. [...] To compare different domains, one would have to compare models in the same regime varying the domain,
We agree with this assessment of our experimental scale, and analogous to figures 4-6, have added an experiment that fine-tunes ViTs from google/vit-base-patch16-224 on CIFAR-100: [https://imgur.com/a/AculLLH](https://imgur.com/a/AculLLH).
We also note that the claims below (sections 4.3 and 5) are negative statements, for which a counterexample is sufficient:
1. More pre-training does not always improve fine-tuning stability (figures 4-6).
2. Barriers do not generally scale with $L^2$ divergence (figures 7-8).
3. Barriers and $L^2$ do not diverge exponentially proportional to perturbation scale (figure 9).
In these cases, we do not strictly need to isolate the effects of task vs architecture or scaling.
## G. Fine-tuning Task Difficulty
> the claim that going from hard to simple improves stability is evaluated only on CIFAR100 and CIFAR10, which are very entangled beyond ‘hard’ and ‘simple’.
We agree that the claim “Pre-training on harder tasks improves stability” is weak. We attempted to make this claim more precise by relating stability to fine-tuning performance, but preliminary evidence in this direction is inconclusive. We have therefore replaced this claim with our new findings on CKA and ensembling (below).
## A., B. Functional Similarity And Model Diversity
> Why not evaluate behavioral similarity (CKA, SVCCA, agreement, etc)? As the authors note, the same/different basins may be relevant for merging / ensembling models, for which it is essential to know whether or not models actually learn different behavior.
Reviewers *sufv* and *tDxT* have also raised this excellent point. Based on your feedback, we have run additional experiments that find:
1. Training instability (as evidenced by larger barriers) indeed correlates with functional dissimilarity (Angular CKA).
2. Perturbations to training can be used to improve ensemble performance, with more functional dissimilarity leading to better ensembles. This shows that instability increases model diversity.
Please see our full rebuttal to Reviewer *sufv* for details and figures.
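For context on the measurement itself: our experiments use Angular CKA, but the core centered-similarity computation can be sketched with plain linear CKA (hypothetical NumPy code, not our experiment pipeline; the activation matrices are random stand-ins):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activation matrices X: (n, p) and Y: (n, q)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, 'fro') ** 2
    return cross / (np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro'))

rng = np.random.default_rng(0)
acts = rng.standard_normal((500, 32))        # activations of one network

# CKA is invariant to orthogonal transforms and isotropic scaling ...
rot, _ = np.linalg.qr(rng.standard_normal((32, 32)))
sim_same = linear_cka(acts, 2.0 * acts @ rot)

# ... but is low for genuinely different representations.
sim_diff = linear_cka(acts, rng.standard_normal((500, 32)))
```

In our experiments, the analogous quantity is computed between activations of perturbed child networks; lower similarity tracks higher barriers.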
Finally, to understand why we chose to focus on barriers as a measure of functional dissimilarity, please also see our rebuttal to Reviewer *HaCY* under **Non-Linear Connectivity**.
## Other Questions
> There are several alignment methods that improve over git re-basin, e.g. Sinkhorn based [1] or learned [2].
We note that weight matching (WM) is already able to greatly reduce barriers between *independently trained* networks for our standard ResNet20-32 architecture [Ainsworth et al. 2023], whereas WM has little effect on the *identically spawned* networks of our experiments. The ineffectiveness of WM in our case suggests that other methods may not fare much better. Practically speaking, Peña et al. [2023] and Navon et al. [2024] are also quite costly (scaling poorly to larger models), as they must learn their permutations.
> Figure 2 and 3 titles and caption could be improved. Left and middle figure have the same title. Why are Gaussian permutation barriers evaluated without accounting for permutations? Figure 3 - same as which of Figures 2?
Thank you for the detailed corrections. We will revise the titles/captions, and add figures for permutation-aligned barriers for Gaussian perturbations into the appendix. Note that just like in batch perturbations, permutation does not reduce Gaussian-perturbed barriers.
> I would be curious to learn why the authors chose their experimental setup as they have. Are these mostly computational constraints, or based on previous work?
Indeed, our experimental setup is motivated by that of previous works [Frankle et al. 2020, Vlaar & Frankle 2022, Juneja et al. 2023, Ainsworth et al. 2023]. Having said that, evaluating stability for training (as opposed to fine-tuning) would require full runs which is costly when considering many replicates, perturbation times/magnitudes, and hyperparameter settings.
> Also, the submission motivates the work via two streams: optimization and work on early training phases. To me, that duality is lost in the course of the submission, and it is more of a generalization of existing work in the latter. I am curious what the authors see as the optimization perspective on their work.
We apologize for this confusion. Our intent was to disambiguate between our notion of instability (divergence of two trajectories that each independently converge to a solution) and that of work such as Wu et al. [2018] (tendency of a trajectory to fail to converge to certain solutions). We will de-emphasize the optimization perspective in the introduction to reflect its relevance to our work.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their rebuttal. While I appreciate computational burdens, I believe that further experimental evaluation beyond CIFAR would help greatly to assess how general or specific the findings are. I appreciate the author's reaction regarding fine-tuning task-difficulty. The addition of similarity scores, angular CCA or CKA, are a good step towards better understanding, and I encourage the authors to continue in this direction.
In light of the experimental evaluation and the other reviews, I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback. We fully agree that evaluating our findings across more diverse settings and larger models would strengthen their impact. However, given the computational scale needed to pre-train contemporary large models, we rely on publicly available intermediate checkpoints such as Multi-BERT to perform our large-scale fine-tuning experiments.
To address your concerns while working within this constraint, we have expanded our experimental setting using AllenAI’s OLMo-1B large language model [1,2], which provides intermediate checkpoints throughout its ~740K step (3 trillion tokens) pretraining. We fine-tuned intermediate checkpoints of this model on GSM8K [3] for approximately 5,000 steps with a peak learning rate of 2e-5 and 10% warm-up. GSM8K contains grade school math problems and is used as a benchmark for many contemporary language models. We conducted our butterfly experiments with batch perturbations for fine-tuning stability at three pretraining checkpoints:
1. First available checkpoint (4B tokens)
2. Mid-way through pretraining (1.5T tokens)
3. Final checkpoint (3T tokens)
Our results are plotted here: [https://imgur.com/a/KQ4kz4D](https://imgur.com/a/KQ4kz4D).
This experiment strongly corroborates our Multi-BERT findings (Fig. 6 in the paper): namely, we observe that pre-training for longer actually reduces stability to fine-tuning. Moreover, our insights regarding perturbation time and scale, whereby earlier and larger perturbations result in higher barriers, remain unchanged in the larger OLMo setting.
Our findings highlight an important consideration for the field. Typically fine-tuning is performed only on the latest checkpoints, which are in turn used for model merging or MoErging (mixture-of-experts method) [4]. Our results imply that depending on the fine-tuning task, earlier checkpoints may be better suited for transfer learning and model merging, and that the optimal pre-training duration could be related to the stability of a given checkpoint with respect to the fine-tuning task. Our work also highlights the value of making intermediate checkpoints more widely available for research and practical purposes.
Regarding similarity scores, we greatly appreciated your feedback and we will adjust our paper accordingly. While computational and time constraints prevent us from providing comprehensive CKA analyses for transformer models in time to add to this comment, we will include these metrics for select subsets of our fine-tuning experiments in the final version of our paper.
[1] https://huggingface.co/allenai/OLMo-1B-hf
[2] Groeneveld D., et al. OLMo: Accelerating the Science of Language Models. 2024.
[3] Cobbe, K., et al. Training Verifiers to Solve Math Word Problems, 2021.
[4] Yadav P., et al. A Survey on Model MoErging: Recycling and Routing Among Specialized Experts for Collaborative Learning, 2024. | Summary: In this work, the authors study how the sensitivity of training trajectories in neural network training changes with the distance to the initialization. The authors characterize the sensitivity through the divergence of training trajectories, which is measured in $L_2$ distance, loss barriers, defined as $\sup_{\alpha \in (0,1)} \ell(\alpha \theta_T + (1-\alpha) \theta'_T) - \alpha \ell(\theta_T) - (1-\alpha) \ell(\theta'_T)$, and barriers modulo permutation.
Different from previous work, the authors study training stability by removing the noise and studying deterministic training with controlled perturbations. The authors also compare the stability of the loss landscape induced by different choices of optimizers and hyperparameters.
Their main findings include:
- a small perturbation at a single iteration, which is smaller than training noise, is sufficient to cause the divergence of two otherwise identically trained networks.
- stability during the early phase of training can be improved by using wider or shallower networks, increasing the batch size, and applying learning rate warm-up and weight decay.
- pre-trained networks are multiple orders of magnitude more stable than randomly initialized networks.
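For concreteness, the linear-interpolation barrier defined in the summary above can be evaluated numerically on a grid of $\alpha$ values. Below is a minimal NumPy sketch on a toy one-dimensional double-well loss; this is purely illustrative and not the networks or code studied in the paper:

```python
import numpy as np

# Toy 1-D double-well loss with minima at theta = +1 and theta = -1;
# purely illustrative, not the models studied in the paper.
def loss(theta):
    return (theta ** 2 - 1.0) ** 2

def linear_barrier(theta_a, theta_b, n=101):
    """sup over alpha of loss(alpha*a + (1-alpha)*b) minus the linear
    interpolation alpha*loss(a) + (1-alpha)*loss(b), on an alpha grid."""
    alphas = np.linspace(0.0, 1.0, n)
    return max(
        loss(a * theta_a + (1 - a) * theta_b)
        - (a * loss(theta_a) + (1 - a) * loss(theta_b))
        for a in alphas
    )

theta_a, theta_b = 1.0, -1.0
print(linear_barrier(theta_a, theta_b))  # 1.0, attained at alpha = 0.5
```

In the paper's setting `theta_a`, `theta_b` would be the flattened weight vectors of the two trained networks and `loss` the empirical training or test loss.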
Claims And Evidence: Given the metrics that the authors have chosen to measure divergence and stability of training runs, I think the claims of the authors are well supported by the evidence provided, for instance in Figure 2 (early perturbation has a much larger effect than later perturbation), Figure 3 (modifying training hyperparameters can reduce the butterfly effect) or Figure 4 (pre-trained networks are much more stable than randomly-initialized networks).
Methods And Evaluation Criteria: I think that the evaluation criteria are reasonable overall; by considering both the loss barrier with and without permutation, the authors try to exclude the possibility that training instability merely causes permutations between networks (although it cannot be excluded, as the authors state, that they simply failed to find barrier-reducing permutations).
Also, the $L_2$ distance is an intuitive metric to quantify the divergence between two runs. In fact, it seems even more natural to me to use this metric (or after applying $L_2$-minimizing permutations). What is the reason for the authors not to show these figures and can they provide them during the rebuttal period?
Given the fact that there is some work, e.g. Draxler et al. [2018] or Garipov et al. [2018], which showed that the optima in neural networks from different initializations can be connected through (simple) non-linear curves, I wonder how much it makes sense to look only at the barriers along the linear path between the weights. It would be interesting to understand whether one can observe a similar trend if one computes the barrier along a Bezier curve. See for instance Fig. 1 in Garipov et al. [2018].
Theoretical Claims: This paper does not contain any proofs. The derivation of the linearized approximation for $L_2$ divergence in Appendix C looks correct to me.
Experimental Designs Or Analyses: I checked the training details described in Appendix A and experiment details in Appendix B, which I believe are fine. Also, the computation of the loss barriers seems reasonable to me. However, I am not familiar with the matching algorithm from Ainsworth et al. [2023] used to find the permutation of neurons $P$ that approximately minimizes the $L_2$ distance between two networks' weights.
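For intuition, the single-layer core of such weight matching reduces to a linear assignment (Hungarian) problem over neuron pairings. Below is a minimal sketch assuming SciPy's solver; it is illustrative only and not the implementation of Ainsworth et al. [2023], whose full algorithm alternates this step over all layers:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_neurons(W_a, W_b):
    """One-layer sketch of weight matching: find the row permutation of
    W_b minimizing ||W_a - W_b[perm]||^2. Since permutations preserve
    row norms, this is equivalent to maximizing summed inner products,
    i.e., a linear assignment problem."""
    cost = -W_a @ W_b.T  # cost[i, j] = -<row i of W_a, row j of W_b>
    _, perm = linear_sum_assignment(cost)
    return perm

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))
true_perm = rng.permutation(8)
perm = match_neurons(W, W[true_perm])
print(np.array_equal(W[true_perm][perm], W))  # True: the shuffle is undone
```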
Supplementary Material: I have reviewed Appendices A, B, C and D, but didn't review Appendix E.
Relation To Broader Scientific Literature: This paper is strongly related to work on linear mode connectivity, which has been studied extensively, for instance in Frankle et al. [2020a;b], Singh et al. [2020]. Bachmann et al. [2023] study different factors which lead to linear mode connectivity (or lack thereof), including the network architecture, training setup and the dataset, while Entezari et al. [2022] pose the conjecture that by taking into account permutation invariance, solutions trained with SGD will likely be in the same basin.
This paper follows the experimental setup of Frankle et al. [2020a], but eliminates the effect of training noise by fixing it across runs and isolates the effect of perturbations.
Essential References Not Discussed: There is some related work on mode connectivity which has not been discussed in this work, including Draxler et al. [2018] on "Essentially No Barriers in Neural Network Energy Landscape" and Garipov et al. [2018] on "Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs".
Both work show that minima reached from different initializations can be connected through simple, non-linear curves. Can the authors discuss how this paper relates to these works?
Other Strengths And Weaknesses: + I think the main strength and novelty of this work is isolating the effect of noise in training and comparing (deterministic) training runs with controlled injected perturbations. This highlights the sensitivity of the network to slight variations.
- I think the main weakness of this paper is that it is still unclear to me in what way the diverged solutions are actually "different" from each other, since they achieve comparable training and test accuracy. For instance, it could be interesting to understand whether all perturbations still converge to the same manifold, and how the solutions resulting from perturbations at different times and of different magnitudes are distributed relative to each other (in terms of $L_2$ distance). The authors also mentioned that it is desired in ensembling methods to combine diverse solutions, but I am not sure what insights this work provides in this regard. I think my main concern with this work is that I am not sure what a reasonable way is to connect divergence to diversity of solutions.
Can the authors perhaps comment on this?
Other Comments Or Suggestions: It would be helpful to show Figures 2-6 and Figure 9 with the y-axis in log-scale to distinguish the lines even better.
Questions For Authors: I was first surprised that there is one point for $t=0.0 \%$ in Figure 2 where the barrier is actually higher after permutation than without. Upon closer look, I realized that the permutation is minimizing the $L_2$ distance between two networks.
Can the authors therefore provide figures similar to the ones in Figure 2 or Figure 3, where the barrier is measured in terms of $L_2$ divergence? It would be particularly interesting to me to see how much the permutation of neurons $P$ can reduce the $L_2$ distance between two networks.
What are the conclusions on network training that you draw from the results of this paper?
Although I gave a rather low score, I am open to adjust my score if the authors address my comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the detailed review, which we address by topic below.
## E. $L^2$ Divergence
> Also, the $L^2$ distance is an intuitive metric to quantify the divergence between two runs. [...] Can the authors therefore provide figures similar to the ones in Figure 2 or Figure 3, where the barrier is measured in terms of $L^2$ divergence? It would be particularly interesting to me to see how much the permutation of neurons $P$ can reduce the $L^2$ distance between two networks.
Apologies for the omission. We have plotted $L^2$ divergence in place of barriers here and will add them to the appendix for all figures: [https://imgur.com/a/BKMQHQk](https://imgur.com/a/BKMQHQk).
We do not prioritize $L^2$ divergence as it is not as indicative of functional similarity as barriers. $L^2$ is sensitive to optimization choices (e.g. weight decay), and network scale symmetries. Indeed, we observe that:
- The relationship between $L^2$ and barriers differs depending on hyperparameter settings for our vision experiments (section 5, figure 7 colors).
- $L^2$ and barriers are unrelated for our language experiments (section 5, figure 8).
Regarding weight matching (WM), Ito et al. [2024] finds:
> permutations found by WM do not significantly reduce the distance between the two models [...] WM satisfies LMC by aligning the directions of singular vectors with large singular values in each layer’s weights.
This highlights another lack of correspondence between $L^2$ and barriers.
- Ito, A., et al. (2024). Analysis of linear mode connectivity via permutation-based weight matching
## C. Non-Linear Connectivity
> Given the fact that there is some work, e.g. Draxler et al. [2018] or Garipov et al. [2018], which showed that the optima in neural networks from different initializations can be connected through (simple) non-linear curves, I wonder how much it makes sense to look only at the barriers along the linear path between the weights. [..] Both work show that minima reached from different initializations can be connected through simple, non-linear curves. Can the authors discuss how this paper relates to these works?
This is an important point also raised by Reviewer *HaCY*. Please see our rebuttal to Reviewer *HaCY* under **C. Non-Linear Connectivity**, where we address this issue.
## A., B. Functional Similarity And Model Diversity
> I think the main weakness of this paper is that it is still unclear to me in what way the diverged solutions are actually "different" from each other, [...] For instance, it could be interesting to understand whether all perturbations still converge to the same manifold, and how the solutions resulting from perturbations at different times and magnitudes are distributed relative to each other (in terms of $L_2$ distance).
> The authors also mentioned that it is desired in ensembling methods to combine diverse solutions, [...] I think my main concern with this work is that I am not sure what a reasonable way is to connect divergence to diversity of solutions. Can the authors perhaps comment on this?
These are important observations echoed by Reviewers *sufv* and *mmJd*. Based on your feedback, we have conducted additional experiments using CKA and ensembling to measure model diversity. Please see our full rebuttal to Reviewer *sufv* for details.
Regarding the first quoted passage, could you clarify what you mean by “converge to the same manifold”? This may be related to the section **C. Non-Linear Connectivity**.
## Other Questions
> It would be helpful to show Figures 2-6 and Figure 9 with the y-axis in log-scale to distinguish the lines even better.
We have updated these figures accordingly and will add them to the text:
- Figures 2 and 3 [https://imgur.com/a/fig2-3-jND4s6E](https://imgur.com/a/fig2-3-jND4s6E).
- Figures 4, 5, and 6 [https://imgur.com/a/hsJyGhZ](https://imgur.com/a/hsJyGhZ). We provide tables in place of Figure 4 as barriers are 0 for $\sigma<0.01$.
- Figure 9 [https://imgur.com/a/jUXmcZQ](https://imgur.com/a/jUXmcZQ).
> What are the conclusions on network training that you draw from the results of this paper?
Our main conclusion is that neural network training is unstable near initialization even without randomness, meaning that interventions to reduce SGD noise (larger batch size, reduced learning rates, longer warmup periods) cannot eliminate this instability. Similar work has found that the trainability of neural networks can be unstable to small hyperparameter changes [Sohl-Dickstein 2024].
Conversely, pre-trained networks can still be fine-tuned into different modes (and thus, diverse solutions) with sufficiently large perturbations. Our findings suggest ways to manage model diversity among multiple training runs, such as by adjusting pre-training length, changing SGD noise magnitude via hyperparameters, or by directly perturbing networks.
We will add this conclusion into the text.
- Sohl-Dickstein, J. (2024). The boundary of neural network trainability is fractal. | Summary: This paper studies the impact of perturbation on SGD in an empirical way. The authors focus on the condition under which the perturbation during training can lead to convergence to another basin. The authors prepared perturbation in different directions of various scales that is applied to training at different moments. The same experiment is performed for both random initialization and transfer learning. The notable findings are:
* Stability increases as the training goes on;
* Stability can be improved by hyperparameter tuning;
* pre-trained models are more stable;
* Long pre-training reduces stability when fine-tuning language models.
Claims And Evidence: All of the claims are supported by numerical evidences. The authors provide statistically significant results in controlled settings and the experiment themselves are solid. However, the interpretation of the results may be less so. I think characterizing the basin by a linearly connected low-loss region in the loss landscape is a little problematic. (I will leave the details in theoretical claims).
Methods And Evaluation Criteria: The authors use image classification and language model training problems to study the stability. These are reasonable choices and they are good enough to support the contents of this paper, but they don’t cover all possible cases for learning (prediction, RL, PINN, …).
Theoretical Claims: This paper does not involve theoretical proofs.
I find the reasoning a little problematic though. The authors characterize the basin by a linearly connected low-loss region in the loss landscape. I don’t think this is always true. There is symmetry (not limited to permutation) in the loss function (for example, as discussed in https://arxiv.org/pdf/2309.16932). Due to the continuous symmetries, there are equivalent local minima that should be considered the same basin and are not linearly connected. A naive example would be a loss function for parameters $(u, v)$ and data point $(x, y)$: $(uvx - y)^2$. The minima are located on the curve $uv = y/x$, and these solutions provide the same training and testing performance. However, these solutions are not considered to be in the same basin according to the authors. Thus, I think the authors’ way of modeling the basin does not take into account the complexity in the landscape.
Experimental Designs Or Analyses: I am not very familiar with fine-tuning language models, so I cannot check the experimental designs in detail in this regard. I don’t find notable problems in experiments in other cases.
Supplementary Material: I didn’t review the supplementary material.
Relation To Broader Scientific Literature: I think this paper provides an alternative point of view to existing works. As far as I know, existing stability works usually study at which local minimum SGD stays. The authors instead study whether SGD converges to the same local minimum regardless of the characteristic of the local minimum. Also, I am not aware of previous works that freeze SGD noise and introduce another source of noise.
Essential References Not Discussed: To my knowledge there are no essential references not discussed.
Other Strengths And Weaknesses: The authors investigate a lot of potential causes for instability. These potential causes are well isolated in the experiments, and the experimental facts are recorded well. This paper can serve as a nice starting point for crafting theories about stability.
In this work, the authors freeze the noise in SGD, and the injected noise is the main object of study for stability analysis. In reality, the noise of SGD may be much larger than the perturbations studied in this paper and overshadow them. Thus, I don't think the stability the authors study is very relevant.
Other Comments Or Suggestions: I have no other comments.
Questions For Authors: I wonder if the authors know how to tell whether $\theta_T$ and $\theta’_T$ are connected by continuous symmetry or not.
I wonder if the authors could quantify how important the perturbation studied in this paper is compared to the SGD noise.
In the numerical experiments, I have the impression that the amplitude of the perturbation can be much larger than rounding error. Where could these perturbations possibly come from?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We have addressed the quoted comments and questions point-by-point below.
## C. Non-Linear Connectivity
> Due to the continuous symmetries, there are equivalent local minima that should be considered the same basin and are not linearly connected. [...] However, these solutions are not considered to be in the same basin according to the authors. [...] I wonder if the authors know how to tell whether $\theta_T$ and $\theta’_T$ are connected by continuous symmetry or not.
Thank you for raising this crucial and subtle distinction, which was also raised by Reviewer *tDxT*.
Our definition of “loss basin” (which we refer to here as “LMC basin”) in section 2 (*Linear mode connectivity*) reflects when the loss landscape is locally convex, and is drawn from Neyshabur et al. [2020] and Frankle et al. [2020]. Although this perspective is more restrictive than a general notion of “mode connectivity” [Draxler et al. 2018, Garipov et al. 2018], ours is both theoretically and practically relevant:
1. Despite being a non-convex problem, neural network training has many connections with convex optimization. LMC basins describe approximately convex regions of the loss landscape.
2. Classical methods for determining the stability of dynamical systems take a linear or quadratic approximation, which holds precisely in LMC basins.
3. Model merging applications require networks to share a LMC basin, as shown by permutation alignment research [Singh & Jaggi 2020, Entezari et al. 2022, Ainsworth et al. 2023].
Prior work, both theoretical [Simsek et al. 2021, Lin et al. 2024] and empirical [Draxler et al. 2018, Garipov et al. 2018, Sonthalia et al. 2024], also points to non-linear connectivity being a trivial property of neural network minima in general. If this is the case, then knowing if $\theta_T$ and $\theta’_T$ are continuously connected would be rather uninformative.
We apologize for not making our definition of “loss basin” and our justifications for the choice clear, and will update the above points in the text.
- Simsek, B., et al. (2021). Geometry of the Loss Landscape in Overparameterized Neural Networks: Symmetries and Invariances.
- Lin, Z., et al. (2024). Exploring neural network landscapes: Star-shaped and geodesic connectivity.
- Sonthalia, A., et al. (2024). Do Deep Neural Network Solutions Form a Star Domain?.
## D. Correspondence With SGD Noise
> In this work, the authors freeze the noise in SGD, and the injected noise is the main object of study for stability analysis. In reality, the noise of SGD may be much larger than the perturbations studied in this paper and overshadow them. Thus, I don't think the stability the authors study is very relevant. [...] I wonder if the authors could quantify how important the perturbation studied in this paper is compared to the SGD noise.
We agree that we have not sufficiently established the relative magnitude of our perturbations vs SGD noise. To address this, we replicate the parent-child spawning experiment of Frankle et al. [2020] (Frankle baseline) and show that our batch perturbations are a lower bound on the Frankle baseline’s barriers [https://imgur.com/a/frankle-ThKay8T](https://imgur.com/a/frankle-ThKay8T). In this comparison, we scale batch perturbations to the expected magnitude of SGD noise at the perturbation time $t$: making batch perturbation equivalent to taking only one step at time $t$ with different SGD noise, as opposed to using different SGD noise from $t$ onwards in the Frankle baseline.
We also tested even smaller perturbations by perturbing a fraction of weights at the smallest perturb scale we used in our main experiments ($10^{-4}$).
We find that as little as a **single (!)** perturbed weight (at a fraction of $10^{-6}$) causes barriers at initialization [https://imgur.com/a/AvfP8Mh](https://imgur.com/a/AvfP8Mh), which is well below the scale of noise from sources such as hardware indeterminacy.
The significance of our approach is that, as per section 3.1, the exponential convergence or divergence of a deterministic dynamical system dominates the diffusion effects of SGD noise [Wu et al. 2018]. This means that, in the dynamical systems model, training trajectories will diverge as long as they are unstable to noise *once*, whereas in the diffusion model, divergence depends on noise persisting throughout training.
## Other Questions
> In the numerical experiments, I have the impression that the amplitude of the perturbation can be much larger than rounding error. Where could these perturbations possibly come from?
Sources of perturbation in regular training could include SGD noise (batch order, data augmentation, hardware indeterminism) as well as model pruning or quantization. We do not target a specific source so as to keep our experimental findings general. | Summary: The paper studies the impact of applying isolated perturbations to model parameters during different training and fine-tuning phases. The analysis provides insights into model training stability using three quantities measuring parameter similarity or functional similarity. The findings suggest that models become more stable during training, and the effect is even more evident for pretrained models. Interestingly, analysis shows that the longer training in the case of language tasks hurts the model's stability.
Claims And Evidence: All the claims are supported by clear empirical evidence.
Methods And Evaluation Criteria: The methods (models and benchmarks) cover various popular settings, making the study interesting for both image classification and NLP communities.
The measures used by the authors to estimate the similarities between models are well-established methods within this area. However, I'm wondering how other similarity measures differ from the perspective presented by the authors, e.g., CCA or CKA index, which could be an interesting add-on to the current work.
Theoretical Claims: The paper does not contain any formal statements.
Experimental Designs Or Analyses: The experiments are clearly explained to the reader and meticulously designed to precisely measure the effect of perturbations.
Supplementary Material: I have skimmed through the results presented in supplementary material, but I haven't checked them in detail.
Relation To Broader Scientific Literature: The work is well positioned within the related works. My current understanding is that the work could be an important first step towards designing precise training manipulations, which in turn could be used to increase the model's performance or robustness to data shifts via model merging techniques. It would be interesting to continue this line of work and explore questions like: How can one optimally perturb a model (and at which point) to increase the final performance of the averaged model? Would that perturbation be universal across different merging strategies or should each strategy have its own method of perturbation?
Essential References Not Discussed: None
Other Strengths And Weaknesses: As mentioned earlier, it would be interesting to broaden the scope of methods for computing similarity between two models. Measures such as CCA, CKA could be useful in this regard and could offer a deeper understanding of the phenomenon.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful review. Please find our replies below for the quoted points.
## A. Other Similarity Measures
> I'm wondering how other similarity measures differ from the perspective presented by the authors, e.g., CCA or CKA index, which could be an interesting add-on to the current work.
This is an excellent idea and was also recommended by Reviewer *mmJd*.
We prioritized barriers in our original analysis in order to determine when networks are in the same locally convex loss basin. For our reasoning about why this is useful, see the discussion of the advantages of linear mode connectivity in our rebuttal to Reviewer *HaCY*, under **C. Non-Linear Connectivity**.
Representational similarity measures (e.g. CKA) are a more generic way to measure functional similarity that is not sensitive to mechanistic differences between neural network weights, such as barriers that prevent effective model merging. Nevertheless, we agree that measuring similarity in a more general sense will better illuminate how perturbed training trajectories differ.
Accordingly, we have now conducted the following experiment on our standard setting (figure 2):
1. We compute output dissimilarity (in $L^2$ between logits, or % of disagreeing classifications) and Angular CKA [Williams et al. 2021] between pairs of networks in our experiments: [https://imgur.com/a/zOBlg5X](https://imgur.com/a/zOBlg5X).
2. As expected, measures of functional dissimilarity increase as barriers increase.
3. Functional similarity becomes more sensitive to batch perturbation later in training.
Note that Angular CKA ranges from 0 (perfectly similar) to $\pi$ (perfectly dissimilar), with $\pi/2$ indicating no correlation. We plot the largest (most dissimilar) CKA value over all residual block outputs, which are computed over 10000 examples using software from [Lange et al. 2023].
We will replicate the above experiment for our fine-tuning settings for inclusion in the text.
- Williams, A. H., et al. (2021). Generalized shape metrics on neural representations.
- Lange, R. D., et al. (2023). Deep networks as paths on the manifold of neural representations.
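For readers less familiar with CKA-style indices, the standard linear CKA of Kornblith et al. [2019], a close relative of the Angular CKA used above, is easy to sketch. This is shown only as a reference point and is not the Angular CKA implementation from [Lange et al. 2023] that we use in our experiments:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activation matrices of shape (n_examples,
    n_features), following the standard Kornblith et al. (2019) formula."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, 'fro') ** 2
    den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 16))
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))  # random orthogonal map
print(linear_cka(X, X))      # identical representations give 1.0
print(linear_cka(X, X @ Q))  # invariant to orthogonal transforms: 1.0
```

Unlike $L^2$ distance on weights, such indices are invariant to orthogonal transformations of the representations, which is one reason they complement the barrier-based analysis.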
## B. Model Diversity
> How can one optimally perturb a model (and at which point) to increase the final performance of the averaged model? Would that perturbation be universal across different merging strategies or should each strategy have its own method of perturbation?
This is a very interesting and novel application which we did not previously consider.
We have interpreted your point as the hypothesis that *deliberately perturbing models increases model diversity, leading to improved ensembling performance*. We test this hypothesis as follows:
1. We take our measurements from above, and plot Angular CKA (x-axis) against ensemble performance: [https://imgur.com/a/ah7kCfg](https://imgur.com/a/ah7kCfg).
2. We find that the ensembles indeed perform better on more dissimilar model pairs, which is in line with the proposed hypothesis.
3. Note that, as shown by barriers increasing with Angular CKA, model averaging performs worse for more dissimilar model pairs.
By ensembling, we mean that in each of our spawn-and-perturb experiments, we evaluate the average of the output logits for the unchanged and perturbed networks after training.
While Gaussian perturbations appear to have a more consistent relationship between functional dissimilarity and ensemble performance, we leave a full exploration of which perturbations are best suited for different merging strategies to future work.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for addressing my concerns and providing additional experiments. I still believe that the findings from this work could serve as a starting point for finding optimal perturbations that increase ensemble performance and deserve being accepted.
---
Reply to Comment 1.1.1:
Comment: Thank you for the positive assessment of our work. We will include additional CKA analyses such as for fine-tuning transformer models in the final version of our paper. Additionally, we have completed a larger-scale experiment as suggested by Reviewer *mmJd*, the results of which we have appended here for convenience:
## Large-scale experiments
We have expanded our experimental setting using AllenAI’s OLMo-1B large language model [1,2], which provides intermediate checkpoints throughout its ~740K step (3 trillion tokens) pretraining. We fine-tuned intermediate checkpoints of this model on GSM8K [3] for approximately 5,000 steps with a peak learning rate of 2e-5 and 10% warm-up. GSM8K contains grade school math problems and is used as a benchmark for many contemporary language models. We conducted our butterfly experiments with batch perturbations for fine-tuning stability at three pretraining checkpoints:
1. First available checkpoint (4B tokens)
2. Mid-way through pretraining (1.5T tokens)
3. Final checkpoint (3T tokens)
Our results are plotted here: [https://imgur.com/a/KQ4kz4D](https://imgur.com/a/KQ4kz4D).
This experiment strongly corroborates our Multi-BERT findings (Fig. 6 in the paper): namely, we observe that pre-training for longer actually reduces stability to fine-tuning. Moreover, our insights regarding perturbation time and scale, whereby earlier and larger perturbations result in higher barriers, remain unchanged in the larger OLMo setting.
Our findings highlight an important consideration for the field. Typically fine-tuning is performed only on the latest checkpoints, which are in turn used for model merging or MoErging (mixture-of-experts method) [4]. Our results imply that depending on the fine-tuning task, earlier checkpoints may be better suited for transfer learning and model merging, and that the optimal pre-training duration could be related to the stability of a given checkpoint with respect to the fine-tuning task. Our work also highlights the value of making intermediate checkpoints more widely available for research and practical purposes.
[1] https://huggingface.co/allenai/OLMo-1B-hf
[2] Groeneveld D., et al. OLMo: Accelerating the Science of Language Models. 2024.
[3] Cobbe, K., et al. Training Verifiers to Solve Math Word Problems, 2021.
[4] Yadav P., et al. A Survey on Model MoErging: Recycling and Routing Among Specialized Experts for Collaborative Learning, 2024. | null | null | null | null | null | null |
Measuring Representational Shifts in Continual Learning: A Linear Transformation Perspective | Accept (poster) | Summary: This paper focuses on the theoretical analysis of representation forgetting in Continual Learning scenarios. While representation forgetting has been introduced previously, the focus was largely experimental, and the main contribution of the proposed work is to focus on the theoretical aspect of representation forgetting. Therefore, the authors justify the usage of a new metric, Representation Discrepancy, as a surrogate for traditional representation forgetting measures. Consequently, this paper analyzes the theoretical behavior of representation forgetting, showing its dependence on layer index and network depth, and identifying various regimes of representation forgetting during training. Such findings are supported by experiments.
## update after rebuttal
I thank the authors for their detailed responses and clarifications.
The figure given in response R4-3 is interesting and I would suggest including it in the final version of the manuscript (or in appendix).
Regarding R4-2, I should have been more precise in my comment, but I believe the main experimental result of the paper lies in Figure 4, which is what I would like to see regarding the evolution of $D^k_t$. Although as stated, I understand that $D^k_t$ and $\Delta P^k_t$ are correlated so similar results should be observed. Nonetheless, the authors should display the results with $D^k_t$ since the theoretical analysis is made with this quantity.
For these reasons, I will maintain my current score of 4.
Claims And Evidence: The theoretical aspect of the paper is clear and adequately justified. I did not find any major flow in the presented theorems or assumptions.
My main concern resides in the experimental section. While the authors introduce a new metric, representation discrepancy, they never compute it in the experiments, relying instead on representation forgetting as defined in previous work. While the authors indeed discuss how both metrics are related, I struggle to understand the justification for this choice and am unsure that it adequately supports the theoretical claims.
Methods And Evaluation Criteria: Yes, the dataset choice makes perfect sense, even though additional settings could be considered. However, the paper being mostly theoretical, I believe the current choice is sufficient.
Theoretical Claims: I checked the correctness of the main paper, however only briefly looked at the demonstrations in the appendix. So far, no major issues has been identified.
Experimental Designs Or Analyses: The experimental design is clearly explained and overall makes perfect sense for demonstrating the theoretical claim. As already stated, my main concern remains the metric used in the experiments.
Supplementary Material: I did not review carefully the proofs (part A) of the supplementary.
I did review part B (experiments).
Relation To Broader Scientific Literature: This is related to Continual Learning in general, as it helps understand how neural network representations can change throughout training. Notably, while some findings have been observed experimentally in previous studies (forgetting is stronger in the latest layers), this work provides additional insight by connecting those observations with the proposed theory. Such theoretical analysis may help design new methods for reducing forgetting in intermediate layers or give additional metrics for quantifying training behaviors.
Essential References Not Discussed: The increase of forgetting in the latest layers is intuitive and expected. This has similarly been discussed in previous studies such as [1]. While the theoretical findings are interesting, further discussion of previous studies should be included.
[1] Ramasesh, Vinay Venkatesh, Ethan Dyer, and Maithra Raghu. "Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics." _International Conference on Learning Representations_.
Other Strengths And Weaknesses: *Strengths*
- The writing quality is very good, I enjoyed reading the paper and found the presentation particularly clear
- The findings are interesting and novel
- Illustrations are relevant
- The presence of experiments to support the theoretical claims is particularly appreciated
*Weaknesses*
- I would like the code to be shared
- See questions and suggestions
Other Comments Or Suggestions: - The increase of forgetting in the latest layers is intuitive and expected. This has similarly been discussed in previous studies such as [1]. While the theoretical findings are interesting, further discussion of previous studies should be included.
- From a theoretical perspective, it seems to me that such analysis can be adequately applied beyond Class-Incremental Learning scenarios (which is the only one considered here). I would appreciate discussion in this regard. While I believe the current experiments to be sufficient, experiments in Domain Incremental Learning cases would further strengthen the theoretical findings.
Questions For Authors: - What about when the representations are normalized? Such strategy is very common in representation learning, how would the current analysis be impacted?
- Why not use centroid distances for definition 3, instead of the most distant features?
- How far is the theoretical upper bound $U_{t, \infty}^k$ from an experimental upper bound? For example, what is the discrepancy between the model trained on the first task and a randomly initialized model? Such information could be interesting to understand how much the model *can* forget, as for now it is not clear if this saturation is the consequence of "not being able to forget more" or if it occurs before reaching such a stage, therefore maintaining some of the previous knowledge (and if so, how far are we from "maximum forgetting"?).
- My biggest concern is the following. In Section 6.1, why compute the representation forgetting $\Delta P_t^k(\Delta t)$ as defined by [2], instead of directly computing your newly introduced metric $D_t^k(h_t,\Delta t)$? I understand that the two are correlated as stated in 4.2, but all the theoretical analysis is made on $D_t^k(h_t,\Delta t)$, which should be computable. I am unsure how much the analysis of $\Delta P_t^k(\Delta t)$ gives insight into the behavior of the maximum value of $D_t^k(h_t,\Delta t)$. Could you provide more explanation on this choice and its consequences?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer cno5 for the detailed review and constructive suggestions. We appreciate your acknowledgements that the theoretical aspects of this paper are **clear and adequately justified**, and that the findings are **interesting and novel**. Below we show our reply on your comments and questions.
---
**[R4-1]**`In Section 6.1, why compute the representation forgetting \Delta P^k_t instead of directly computing your newly introduced metric D^k_t? I understand that the two are correlated as stated in 4.2 but all the theoretical analysis is made on D^k_t, which should be computable`
As noted in [R3-3] and [R3-6], we evaluated $D_t^k$ directly and observed results consistent with Figures 5 and 6. These will be included in the revised manuscript.
---
**[R4-2]**`I am unsure how much the analysis of \Delta P^k_t gives insight into the behavior of the maximum value of D^k_t. Could you provide more explanation on this choice and its consequences?`
Thank you for your insightful comments. Our original intent in conducting experiments with $\Delta P^k_t$ was to demonstrate that the theoretical results derived from $D^k_t$ also hold for $\Delta P^k_t$. However, we agree that experiments based solely on $\Delta P^k_t$ are insufficient to support the theoretical claims concerning $D^k_t$. Accordingly, we have conducted additional experiments using $D^k_t$, as detailed in [R3-3] and [R3-6].
---
**[R4-3]**`How far is the theoretical upper bound U^k_t,\infty from an experimental upper bound? For example, what is the discrepancy between the model trained on the first task and a randomly initialized model? Such information could be interesting to understand how much the model can forget, as for now it is not clear if this saturation is the consequence of "not being able to forget more" or if it occurs before reaching such a stage, therefore maintaining some of the previous knowledge (and if so, how far are we from "maximum forgetting"?).`
We thank the reviewer for the insightful question. To clarify, at each layer $k$, the below plot reports the representation discrepancy $D_t^k$ between the task-1 model $h_1$ and two models:
* a randomly initialized model $h_0$, i.e., $\Delta t = -1$
* the final model after training on all tasks $h_N$, i.e., $\Delta t = N-1$
Our results show that $D^k_1(h_1, N-1)$ is significantly smaller than $D^k_1(h_1, -1)$, indicating that the saturation does not correspond to complete forgetting. Thus, the model retains a nontrivial amount of task-1 information even after learning all subsequent tasks.
https://hackmd.io/_uploads/ByCfAQOTke.png
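As a side note for readers, a discrepancy of this kind can be sketched as follows: fit the best linear map between two feature matrices by least squares and report the worst-case per-sample residual, mirroring the max-discrepancy formulation discussed in this thread. The function name, synthetic data, and the exact residual rule are illustrative assumptions, not the paper's precise definition of $D^k_t$.

```python
import numpy as np

def representation_discrepancy(F_old, F_new):
    """Worst-case alignment error between two feature matrices.

    F_old, F_new: (n_samples, width) activations of the same inputs at
    layer k of two model snapshots. We fit the optimal linear map T by
    least squares and report the maximum per-sample residual norm
    (a max-discrepancy, i.e. worst-case, formulation).
    """
    # Solve min_T ||F_old @ T - F_new||_F in closed form.
    T, *_ = np.linalg.lstsq(F_old, F_new, rcond=None)
    residuals = np.linalg.norm(F_old @ T - F_new, axis=1)
    return residuals.max()

rng = np.random.default_rng(0)
F1 = rng.normal(size=(200, 32))

# A snapshot that merely rotated its features is perfectly alignable ...
Q, _ = np.linalg.qr(rng.normal(size=(32, 32)))
assert representation_discrepancy(F1, F1 @ Q) < 1e-8

# ... while a nonlinearly shifted representation is not.
F_shifted = np.tanh(F1 @ rng.normal(size=(32, 32)))
assert representation_discrepancy(F1, F_shifted) > 1e-3
```

In this toy setting, a purely linear change of basis yields near-zero discrepancy, while a nonlinear drift does not, which is the behavior the metric is meant to detect.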
---
**[R4-4]**`The increase of forgetting in latest layers is intuitive and expected. This has similarly been discussed in previous studies such as [1]. While the theoretical findings are interesting, further discussion with previous studies would be included.`
Thank you for the suggestion.
We will include a discussion comparing our results with [1], which empirically observed greater drift in deeper layers. Our work complements this by providing the first theoretical explanation of this phenomenon (see Corollary 1 and Fig. 5).
---
**[R4-5]**`I would like the code to be shared`
You can find code here: [link](https://anonymous.4open.science/r/representational-shifts-in-cl-F1F5/README.md)
---
**[R4-6]**`From a theoretical perspective, it seems to me that such analysis can be adequately applied beyond Class-Incremental Learning scenarios (which is the only one considered here). I would appreciate discussion in this regard. While I believe the current experiments to be sufficient, experiments in Domain Incremental Learning cases would further strengthen the theoretical findings.`
We agree that our framework can extend to domain-incremental learning (DIL), provided an appropriate modeling of weight perturbation as done in Assumption 2 for class-incremental learning. While a formal theory for DIL is left for future work, we provide preliminary empirical evidence in [R2-2] using rotated Split-CIFAR100, where Corollary 1 continues to hold.
---
**[R4-7]**`What about when the representations are normalized? Such strategy is very common in representation learning, how would the current analysis be impacted?`
While we do not yet have a formal analysis under normalization, we offer the following intuition: normalization compresses the representation space by reducing feature norms, which in turn may reduce the magnitude of $D_t^k$. If the shift in linear structure is preserved under normalization, our discrepancy measure would still reflect representational drift, but with smaller scale. We will mention this in the revised manuscript.
---
**[R4-8]**`Why not use centroid distances for definition 3, instead of the most distant features?`
We appreciate the suggestion. We follow the max-discrepancy formulation as in Guha et al. (2024), who adopted a similar worst-case approach in their theoretical analysis of catastrophic forgetting.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses and clarifications.
The figure given in response R4-3 is interesting and I would suggest including it in the final version of the manuscript (or in appendix).
Regarding R4-2, I should have been more precise in my comment, but I believe the main experimental result of the paper lies in Figure 4, which is what I would like to see regarding the evolution of $D^k_t$. Although as stated, I understand that $D^k_t$ and $\Delta P^k_t$ are correlated so similar results should be observed. Nonetheless, the authors should display the results with $D^k_t$ since the theoretical analysis is made with this quantity.
For these reasons, I will maintain my current score of 4.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for maintaining a positive assesment of our work. Following the suggestion, we will include the figure presented in R4-3 in the final version of our manuscript. We also agree that it is important to report the empirical evolution of D^k_t. Accordingly, we will include the corresponding results in the final version as well. | Summary: In this works, the authors introduce a novel measure of forgetting in the hidden layers of deep neural networks in continual learning setting. They derive an upper-bound for the proposed representation discrepancy measure and the convergence rate of this measure under a set of assumptions. Additionally, the authors evaluate the development of representation forgetting during learning empirically using Split-CIFAR100 and ImageNet 1K.
Claims And Evidence: 1. The manuscript doesn’t clearly justify how the proposed measure of representation discrepancy $D^k\_t$ relates to forgetting, despite claiming it as an effective surrogate for representation forgetting. While the authors demonstrate that when $D^k\_t$ is small, $P^k\_t$ is also small, it remains unclear how these values scale or whether one provides an upper bound for the other. This is particularly concerning, because the empirical results evaluate the properties of only $P^k\_t$, not $D^k\_t$. Furthermore, the convergence rate in Fig. 6 is defined using $P^k\_t$, which is different from the original definition in Eq. 7. Thus, I find that the analytical results do not support their claims on forgetting convincingly, and there remains a discrepancy between the analytical and empirical findings.
2. In the proof of proposition 1, the authors state that $d$ increases linearly with $\Delta t$, citing Theorem 4.1 of Guha & Lakshman (2024). However, in the original theorem, this linear $\Delta t$ dependence is shown for the upper bound of $d$, not $d$ itself, making this proof and the following arguments on the two-stage dynamics not well supported.
3. The motivation behind the definition of the representation discrepancy is unclear. Because the definition relies on the maximum discrepancy over all samples rather than the distance between two activity distributions, it is susceptible to outliers, raising concerns about its robustness.
4. In section 4.1, the introduction of a new measure for representation forgetting is motivated by the difficulty of deriving an optimal linear classifier for hidden layers. I’m confused by this argument particularly because the proposed measure of representation discrepancy requires derivation of the optimal linear transformation between two hidden layer representations, which is typically computationally heavier than the derivation of the optimal linear classifier (if $d_y < w_k$).
Methods And Evaluation Criteria: See the comment 1 above.
Theoretical Claims: See the comment 2 above.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Not very carefully.
Relation To Broader Scientific Literature: While the manuscript is built heavily upon Guha & Lakshman (2024), the results presented here are clearly distinct from that previous work.
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: The question addressed in this manuscript, forgetting in hidden layers of ANNs, is potentially interesting and important.
Other Comments Or Suggestions: Naively thinking, Assumption 1 shouldn't hold when $w_k < w_{k-1}$, but Figure 2 suggests it does. Why is that the case?
Questions For Authors: Please see the comments above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer 4iQx for the detailed review and constructive suggestions. We appreciate your acknowledgements that the problem addressed in this manuscript is potentially **important and interesting**, and that our results are **clearly distinct from the previous works**. Below we show our reply on your comments and questions.
---
**[R3-1]**`The introduction of a new measure for representation forgetting is motivated by the difficulty of deriving an optimal linear classifier for hidden layers. I’m confused by this argument.`
We believe that there may have been a misunderstanding regarding our motivation for introducing the measure $D^k_t$.
The term *difficulty* in our statement refers to theoretical intractability rather than computational cost. Our intention was **not** to use $D^k_t$ as a *more efficient alternative* for computing representation forgetting in practical settings. Rather, we introduced $D^k_t$ to *facilitate the theoretical analysis* of $\Delta P^k_t$.
We will clarify this point in the revised manuscript.
---
**[R3-2]**`It remains unclear how D^k_t and P^k_t scale or whether one provides an upper bound for the other. `
Below we added plots showing $D_t^k$ versus $\Delta P_t^k$ across different layers $k$. In both datasets, $\Delta P_t^k$ **scales approximately linearly** with $D_t^k$, supporting its role as a surrogate. We will include these results in the revised manuscript.
| **ImageNet1K** | **SplitCIFAR100** |
|-|-|
|https://hackmd.io/_uploads/BJVuwbdp1g.png|https://hackmd.io/_uploads/Sk-YD-upkl.png|
---
**[R3-3]** `The empirical results evaluate the properties of only P^k_t not D^k_t. Thus, the analytical results do not support their claims on forgetting convincingly.`
We conducted additional experiments directly evaluating $D_t^k$. As shown below, we observed a **strong linear relationship** between $D_t^k$ and $R_t^k$, similar to our observation in Fig. 5 of the submitted manuscript. We also confirm the linearity between $U_t^k$ and $R_t^k$ as predicted by Corollary 1.
These results will be included in the revised manuscript.
| **ImageNet1K** | **SplitCIFAR100** |
|-|-|
|https://hackmd.io/_uploads/BJ-jMLEpJg.png|https://hackmd.io/_uploads/S1RsG8Vpye.png|
|https://hackmd.io/_uploads/SkTYGzd6Je.png|https://hackmd.io/_uploads/B1n9MGOa1l.png|
---
**[R3-4]**`The motivation behind the definition of the representation discrepancy is unclear. It is susceptible to outlier, raising concerns about its robustness.`
Thank you for your comment. As noted in [R3-1], our primary motivation for introducing the metric $D^k_t$ is to facilitate the theoretical analysis of $\Delta P^k_t$. We adopted the maximum discrepancy formulation following the approach of Guha et al. (2024), who successfully employed a similar method in their theoretical study of catastrophic forgetting in continual learning.
---
**[R3-5]**`The proof of Prop 1., claims d increases linearly with \Delta t, citing Thm 4.1 of Guha & Lakshman (2024), but the theorem only shows this dependence for the upper bound of d. Hence, the proof and the following arguments on the two-stage dynamics is not well supported.`
Thank you for the sharp comment. The reviewer is correct that the linear dependency of $d$ with respect to $\Delta t$ is not *theoretically* proven in Guha & Lakshman (2024). As such, our proof in Proposition 1 requires an additional assumption (that the lower bound of $d$ increases without a limit as a function of $\Delta t$) for the result to hold.
Under this assumption, the unique peak described in Proposition 1 may not always be guaranteed. However, this assumption is sufficient for the two-phase dynamics to hold, as the upper bound $U^k_t$ will eventually reach a peak and then saturate.
We also note that if $d$ itself monotonically increases with $\Delta t$, then the uniqueness of the peak as stated in Proposition 1 follows. We will clarify these assumptions and their implications in the revised version.
---
**[R3-6]**`The convergence rate in Fig. 6 is defined using P^k_t, which is different from the original definition in Eq. 7.`
As suggested, we report the convergence rate using $U_t^k$ instead of $\Delta P_t^k$. Similar to Figure 6, we find that $\Delta t_{\text{sat}}$ decreases with layer index $k$, so the convergence rate $r_t^k = 1 / \Delta t_{\text{sat}}$ increases with $k$, consistent with Theorem 2.
https://hackmd.io/_uploads/r1VnzXOp1l.png
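The rate estimate described here ($r_t^k = 1/\Delta t_{\text{sat}}$) can be sketched as follows: given a forgetting curve over $\Delta t$, find the first point from which the curve stays within a tolerance band around its final plateau, and invert it. The tolerance rule and the synthetic curves are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def convergence_rate(curve, tol=0.05):
    """Estimate r = 1 / dt_sat from a forgetting curve.

    curve[dt] is a discrepancy value after dt subsequent tasks; dt_sat is
    the first dt from which the curve stays within a relative tolerance
    `tol` of its final plateau value.
    """
    curve = np.asarray(curve, dtype=float)
    plateau = curve[-1]
    within = np.abs(curve - plateau) <= tol * abs(plateau)
    dt_sat = len(curve) - 1
    for i in range(len(curve)):
        if within[i:].all():  # never leaves the band afterwards
            dt_sat = i
            break
    return 1.0 / max(dt_sat, 1)

# A curve that saturates early (e.g. a deeper layer) yields a higher rate.
dts = np.arange(20)
fast = 1 - np.exp(-dts / 1.0)  # saturates around dt = 3
slow = 1 - np.exp(-dts / 6.0)  # saturates around dt = 18
assert convergence_rate(fast) > convergence_rate(slow)
```

This matches the qualitative claim in the thread: earlier saturation (smaller $\Delta t_{\text{sat}}$) means a larger convergence rate.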
---
**[R3-7]**`Naively thinking, Assumption 1 shouldn't hold when w_k < w_{k-1}. Why is that the case?`
Assumption 1 compares the same layer $k$ across different tasks $t$ and $t'$, not different layers. Thus, the width $w_k$ remains fixed in each comparison, and the condition holds regardless of whether $w_k < w_{k-1}$. This is why Assumption 1 is satisfied in Figure 2.
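Assumption 1 can also be checked numerically. Below is a minimal sketch: the best $T$ with $T W^k_{t'} \approx W^k_t$ is found by a closed-form least-squares solve (standing in for whatever optimization the authors used), and a near-zero residual supports the assumption. The matrices here are synthetic, constructed so that an exact $T$ exists.

```python
import numpy as np

def alignment_error(W_t, W_tp):
    """Residual of the best T with T @ W_tp ≈ W_t (Assumption 1 check).

    W_t, W_tp: (w_k, w_{k-1}) weight matrices of the same layer after two
    tasks. The least-squares problem over T is solved in closed form via
    transposition: min_T ||T @ W_tp - W_t||_F equals
    min_X ||W_tp.T @ X - W_t.T||_F with X = T.T.
    """
    X, *_ = np.linalg.lstsq(W_tp.T, W_t.T, rcond=None)
    T = X.T
    return np.linalg.norm(T @ W_tp - W_t)

rng = np.random.default_rng(0)
w = 64  # equal widths, matching the w_k = w_{k-1} setting of Assumption 1
W_tp = rng.normal(size=(w, w))
W_t = rng.normal(size=(w, w)) @ W_tp  # exactly reachable via some T
assert alignment_error(W_t, W_tp) < 1e-6
```

When no exact $T$ exists (e.g. a rank-deficient $W_{t'}$), the residual is nonzero, which is the approximate version ($\|W_t^k - T W_{t'}^k\| \leq \varepsilon$) mentioned elsewhere in this discussion.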
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal. With the addition of the numerical evaluation of $D^k_t$ and the clarification of the assumption for Proposition 1, the manuscript is technically sound. I have thus raised my evaluation, although I’m not fully convinced of the utility of the measure based on the maximum discrepancy.
Regarding [R3-7], because $W^k \in R^{w_k \times w_{k-1}}$, there exists $T \in R^{w_k \times w_k}$ satisfying $T W^k_{t’} = W^k_t$ almost trivially if $w_k \geq w_{k-1}$. However, the existence of $T$ is not guaranteed if $w_k < w_{k-1}$. I assume it works because learned weights tend to be effectively low-rank.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the positive evaluation and for acknowledging the technical soundness of the manuscript.
> Regarding [R3-7], because $W^k \in R^{w_k \times w_{k-1}}$, there exists $T \in R^{w_k \times w_k}$ satisfying $T W^k_{t’} = W^k_t$ almost trivially if $w_k \geq w_{k-1}$. However, the existence of $T$ is not guaranteed if $w_k < w_{k-1}$. I assume it works because learned weights tend to be effectively low-rank.
We would like to clarify that, as stated in Assumption 1, we consider the case where $w_k = w_{k-1}$ for all $k$. Therefore, all experiments related to Figure 2 were conducted using networks with equal layer widths, i.e., $w_k = w_{k-1}$. We will make this explicit in the revised manuscript to avoid any confusion.
Claims And Evidence: The primary claim is that the proposed representation discrepancy is a reliable and analytically tractable surrogate for measuring representation forgetting. The authors support this claim through rigorous theoretical derivations (e.g., Theorem 1 and Theorem 2) and by demonstrating a strong empirical correlation between the discrepancy and conventional measures (such as degradation in linear probing accuracy). While the derivations are nontrivial and depend on certain assumptions (e.g., the existence of a linear transformation aligning weight matrices across tasks), the evidence—both theoretical and experimental—is generally convincing. Some aspects, particularly the sensitivity of these assumptions to different network architectures, might benefit from further discussion.
Methods And Evaluation Criteria: The method of aligning representation spaces using a linear transformation is well-motivated and novel in the context of continual learning. The evaluation criteria, which include comparing linear probing performance before and after task transitions, appear appropriate for assessing representation quality. The choice of benchmark datasets and the experimental design are standard for the field, lending credibility to the claims. However, additional details on hyperparameter sensitivity and broader dataset diversity could further strengthen the evaluation.
Theoretical Claims: The paper presents several theoretical claims, including explicit upper bounds on the representation discrepancy and an analysis of the convergence rate of forgetting across layers. The proofs, particularly for Theorem 1 and Theorem 2, seem methodologically sound, though they rely on assumptions (such as Assumption 1 regarding the existence of a suitable linear transformation) that, while empirically supported, might not hold in all settings. Clarifying the limitations of these assumptions and their impact on generality would improve the theoretical discussion.
Experimental Designs Or Analyses: The experiments are designed to validate the theoretical predictions: the evolution of representation forgetting is tracked as new tasks are learned, and the relationships between layer depth, network width, and forgetting are examined. The use of visualizations (e.g., plotting the forgetting curves and the relationship between representation space size and forgetting) helps illustrate the key findings. While the experimental setup is solid, additional ablation studies—such as varying network architectures or exploring different continual learning scenarios—could further substantiate the results.
Supplementary Material: The supplementary material includes detailed proofs of the theoretical claims and extended experimental details (e.g., additional figures and experimental configurations). The appendices enhance the credibility of the theoretical analysis and provide necessary clarity on the derivations. I tried to read the material, but it is too long for me, and I have to admit, it is a little bit too hardcore for me especially for the part A.
Relation To Broader Scientific Literature: This work builds on a rich body of literature in continual learning and catastrophic forgetting. It is well situated relative to prior work—especially that of Guha & Lakshman (2024) and other studies addressing representation changes over time. The connection drawn between representation forgetting and linear probing performance is particularly insightful. However, integrating discussion of a few more recent works on unsupervised continual learning and contrastive methods might further contextualize the contributions.
Essential References Not Discussed: To my knowledge, no.
Other Strengths And Weaknesses: Strengths:
1.The paper proposes a novel and theoretically grounded metric for a long-standing problem in continual learning.
2. The theoretical analysis is detailed and offers clear predictions about how factors like layer depth and network width influence forgetting.
3. Experimental validation on standard datasets is convincing and well-presented.
Weaknesses:
1. Some of the core assumptions (e.g., the existence of a near-perfect linear alignment between weight matrices) may not generalize across all architectures or learning scenarios.
2. Certain proofs and derivations could benefit from additional clarity, and more discussion on potential limitations would be helpful.
Other Comments Or Suggestions: The authors could consider providing more intuition on the practical implications of the theoretical findings, especially how this metric could potentially inform better continual learning strategies.
Questions For Authors: Could you elaborate on the limitations of Assumption 1 regarding the existence of a linear transformation aligning weight matrices? How might this assumption be relaxed or validated for other network architectures (e.g., non-ReLU networks or models with skip connections)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer 9B8C for the detailed review and constructive suggestions. We appreciate your acknowledgements that our theoretical and experimental results are generally **well-presented and convincing**, and that our **proofs are methodologically sound**. Below we show our reply on your comments and questions.
---
**[R2-1]** `Generalizability and limitations of Assumption 1`
Thank you for raising this important point.
We agree that **Assumption 1** (existence of a near-perfect linear transformation $T$ aligning $W_t^k$ and $W_{t'}^k$) may not universally hold. In the revised manuscript, we clarify its scope and limitations.
As shown in **Figure 2**, Assumption 1 holds empirically for **MLPs** and **ResNets**, the latter of which includes skip connections (asked by the reviewer). To test generality, we conducted experiments on a **Vision Transformer (ViT)** (9 layers, 1 head) trained on **Split-CIFAR100**, optimizing $T$ to align $W_1^9$ and $W_{50}^9$, which showed clear convergence:
https://hackmd.io/_uploads/SkzzjJ4p1x.png
We also compare alignment errors:
| Architecture | Final Alignment Error |
|------------------------|-------------------------------|
| Multi-Layer Perceptron | $7.58 \times 10^{-7}$ |
| ResNet | $5.18 \times 10^{-7}$ |
| Vision Transformer | $6.52 \times 10^{-8}$ |
While an approximate version of the assumption (e.g., $\| W_t^k - T W_{t'}^k \| \leq \varepsilon$) is possible, understanding its effect on theory is left for future work. This limitation will be explicitly discussed.
---
**[R2-2]** `Ablations across architectures or continual learning scenarios`
Thanks for the suggestion.
We added experiments under a **domain-incremental setup**, training ResNet on **rotated Split-CIFAR100** (same classes, different input rotations). We checked $R_t^k$ vs. $U_t^k$ for $t=1$ and $k=1,\dots,9$, observing consistent linear trends as predicted by Corollary 1:
https://hackmd.io/_uploads/r14Vgb_Tyg.png
In addition, we ran additional experiments for **Vision Transformer (ViT)** architecture to check the validity of Assumption 1, as shown in our response in [R2-1].
---
**[R2-3]** `Clarifying proofs and discussing limitations`
Thank you. We revised the proofs to add explanations in non-trivial derivations:
| Submitted Version | Revised Version |
|--------------------------------------------------------|--------------------------------------------------------|
| https://hackmd.io/_uploads/rkg3RbB61l.png | https://hackmd.io/_uploads/r15TpWr6ke.png |
| https://hackmd.io/_uploads/HJmS0WSpyl.png | https://hackmd.io/_uploads/ryHb0bH61g.png |
For discussion on potential limitations of our assumption, please refer to our response in [R2-1].
---
**[R2-4]** `Referencing recent unsupervised and contrastive learning works`
While our focus is on supervised continual learning, we agree that linking to recent related work adds context. We will include the following:
- Malviya et al., *J. Comput. Sci.*, 2025
- Wen et al., *arXiv:2405.18756*, 2024
- Zhang et al., *arXiv:2404.19132*, 2024
---
**[R2-5]** `Practical implications of the theoretical findings`
Thank you for the insightful suggestion.
While our metric is theoretical in nature, the **insights derived from it** suggest actionable strategies for improving continual learning (see [R1-5]). We will emphasize this connection more clearly in the revision.
---
**[R2-6]** `Hyperparameter sensitivity and dataset diversity`
Thank you for the constructive suggestion.
We conducted experiments varying **batch size** ($B = 128, 256, 512$) and **learning rate** ($\gamma = 10^{-3}, 10^{-4}, 10^{-5}$) on Split-CIFAR100 using ResNet. The linear relationship between $R_t^k$ and $U_t^k$ remained consistent, demonstrating robustness:
| **Batch size** $B$ | **Learning rate** $\gamma$ |
|----------------------------------------------------------|-------------------------------------------------------------|
| https://hackmd.io/_uploads/H1jp6bOpJx.png | https://hackmd.io/_uploads/HkZ06bOa1e.png |
As for dataset diversity, see [R2-2] for domain-incremental results on **rotated Split-CIFAR100**.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply, I've read them but I want to keep my score as 4.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for reading our comments and maintaining the positive evaluation. | Summary: This paper introduces a novel metric for measuring representation forgetting in continual learning and derives an upper bound for this metric. The theoretical findings provide valuable insights, which are further validated through experiments on real image datasets.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No.
Experimental Designs Or Analyses: No.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper presents theories in continual learning, which helps the general understanding of lifelong learning systems.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Overall, this is a well-written paper with a clearly stated system model and key assumptions. The proposed metric is a reasonable choice, particularly for facilitating theoretical analysis. However, the interpretation of Theorem 1 is insufficient, especially regarding the tightness of its upper bound, which is not discussed.
Other Comments Or Suggestions: NA.
Questions For Authors: 1. How useful are these insights for improving the performance of continual learning?
2. In Definition 3, the self-distance is nonzero. Does this indicate that Definition 3 is not suitable for measuring distances between representation spaces?
3. Without a discussion on the tightness of the upper bound, certain interpretations—such as the peak value in Proposition 1—become questionable. Additionally, other aspects of the paper may also raise concerns about precision and tightness, such as Definitions 3 and 4. Specifically, the proposed distance/discrepancy measures are only approximations based on imprecise beliefs (as indicated by the use of “$\approx$” in Section 4.2).
4. Figures 4 and 8 display multiple peaks, which contradicts the unique peak predicted by Proposition 1. Could this be evidence that the theoretical upper bound is not sufficiently tight?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer 1dag for the detailed review and constructive suggestions. We appreciate your acknowledgements that our **proposed metric is a reasonable choice** for facilitating theoretical analysis and that this paper is **well-written**. Below we show our reply on your comments and questions.
---
**[R1-1]** `In Definition 3, the self-distance is nonzero. Does this indicate that Definition 3 is not suitable for measuring distances between representation spaces?`
We truly appreciate this comment and would like to thank all the reviewers for pointing this out. **We found that there was a typo in Def.3** (as well as Eq.1 for recalling the definition in Guha et al. 2024) and revised Def.3 as follows. We assure you that our theoretical results and the corresponding proofs remain the same.
| Submitted version: | Revised version: |
|--|--|
| https://hackmd.io/_uploads/ByVVMGZ6kl.png | https://hackmd.io/_uploads/Sy-BI8-Tyl.png |
Previously, our definition of the distance between two representation spaces was based on computing the distance between the two most distant features in the respective representation spaces. Our newly revised definition only computes the maximum distance between the features of $\mathcal{R}^k_t(h_{t_1})$ and $\mathcal{R}^k_t(h_{t_2})$ with respect to the **same input**.
Thus, in our revised version, the self-distance is zero.
We will revise the manuscript accordingly.
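For concreteness, the revised Def. 3 can be sketched as follows (our own illustration with hypothetical names, assuming both representation maps are evaluated on the same batch of inputs, so feature rows are aligned by input):

```python
import numpy as np

def rep_distance(feats_a, feats_b):
    # Revised Def. 3 (sketch): the maximum distance between the features
    # produced by two representation maps, taken over the SAME inputs
    # (row i of each array is the feature of input i).
    return max(float(np.linalg.norm(a - b)) for a, b in zip(feats_a, feats_b))
```

Under this definition the self-distance is zero by construction, as stated above.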
---
**[R1-2]** `Without a discussion on the tightness of the upper bound, certain interpretations—such as the peak value in Proposition 1—become questionable. `
We thank the reviewer for the insightful comments. In response, we conducted experiments using a ReLU network on the Split-CIFAR100 dataset to assess the tightness of the upper bound $U^k_t$ with respect to the representation discrepancy $D^k_t$ for $t=1$ and $k=5$; see the plot below.
https://hackmd.io/_uploads/SyV_GCD6yg.png
Although there exists a gap between $U^k_t$ and $D^k_t$, we observe that $U^k_t$ consistently tracks the trend of $D^k_t$, suggesting that its analysis still yields meaningful insights on $D^k_t$.
We will include this discussion in the revised manuscript.
---
**[R1-3]** `Other aspects of the paper may also raise concerns about precision and tightness, such as Definitions 3 and 4. The proposed distance/discrepancy measures are only approximations based on imprecise beliefs (as indicated by the use of “ \approx ” in Section 4.2).`
We agree with the reviewer that our distance/discrepancy measures lack formal guarantees for exactly quantifying the representation forgetting. However, similar approximations have been employed in prior work, e.g., Guha et al. (2024), which used similar measures to analyze the catastrophic forgetting, and their empirical results were well-aligned with their theoretical insights. In our case as well, the empirical trends closely align with the theoretical predictions, supporting the practical relevance of our formulations.
---
**[R1-4]** `Figures 4 and 8 display multiple peaks, which contradicts the unique peak predicted by Proposition 1. Could this be evidence that the theoretical upper bound is not sufficiently tight?`
Thank you for pointing this out.
The reviewer is correct that the number of peaks differs between theory (Proposition 1) and practice (Figures 4 and 8), but we argue that the number of peaks is not a critical factor for understanding the overall trend. More importantly, both curves (theory and practice) exhibit a similar two-phase trend -- a forgetting phase and a saturation phase -- where the amount of forgetting increases significantly in the first phase and then fluctuates within a smaller range in the second. We will mention this in the revised manuscript.
Regarding the tightness of our upper bound, please refer to our response in [R1-2].
---
**[R1-5]** `How useful are the insights in this paper for improving the performance of continual learning?`
Our analysis reveals two key findings with direct implications for improving performance in continual learning: (1) representation forgetting tends to occur more rapidly and severely in deeper layers, and (2) increasing network width mitigates the degree and speed of forgetting. These insights suggest practical strategies such as allocating more regularization or memory resources to deeper layers, or designing architectures with wider representations in critical layers to slow down forgetting. | null | null | null | null | null | null |
A Recipe for Causal Graph Regression: Confounding Effects Revisited | Accept (poster) | Summary: This paper addresses the challenge of adapting causal graph learning (CGL) techniques from classification to regression tasks, introducing a framework called causal graph regression (CGR). The authors identify two key innovations: (1) an enhanced graph information bottleneck loss function that, unlike previous approaches, acknowledges the predictive power of confounding features rather than treating them as pure noise, and (2) a contrastive learning approach for causal intervention that can operate without discrete class labels. By combining these techniques, their method generates counterfactual graphs through random combinations of causal and confounding subgraphs, enabling the model to learn invariant representations. Extensive experiments on GOOD-ZINC and ReactionOOD benchmarks demonstrate performance improvements in out-of-distribution settings.
Claims And Evidence: Claim 1: Existing causal graph learning can not work well for graph regression tasks.
Support: 1. Citations. 2. Experimental results in Section 5. This claim is supported with evidence.
Claim 2: The reason that existing CGL methods don't work well for regression tasks stems from an assumption that confounding subgraphs contain strictly no predictive power.
Support: I don't find clear support for this claim. I personally suspect it is related to the optimization dynamics of the two tasks (classification vs. regression).
Methods And Evaluation Criteria: Methodologically, the paper extends causal graph learning to regression tasks by:
Rethinking how confounding effects should be handled in regression (acknowledging predictive power of confounding features rather than treating them as pure noise)
Developing a contrastive learning approach to replace label-dependent causal intervention techniques that don't transfer well to regression.
These two designs make sense, but the motivation for the second approach is not clearly stated. The authors thoroughly introduce the modeling of $I(C;Y)$ and $I(C;G)$. I wonder if there is any difference from those introduced in existing papers. If not, the authors should focus on the two mechanisms introduced in this paper.
For evaluation, the authors use appropriate benchmark datasets:
GOOD-ZINC: A standard graph regression dataset explicitly designed for OOD testing
ReactionOOD: Specialized datasets (Cycloaddition, E2&SN2, RDB7) with regression tasks for chemical reactions
The evaluation criteria are reasonable.
Theoretical Claims: There's no rigorous theoretical claims in this paper. I find some design choices in Section 4 (like selecting identity covariance matrix) are not well supported.
Experimental Designs Or Analyses: For evaluation, the authors use appropriate benchmark datasets:
GOOD-ZINC: A standard graph regression dataset explicitly designed for OOD testing
ReactionOOD: Specialized datasets (Cycloaddition, E2&SN2, RDB7) with regression tasks for chemical reactions
The evaluation criteria are reasonable.
Supplementary Material: I've gone through the supplementary materials. One suggestion is to revise the ablation study. Based on the current figure, it's not clear how each part affects the model performance.
Relation To Broader Scientific Literature: * Out-of-Distribution Generalization for Graphs: This paper discusses why previous CGL-based methods can't work well on regression tasks and provides a remedy.
* Molecular Property Prediction: Explaining the effectiveness of contrastive learning for molecular data.
Essential References Not Discussed: [1] is closely related but not discussed at all.
[1] Wu, Tailin, et al. "Graph information bottleneck." Advances in Neural Information Processing Systems 33 (2020): 20437-20448.
Other Strengths And Weaknesses: I think this paper suffers from two limitations. First, the scope is excessively narrow, focusing on a highly specific problem within graph regression. Second, method design lacks clear motivation and support, as the authors fail to adequately justify their methodological choices or connect them to established theoretical principles.
Other Comments Or Suggestions: Definition in Section 3: You are considering a set of graphs but not a single graph.
5.2. baselines and Setup -> Baselines
Questions For Authors: I notice that the proposed methods achieve much larger gains on GOOD datasets than ReactionOOD datasets. Do you have any ideas on the reason?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer iAdk for the useful feedback. Detailed responses are provided below. We kindly ask the reviewer to reconsider their score if the following clarifications resolve the concerns.
### Q1. Why are gains on GOOD-ZINC larger than on ReactionOOD?
GOOD-ZINC is large-scale and has diverse OOD testing samples, thus more suitable for causal discovery (which naturally requires more samples than ERM). In contrast, ReactionOOD datasets are smaller and the OOD samples are more specialized. Our method shows larger gains on GOOD-ZINC because our recipe more effectively handles diverse distributions, while baselines struggle with such complexity.
Details can be seen in the response to [Q2 & Claims And Evidence] from Reviewer jru1.
### W1. Overly narrow scope of causal graph regression
We respectfully disagree, and humbly refer the reviewer to our response to [Broader Sci. Literature] from Reviewer HaWt.
### W2. Method motivation
We humbly refer the reviewer to the response to [Methods] below.
### [Claims And Evidence] Claim 2 (restated below) lacks support
> Claim 2: The reason that existing CGL methods don't work well for regression tasks stems from an assumption that confounding subgraphs contain strictly no predictive power.
We respectfully clarify a misunderstanding regarding Claim 2. In our paper, we never claim that existing CGL methods fail in regression tasks due to the assumption that confounding subgraphs contain strictly no predictive power.
Rather, a similar statement serves as part of our **motivation** (instead of claim): prior methods (e.g., CAL [1]) often assume that confounding subgraphs are non-predictive, which does not hold in practice (both in classification and regression settings). We provide empirical evidence supporting this motivation in Section 5.5 (Lines 385–397, Figure 3 left), showing that completely ignoring the predictive signals of confounders can lead to performance degradation.
### [Methods]
> - The motivation for using contrastive learning (CL) in regression is unclear.
We respectfully clarify the motivation for using CL. Existing CGL methods (e.g., [1]) proposed to **align the representation** of intervened graphs (constructed by randomly pairing confounding subgraphs with target causal subgraphs [2]) through the **labels of causal subgraphs**, and due to the explicit usage of labels they cannot be directly applied to regression tasks.
From this perspective, CL is a perfect fit for causal graph regression, since the InfoNCE loss in CL aims at "perfect alignment" [3] without using labels, pulling positive sample representations (intervened graphs with the same causal subgraphs in our case) closer while pushing negative pairs (intervened graphs with distinct causal subgraphs) apart, which thus distinguishes causal effects between factual and counterfactual samples. This design enables models to implicitly capture causal signals without label supervision, consistent with recent advances in causal representation learning [4].
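To illustrate this pairing scheme, here is a minimal InfoNCE sketch (our own illustration, not the paper's code; the in-batch negative scheme and embedding shapes are assumptions). Row i of `positives` is an intervened graph sharing the causal subgraph of anchor i; all other rows act as negatives with distinct causal subgraphs:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    # Normalize embeddings so similarities are cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # pairwise similarities; positives on diagonal
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(a))
    # Pull each anchor toward its positive, push away from all other rows.
    return float(-log_softmax[idx, idx].mean())
```

Minimizing this loss aligns representations of graphs with the same causal subgraph without ever using labels, which is why it transfers to regression.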
> - Is the modeling of $I(C;Y), I(C;G)$ different from those in existing work.
The modeling of $I(C;Y), I(C;G)$ in this paper does differ.
- Prior works on causal graph learning derive GIB-based objectives under **classification settings**.
- In contrast, we provide a new formulation of the GIB loss under Gaussian assumptions **specific to graph regression**, allowing principled disentanglement of causal and confounding subgraphs under continuous targets.
### [Theoretical Claims] Assumption of identity covariance
We humbly refer the reviewer to our response to W2 of Reviewer HaWt.
### [Supp. Material] Ablation studies

We take your suggestion and re-perform the ablation studies.
Ablation 1 - Effectiveness Analysis: We evaluate the performance of our model variants across four OOD datasets. Each variant removes several proposed modules (the same setting as in Appendix A.5, Lines 597-627).
Due to varying dataset complexities, the contributions of GIB and CI differ across tasks. Nonetheless, the full model consistently achieves the best performance, indicating the two loss functions work synergistically to enhance OOD generalization. (More details can be found in the attached figure.)
Ablation 2 - Parameter Sensitivity Analysis: We humbly refer the reviewer to our response to [Experimental Designs] from Reviewer HaWt.
### [Reference]
Thank you for catching this oversight. We will definitely cite the reference when introducing GIB in Sec. 3.2 and acknowledge its influence.
---
[1] Causal attention for interpretable and generalizable graph classification. KDD, 2022.
[2] Debiasing graph neural networks via learning disentangled causal substructure. NeurIPS, 2022.
[3] Understanding contrastive representation learning through alignment and uniformity on the hypersphere. ICML, 2022.
[4] Robust causal graph representation learning against confounding effects. AAAI, 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and I think it has addressed most of my concerns.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your valuable suggestions and positive feedback, which will help improve the quality of our manuscript. | Summary: This paper investigates causal graph regression (CGR), extending causal graph learning (CGL) techniques, which have been successful in classification tasks, to the more challenging regression setting. The authors introduce a novel approach that adapts causal intervention techniques to regression through the use of contrastive learning. Their method aims to mitigate confounding effects and improve the generalizability of graph neural networks (GNNs) under out-of-distribution (OOD) scenarios. The main claims of the paper include:
1. Contrastive Learning for Causal Graph Regression – The introduction of contrastive learning as a means of handling confounders in graph-level regression tasks.
2. Generalization of Causal Interventions – An adaptation of classification-specific causal intervention techniques to the regression setting.
3. Extensive Empirical Validation on Graph OOD Benchmarks – Demonstrations of effectiveness of the proposed method through experiments on multiple OOD datasets.
## Update after rebuttal
Claims And Evidence: The paper provides empirical evidence through experiments on graph OOD benchmarks. The use of contrastive learning to reshape causal intervention in regression is an innovative approach. The results indicate that the proposed method performs well on several datasets, supporting the claim that contrastive learning is effective for CGR. However, the method does not perform well on the Cycloaddition and RDB7 datasets. The paper lacks sufficient explanation for these weaker results, which raises concerns about the robustness and generalizability of the approach across diverse datasets.
Methods And Evaluation Criteria: The proposed method is well-motivated and aligns with the challenges of causal graph regression. The evaluation is conducted on a range of benchmark datasets, which provides a solid empirical foundation
Theoretical Claims: The variational bound for the GIB objective seems fine to me.
Experimental Designs Or Analyses: The experiments are well designed. The analysis on the results on the ReactionOOD dataset can be improved.
Supplementary Material: I checked, and it provides more details about the datasets and experimental settings. It also contains two more results -- ablation studies and hyper-parameter analysis.
Relation To Broader Scientific Literature: This work builds on existing research in causal graph learning and graph neural networks, expanding causal intervention techniques beyond classification to regression tasks.
Essential References Not Discussed: No, all important related works are cited.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: Typo: Title of Section 5.2, and check line 393 "... loss L_{CI} Conversely ..."
Questions For Authors: Question 1: explain the fundamental difference between graph classification and regression, and why regression is so challenging.
Question 2: hint on why methods such as ERM, CORAL and DANN outperform your method in Table 2 and Table 3.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer jru1 for the thoughtful and encouraging feedback. We appreciate your recognition of our method’s motivation, design, and empirical support. Below we address each of your comments in detail.
### Q1. Explain the fundamental difference between graph classification and regression, and why regression is so challenging.
The core difference lies in the adaptation of causal discovery, as noted in Lines 057–060, where we state that existing methods "rely on **discrete label information** and cannot be adapted to regression."
In regression, the absence of discrete labels naturally makes confounder separation harder. For example, in molecular property prediction, continuous outcomes (e.g., solubility values) often arise from overlapping substructures, complicating causal identification.
### [Q2 & Claims And Evidence] Performance Gaps on ReactionOOD Datasets
We appreciate this observation. The relatively weaker performance is attributable to known challenges rather than methodological flaws:
* Our method stems from **more general and weaker assumptions**, which naturally implies a larger function space and higher sample-size requirements:
Unlike prior methods such as IRM or DIR, our method does not assume that spurious features are non-predictive. This allows us to model a broader function class under weaker assumptions, making our method applicable to a wider range of real-world distribution shifts. The trade-off is increased optimization difficulty under limited data or strong spurious signal, but this reflects a principled design choice, not a methodological weakness.
* No method dominates across all OOD benchmarks:
As shown in [1], it is a common phenomenon in OOD generalization that no approach consistently performs best on every dataset, due to varying distribution shifts and inductive biases. Despite this, our method is among the most stable across diverse settings and achieves the best results on ZINC (Table 1), the largest and most complex dataset.
* Spurious correlations affect ID performance:
Non-causal baseline methods (e.g., ERM, CORAL, and DANN) often perform better under ID or mild OOD due to exploiting spurious but predictive features. To handle OOD settings, our method instead recognizes those spurious features and removes them for better generalization, which may slightly reduce ID performance but improves robustness.
> For example, on Cycloaddition-ID (Total Atom Number, concept), CORAL achieves 4.10 vs. ours 5.74; but under OOD, CORAL drops to 5.74 while ours remains stable at 5.53 (lower is better).
* Causal and contrastive learning require more data:
Causal inference is statistically harder than correlation-based methods and generally requires more data to reduce variance [2]. Contrastive learning also benefits from large batch sizes and diverse representations. This explains why our method performs best on ZINC. On smaller datasets (e.g., RDB7), all causal methods degrade; however, our method remains stable and consistently outperforms CIGA (the best method of the listed causal intervention baselines) across all RDB7 settings.
We will modify the corresponding experimental analysis.
### [Typo] Correct typo in Section 5.2 title and line 393.
We appreciate your attention to detail. We will accordingly revise the manuscript to reflect your suggestion.
---
[1] Tajwar F et al. No true state-of-the-art? OOD detection methods are inconsistent across datasets. arXiv, 2021.
[2] Guo R et al. A survey of learning causality with data: Problems and methods. ACM Comput. Surv., 2020. | Summary: This paper proposes an improved causal graph regression method by an enhanced graph information bottleneck loss function and a contrastive learning loss from generated counterfactual graphs. Experiments on OOD datasets confirm its generality.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: There are no formal theoretical claims in this paper.
Experimental Designs Or Analyses: Yes, I checked the experiments. The results verify the effectiveness of the proposed method. The only issue is that there is no discussion on the effect of the hyperparameters $\alpha$ and $\beta$, which seem to be important hyperparameters in the method.
Supplementary Material: Yes I looked through all parts.
Relation To Broader Scientific Literature: Causal graph regression seems to be a relatively neglected domain. The authors also do not provide any important applications of the causal graph regression task. Therefore, I think the key contributions of the paper may not have a large influence to the broader scientific literature.
Essential References Not Discussed: I do not know any related works that are essential to this paper.
Other Strengths And Weaknesses: - The paper is well-written and easy to follow.
- The experimental results are good.
1. Could you please explain how to obtain the causal subgraph $C$ from a graph $G$, i.e., how to obtain the mask matrices $M_{edge}$ and $M_{node}$? Even though the concrete approach might be discussed in previous work, I think it would be better to have a brief introduction in the background to make the paper more consistent.
2. There are several assumptions in Section 4.2. However, it remains unknown whether these assumptions are reasonable in practice.
3. The OOD generality is claimed as a core contribution of the proposed approach. However, there is no obvious superiority of the OOD performance compared with the ID performance.
Other Comments Or Suggestions: "5.2. baselines and Setup " -> "5.2. Baselines and Setup"
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer HaWt for the thorough and constructive feedback. We have carefully addressed the raised concerns and suggestions. We kindly ask the reviewer to reconsider their score if the following points resolve the reservations with our work, or to provide more pointers for us to address.
### W1. How to obtain the causal subgraph $C$ from graph $G$, i.e., how to obtain the mask matrices $M_{edge}$ and $M_{node}$.
We appreciate the reviewer's question. The construction of the causal subgraph is introduced in the main text (Eq. (1)), and the details of generating the mask matrices are in Appendix B (Lines 647–692).
In the revision, we will briefly summarize the mask generation process in the main text.
### W2. The assumptions in Section 4.2 remain unverified. Are they reasonable?
Thank you for raising this point. We go through and clarify each assumption below:
* Assumption 1 (Gaussian assumption of $p(C \mid G)$ and $q(C)$):
We follow the variational information bottleneck literature [4], modeling both as multivariate Gaussians to enable tractable KL estimation. This is widely adopted in similar works [5,6].
* Assumption 2 (Covariance $\Sigma_\phi(G) = I$):
Setting the covariance to identity simplifies optimization and is theoretically justified in [4, Appendix A], which shows that any full-rank covariance can be whitened without loss of generality.
* Assumption 3 (Gaussian posterior):
For $P(Y \mid H_c)=\mathcal{N}(Y; \mu_{(c)}, \sigma^2_{(c)})$, we use a fixed variance for stability, similar to Phase I assumption in section 1 of [7]. This also allows a closed-form expression for mutual information estimation (e.g., [6], Sec.2.1, Proposition 1).
We will clarify these assumptions more explicitly and emphasize their empirical validity (the strong OOD generalization results in Section 5) in the revision.
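For concreteness, under Assumptions 1 and 2 the KL term in the variational bound reduces to a closed form, since with identity covariances the trace and log-determinant terms cancel. A minimal sketch (our own illustration, not the paper's code):

```python
import numpy as np

def kl_identity_gaussians(mu_p, mu_q):
    # KL( N(mu_p, I) || N(mu_q, I) ) = 0.5 * ||mu_p - mu_q||^2
    # (the covariance-dependent terms of the general Gaussian KL cancel
    #  when both covariances are the identity, cf. Assumption 2).
    diff = np.asarray(mu_p, dtype=float) - np.asarray(mu_q, dtype=float)
    return 0.5 * float(np.sum(diff ** 2))
```

This is why whitening the covariance (Assumption 2) makes the KL estimation tractable during training.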
### W3. There is no obvious superiority of the OOD performance compared with the ID performance.
We respectfully clarify that the goal of CGL methods might be misunderstood; we do not aim for OOD performance to surpass ID performance, which is unrealistic.
We focus our attention on **the robustness under OOD settings**, demonstrating the strong generalization ability of our method. For more analysis of OOD performance, we humbly refer the reviewer to the response to Q2 from Reviewer jru1.
### [Experimental Designs] The effect of the hyperparameters $\alpha$ and $\beta$

We thank the reviewer for this suggestion. Some discussion related to this point has already been included in Appendix A.5. To better illustrate, we re-ran the experiments and will update the sensitivity analysis for $\alpha$, $\beta$, and $\lambda$. (More details can be found in the attached figure.)
The results show that the choice of $\alpha$ has no clear effect, while the choices of $\beta$ and $\lambda$ significantly impact OOD performance.
### [Broader Sci. Literature] Causal graph regression (CGR) seems to be a relatively neglected domain, and the authors also do not provide any important applications of CGR
We appreciate this comment and will revise the introduction to better highlight the motivating applications and to frame the contribution as **improving generalization in graph-based prediction tasks through causal mechanisms**.
Furthermore, we beg to clarify
- CGR is not a niche topic but a core component of the broader field of **causal graph learning** (CGL). As noted in Section 1 (lines 36-46), CGL has emerged as a key paradigm for building robust models. Our work leverages causal interventions to improve OOD generalization in graph regression, which is essential for many real-world scenarios with distributional shifts.
- We have discussed the application of CGR in Section 1 (lines 48-51). Moreover, CGR is relevant to tasks such as molecular property prediction [1], traffic flow forecasting [2], and credit scoring [3].
---
[1] Rollins Z A. et al. MolPROP: Molecular Property prediction with multimodal language and graph fusion. J. Cheminform., 2024.
[2] Li G. et al. Multistep traffic forecasting by dynamic graph convolution: Interpretations of real-time spatial correlations. Transp. Res. Part C Emerg. Technol., 2021.
[3] Ma F. et al. Utilizing Reinforcement Learning and Causal Graph Networks to Address the Intricate Dynamics in Financial Risk Prediction. Int. J. Inf. Technol. Syst. Approach, 2024.
[4] Chechik G. et al. Information bottleneck for Gaussian variables. Adv. Neural Inf. Process. Syst., 2003.
[5] Kingma D P, Welling M. Auto-encoding variational Bayes. arXiv, 2013.
[6] Yu S. et al. Cauchy-Schwarz Divergence Information Bottleneck for Regression. arXiv, 2024.
[7] Nix D. et al. Learning local error bars for nonlinear regression. Adv. Neural Inf. Process. Syst., 1994.
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed response. My concerns on assumptions, OOD performance, and hyperparameter settings are addressed.
> The construction of the causal subgraph is introduced in the main text (Eq. (1)), and the details of generating the mask matrices are in Appendix B (Lines 647–692).
Thank you for pointing out Appendix B. I am still confused on how MLP_edge and MLP_node are trained in Eq. (19) - (20), and which is the GNN-based encoder $f$. In the description of Sec. 3.1, it seems that $C$ and $S$ can be obtained directly and deterministically from $G$; while in Appendix B, it seems that we need additional neural networks $f$ and MLPs, and it is unknown where these neural networks come from. And I think Appendix B is more like a detailed explanation of the proposed framework instead of Sec. 3.1.
> CGR is not a niche topic but a core component of the broader field of causal graph learning (CGL). As noted in Section 1 (lines 36-46), CGL has emerged as a key paradigm for building robust models. Our work leverages causal interventions to improve OOD generalization in graph regression, which is essential for many real-world scenarios with distributional shifts.
We have discussed the application of CGR in Section 1 (lines 48-51). Moreover, CGR is relevant to tasks such as molecular property prediction [1], traffic flow forecasting [2], and credit scoring [3].
Yes, I agree that CGL is crucial to being a key paradigm for building robust models. The key problem is whether CGR is significant, as it is also mentioned that "previous CGL studies focus on classification settings." As this paper seems to be one of the first papers emphasizing the regression task, I think it would be helpful to provide some stronger supports for the importance of the CGR task. Therefore, I suggest mentioning these applications [1-3] in the revised version.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the constructive feedback and thoughtful questions. We are particularly grateful that the reviewer acknowledged our efforts; as noted in your response, the concerns on assumptions, OOD performance, and hyperparameter settings have been addressed, and we appreciate your positive recognition.
### Q1.
> Thank you for pointing out Appendix B. I am still confused on how MLP_edge and MLP_node are trained in Eq. (19) - (20), and which is the GNN-based encoder f. In the description of Sec. 3.1, it seems that C and S can be obtained directly and deterministically from G; while in Appendix B, it seems that we need additional neural networks f and MLPs, and it is unknown where these neural networks come from. And I think Appendix B is more like a detailed explanation of the proposed framework instead of Sec. 3.1.
Thank you for pointing out this ambiguity. Below we clarify the confusion regarding **(1)** how to get $C$ and $S$ from $G$; **(2)** how MLP_edge/node are trained.
(1) how to get C and S from G
* **The function $f(\cdot)$ always** refers to a GNN encoder throughout this paper, which takes the input graph $G=(A, X)$ and produces a graph-level embedding $H_g = f(A, X)$.
* Based on $H_g$, **two MLPs** (MLP_edge and MLP_node) are used to generate soft attention masks $M_{node}$ and $M_{edge}$, which assign importance scores to nodes and edges. We then follow Eq. (1) to compute $C = (M_{edge} \odot A, M_{node} \odot X)$. Therefore, the subgraph $C$ cannot be obtained directly from $G$ (similarly for $S$); the masks $M_{edge}, M_{node}$ are returned by MLP_edge, MLP_node in Eqs. (19) - (20).
(2) how MLP_edge/node are trained
* After obtaining $C$, we continue the forward pass, encoding $C$ ($S$ doesn't explicitly appear in the loss) and passing the representation of $C$ to the final loss (line 246) $L_c(G,C,Y)=-I(C;Y)+\alpha I(C;G)$. Since the loss involves the representation of $C$, which is computed via the two MLPs, the parameters of **MLP_edge and MLP_node are thus trained end-to-end via backpropagation**.
For your reference, we provide a road map of the related content in our submission.
- The specific procedure for **generating $C$ and $S$** is summarized in Sec. 3.1
- The technical details of the **whole prediction process** above are provided in Appendix B (Eqs. 18-20).
- Our full pipeline, including **the proposed loss and overall framework** built upon this foundation, is illustrated in Figure 2 and described in Sec. 4.1.
We will revise Sec. 3.1 (on how $C$ and $S$ are generated) to clearly state that the masks are learnable and produced by trainable components. Thank you again for pointing this out.
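To make the pipeline in (1)-(2) concrete, here is a minimal numpy mock-up (our own hypothetical sketch: the encoder weights `W_f` and the one-layer mask heads `W_node`/`W_edge` stand in for the actual GNN encoder $f$ and MLP_edge/MLP_node, whose architectures are not specified above). It only illustrates how the soft masks yield $C = (M_{edge} \odot A, M_{node} \odot X)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy graph G = (A, X): 5 nodes, 4 features.
n, d = 5, 4
A = (rng.random((n, n)) < 0.5).astype(float)
np.fill_diagonal(A, 0.0)
X = rng.normal(size=(n, d))

# Stand-in GNN encoder f: one round of mean-neighbor aggregation
# followed by a linear map (hypothetical weights W_f).
W_f = rng.normal(size=(d, d))
deg = np.maximum(A.sum(1, keepdims=True), 1.0)
H = ((A @ X) / deg + X) @ W_f          # node embeddings
H_g = H.mean(0)                        # graph-level embedding

# Hypothetical MLP_node / MLP_edge heads producing soft masks in (0, 1).
W_node, W_edge = rng.normal(size=(d,)), rng.normal(size=(2 * d,))
M_node = sigmoid(H @ W_node)                               # shape (n,)
pair = np.concatenate([np.repeat(H, n, 0), np.tile(H, (n, 1))], 1)
M_edge = sigmoid(pair @ W_edge).reshape(n, n)              # shape (n, n)

# Causal subgraph C = (M_edge ⊙ A, M_node ⊙ X), as in Eq. (1);
# the complement masks (1 - M) would give the shortcut part S.
A_c = M_edge * A
X_c = M_node[:, None] * X
print(A_c.shape, X_c.shape)  # (5, 5) (5, 4)
```

Since the loss depends on the representation of $C$, and $C$ depends on the masks, gradients flow back through `W_node`/`W_edge`, which is the end-to-end training described above.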
### Q2
> Yes, I agree that CGL is crucial to being a key paradigm for building robust models. The key problem is whether CGR is significant, as it is also mentioned that "previous CGL studies focus on classification settings." As this paper seems to be one of the first papers emphasizing the regression task, I think it would be helpful to provide some stronger supports for the importance of the CGR task. Therefore, I suggest mentioning these applications [1-3] in the revised version.
Thank you for your valuable suggestion. We will highlight its significance more clearly and include the mentioned applications in the revised version.
Moreover, we would like to gently emphasize that CGR is an important yet underexplored task, partially due to its inherent modeling difficulty. Some molecular property prediction tasks are natively regression problems but are subsequently reformulated as classification problems to ease solution (e.g., Tox21 [8]). We will also add this discussion to the next revision.
---
[8] Huang R et al. Tox21Challenge to build predictive models of nuclear receptor and stress response pathways as mediated by exposure to environmental chemicals and drugs. Front. Environ. Sci., 2016. | null | null | null | null | null | null | null | null |
LoRA-One: One-Step Full Gradient Could Suffice for Fine-Tuning Large Language Models, Provably and Efficiently | Accept (oral) | Summary: This paper presents a theoretical analysis of Low-Rank Adaptation (LoRA) for efficient fine-tuning of large language models. The main contributions are:
- Theoretical analysis showing that LoRA's gradient updates align with the singular subspace of the full fine-tuning gradient.
- Introduction of a spectral initialization strategy and preconditioned gradient descent to improve convergence.
- Proof of linear convergence rates for both linear and nonlinear models under certain conditions.
- Empirical validation showing improved performance over vanilla LoRA and its variants on NLP benchmarks.
## update after rebuttal
The authors provide a strong theoretical foundation with comprehensive proofs, which is crucial for the community. Therefore, I decide to raise my score to 4.
Claims And Evidence: The claims made in the paper are generally supported by clear and convincing evidence:
- The theoretical alignment between LoRA updates and the singular subspace of the full gradient is demonstrated through mathematical proofs.
- The effectiveness of the proposed spectral initialization and preconditioning methods is supported by both theoretical analysis and empirical results.
- The linear convergence rates are proven under specific assumptions for both linear and nonlinear models.
- Experimental results on NLP benchmarks show consistent improvements over baseline methods.
Methods And Evaluation Criteria: The proposed methods, including spectral initialization and preconditioned gradient descent, make sense for improving parameter-efficient fine-tuning of large models. The evaluation criteria, using standard NLP benchmarks like GLUE tasks, are appropriate for assessing the effectiveness of fine-tuning methods.
Theoretical Claims: I checked the proofs in the main paper.
- The alignment analysis in Section 3.1 shows that LoRA updates align with the singular subspace of the full gradient, which appears correct.
- The convergence proofs for both linear (Theorem 3.6) and nonlinear (Theorem 4.3) models follow standard optimization proof techniques and seem valid.
- The analysis of spectral initialization and preconditioning methods is mathematically rigorous.
Experimental Designs Or Analyses: The experimental designs are sound:
- The comparison against vanilla LoRA and other variants is comprehensive.
- The ablation studies help understand the contribution of different components.
- The results on multiple NLP tasks demonstrate the general effectiveness of the proposed method.
Supplementary Material: I reviewed the "Experimental Settings and Additional Results" part in the supplementary material.
Relation To Broader Scientific Literature: This work relates to several key areas in machine learning:
- Parameter-efficient fine-tuning methods like LoRA, Adapter, and Prefix-Tuning
- Matrix factorization and low-rank approximation theory
- Optimization for deep neural networks
- Theoretical understanding of fine-tuning large language models
Essential References Not Discussed: The paper adequately covers relevant literature.
Other Strengths And Weaknesses: Strengths:
- Strong theoretical foundation with comprehensive proofs
- Practical algorithm with empirical validation
- Clear improvement over existing LoRA variants
Weakness:
- The experimental validation could be expanded to more diverse tasks.
Other Comments Or Suggestions: None.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We greatly appreciate reviewer's efforts and constructive feedback.
---
>**Q1** More diverse experimental tasks
We extend our method to fine-tune a T5 model on a subset of the SuperGLUE [1] datasets, which is more challenging than GLUE and widely used in fine-tuning papers [2-5]. We use full fine-tuning, LoRA, and LoRA-One with rank 8 for comparison. For a fair comparison, we search for the optimal stepsize for each method over a large grid ranging from 1e-2 to 1e-5. The other settings are the same as in Appendix F.2, except that the number of epochs for CB is set to 4 since it is a very small dataset. The final results are provided in the table below.
| Data | BoolQ | CB | COPA | RTE | WIC | Avg. |
| -------- | -------- | -------- | --- | --- | --- | --- |
| Full FT | $70.89_{\pm 0.02}$ | $89.29_{\pm 0.00}$ | $63.67_{\pm 0.94}$ | $75.33_{\pm 0.34}$ | $66.35_{\pm 0.32}$ | $73.11$ |
| LoRA | $70.01_{\pm 0.03}$ | $85.12_{\pm 0.84}$ | $61.67_{\pm 0.47}$ | $70.88_{\pm 0.17}$ | $65.78_{\pm 0.37}$ | $70.69$ |
| Ours | $70.21_{\pm 0.09}$ | $88.10_{\pm 0.84}$ | $65.33_{\pm 1.24}$ | $74.61_{\pm 0.61}$ | $68.29_{\pm 0.49}$ | $73.31$ |
We also extend our method to an image classification task on a Vision Transformer (ViT [6]). We fine-tune ViT on CIFAR10 and CIFAR100 [7], searching for the optimal stepsize for each method to ensure a fair comparison. Since the convergence of LoRA is slow on CIFAR100, we also run two epochs for comparison. The results are provided in the table below.
| Data (#epoch) | CIFAR10 (1) | CIFAR100 (1) | CIFAR100 (2) |
|---|---|---|---|
|Full FT| $98.48_{\pm 0.03}$ | $89.31_{\pm 0.18}$ | $91.73_{\pm 0.08}$ |
|LoRA| $97.91_{\pm 0.08}$ | $76.46_{\pm 0.22}$ | $80.23_{\pm 0.12}$ |
|Ours | $98.50_{\pm 0.04}$ | $86.68_{\pm 0.44}$ | $88.83_{\pm 0.11}$ |
In both reasoning and vision classification tasks, we observe that LoRA-One consistently outperforms LoRA and achieves comparable, or even better, performance than full fine-tuning.
---
Reference:
[1] Wang, A., Pruksachatkun, Y., Nangia, N., Singh, A., Michael, J., Hill, F., Levy, O. and Bowman, S., 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in neural information processing systems, 32.
[2] Zhang, Q., Chen, M., Bukharin, A., Karampatziakis, N., He, P., Cheng, Y., Chen, W. and Zhao, T., 2023. AdaLoRA: Adaptive budget allocation for parameter-efficient fine-tuning. The Eleventh International Conference on Learning Representations.
[3] Meng, F., Wang, Z. and Zhang, M., 2024. PiSSA: Principal singular values and singular vectors adaptation of large language models. Advances in Neural Information Processing Systems, 37, pp.121038-121072.
[4] Kopiczko, D.J., Blankevoort, T. and Asano, Y.M., 2024. VeRA: Vector-based Random Matrix Adaptation. The Twelfth International Conference on Learning Representations.
[5] Zhao, Z., Shen, T., Zhu, D., Li, Z., Su, J., Wang, X., Kuang, K. and Wu, F., 2025. Merging LoRAs like playing LEGO: Pushing the modularity of lora to extremes through rank-wise clustering. The Thirteenth International Conference on Learning Representations.
[6] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S. and Uszkoreit, J., 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
[7] Krizhevsky, A. and Hinton, G., 2009. Learning multiple layers of features from tiny images.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. My concerns have been addressed and I decide to keep my score.
---
Reply to Comment 1.1.1:
Comment: Much appreciated for your positive support.
This paper studies the training dynamics of LoRA and identifies alignment and pre-conditioning as key factors for accelerating convergence, demonstrating a **“strong theoretical foundation with comprehensive proofs.”** This theory guides practice via the LoRA-One algorithm, which **“achieves clear improvement over existing LoRA variants”** with extensive experiments, as recognized by your comments.
Accordingly, we would sincerely appreciate your stronger support if possible. We’re happy to address any further concerns. | Summary: This paper investigates methods to enhance the performance of Low-Rank Adaptation (LoRA). The authors make two key discoveries: (i) LoRA tends to align with a specific singular subspace in a single step, and (ii) the use of preconditioners significantly improves convergence in high-rank scenarios. Building on these insights, the authors provide rigorous theoretical guarantees for the convergence of the proposed preconditioned gradient descent algorithm. These contributions advance the understanding of LoRA and offer practical improvements for its application in high-rank settings.
This paper is theoretically rigorous and presents highly interesting insights that are valuable to the community. The analysis is well-structured, and the findings contribute meaningfully to the understanding of low-rank fine-tuning and optimization dynamics. While the theoretical focus is on gradient descent and preconditioned gradient descent, the work lays a strong foundation for further exploration. Overall, it is a compelling contribution that will likely inspire follow-up research and discussions in the field.
Claims And Evidence: 1. The title of this paper, "One-Step Full Gradient Suffices for Low-Rank Fine-Tuning," suggests that a single step of full gradient computation is sufficient for effective low-rank fine-tuning. However, the experimental results indicate that the performance of one-step fine-tuning is not only suboptimal but also significantly inferior to that of LoRA. This raises questions about the appropriateness of the title, as it does not accurately reflect the empirical findings. The authors should consider revising the title to better align with the actual results.
2. The model presented in Eq. (2) appears overly simplistic and may lack practical applicability, particularly in the context of modern deep learning architectures. While the theoretical results derived for this model are insightful, it remains unclear whether they extend to more complex and widely used architectures, such as transformers.
3. The authors assume that the input $X$ follows an isotropic centered sub-Gaussian distribution, which ensures that $\tilde{X}$ in Eq. (2) is approximately orthogonal. Under this assumption, the linear case in Eq. (2) becomes nearly equivalent to minimizing $||W-W^b||$, thereby reducing LoRA to a standard low-rank matrix factorization problem of the form $||AB-\Delta||$. In this simplified regime, one-step full gradient descent is indeed sufficient for convergence. However, this assumption on $\tilde{X}$ is critical to the theoretical framework and results presented in the paper. It would be important for the authors to discuss the validity and practicality of this assumption in real-world scenarios, particularly when dealing with more complex data distributions or architectures, such as transformers, where such conditions may not hold. This would help clarify the scope and limitations of their theoretical findings.
Methods And Evaluation Criteria: Yes
Theoretical Claims: The theoretical results are well-established and sufficient for the simplified model presented in Eq. (2).
Experimental Designs Or Analyses: The experiments presented in the paper are sufficient to support the theoretical claims within the scope of the simplified model and assumptions considered.
Supplementary Material: The proof of the main Theorem 3.2.
Relation To Broader Scientific Literature: Closely related.
Essential References Not Discussed: Not available
Other Strengths And Weaknesses: 1. This paper primarily focuses on analyzing the properties and convergence of gradient descent (GD) and preconditioned gradient descent for low-rank fine-tuning. However, in practice, optimizers like Adam, which incorporate adaptive learning rates and momentum, are widely used for training parameters. This creates a notable mismatch between the theoretical analysis and practical implementation. Specifically, the first and second moment estimates in Adam significantly alter the learning dynamics of the parameters, leading to behavior that may differ substantially from the theoretical results presented in this paper. To bridge this gap, the authors should consider extending their analysis to include adaptive optimization methods like Adam or provide empirical evidence demonstrating how their theoretical insights translate to such practical settings.
2. The proposed preconditioned gradient descent method appears to be identical to the algorithm presented in Zhang & Pilanci, 2024, and bears strong similarities to the LoRA_pro method introduced by Wang et al., 2024. The primary distinction lies in the spectral initialization proposed in this work. Given this overlap, the authors should place greater emphasis on discussing the novelty and significance of their initialization scheme within this learning regime. Specifically, they should clarify how the spectral initialization contributes to improved convergence, stability, or performance compared to existing methods, and provide empirical or theoretical evidence to support its importance. This would help better highlight the unique contribution of their work and its potential impact on the field.
Other Comments Or Suggestions: Please refer to the previous section.
Questions For Authors: Please refer to the previous section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We greatly thank the reviewer's efforts and constructive comments.
>**Q1** *Appropriateness of title*
**A1** The title is derived from our theory: under proper initialization from one step of the full gradient, we can recover $\Delta$ to a large extent **at initialization**, see Prop. 3.3 (linear) and Lem. C.5 (nonlinear). This claim is also supported by our toy experiments (Fig. 4) and small-scale datasets (CoLA & MRPC): LoRA-One achieves performance similar to LoRA at a lower time cost. We provide a table (displayed at the anonymous link: https://imgur.com/a/i42zqCm) for comparison.
We are aware that one step of the full gradient is not sufficient for large-scale datasets on LLMs. Hence we continue to run LoRA-One for one epoch. We provide a table (at the anonymous link: https://imgur.com/a/qiB59c2) for comparison.
We will update the title according to the reviewer's feedback.
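To make the "recovery at initialization" point above concrete, here is a minimal numpy sketch of a spectral initialization from the one-step full gradient on a toy linear task (the $\sqrt{S}$ split of the top-$r$ SVD is one natural choice in the spirit of Prop. 3.3; the exact scaling and sign constants used by LoRA-One may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r, n = 32, 16, 4, 2048

# Toy linear fine-tuning task: labels generated by W0 + Delta.
X = rng.normal(size=(n, d))
W0 = rng.normal(size=(d, k)) / np.sqrt(d)                   # pretrained weights
Delta = rng.normal(size=(d, r)) @ rng.normal(size=(r, k))   # low-rank shift

Y = X @ (W0 + Delta)

# One-step full gradient of 1/(2n) ||X W - Y||_F^2 at W = W0.
G = X.T @ (X @ W0 - Y) / n        # ≈ -Delta for isotropic sub-Gaussian X

# Spectral initialization: top-r SVD of -G, split across the two factors.
U, S, Vt = np.linalg.svd(-G, full_matrices=False)
A0 = U[:, :r] * np.sqrt(S[:r])
B0 = np.sqrt(S[:r])[:, None] * Vt[:r]

# A0 B0 equals the best rank-r approximation of -G by construction ...
best_r = (U[:, :r] * S[:r]) @ Vt[:r]
print(np.allclose(A0 @ B0, best_r))  # True

# ... and already recovers most of Delta before any LoRA training step.
rel_err = np.linalg.norm(A0 @ B0 - Delta) / np.linalg.norm(Delta)
print(f"relative error at initialization: {rel_err:.3f}")
```

With enough downstream samples, the relative error at initialization is already small, matching the claim that most of $\Delta$ is recovered before iterative training begins.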
---
>**Q2** *Simplistic model*
**A2** Extension to complex architecture, e.g., transformer, can be found in A1 of our responses to Reviewer cf1y.
We also remark that prior works on the analysis of LoRA are generally limited to single-vector linear regression [1,2], matrix factorization [3,4], or strict assumptions [5] (rank-one shift, rank-one LoRA, frozen $B$). Unlike them, our model is more general, and our settings (e.g., algorithms, rank) are standard in practice.
---
>**Q3** *Data assumption*
**A3** The sub-Gaussian data assumption (e.g., bounded data) is commonly used in the theoretical literature [5,6]. Even for the linear model, our setting differs from the low-rank factorization problem because we allow $\tilde{X}$ to be random and of arbitrary shape (including rectangular and asymmetric matrices), which is more flexible.
Our analysis can be extended to more complex data, such as structured data. For example, for the linear model under sub-Gaussian data with covariance $\Sigma$, we can track the summary statistic $||\Sigma AB - \Delta||_F$ instead of $||AB - \Delta||_F$ via reparametrization. Likewise, the nonlinear model can be extended to Gaussian mixture data by incorporating Stein's lemma into the Hermite analysis. We leave these extensions as future work.
According to the reviewer's feedback, we will add a detailed discussion on the validity of our theory to practice and clearly clarify its scope and limitations in the updated version.
---
>**Q4** *GD vs Adam*
**A4** We understand the reviewer's concern about the gap between the theoretical (GD) and practical (Adam) optimizers. Before studying Adam, understanding GD is the natural first step; GD is commonly used for theoretical analysis [1-4,6], including for LoRA, while Adam is used in practice. Our theory serves as a conceptual motivation to design the practical algorithm, and we achieve significant empirical improvements (see Tables 2 & 8 in the paper and the table in A1). Extending our theory to adaptive methods will be an interesting topic.
---
>**Q5** *Distinction with prior methods*
**A5** In our submission, we compared with gradient-alignment-based algorithms in Appendix E. Following the reviewer's suggestion, we add a detailed comparison with Zhang & Pilanci 2024 [2] and LoRA-Pro here and will include it in the updated version.
Comparing to [2]:
- We identify ill-conditioning as a convergence bottleneck in LoRA (Thm. 3.5) and demonstrate that pre-conditioning resolves it (supported by Thm. 3.6 & 4.3, Fig. 3, Tables 10–11). [2] employs pre-conditioning (originating from matrix sensing) for stability but overlooks ill-conditioning.
- Our method significantly outperforms theirs (Table 2), highlighting the importance of spectral initialization.
Comparing to LoRA pro:
- We only need the **exact first full-batch** gradient for initialization, whereas LoRA-Pro needs to **approximate** the **stochastic batch** gradient of full fine-tuning at **every training step**, using more matrix operations.
- LoRA-Pro adds $8dr^2+4kr^2+24.5 r^3$ **additional** FLOPs for each $A\in \mathbb{R}^{d\times r}, B\in \mathbb{R}^{r\times k}$.
- We have **much more efficient** memory usage; e.g., for GSM8K, we provide a table (at the anonymous link: https://imgur.com/a/yjZYUzC) for comparison.
There is a series of empirical works, e.g., LoRA-Pro and LoRA-GA, that introduce gradient information; we are the first to provide a theoretical foundation for these heuristic methods and hope to inspire more theoretical work.
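To illustrate why pre-conditioning removes the condition-number dependence, here is a toy numpy comparison (our own sketch, not the paper's exact algorithm: it uses a damped ScaledGD-style preconditioner and a spectral-type initialization from a noisy one-step gradient; all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 20, 20, 2
steps, lam = 200, 1e-6

# Ill-conditioned rank-r shift: condition number kappa = 100.
P, _ = np.linalg.qr(rng.normal(size=(d, r)))
Q, _ = np.linalg.qr(rng.normal(size=(k, r)))
Delta = P @ np.diag([10.0, 0.1]) @ Q.T

# Spectral-type initialization from a (slightly noisy) one-step gradient.
G = -Delta + 2e-4 * rng.normal(size=(d, k))
U, S, Vt = np.linalg.svd(-G)
A0 = U[:, :r] * np.sqrt(S[:r])
B0 = np.sqrt(S[:r])[:, None] * Vt[:r]

def run(precond, eta):
    A, B = A0.copy(), B0.copy()
    for _ in range(steps):
        R = A @ B - Delta
        gA, gB = R @ B.T, A.T @ R
        if precond:  # damped ScaledGD-style preconditioner
            gA = gA @ np.linalg.inv(B @ B.T + lam * np.eye(r))
            gB = np.linalg.inv(A.T @ A + lam * np.eye(r)) @ gB
        A, B = A - eta * gA, B - eta * gB
    return np.linalg.norm(A @ B - Delta, "fro")

# Vanilla GD's stable stepsize is capped by the top singular value (10),
# so the weak direction (0.1) converges slowly; preconditioned GD is
# insensitive to the condition number.
err_gd, err_pre = run(False, 0.02), run(True, 0.5)
print(f"GD: {err_gd:.2e}, preconditioned: {err_pre:.2e}")
```

In this toy run, the preconditioned variant drives the residual to numerical noise within the same step budget, while vanilla GD is still limited by the weak singular direction.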
---
Reference:
[1] Lora+. ICML24.
[2] Riemannian preconditioned lora. ICML24.
[3] On the crucial role of initialization for matrix factorization. ICLR25.
[4] Compressible dynamics in deep overparameterized low-rank learning & adaptation. ICML24.
[5] Gradient dynamics for low-rank fine-tuning beyond kernels.
[6] Implicit balancing and regularization: Generalization and convergence guarantees for overparameterized asymmetric matrix sensing. COLT23. | Summary: The paper provides a theoretical analysis of LoRA fine-tuning by showing that a single full gradient step naturally aligns the LoRA updates with the top singular subspace of the full gradient, and by introducing a spectral initialization strategy, it can effectively recover the downstream low-rank target before iterative training even begins; moreover, by incorporating preconditioned gradient descent, the method eliminates dependence on the condition number of the feature shift, thereby accelerating convergence, and these insights culminate in the development of the LoRA-One algorithm, which achieves empirical improvements over traditional LoRA methods on various benchmarks.
Claims And Evidence: This work makes two claims, one theoretical and one empirical.
Theoretically, this work analyzes LoRA fine-tuning applied to a 1-layer neural network without and with a non-linearity. The theory is detailed and rigorous, although it is unclear how generalizable these insights are to the practical setup of LoRA fine-tuning deep transformers.
Empirically, this work proposes the LoRA-One method, which is inspired by the 1-layer analysis, and applies it to various fine-tuning tasks (on deep transformers). The results outperform LoRA-type methods in terms of generalization performance on several benchmarks.
Methods And Evaluation Criteria: The evaluation criteria seems to be sound.
Theoretical Claims: The proofs justifying the theoretical claims seem to be sound.
Experimental Designs Or Analyses: The experimental design seems to be sound. The experiments are done thoroughly, and the gains over other competing methods are consistent.
Supplementary Material: The proofs seem to be sound.
Relation To Broader Scientific Literature: The prior work seem to be appropriately referenced.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: I'm not very convinced of the theoretical merits of this work: the theorem statements are not the cleanest one can hope for, and it is unclear whether their insights generalize to deeper architectures.
However, the experimental demonstrations are thorough and convincing. In the end, the idea of initializing the LoRA modules according to the subspaces defined by the full gradient seems like a very nice insight.
Other Comments Or Suggestions: .
Questions For Authors: .
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer’s efforts and the positive support.
---
>**Q1** *The theory is detailed and rigorous, although it is unclear how generalizable these insights are to the practical setup of LoRA fine-tuning deep transformers.*
**A1** We are greatly thankful to the reviewer for pointing out the potential extension to transformer-based models. The attention module and depth will be very challenging. Currently, we believe our techniques can be extended to a single linear attention module, as considered in a variety of in-context learning theory papers [1], i.e.,
$$
\hat{y} = \left\langle \frac{\mathbf{W}\mathbf{X}^\top\mathbf{y}}{N}, \mathbf{x}_{\tt query}\right\rangle,
$$
where $\mathbf{W}$ is the attention matrix and $(\mathbf{X},\mathbf{y},\mathbf{x}_{\tt query})$ are data. By a reparametrization, we can admit the following matrix inner product form
$$
\hat{y} = \left\langle \frac{\mathbf{X}^\top\mathbf{y}\mathbf{x}_{\tt query}^\top}{N}, \mathbf{W}\right\rangle.
$$
Treating $\frac{\mathbf{X}^\top\mathbf{y}\mathbf{x}_{\tt query}^\top}{N}$ as a random measurement matrix, we can characterize the above model as a special variant of the matrix sensing model [2], with LoRA as a factorization approach. With this characterization, we believe our analysis also applies. In the updated version, we will add a discussion on the possible extension to the linear attention module.
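As a quick numerical sanity check of the rewrite above (an illustration we add here, not from the original rebuttal), the step is a Frobenius-pairing/trace identity; note that with the measurement matrix written as $\mathbf{X}^\top\mathbf{y}\mathbf{x}_{\tt query}^\top$, the pairing below is taken with $\mathbf{W}^\top$, a harmless transpose convention:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 8, 5
X = rng.normal(size=(N, d))
y = rng.normal(size=N)
x_query = rng.normal(size=d)
W = rng.normal(size=(d, d))

# Linear-attention prediction: y_hat = < W X^T y / N, x_query >.
y_hat = x_query @ (W @ (X.T @ y)) / N

# Trace identity a^T W b = <a b^T, W>_F behind the rewrite; pairing the
# measurement matrix X^T y x_query^T with W^T (transpose convention).
M = np.outer(X.T @ y, x_query) / N
y_hat2 = np.sum(M * W.T)
print(np.isclose(y_hat, y_hat2))  # True
```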
The original attention module in transformers is more sophisticated due to the softmax activations and multiple tunable weight matrices. First, handling the nonlinearity introduced by softmax requires additional theoretical tools, such as [3,4]. Second, the simultaneous training of all attention weights (Q, K, V) is quite challenging; we might seek tools from dynamical systems to handle the coupling between parameters.
We believe the theory can be extended to deep transformers once the single attention module is thoroughly understood.
From an empirical perspective, in our experiments we choose the LLaMA 2 7B and T5 base models, which consist of multiple attention layers, and we achieve promising results (Tables 2 & 8) on various NLP tasks using our theory-grounded method, which empirically justifies the insights from our theory.
---
>**Q2** *I'm not very convinced of the theoretical merits of this work: the theorem statements are not the cleanest one can hope for*
**A2** Our theory demonstrates the subspace alignment between $(A_t, B_t)$ and the one-step full gradient (see Section 3.1) and motivates us to design a particular initialization that achieves this subspace alignment from the start, thereby accelerating convergence. Following the reviewer's feedback, we will use the simplified version for better illustration. For example, Theorem 3.2 (alignment between $A_t$ and $G^{\natural}$) can be simplified as follows.
Theorem 3.2 [Simplified]. Under standard random initialization for LoRA, after training for $t^*=\mathcal{O}(\ln d)$ steps, we have the subspace alignment between $G^{\natural}$ and $A_{t^*}$:
$$
|| U^\top_{r^*,\perp}( G^{\natural}) U_{r^*}\left( A_{t^*}\right)||_{op} \mbox{~is small}, \quad w.h.p.
$$
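The simplified statement can be illustrated with a tiny numpy experiment (our own toy with identity downstream data, not the paper's exact setting): run LoRA-style GD from small random initialization on $\tfrac12\|AB-\Delta\|_F^2$ and measure the alignment metric against the one-step full gradient $G=-\Delta$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 20, 20, 2
eta, steps = 0.1, 3000

# Rank-r target shift with controlled singular values (3 and 1).
P, _ = np.linalg.qr(rng.normal(size=(d, r)))
Q, _ = np.linalg.qr(rng.normal(size=(k, r)))
Delta = P @ np.diag([3.0, 1.0]) @ Q.T

# One-step full gradient of 0.5||W - Delta||_F^2 at W = 0 is -Delta,
# so U_r(G) spans the same subspace as U_r(Delta).
U, _, _ = np.linalg.svd(Delta)
U_perp = U[:, r:]

# LoRA-style gradient descent from small random initialization.
A = 1e-3 * rng.normal(size=(d, r))
B = 1e-3 * rng.normal(size=(r, k))
for _ in range(steps):
    R = A @ B - Delta
    A, B = A - eta * R @ B.T, B - eta * A.T @ R

# Alignment metric || U_{r,perp}(G)^T U_r(A_t) ||_op from the theorem.
Ua, _, _ = np.linalg.svd(A)
align = np.linalg.norm(U_perp.T @ Ua[:, :r], ord=2)
loss = 0.5 * np.linalg.norm(A @ B - Delta) ** 2
print(f"loss={loss:.2e}, alignment={align:.2e}")
```

In this toy run, both quantities drop to numerical noise: the learned $A$ lives in the top singular subspace of the one-step full gradient, as the simplified theorem states.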
---
Reference:
[1] How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression? ICLR24.
[2] Implicit balancing and regularization: Generalization and convergence guarantees for overparameterized asymmetric matrix sensing. COLT23.
[3] Training dynamics of multi-head softmax attention for in-context learning: Emergence, convergence, and optimality. COLT24.
[4] On the convergence of encoder-only shallow transformers. NeurIPS23. | Summary: This paper focuses on the learning dynamics of Low-Rank Adaptation (LoRA) and proposes improvements in initialization and gradient preconditioning.
The authors analyze both linear and nonlinear matrix factorization cases, where the objective is to minimize $\|\tilde{X}(W+AB)-\tilde{Y}\|_F^2$ using gradient descent. In the linear case, they show that the factors $(A, B)$ in vanilla LoRA align with the top singular vectors of the first iteration's full gradient. To accelerate this process, they propose spectral initialization based on the full gradient, and prove linear convergence under such initialization. To further address ill-conditioned targets, they introduce preconditioning, leading to a condition-number-agnostic convergence rate. They also extend these results to nonlinear settings. The paper provides insights into why introducing full-gradient information benefits fine-tuning.
For practical applications, the authors propose LoRA-One, a fine-tuning method incorporating spectral initialization and preconditioned GD, achieving performance gains on the GLUE benchmark compared to LoRA+, P-LoRA, and Galore.
Claims And Evidence: The paper presents both theoretical analysis and empirical validation, including:
Alignment of LoRA updates with the first-step gradient (Fig. 2), GD trajectories of LoRA-init vs. spectral-init (Fig. 4), and practical performance improvements on GLUE benchmarks.
Methods And Evaluation Criteria: Characterizing LoRA as a single-layer matrix factorization problem seems somewhat unrealistic. While LoRA is often viewed as a low-rank approximation of the full fine-tuning update, its optimization behaves differently in practice (e.g., https://arxiv.org/abs/2410.21228 shows that LoRA updates are nearly orthogonal to full fine-tuning updates). This raises questions about whether a unique $\Delta$ exists in such cases.
Moreover, I am skeptical about introducing full-gradient information into parameter-efficient fine-tuning (PEFT). If computing the full gradient is feasible, then full fine-tuning—given proper hyperparameter tuning and regularizations—usually serves as a strong benchmark. From my perspective, the most effective approaches in practice still tend to be either full fine-tuning with careful optimization or vanilla LoRA.
Theoretical Claims: The theoretical analysis is conducted under a single-layer matrix factorization setting, focusing on its optimization and generalization. The theorems are mostly correct and intuitive.
Experimental Designs Or Analyses: The authors validate their findings through numerical experiments on matrix factorization and further demonstrate the effectiveness of their method on modern NLP tasks, as outlined in the **Summary**.
Supplementary Material: The appendix mainly contains detailed proofs and extra visualizations.
Relation To Broader Scientific Literature: This paper may be in the interest of the general area of improving the efficiency of large models.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: See **Summary** and **Methods And Evaluation Criteria**.
Other Comments Or Suggestions: A minor typo: Line 16, "algin" → "align".
Questions For Authors: I would like to see whether LoRA-One actually leads to better alignment with the first-step gradients on practical NLP tasks.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer's efforts and constructive comments on this work. We have fixed the typo "algin" and address your concerns below.
>**Q1** Simplistic model
**A1** Our formulation **cannot be regarded as a matrix factorization (MF) problem**, which is strictly defined as $\min_{A, B} ||AB-\Delta||_F^2$.
For linear models, our problem reduces to MF only when the downstream data matrix $\tilde{X}$ is an identity matrix (i.e., square and symmetric). However, our setting is more general: we assume $\tilde{X}$ to be random sub-Gaussian (e.g., bounded entries) and allow it to have arbitrary dimensions, including rectangular and asymmetric shapes. This makes our framework more flexible than standard MF.
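For concreteness, the reduction can be sketched as follows (our own illustrative notation for the noiseless linear case, with $W_0$ denoting the pre-trained weight; the paper's exact objective may differ):

```latex
% Labels generated by the target function W_0 + \Delta (noiseless case):
\min_{A,B}\ \big\|\tilde{X}(W_0 + AB) - \tilde{X}(W_0 + \Delta)\big\|_F^2
  \;=\; \min_{A,B}\ \big\|\tilde{X}(AB - \Delta)\big\|_F^2 ,
% which coincides with the MF objective  \min_{A,B} \|AB - \Delta\|_F^2
% only when \tilde{X} is the identity matrix.
```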
Our setting for nonlinear models is structurally different from MF due to the nonlinearity of ReLU.
Some prior works use MF to study LoRA, but the practical validity of their theory is questionable. In a recent paper [1], the authors analyze the dynamics of MF under Nyström initialization and apply it to LoRA. However, such initialization in MF requires knowing the complete ground truth $\Delta$ as prior knowledge, which is unavailable in real-world fine-tuning, so there is a large gap between their MF theory and practice. MF is also considered in [2] and applied to LoRA; however, convergence and generalization guarantees are missing.
Our work theoretically studies LoRA under realistic assumptions, demonstrating the subspace alignment between $(A_t, B_t)$ and the one-step full gradient, and then develops a theory-grounded algorithm that achieves promising performance in practice.
---
>**Q2** LoRA and full FT optimize differently (cf. the cited paper). Existence of $\Delta$.
**A2** We thank the reviewer for pointing out this paper; its empirical findings are very interesting and inspiring. In our updated version, we will cite the paper and add a discussion.
Our theory follows the classical data generation process in machine learning for generalization guarantees, i.e., the label of (downstream) data is generated by a linear/nonlinear target function, corrupted by some noise. Under this setting, $\Delta$ represents a part of the target function and is therefore unique. Then different fine-tuning strategies (e.g., full FT, LoRA) can still lead to different solutions that achieve similar performance in practice, as suggested by the aforementioned arXiv paper. This phenomenon is common in deep learning.
Note that, if we focus solely on optimization guarantees, our theory remains valid regardless of whether $\Delta$ is unique or not.
---
>**Q3** Skeptical about introducing full-gradient info into PEFT
**A3** We understand the reviewer's concern and would like to provide more details to make it clear.
All experiments are conducted on a single A100 40GB GPU. Due to memory limitations, full fine-tuning is not feasible. However, our implementation employs a memory-efficient approach [3,4], allowing the computation of the first full gradient. Notably, this approach cannot extend to full fine-tuning.
It is true that using one-step full-gradient information incurs an additional (but marginal) time cost, while significantly improving accuracy. Here we provide a table (displayed at the anonymous link: https://imgur.com/a/qiB59c2) comparing vanilla LoRA and LoRA-One on LLaMA 2 7B. It takes an additional 10 minutes, but accuracy improves by more than 5%.
In fact, using additional gradient information from full fine-tuning is a growing trend and is empirically considered in [3,5] as well.
---
>**Q4** Does LoRA-One lead to better alignment on NLP tasks?
**A4** Based on Algo. 1, LoRA-One achieves perfect alignment with the first-step gradient at initialization, whereas LoRA needs a long time (e.g., one epoch) to achieve even weak alignment. For example, fine-tuning on MRPC via LoRA with r=8 for one epoch and measuring the principal angle defined in Thms. 3.1 & 3.2, we obtain:
| | Avg. | Min | Max |
| - | - | - | - |
| A | 0.399 | 0.199 | 0.854 |
| B | 0.267 | 0.156 | 0.393 |
Thanks to this alignment, LoRA-One surpasses the first-step GD accuracy from the start and continues to improve (displayed at the anonymous link https://imgur.com/a/BG5DZca), whereas LoRA needs a long time to reach the first-step GD accuracy (Table 2). Also, LoRA-GA's singular subspace mismatches the first-step gradient, which is a potential reason why we consistently outperform it on all datasets even without preconditioning (Fig. 3, Tables 10 & 11).
---
We hope these clarifications help provide a better understanding of this work.
---
Reference:
[1] On the crucial role of initialization for matrix factorization. ICLR25.
[2] Compressible dynamics in deep overparameterized low-rank learning & adaptation. ICML24.
[3] Lora-GA: Low-rank adaptation with gradient approximation. NeurIPS24.
[4] Full parameter fine-tuning for large language models with limited resources. ACL24.
[5] LoRA-Pro: Are Low-Rank Adapters Properly Optimized? ICLR25.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. I appreciate the authors' efforts in both the paper and the response. I also apologize for the misstatement regarding the authors' focus on learning dynamics in linear/nonlinear regression problems under coefficient shift. I am glad to learn that incorporating the full gradient introduces a negligible time cost (Q3) and that the authors have verified the enhanced alignment brought by LoRA-One (Q4).
However, my main concern remains whether fine-tuning large models can truly be interpreted as a regression problem and whether refining the vanilla LoRA method is still a pressing issue. Since LoRA was introduced four years ago, thousands of papers have claimed to improve upon it in an elegant and theoretically superior manner. But from my perspective, many of these modifications do not seem to withstand the test of time. Of course, this perspective may be highly biased.
Overall, this paper is well-organized, the theoretical analysis is solid and the authors provide concrete experimental results. In light of the authors' clarifications, I have raised my score to 3.
---
Reply to Comment 1.1.1:
Comment: We deeply appreciate your support and are very happy to see you have a better understanding of our work.
We would like to make a few more keynotes:
- Regression is the first step toward theoretically understanding fine-tuning beyond linear models. Extensions to classification based on [1], or to next-token prediction (which is closer to LLMs) based on [2], require more effort under different settings.
- We deeply agree with your concerns about the validity of the huge number of modifications to LoRA. They use different benchmarks and experimental settings, so sometimes they may only work in a narrow setting. Our work can clarify misunderstandings of some heuristic algorithms and improve performance in practice, which is the spirit and value of theory in this case (with required but acceptable simplifications).
Reference:
[1] Collins, L., Hassani, H., Soltanolkotabi, M., Mokhtari, A. and Shakkottai, S. Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks. ICML 2024.
[2] Thrampoulidis, C. Implicit Optimization Bias of Next-token Prediction in Linear Models. NeurIPS 2024. | null | null | null | null | null | null |
Online Conformal Prediction via Online Optimization | Accept (poster) | Summary: The authors propose an algorithm for online conformal prediction that achieves both long-run deterministic coverage and conditional coverage under well-behaved stochastic data. The idea is to train a predictive model of the quantile on the sequence of data. Different results are obtained under different assumptions on model well-specification and the data-generating distribution. The approach is shown to reduce prediction set size and cumulative loss.
Claims And Evidence: Yes, the developed theory is convincing and the results are clearly presented.
Methods And Evaluation Criteria: Yes, the proposed algorithm based on estimation of the NC scores is thoroughly evaluated. In particular, the algorithm is shown to achieve better prediction set size and cumulative loss. However, I couldn’t find any long-run coverage results in the tables in the main text.
Theoretical Claims: I did not check the proofs of the theorems.
Experimental Designs Or Analyses: I’ve reviewed the experiments in the main text, and they are sound; however, I could not find any results regarding adversarial coverage.
Supplementary Material: No
Relation To Broader Scientific Literature: The results are well-connected to existing literature and the Appendix A does a good job at providing an overview of existing results in online calibration and stochastic optimization.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper is original in that it proposes a new algorithm for online conformal prediction, achieving both long-run deterministic coverage and conditional coverage under well-behaved stochastic data. The core idea of the algorithm is to train a predictive model of the quantile based on the data sequence. This approach is novel, and, to the best of my knowledge, the asymptotics of online coverage have only been explored in i.i.d. settings for standard ACI [Online Conformal Prediction with Decaying Step Sizes] or localized ACI [Localized Adaptive Risk Control]. I don’t see any weaknesses in the existing approach, other than the fact that the current experiments lack results comparing long-run coverage.
## update after rebuttal: My score remains positive and unchanged after the rebuttal.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their appreciation of our work.
**However, I couldn’t find any long-run coverage results in the tables in the main text.**
The long-run coverage for all the methods in Table 1 is at least 0.87 - we will make this clearer in the final version. We have also attached plots showing the long-run coverage rates on the preregistered ERCOT data at this link [https://drive.google.com/file/d/1AT1hbP3ohYowIt0_Hy209yadYvZjl9cJ/view?usp=sharing], where we see that all methods converge rather quickly. We have also attached plots showing the long-run coverage tradeoff for LQT in terms of the learning rate, for ERCOT data.
**I don’t see any weaknesses in the existing approach, other than the fact that the current experiments lack results comparing long-run coverage.**
Please see the comment above.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for the additional experiments! I think plots like the one attached should be included, perhaps in the appendix. Unfortunately, tables don’t effectively showcase long-term coverage. I confirm that this looks like a good paper to me! | Summary: The paper proposes an online conformal prediction method that integrates stochastic optimization to improve coverage guarantees. It claims existing methods use bang-bang control and lack time-conditional guarantees. The proposed method supposedly achieves better theoretical guarantees and outperforms baselines in 15 datasets, including a pre-registered electricity demand forecasting task.
Claims And Evidence: The paper may inflate its novelty, rely on strong assumptions, and adopt a relaxed **batch** online setting. Moreover, it misuses a lot of basic conformal inference terminology and misinterprets existing works.
1. The claim that existing methods resemble bang-bang control is misleading.
Actually, existing online conformal inference methods are variants of online gradient descent (OGD)[1], instead of a "2 step on–off controller that switches abruptly between two states". This is a fundamental misinterpretation of current online conformal inference methods.
2. Online setting vs batched online setting.
From the abstract, the paper focuses on modeling data dependency in online conformal prediction. However, in the methodology, it shifts to a batched online setting, which is an obvious relaxation of the pure online setting.
3. In Eq. (1), the authors claim the ACI guarantee
$\left|\frac{1}{T} \sum_{t=1}^{T} \text{err}_t - \alpha\right| = o(1)$ can be obtained by trivially setting $-\infty, \infty$ thresholds. This is unclear, as the target error rate is $\alpha$, not $0$.
4. Intertwined theoretical contributions
While the title of the paper is 'Online Conformal Prediction,' the proposed methods are an intertwined combination of conformal inference and asymptotic methods under strong model assumptions. Notably, conformal prediction is unique in its finite-sample guarantees and distribution-free nature. It is difficult to discern the main purpose of this paper.
5. Impractical assumptions: well-specified models and a conformal predictor linear in its parameters.
The theoretical guarantees assume well-specified models and, moreover, a linear conformal predictor. This is highly impractical and further evidences the paper's overstatement of its contributions.
[1] Ramalingam, Ramya, Shayan Kiyani, and Aaron Roth. "The Relationship between No-Regret Learning and Online Conformal Prediction." arXiv preprint arXiv:2502.10947 (2025).
Methods And Evaluation Criteria: See above.
Theoretical Claims: I did not check the correctness of the proofs because of the above issues.
Experimental Designs Or Analyses: 1. Since the paper's main claimed novelty is the modeling of data dependency and auto-regressive cases, the authors should compare with the baseline in [2], which handles time series with general dependency as well as the auto-regressive case.
However, the paper does not include it among the baselines.
2. Lack of an essential 'sets vs. time step' figure.
The paper contains no figures visualizing the prediction sets over time for either its method or the baselines, which are essential in an online setting.
[2] Zaffran, Margaux, et al. "Adaptive conformal predictions for time series." International Conference on Machine Learning. PMLR, 2022.
Supplementary Material: Due to the lack of essential figures for online prediction sets (as well as their comparison with baselines) in the experimental results, I looked through the entire paper, including the supplementary material, but did not find them.
Additionally, I read through the related work section and did not find a detailed discussion of the most closely related conformal paper that explicitly addresses the auto-regressive case.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: Most of the related works are mentioned, but the discussion lacks depth and contains fundamental misunderstandings.
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: There are many misinterpretations of conformal methods and ambiguities in the presentation. For example,
1. "The first algorithm to propose this approach, ACI, constructs confidence sets using a single parameter that governs their level of conservativeness.''
In fact, in ACI, the choice of the parameter $\gamma$ creates a trade-off between adaptability and stability. A larger $\gamma$ induces greater volatility in the realized coverage. However, "conservativeness" in conformal prediction refers to the size of the prediction intervals, which is a distinct concept.
2. In addition to the technical errors throughout the paper, the grammar is also cumbersome, with ambiguous statements. For example,
"so that we do not rely on (likely invalid) modeling assumptions, but **which can in fact do the “right thing,”** adapting to underlying structure when it exists."
{$(X_t, Y_t)$}$_t$
3. The paper overuses $q_t$ to denote both the threshold and the conformal predictor, which may cause confusion.
Questions For Authors: Since the main claim of the paper has a major flaw, along with inaccurate theoretical framing, I don't think requesting additional experimental results would be helpful.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments. We first provide a few general remarks.
Our work provides stronger guarantees than standard online conformal works, by offering not just the standard adversarial marginal guarantee, but also a stronger conditional one when the data is non-adversarial. Crucially, our algorithm is adaptive when the process is non-adversarial. Adaptive and optimistic algorithms have been effective solutions in several areas of machine learning, such as online convex optimization and reinforcement learning.
**Claims & Evidence**
1. This is not a misinterpretation of online conformal methods when we take scalar quantile tracker (SQT, Example 3.1) updates to be controller actions. The SQT system, with $\eta=1$ and $S_t = 0.1$ for all $t$ for simplicity, behaves precisely like a bang-bang controller with on signal = +0.9, off signal = -0.1, and set point = 0.1. Namely, if the state ($q_t$) is below the set point (0.1), the controller will always output the on signal (+0.9), and it will output the off signal (-0.1) otherwise.
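To make the analogy concrete, here is a minimal simulation of the SQT system described above (our own illustrative code and function name, assuming the standard quantile-tracker update $q_{t+1} = q_t + \eta(\mathrm{err}_t - \alpha)$ with $\mathrm{err}_t = \mathbf{1}\{S_t > q_t\}$):

```python
def sqt_step(q, s, eta=1.0, alpha=0.1):
    """One scalar-quantile-tracker update: q <- q + eta * (err - alpha)."""
    err = 1.0 if s > q else 0.0   # miscoverage indicator err_t
    return q + eta * (err - alpha)

# With eta = 1 and constant scores S_t = 0.1: starting below the set point
# (0.1), the step is the "on" signal (+0.9); once above it, every step is
# the "off" signal (-0.1) -- exactly the bang-bang behavior described.
q, trace = 0.0, []
for _ in range(4):
    q = sqt_step(q, s=0.1)
    trace.append(round(q, 1))
print(trace)  # [0.9, 0.8, 0.7, 0.6]
```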
2. The batch version of our algorithm is fully compatible with the standard online conformal prediction setting. In the batched algorithm, the conformal predictor’s parameter is updated after every m (the batch size) steps, but *the prediction set at each time t is always generated immediately after the covariate X_t is observed.*
3. We state that one can trivially obtain the guarantee “by *alternating* between $q_t = \infty$ and $q_t = -\infty$.” The thresholds should not always be $\infty$ or $-\infty$ as this would indeed lead to error rates of 0 or 1, respectively.
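As a sanity check (illustrative code, not the paper's): alternating so that $q_t = -\infty$ on a fraction $\alpha$ of rounds (empty set, $\mathrm{err}_t = 1$) and $q_t = \infty$ otherwise (full set, $\mathrm{err}_t = 0$) trivially attains average error $\alpha$ without any learning:

```python
import math

alpha, T = 0.1, 1000
errs = []
for t in range(T):
    # alternate: empty set (q = -inf) once every 10 rounds, full set otherwise
    q_t = -math.inf if t % 10 == 0 else math.inf
    errs.append(1.0 if q_t == -math.inf else 0.0)  # err_t = 1{Y_t not covered}

avg_err = sum(errs) / T
print(avg_err)  # 0.1: the long-run error rate matches alpha exactly
```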
4. The main purpose of this paper is to show that we can develop online conformal algorithms that enjoy strong guarantees for stochastic processes *while maintaining adversarial guarantees*. The only asymptotic part of the paper is Theorem 5.1, and this is in the analysis of the algorithm. Theorems 4.1, 5.4, and 6.1 are finite-sample guarantees for our algorithm.
5. Our theoretical guarantees extend beyond well-specified models – we devote Section 6 to the mis-specified setting and preserve the adversarial coverage guarantees with no such assumptions. Our linear model assumption represents a generalization over methods such as ACI, which only considers scalars, and our experimental results across 17 datasets, along with our pre-registered experiment, suggest this generalization leads to significant performance improvements.
**Experimental Designs Or Analyses**
1. We have adapted the code provided by the authors of [2] to compare with our method and baselines on the preregistered ERCOT data. The algorithm provided in [2], AgACI, comes with several aggregation strategies, and we provide results at this link [https://drive.google.com/file/d/1AT1hbP3ohYowIt0_Hy209yadYvZjl9cJ/view?usp=sharing].
These all achieve marginal coverage, but the set sizes and quantile losses are all larger than every method appearing in Table 1. We also point out that AgACI is not proven to achieve long-run coverage and does not provide theoretical guarantees.
2. We have attached these figures for our main pre-registered dataset and several methods at the link above (Figure 1 contains essentially this figure, we also attached analogous figures for the ACI and PID(theta) baselines for comparison).
**Due to the lack of essential figures ... but did not find them.**
Please see the comment above.
**Additionally, I read through the related work section ... auto-regressive case.**
Thank you for the suggestion, we will add a discussion of [2] to the final version.
**Other Comments Or Suggestions**
1. The parameter that we refer to here is the threshold, not the learning rate, which is instead a hyperparameter.
2. In spite of the reviewer’s comments, we have not identified any technical errors. The reviewer, as far as we can tell, also does not seem to identify technical errors in our theoretical or experimental results. While we are sure there are typos that we have missed, it seems unreasonable to ask for more (on the technical side) than correct proofs and pre-registered real-world experiments.
In Section 2, we define “right thing” formally– always achieving the adversarial guarantee (1) while also achieving (2) when possible. We will clarify this statement in the final version.
We would be happy to clarify if anything about the notation $\\{(X_t,Y_t)\\}_t$ is unclear.
3. We have been careful about dropping the argument to the conformal predictor when clear from context. If there is any instance that caused confusion, we would be happy to clarify.
**Since the main claim of the paper has a major flaw, along with an inaccurate theoretical framing, I don't think requesting additional experiment results would be helpful.**
Despite the reviewer’s comments, we have not identified any “major flaw” or “inaccurate theoretical framing”. Perhaps the reviewer can elaborate on what these refer to.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my questions regarding the addition of necessary result figures and baselines. After these adaptations, the paper is more persuasive in terms of empirical validity and efficiency.
On the other hand, I believe the presentation still requires careful revision, particularly in the discussion of current online conformal prediction methods:
- A more respectful treatment of current online conformal prediction methods, in particular acknowledging that they are online optimization methods (e.g., online gradient descent). This perspective is supported by several recent conformal prediction papers (e.g., [1]–[2]).
Regarding the presentation of the theoretical results, the logical flow between the various theorems is unclear. Different theorems rely on different assumptions (e.g., correct model specification, linear parameters) and offer different types of guarantees (e.g., asymptotic vs. finite-sample). It would greatly benefit the paper if the authors could clearly highlight the most general assumptions under which they obtain theoretical guarantees—whether asymptotic or finite-sample.
[1] Ramalingam, Ramya, Shayan Kiyani, and Aaron Roth. "The Relationship between No-Regret Learning and Online Conformal Prediction." arXiv preprint arXiv:2502.10947 (2025).
[2] Angelopoulos, Anastasios N., Rina Foygel Barber, and Stephen Bates. "Online conformal prediction with decaying step sizes." arXiv preprint arXiv:2402.01139 (2024). | Summary: This paper introduces a conformal inference algorithm in the online setting that achieves both long-term coverage guarantees in adversarial settings, as well as expected coverage (conditioned on the past sequence) that converges to the desired coverage rate in certain stochastic settings. Many existing online conformal prediction algorithms apply some form of online gradient descent on the $(1-\alpha)$-quantile loss, on either a (single-dimensional) prediction of the quantile or non-conformity threshold for the current time-step. The algorithm presented here generalizes this approach by instead performing OGD on a richer parameter class $\Theta$, and then computing the non-conformity threshold $q_t$ as a deterministic function (called the conformal predictor) of the current iterate $\theta_t$.
This algorithm inherits the long-term coverage guarantees of prior work by bounding the outcomes of the conformal predictors (analogous to directly bounding the predicted quantiles / thresholds). In stochastic settings, they define the notion of a process being well-specified if a single parameter in $\Theta$ can correctly predict the quantile that minimizes expected quantile loss across all rounds. In such (or similar) settings, with appropriately chosen learning rates the algorithm converges to this parameter, thus recovering an expected coverage that converges to $1 - \alpha$, with finite-sample guarantees if a batched version of the algorithm is run. When $\Theta$ and the set of conformal predictors are chosen appropriately, well-specified processes include more than just IID settings (such as autoregressive processes).
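The generalization can be sketched in a few lines (our own illustrative code, assuming a linear conformal predictor $q_t = \langle \theta_t, \phi_t \rangle$ and the standard pinball-loss OGD step; with a constant scalar feature it reduces to the one-dimensional threshold update):

```python
def lqt_step(theta, phi, s, eta, alpha):
    """One OGD step on the pinball loss through a linear conformal predictor."""
    q = sum(a * b for a, b in zip(theta, phi))  # threshold q_t = <theta, phi_t>
    err = 1.0 if s > q else 0.0                 # miscoverage indicator err_t
    # a pinball-loss (sub)gradient w.r.t. theta is -(err - alpha) * phi_t
    return [a + eta * (err - alpha) * b for a, b in zip(theta, phi)]

# With phi_t = [1.0] this is exactly the scalar quantile tracker.
theta = [0.0]
for s in [0.5, 0.2, 0.2]:
    theta = lqt_step(theta, [1.0], s, eta=0.1, alpha=0.1)
print(round(theta[0], 2))  # 0.27
```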
The authors run experiments with several datasets, showing that their algorithm performs on par with existing methods in the adversarial setting, while achieving better performance in terms of quantile loss and set size when there is more of a linear correlation between the predicted parameter and the appropriate quantile.
Claims And Evidence: The claims are all proved and there is sufficient evidence (in particular the experiment registered by the authors themselves) that their algorithm shows improved performance over prior ones.
Methods And Evaluation Criteria: Yes, it seems fine.
Theoretical Claims: I only briefly looked at most of the proofs, apart from Theorem 4.1 and Lemma 5.2 I looked at completely which both look sound.
Experimental Designs Or Analyses: The experiments are fairly extensive. However, I couldn’t find the long-term coverage rate reported for any of the experiments. A plot showing the convergence rate to the desired coverage across the different methods would have been interesting. Also since LQT with a decaying learning rate is optimized for the stochastic setting, I would have liked to see the trade-off it has when it comes to long-term coverage.
Supplementary Material: Skimmed parts of it for the proofs.
Relation To Broader Scientific Literature: This paper has close connections with much of the work using OGD with the pinball loss to obtain conformal-type coverage guarantees in adversarial settings, starting with the paper that introduced ACI (Gibbs et al. 2021) and pointed out the connection (though not directly through a no-regret guarantee), followed by subsequent work that built on it with slight modifications (Bhatnagar et al. 2023; Gibbs & Candès 2022). This work follows a very similar idea; the main departure, as I understand it, is noting that in stochastic settings where a single parameter optimizes expected coverage (well-specification), convergence to this parameter is possible using an appropriately chosen sequence of learning rates, and that when a richer class of parameters than just the set of possible non-conformity thresholds is used, there are more settings in which this well-specification property holds.
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: Strengths:
The most interesting aspect of this paper for me is the idea of using higher-dimensional iterates to map to non-conformity thresholds, and performing gradient descent on these iterates rather than on thresholds directly. Turning the online conformal-prediction problem from one where we predict a one-dimensional threshold into one where we predict a higher-dimensional vector that may encode richer information but has a direct connection to the thresholds + coverage, through the conformal predictors $q_t$ at each time-step is clever. It generalizes the prior algorithms in this area in a meaningful way, and opens the possibility of using other kinds of conformal predictors to expand the collection of settings where this well-specification property holds.
Weaknesses:
There seems to be a tradeoff in terms of the stochastic and adversarial coverage guarantees when it comes to setting both the batch size and the set of learning rates, so the optimal rates of both cannot be achieved simultaneously. This is understandable and seems like a necessary tradeoff but in terms of application what approach would be taken to decide which algorithm to use? In unknown settings / sequences, it seems like one would want to set the parameters so that we achieve non-trivial long-term coverage (assuming adversarial), but this would have worse convergence rates than the other methods if you also wanted non-trivial guarantees of the stochastic variety.
Other Comments Or Suggestions: As the authors mention, to achieve optimal convergence rates for coverage in the stochastic setting, we set the learning rate in a way that achieves a vacuous bound on long-term coverage in the adversarial sense (and vice versa). Did you think about the settings where you achieve these guarantees at the same time (though not at the optimal rates)?
Questions For Authors: 1. Section 5 mentions that the finite sample rates are achieved with a particular batch size m and I didn’t get into the details of this proof – could the authors clarify what the relationship between m and the constant C’ in Theorem 5.4 is? Is m a function of T?
2. How was the learning rate set for the decaying version of LQT in the experiments? Did it decrease at a O(1/t) rate or was it slower to also achieve a non-trivial long-term coverage convergence rate?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **I couldn’t find the long-term coverage rate reported for any of the experiments...I would have liked to see the trade-off it has when it comes to long-term coverage.**
At this link [https://drive.google.com/file/d/1AT1hbP3ohYowIt0_Hy209yadYvZjl9cJ/view?usp=sharing], we have attached additional experiments to describe both (1) long-run coverage across several methods, and (2) the trade-off in terms of the learning rate for LQT. For (1), we observe that all methods converge rather quickly, and for (2), we observe that indeed there is a slower convergence for more quickly decaying learning rates.
In general, we do not report the long-run coverage because all methods achieve it. We also point out that Table 6 in the appendix shows that in all but 1 instance, the proposed algorithms (and baselines) achieve at least 0.87 long-run coverage when the target is set to 0.9. The baselines achieved tight marginal coverage since we tuned their hyperparameters on the test set.
**There seems to be a tradeoff in terms of the stochastic and adversarial coverage guarantees when it comes to setting both the batch size and the set of learning rates, so the optimal rates of both cannot be achieved simultaneously. This is understandable and seems like a necessary tradeoff but in terms of application what approach would be taken to decide which algorithm to use? In unknown settings / sequences, it seems like one would want to set the parameters so that we achieve non-trivial long-term coverage (assuming adversarial), but this would have worse convergence rates than the other methods if you also wanted non-trivial guarantees of the stochastic variety.**
We thank the reviewer for their appreciation of our work.
We offer three answers to the tradeoff question:
- The batch size and learning rate decay are hyperparameters that can be tuned on validation data, as we have done with the magnitude of the learning rate for our experiments. We sought the best performance on the quantile loss, given a constraint on the long-run coverage, but these heuristics can be changed to fit the practitioner’s goals.
- Domain-specific knowledge, if available, can help guide this choice. If the well-specified assumption is reasonable, or the errors happen to be linearly correlated often, then faster rates of decay are preferable to speed up convergence, while constant learning rates will provide the tightest coverage when the data is fully adversarial.
- Choosing intermediate rates of decay (i.e. $\Theta(t^{-0.6})$ as in the paper) allows us to attain both long-run coverage and consistency under well-specification, albeit at slower rates. Across our experiments, we have found this (and small batch sizes) to be a safe choice. We think these hyperparameter questions are important, so we have been careful to empirically validate this intuition through the preregistration.
**As the authors mention, to achieve optimal convergence rates for coverage in the stochastic setting, we set the learning rate in a way that achieves a vacuous bound on long-term coverage in the adversarial sense (and vice versa). Did you think about the settings where you achieve these guarantees at the same time (though not at the optimal rates)?**
Our results apply to a range of learning rates $\Theta(t^{-c})$ for $c \in [0,1]$, not just the $\Theta(1/t)$ learning rates attaining optimal convergence rates in the stochastic setting and $\Theta(1)$ attaining optimal long-run coverage rates in the adversarial setting. Choosing $c \in (0,1)$ thus allows us to attain both long-run coverage and consistency under well-specification asymptotically, albeit at non-optimal rates.
**Section 5 mentions that the finite sample rates are achieved with a particular batch size m and I didn’t get into the details of this proof – could the authors clarify what the relationship between m and the constant C’ in Theorem 5.4 is? Is m a function of T?**
The batch size m is not a function of T. It should be chosen so that the corresponding minimum eigenvalue in each sample covariance matrix has a sufficiently large conditional expectation. In Lemma 5.3, this just leads to a batch size of at least 2. The batch size m is constant, and the constant C’ increases with m.
**How was the learning rate set for the decaying version of LQT in the experiments? Did it decrease at a $O(1/t)$ rate or was it slower to also achieve a non-trivial long-term coverage convergence rate?**
The rate of decay for our experiments is $\Theta(t^{-0.6})$ following Angelopoulos et al. (2024). This is indeed to ensure non-trivial long-run coverage. The only exception is the synthetic experiment in Appendix B3 where we use the optimal $\Theta(1/t)$. | Summary: This paper is looking at the problem of online conformal prediction, focusing on deriving stronger (conditional) guarantees than long run marginal coverage. They propose to look at asymptotic absolute value coverage deviation from the nominal value at each time step, conditioned on all the past observations. They then design and algorithm based on quantile tracking through pinball loss minimization. Relying on standard proof techniques through the connection of pinball loss derivatives and coverage, they both and adversarial long run coverage validity and stronger conditional coverage guarantees in the stochastic setup.
I thank the authors for their response, and I agree with them. I strongly encourage the authors to incorporate the comments on improving the presentation. In particular, it would be good for the list of contributions to be stated in the introduction, with pointers to the associated parts of the paper.
The paper [1] that the authors mentioned is indeed the one I referred to.
I keep my score and vote for acceptance.
Claims And Evidence: All the claims are backed by meaningful theorems and experiments.
Methods And Evaluation Criteria: The evaluations make sense to me.
Theoretical Claims: The theoretical claims and proofs sound reasonable to me.
Experimental Designs Or Analyses: I have not checked all the details of the experiments, but from a high level perspective, everything makes sense.
Supplementary Material: I have looked at some important parts of the proofs and they all make sense.
Relation To Broader Scientific Literature: Online conformal prediction is a very important and active area of research, as there is often a practical need to do uncertainty quantification in an online fashion. Even though conformal prediction can be extended fairly easily to the adversarial setup, theoretical guarantees in the form of long-run marginal coverage are very weak and could be somewhat misleading. This work makes progress toward offering stronger guarantees, which could be of interest in real-world applications, providing more robust and interpretable uncertainty quantification.
Essential References Not Discussed: A very similar notion of conditional coverage has been previously discussed by [1], in a stochastic iid batch setting, named mean squared conditional error.
Other Strengths And Weaknesses: The paper needs improvement in the presentation, particularly when presenting the algorithm. The algorithm suddenly appears without motivation or hints toward the algorithmic principles behind it. As a result, it is not obvious what the novel tweak to the algorithm is beyond just running online gradient descent on the pinball loss, which is the core algorithm in the literature.
Other Comments Or Suggestions: In the current version, the organization of the assumptions needed for the theoretical argument is hard to navigate. It would be good to state the assumptions once and assign numbers (or letters) to them (Assumption 1, Assumption 2, ...), and then reference them in the theorem statements. This would improve the clarity of the theorems. It would also be good to discuss whether such assumptions are conventional and/or how restrictive they are. (I appreciate the three examples given, but it would also be good to point out examples that might be important in practice where your assumptions do not hold.)
Questions For Authors: What is the list of contributions this paper is claiming? Are there any algorithmic contributions, or is it mainly analyzing online gradient descent on the pinball loss under different assumptions and different notions of coverage?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **A very similar notion of conditional coverage has been previously discussed by [1], in a stochastic iid batch setting, named mean squared conditional error**
We thank the reviewer for pointing us to this reference. We would like to verify that it refers to [1], and we would be happy to incorporate it into our discussion of conditional coverage in the i.i.d. offline setting.
[1] Kiyani, S., Pappas, G. and Hassani, H. Conformal prediction with learned features. arXiv:2404.17487 [cs.LG]. 2024.
**The paper needs improvement in the presentation...beyond just running online gradient descent on pinball loss, which is the core algorithm in the literature.**
Our main contribution is a new algorithm that both satisfies adversarial guarantees (long-run coverage) and does the “right thing” (achieves time-conditional coverage) if the data is not adversarial. Most online conformal work focuses solely on adversarial guarantees, but we provide a novel approach that can adapt to both settings.
In terms of the algorithm itself, the main novelty is that we run online gradient descent on the quantile loss with more complex models for the conformal predictor. Past work (e.g. ACI) only considers scalar models, while our algorithm applies to arbitrary parametric models. For our stochastic guarantees, we simply require that the conformal predictor $q_t(\theta)$ is linear in $\theta$. We will preface the algorithm with more motivation and discuss the novelties in the final version.
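A sketch of this generalization, assuming a linear-in-$\theta$ predictor $q_t(\theta) = \theta^\top \phi_t$ (our illustration, not the authors' implementation; names and defaults are hypothetical, and the scalar ACI-style update is recovered with $\phi_t = [1]$):

```python
import numpy as np

def ogd_linear_quantile(scores, feats, alpha=0.1, eta0=0.5, c=0.6):
    # Online (sub)gradient descent on the pinball loss with a conformal
    # predictor q_t(theta) = theta . phi_t that is linear in theta.
    theta = np.zeros(feats.shape[1])
    qs = []
    for t, (s, phi) in enumerate(zip(scores, feats), start=1):
        q = theta @ phi
        qs.append(q)
        miss = float(s > q)  # 1 on miscoverage
        theta += eta0 * t ** (-c) * (miss - alpha) * phi
    return np.array(qs)
```

With a constant feature the update moves a single scalar threshold, as in ACI; richer feature maps $\phi_t$ (e.g. lagged scores) are what enable the conditional guarantees discussed here.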
**In the current version, the organization of the assumptions...but your assumptions don't hold then.)**
We will make the assumptions for each guarantee clearer, and expand on how conventional and restrictive they are in the final version. At a high level, our assumptions fall into two categories: model assumptions and distributional assumptions. Our model assumptions determine the structure of the conformal predictor; most of our analysis requires that the conformal predictor $q_t(\theta)$ is linear in its parameter $\theta \in \mathbb{R}^d$ (though this can in fact be dropped for the adversarial guarantee in Theorem 4.1). These are less restrictive than conventional model assumptions that only allow for scalar models (as discussed in our response to the previous point). Our distributional assumptions hold for many data-generating processes. For example, in Theorem 5.4 we only require that the conditional quantiles are a linear function of the past, which is true for AR(p) processes and i.i.d. data. This is less restrictive than (Angelopoulos et al., 2024), which only considers the i.i.d. setting.
Examples 3.1, 3.2, and 3.3 all satisfy the linear model assumption, and each has a description of when it is well-specified (for the guarantees of Section 5 to apply). Our experiments focus on these three examples, where we find that 3.2 and 3.3 perform the best. Algorithm 1 does allow for models that do not satisfy our assumptions (e.g. nonlinear), but these do not enjoy the same stochastic theoretical guarantees, which are a crucial part of our contribution. We believe that extending our guarantees to more complex model classes that could provide further practical benefits is a promising area of future research, and we will make this point clearer in the final version.
**What are the list of contributions this paper is claiming? is there also any algorithmic contributions? or it is mainly analyzing the online gradient descent on pinball loss under different assumptions and different notions of coverage?**
Concretely, our contributions are:
- A new algorithm for online conformal prediction that allows for conformal predictors beyond scalars as in ACI.
- Theoretical guarantees showing that our algorithm always achieves marginal coverage, and, under appropriate stochastic assumptions:
  - conditional coverage when well-specified,
  - relaxed conditional coverage when mis-specified.
- Experiments spanning 17 datasets and a pre-registration to compare our method with existing alternatives and provide evidence for the empirical benefits of our algorithm.
As previously discussed, the algorithmic contribution is to extend the model for the conformal predictor beyond a single scalar parameter (as in ACI) to parametric functions, and specifically for our theoretical results: linear functions of feature maps. This new algorithm can adapt to stochastic data, where it enjoys strong convergence guarantees both when well-specified and mis-specified. While our algorithm can replicate SQT as a special case, it encompasses a larger family of procedures and is thus not equivalent to exploring the behavior of existing algorithms under different assumptions. Moreover, our notions of coverage are not just different, but more general and stronger: we satisfy the standard marginal coverage property in the adversarial setting, but also a notion of conditional coverage when the data is stochastic. This has not been explored in past work beyond the i.i.d. setting. | null | null | null | null | null | null |
A Hitchhiker's Guide to Scaling Law Estimation | Accept (poster) | Summary: This work investigates the challenge of fitting scaling laws in the language model domain, in particular:
- How accurate we should expect scaling laws to be? (The authors show ~ 4% Absolute Relative Error (ARE) at best).
- How does the shape of scaling laws vary with architecture, in order for the community to efficiently use scaling laws to propose architecture modifications? (The authors show "similar enough to be useful and transferable").
- Should we only use final checkpoints? (The authors show we should use checkpoints from ~> 30% of training data, and that only using final checkpoints can be detrimental).
- How large should models be? (The authors show a nuanced trade-off between variance of larger models and proximity of larger models to extrapolated models of interest).
The authors do the above through a meta-analysis of existing open pretrained Language Models (LMs).
### Update after rebuttal
I increased my score to 4: accept. The critical issues I had with the paper (the numerical values of the coefficients) were addressed, and turned out to be only a presentation issue. I see no issues in accepting this work, as it is correct to the best of my understanding, and can provide a valuable resource for the community.
Claims And Evidence: ### Claim 1: Scaling laws predict at best 4% ARE
#### Supported (previously partially supported)
Edit: authors clarified difference between residuals and ARE.
Section 4 of the current paper provides a good discussion of the source for this value. This is shown in Figure 4, where the typical residual has an absolute value of 0.02. With Chinchilla losses of ~2, this corresponds to an ARE of ~1%. This analysis is open source and reproducible. The authors should comment on the discrepancy between their result and the result in [1].
### Claim 2: Scaling laws can be strategically reused across model families
#### Supported (was previously supported)
Edit: authors clarified misunderstandings here, including reporting log values.
In Section 5, the authors note that $E, A, B, \alpha, \beta$ vary significantly over model families (Figure 3). I find the numerical values in Figure 3 concerning:
1. $A$ and $B$ are more than an order of magnitude different to typically reported results (see e.g. [1]) for very similar setups.
2. Negative values of $E$ do not correspond to a well-behaved asymptotic limit, as cross-entropy loss is lower bounded from zero.
This hints at an issue related to the optimization process for the parameters. I note that the scaling law estimation is done using `curve_fit` in `scipy` (section 3.2), and the authors comment they could not get the L-BFGS-based solver to be stable. Additionally, the authors optimize square loss instead of Huber loss. They state that Huber loss produces similar trends as square loss (Appendix E), yet they also see throughout their work the effect of single poor runs dominating scaling law fitting (Section 7). I suggest the authors investigate the code provided in [1] as a way of robustly identifying scaling law coefficients, which can be sensitive to outliers and initial conditions, and may prevent robust identification. Because of potential numerical challenges, this claim, and the remaining claims depending on derived scaling laws, are only partially supported.
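For reference, a hedged sketch of the kind of robust fit being suggested (our illustration, not the paper's code): the Chinchilla form $L(N,D) = E + A/N^\alpha + B/D^\beta$ fit with `scipy.optimize.least_squares`, parameterizing $E, A, B$ in log space to enforce positivity and using a Huber loss to down-weight outlier runs; the initialization values are arbitrary placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

def chinchilla_loss(params, N, D):
    # L(N, D) = E + A / N^alpha + B / D^beta, with E, A, B kept positive
    # by optimizing their logs.
    logE, logA, logB, alpha, beta = params
    return np.exp(logE) + np.exp(logA) / N**alpha + np.exp(logB) / D**beta

def fit_scaling_law(N, D, L, loss="huber"):
    # Residuals in log-loss space; loss="huber" down-weights outlier runs,
    # loss="linear" recovers a plain least-squares fit.
    resid = lambda p: np.log(chinchilla_loss(p, N, D)) - np.log(L)
    x0 = np.array([np.log(1.5), np.log(400.0), np.log(400.0), 0.3, 0.3])
    return least_squares(resid, x0, loss=loss, f_scale=1e-3).x
```

Since the objective is non-convex and sensitive to initialization, multi-starting `x0` over a grid (as in the Chinchilla replication) helps avoid the poor local optima discussed above.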
Beyond the above, the authors take pre-existing $E, A, \alpha$ values, and fit only the data term $B, \beta$ values. In Figure 6 the corresponding error is shown.
It is unclear what fitting process is being performed here to produce the shown errors. Which cell(s) correspond to the fitting of a single model? Does it matter which single model? It would be useful for the authors to explicitly describe the fitting process for this Figure in the accompanying Appendix.
What are the resulting $B,\beta$ values? Can the authors also provide the Absolute Relative Error (ARE) for the model at the end of training, rather than an average over the $\geq 30\%$ checkpoints, as the latter value is the primary one of interest for drawing conclusions about.
### Claim 3: Use $\geq 30\%$ checkpoints, not only final checkpoints
#### Supported (was partially supported)
Edit: authors clarified the number-of-runs versus number-of-checkpoints misunderstanding. Some comments about Chinchilla are still outstanding; however, the required data for that analysis is not available, and this is not a blocker for the main conclusions/contributions of the paper.
In Section 6.1 and Figure 4, the authors show how varying the checkpoints used for the fit changes the ARE. The behavior shown in Figure 4 does support the authors claims.
The baseline number of models being used for the fit in each plot in Figure 4 is low. Can the authors explain how it is possible to achieve the fitting of an equation with five free parameters $E, A, B, \alpha, \beta$ to ~3 models (Figure 4b, top left cell) such that the resulting scaling law reliably generalizes? Is this true for all choices of 3 models? What is the variance of the scaling result over resampling of the chosen 3 models? It would be useful for the authors to perform the relevant bootstrap analysis here.
Does the claim remain true when a significant number of models are included in the fit? E.g., Chinchilla uses >200 models in the fit. Is it still beneficial to include intermediate checkpoints in this scenario, or do the checkpoints now result in a biased estimate?
In addition to the above question regarding robustness of generalization, partially supported because of numerical issues discussed in Claim 2.
### Claim 4: There is a trade-off between using fewer larger models or more smaller models
#### Supported (was partially supported)
Edit: issues above now resolved.
Section 7 and Figure 2 support the authors claims. Partially supported because of numerical issues discussed in Claim 2.
[1] Chinchilla Scaling: A replication attempt, https://arxiv.org/abs/2404.10102
Methods And Evaluation Criteria: The approach of a meta-analysis of many scaling studies is a sensible one. The evaluation criteria are reasonable and correspond to questions of interest in the scaling laws community. The baseline scaling laws chosen in Section 5.2 are reasonable.
Theoretical Claims: The paper contains no theoretical claims.
Experimental Designs Or Analyses: I checked the experimental design in the paper. The meta-analysis experimental design is sensible.
There are a number of ways the experimental design could have been improved:
- It would be beneficial if the authors more fully detailed their precise optimization scenario (learning rates used)
- The data from [1] should have also been included, as it provides a large number of final checkpoints within a single family, an important use case not shared with other data sources used by the investigation.
- The numerical methods chosen (see discussion in Claim 2 above) are potentially problematic, and yield coefficients that neither correspond to well-behaved scaling laws (Fig 3c) nor are consistent with prior approaches. The issues faced by the L-BFGS solver should be understood. (For example, did the authors provide the Jacobian to the solver?)
- There is no discussion of which combinations of model size $N$ and data $D$ to pick to fit a scaling law. Some choices of $N$ and $D$ yield pathological solutions and do not allow identifiability. One common problem is the temptation to use only Chinchilla-optimal models for fitting the scaling laws. In that case, $N = k D$ and the scaling law can be written as a one-dimensional scaling law in either $N$ or $D$, which produces redundancies around the solutions for $A,\alpha$ and $B,\beta$ as they cannot be disentangled. Doing a decomposition of the type described in Section 9 would reveal that 3 components in this case would explain most of the variance, as the authors find. IsoFLOP protocols, as in Chinchilla, are designed to mitigate this pathology/redundancy. Two questions are then:
- If we use IsoFLOP models to fit the scaling law, are the conclusions of Section 9 still true?
- Is the observation that using checkpoints beyond the final checkpoint is beneficial an artifact of the models being analyzed satisfying $N\propto D$, and the utility from early checkpoints arising due to requiring a disentangling between $A,\alpha$ and $B,\beta$, which might then disappear in the case where the models being analyzed are IsoFLOP?
- More generally, the paper and experimental design would benefit from a discussion regarding which $(N,D)$ combinations to choose subject to a compute budget, taking into account this identifiability issue.
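The redundancy described in the compute-optimal case can be made explicit; the following derivation (our illustration, using the notation above) substitutes the constraint into the scaling law:

```latex
% Substituting the compute-optimal constraint N = kD into the scaling law:
\[
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\;\;\Longrightarrow\;\;
L(D) = E + \frac{A}{k^{\alpha}}\, D^{-\alpha} + B\, D^{-\beta},
\]
% a one-dimensional law in D. When \alpha \approx \beta, the two power-law
% terms collapse into a single term (A k^{-\alpha} + B)\, D^{-\alpha}, so
% (A, \alpha) cannot be disentangled from (B, \beta): only E, the combined
% coefficient, and the shared exponent remain identifiable (three components).
```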
[1] Chinchilla Scaling: A replication attempt, https://arxiv.org/abs/2404.10102
Supplementary Material: I reviewed all of the supplementary material, including the anonymous linked website.
Relation To Broader Scientific Literature: The work contains many contributions that, assuming correct, are extremely valuable for the scaling community. In particular, Claim 2 "Scaling laws can be strategically reused across model families", would enable more rapid investigation of scalable methods. It would be great to see a deeper dive into this particular claim.
Claim 3 is consistent with findings in [2] as is discussed in the main text.
Claim 1 has potential conflicts with existing literature (see above). Claim 4 - that it may be more effective to train a number of small models than a single large model - is interesting, but to be made fully usable, the authors should provide some guidance/rules-of-thumb for the reader if possible.
[2] https://arxiv.org/abs/2406.19146v1 Resolving Discrepancies in Compute-Optimal Scaling of Language Models
Essential References Not Discussed: The re-usability of scaling laws across model families to iterate on architecture choices is well-captured by [3], which builds the scaling law into a hyperparameter search procedure, and is relevant for the ideas relating to Claim 2.
[3] https://arxiv.org/abs/2302.00441 Scaling Laws for Hyperparameter Optimization
Other Strengths And Weaknesses: The paper is a joy to read. It is clearly written, and provides a clean set of definitions and notations that, on their own will aid in community discussion of scaling laws.
The open nature of the project further increases its utility, both in the provided code, and the assembled data from many sources.
Other Comments Or Suggestions: Figure 6: explain what "Percentage" means in the caption, to help the reader (avoiding the need to re-reference the main text).
Questions For Authors: Edit: Majority of questions responded to. Some outstanding questions corresponding to "model training strategies" for compute-optimal identification of scaling laws, but not a blocker for acceptance.
1. Can you explain the discrepancy between the 4% ARE discussed in Section 4, and the 1-2% ARE observed in Chinchilla [1]? [this discrepancy should be understood as it conflicts with one of the primary contributions in the scaling law field]
2. Optimization of scaling law related [the potential optimization difficulties that may be present in the paper raise questions about the primary claims in the paper, as those claims follow from the optimization, i.e. scaling law fitting]:
1. Can you explain the negative values for $E$ in Figure 3?
2. Can you explain the discrepancy of > an order of magnitude between your values for $A$ and $B$ and those of prior work, e.g. [1]?
3. Do you still see the effect of poor runs affecting scaling law fits (Section 7) in your Huber Loss analysis?
4. Why didn't you use Huber loss throughout (which would be standard scaling law practice) instead of square loss? Do you have a quantification of the difference?
5. How stable is your scaling law fitting procedure under i) resampling experimental data for fitting, and ii) change of initial conditions for the fit? (i.e. what are your confidence intervals under bootstrap?)
6. In your failed L-BFGS solver, did you provide the Jacobian? If not, this can be done using autograd, (e.g. JAX), and can greatly help the solver.
3. What happens to Claim 3 if you increase the number of models available (e.g. in [1], >200 models were used)? Is the conclusion/guidance the same, or now in favor of using only final checkpoints? If the latter, at what point can we discard earlier checkpoints?
4. Do the conclusions of Section 9 still hold if the scaling laws are fit to an IsoFLOP experimental protocol? (this can be tested synthetically)
5. More broadly, the paper mostly works with models of different sizes that happen to be trained on certain amounts of data. Can the authors comment on the importance of the interaction between $N$ and $D$, how certain combinations of $N$ and $D$ for a model may be optimal given a compute budget, and whether any of their conclusions would change if a practitioner trains a strategic choice of models with particular combinations of $N$ and $D$?
[1] Chinchilla Scaling: A replication attempt, https://arxiv.org/abs/2404.10102
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their interest and for deeming our work “extremely valuable” as well as for the sincere effort in ensuring all the details are in place. We believe the two most pressing issues found are, in fact, simpler to explain than one might expect and hope the reviewer will agree with us on this point.
Regarding the ARE vs. residuals: the main difference is that residuals measure the ability to fit the training data (goodness of fit), whereas the ARE measures the prediction error on a held-out test model. We will clarify this in the paper to avoid confusion with related works (such as the one mentioned).
Regarding estimated parameter values: as noted in our response to Hw8m, this was a typo: we estimate (and plot) the log of these scaling parameters (as in previous work) to improve stability and enforce positivity. Thus, this is a minor error in presentation, not an issue with the experimentation. We will fix it in the final version of the paper. We will also explain more of the fitting details as requested. Regarding your interest in fitting pre-existing parameters: we do fit the token parameters, and when comparing only to the last model, results are very similar, just slightly noisier when compared per cell in the tables.
Regarding the number of checkpoints: note that the number of checkpoints used is much larger than 3; it is the number of unique pretraining *runs* that is relatively small. We discuss the contribution of single seeds and the effect of the number of model sizes used in Sections 7 and 8.
We agree that the question about chinchilla is interesting, but as they did not release the necessary checkpoints to answer it, we can only assume based on the rest of the experiments that it would not change much (as they have a lot of data already). We did, however, run experiments with the data they did offer (or that we can extract as done in the mentioned paper). The results match the rest of our findings in the paper (when relevant).
Thanks for the reference to the hyperparameter search, we missed it and will discuss it in the final version.
Regarding the questions about fitting with L-BFGS: it is not that one cannot use L-BFGS; with appropriate hyperparameter choices, it does indeed converge. However, the optimization scheme used in the paper was equally effective and required less tuning. We did have various initial experiments with L-BFGS, and they ended up with similar results. We will clarify this in the paper.
Regarding the question about the interplay between choices of data, model size, and the scaling laws: we believe the impact here is not great, because we also fit scaling laws on partial training runs. This means that models were often far from optimality. The only attribute they still share is that the scheduler might converge at the optimal point. Note that not all papers we used even rely on compute-optimal thresholds (for example, Pythia is under-trained, the overtrain paper trains models on varying amounts of data, and OLMo models are trained on the same number of tokens regardless of model size).
---
Rebuttal Comment 1.1:
Comment: Thank you for responding to myself and the other reviewers. My review and score have been updated correspondingly. | Summary: The authors address challenges in scaling law estimation by compiling a dataset of training losses and evaluations from 485 pretrained language models. Through extensive empirical analyses, they establish concrete best-practice guidelines for efficiently predicting the performance of new, larger models without fully training them. Key recommendations include leveraging intermediate training checkpoints, excluding unstable early-stage training data, using multiple smaller models to reduce estimation variability, and generalizing scaling parameters across related model families. Overall, the paper serves as a practical guide for efficiently and reliably estimating scaling laws in large language model training.
Claims And Evidence: The empirical evaluation supporting the paper’s main claims is solid, demonstrating that their practical guidelines—such as using intermediate checkpoints, excluding early-stage noisy data, and training multiple smaller models—improve prediction accuracy within the studied scenarios.
However, the paper's findings depend heavily on implicit assumptions about training dynamics, particularly that hyperparameter choices (e.g., learning rate schedules) do not significantly disrupt scaling behavior. Since the authors do not systematically examine how these hyperparameters affect their conclusions, the generality and practicality of these guidelines beyond the presented setup remain uncertain. Additionally, without deeper theoretical insights or a clear understanding of what fundamentally constitutes robust scaling behavior, the long-term applicability of these empirical guidelines may be limited.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate and well-chosen for estimating scaling laws. The authors cover nearly all major publicly available LLMs. Their evaluation criterion is sensible too.
Theoretical Claims: The contributions are empirical.
Experimental Designs Or Analyses: The experimental designs and analyses are generally sound, systematically varying key factors such as model size, dataset coverage, and checkpoints. However, the authors do not fully explore sensitivity to variations such as random seeds or specific hyperparameter choices, which could significantly affect results. While their recommendation to use multiple smaller models appears beneficial in their tested scenarios, further validation of robustness to these factors would strengthen confidence in the general applicability of their conclusions.
Supplementary Material: Yes, A-E.
Relation To Broader Scientific Literature: The paper extends prior scaling law research by focusing on practical estimation strategies rather than deriving new theoretical formulations. It consolidates insights across a broad set of models, validating empirical heuristics like using intermediate checkpoints and model family generalization.
Essential References Not Discussed: The related work is well-grounded and sufficiently supports the paper’s contributions.
Other Strengths And Weaknesses: Strengths:
1. Conducts comprehensive empirical analysis addressing practical questions in scaling law estimation.
2. Offers clear, actionable guidelines (e.g., using intermediate checkpoints, excluding noisy early training).
3. Clearly structured and accessible, effectively using visualizations and succinct takeaways.
Weaknesses:
1. Contributions are primarily empirical with limited theoretical novelty.
2. Focuses exclusively on current families of language models, leaving general applicability uncertain.
3. Methodological assumption that a single scaling law fits entire training trajectories overlooks hyperparameter influence.
Other Comments Or Suggestions: -
Questions For Authors: 1. Have you investigated how different learning rate schedules affect scaling-law estimation accuracy, particularly when using intermediate checkpoints?
2. How are the hyperparameters (like the learning rate) selected?
3. Can you provide additional evidence or analyses on the sensitivity of your guidelines to hyperparameter variations and random seeds?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and feedback and for proposing that the paper should be accepted. In essence, we also agree that this paper is a meta-analysis trying to figure out what current trends tell us about the world rather than prove that this has to be the case forever. We do believe, however, that with the vast number of cases presented, most of the results will hold until a significant paradigm shift enters the way we pretrain models, making the knowledge shared here valuable. We believe that the answer to question 2 might shed a lot of light on that issue as well.
We address the weaknesses mentioned:
1. “Contributions are primarily empirical with limited theoretical novelty.”
The contributions are indeed empirical rather than a mathematical theory; while guarantees would be nice, we also see scientific value in empirical knowledge.
2. “Focuses exclusively on current families of language models, leaving general applicability uncertain.”
We see the replication across settings as a strength of this paper rather than a weakness. While most scaling law papers deal with a single family of models and describe a “law of nature” observed in it, we describe what is consistent across settings and is hence expected to generalize.
3. “Methodological assumption that a single scaling law fits entire training trajectories overlooks hyperparameter influence.”
We kindly disagree that this is an assumption we make. Instead, we hypothesize that this is the case and provide supporting evidence that this is a useful simplification of the setting, and that the benefits from the additional data outweigh the downsides, across many models trained in very different hyperparameter settings.
Responses to questions asked:
1. “Have you investigated how different learning rate schedules affect scaling-law estimation accuracy, particularly when using intermediate checkpoints?”
To some extent. Our results aggregate across multiple models trained with different LR schedules; characterizing their effects in a fine-grained way would be an interesting question for follow-up work (and one only possible to answer using the dataset released with this paper).
2. “How are the hyperparameters (like the learning rate) selected?”
We do not select any hyperparameters throughout the paper. All the models we report use the hyperparameter choices of pretrained models that publicly shared their pretraining loss, such as GPT-3 (whose curves we extracted) or Pythia. This is also the reason why we expect everything to generalize: in essence, it already generalizes across papers and models that each made their own set of choices.
3. Can you provide additional evidence or analyses on the sensitivity of your guidelines to hyperparameter variations and random seeds?
Consider that every model size is initialized differently and has its data shuffled separately, and that we include models from different papers: seeds vary a great deal here, and we account for them. See the paragraph from line 347 on for the main place this appears in the context of the paper. | Summary: This paper provides a comprehensive analysis of scaling laws in large language model (LLM) training, focusing on how to estimate and interpret scaling laws effectively. The authors construct and release a large-scale dataset containing training losses and evaluations from several pre-trained models, enabling them to derive over 1000 scaling laws. The study presents best practices for cost-efficient scaling law estimation and discusses trade-offs between the number of preliminary models, their sizes, and the dataset used.
Claims And Evidence: * The paper makes strong, well-supported claims backed by demonstrating scaling behavior across multiple model families.
* The claim that intermediate checkpoints improve scaling law estimation is well-supported with empirical evidence.
* The assertion that smaller models can sometimes provide better estimates than a single large model is backed by statistical analysis.
Methods And Evaluation Criteria: The experimental setup is well-designed with varying model sizes, training checkpoints, and dataset choices.
Theoretical Claims: The functional form of scaling laws follows prior work (Hoffmann et al., 2022). The paper provides empirical validation for using intermediate checkpoints but lacks a formal theoretical justification. The trade-offs between training fewer large models vs. many small ones are explained well empirically, but a formal learning-theoretic analysis would strengthen the argument.
Experimental Designs Or Analyses: The statistical robustness of their findings is high, as it aggregates data across 1000+ scaling laws. Extrapolation to unseen models is tested.
Inference efficiency comparisons are not provided.
Supplementary Material: No supplementary material is provided.
Relation To Broader Scientific Literature: This paper builds on scaling law research from Kaplan et al. (2020) and Hoffmann et al. (2022), but extends their findings to diverse model families. Unlike prior studies that focused on fixed model families, this work demonstrates that cross-family extrapolation is feasible, opening new possibilities for scaling predictions across architectures.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: See above comments.
Other Comments Or Suggestions: See below questions.
Questions For Authors: 1. How well do these findings generalize to architectures beyond transformers?
2. What are the computational savings when using this method vs. naive scaling?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback; the accurate points made are encouraging evidence that some of the deeper points about scaling laws indeed reach readers.
Regarding the two questions:
1. How well do these findings generalize to architectures beyond transformers?
Our experiments focused on two classes of Transformer models (encoder–decoder and decoder-only). Scaling laws for mixtures of experts seem to work relatively well (while not reported in the paper, we do have MoEs included in the scaling law dataset that we will release). We also tested some small Mamba models; they were noisy as they didn’t have as many checkpoints and sizes, but they generally seemed to behave in the same way (in the sense that fitting laws behaved similarly). However, those are only anecdotes and not a robust study.
2. What are the computational savings when using this method vs. naive scaling?
As we provide a guide, it depends on individual time, cost, and hardware constraints on training. For example, if you train only on the beginning of your largest model's run, a 3x speedup is achievable, but it requires the same hardware as the large model (see e.g. Fig. 6). If you train many small models, it depends on how accurate you need the estimate to be: look, for example, at the orange lines in the main figure (Fig. 2), which highlight places with approximately the same amount of compute. You can see that you often save more than 10x compared to any amount of training on the large model, and usually much more. Note that an additional benefit of running smaller models is the hardware: one can use smaller GPUs, less memory, and other hardware that is easier to come by, in addition to the FLOPs saved.
We thank the reviewer again for the appreciative review and thoughtful comments. It seems there are no major issues preventing the paper from being accepted. We humbly ask if the score could be raised to signify this to the AC.
## Update after rebuttal
The authors have addressed my primary concern, and I have raised my score accordingly.
Claims And Evidence: The paper adequately supports its claim with evidence. ~However, I am concerned that a flaw in the analysis (described below under “Experimental Designs Or Analyses”) might invalidate the evidence.~
Methods And Evaluation Criteria: The parametric form (1) at the center of the paper has limitations. In addition to potentially being incorrectly or over-parameterized as discussed in Sec. 9 of the paper, eq. (1) is also difficult to fit: as discussed in Besiroglu et al. (2024), Hoffmann et al.’s fit for (1) had multiple issues, leading to an incorrect prediction of compute-optimal scaling. The two other approaches studied in Hoffmann et al. sidestepped the need to assume (1) holds and instead directly predicted the compute-optimal scaling, leading to consistent results validated by subsequent work.
More broadly, pretraining loss is arguably not the most important thing to predict using scaling laws: predicting optimal design choices such as model size, number of experts, and dataset composition as a function of compute budget is operationally much more meaningful than predicting the resulting loss of the model. Consequently, it would have been more interesting to study how the accuracy of those predictions varies with the specifics of the training procedure.
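To make the distinction concrete, a compute-optimal allocation prediction follows directly from a fitted form (1). A minimal sketch (not from the paper: it plugs in the coefficients Hoffmann et al. (2022) report, uses the common C ≈ 6ND FLOPs approximation, and recovers numerically the analytic exponent β/(α+β) for how the optimal model size scales with compute):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Coefficients as reported by Hoffmann et al. (2022) for form (1):
# L(N, D) = E + A * N**-alpha + B * D**-beta
A, B, E = 406.4, 410.7, 1.69
alpha, beta = 0.34, 0.28

def loss_at_budget(log_N, C):
    # Spend the whole budget C on a model of size N: D = C / (6N).
    N = np.exp(log_N)
    D = C / (6 * N)
    return E + A * N**-alpha + B * D**-beta

def optimal_N(C):
    # Minimize loss over model size at a fixed compute budget.
    res = minimize_scalar(loss_at_budget, args=(C,),
                          bounds=(np.log(1e6), np.log(1e13)),
                          method="bounded")
    return np.exp(res.x)

# Compute-optimal model size scales as N* ∝ C^a with a = beta/(alpha+beta);
# estimate the exponent from two budgets.
C1, C2 = 1e21, 1e23
a_hat = np.log(optimal_N(C2) / optimal_N(C1)) / np.log(C2 / C1)
```

The same numeric machinery applies to any fitted (A, B, E, α, β), which is why predicting allocation rather than loss is operationally attractive.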
However, I acknowledge that the parameter form (1) is popular in the literature, and therefore I do not consider focusing on it a critical flaw in the paper.
Theoretical Claims: N/A.
Experimental Designs Or Analyses: ~I have concerns about the method used to fit the central power law in this paper. In particular, it appears to miss two crucial components used in most prior work:~
1. ~A positivity constraint on $A,B$ and $E$, implemented in prior work by fitting the log of the scaling exponents and coefficients. Alarmingly, Figure 3 shows $E$ fitted to negative values for some model families!~
2. ~A grid search for the initialization to L-BFGS is important for preventing convergence to suboptimal local minima. The authors make no indication of using this technique in their scaling law fit.~
(Addressed by author rebuttal)
Supplementary Material: I perused the additional figures in the supplementary material.
Relation To Broader Scientific Literature: The paper revisits the fitting of the power law (1), but unlike prior work it uses a large and diverse body of scaling experiment results, and experiments with degrees of freedom like using intermediate checkpoints from training, and transferring scaling laws between model families, which to my knowledge were not explicitly explored in prior work.
Essential References Not Discussed: I am missing a more detailed comparison between the dataset of experiments results compiled in this paper and the ones collected for recent meta-analyses such as Ruan et al. (2024) and Maia Polo et al. (2024). Such comparison would be valuable for future practitioners trying to decide which compilation to use in their studies.
Other Strengths And Weaknesses: Strength: The publicly released compilation of scaling-experiment results collected as part of this paper could be useful for subsequent work. However, most of the files in the anonymous repository linked to the paper were not available, which prevented me from gauging the usability of the dataset.
Other Comments Or Suggestions: The definition of model family in line 74 (right) is not clear. What does it mean for models to “differ only in size?” Even for standard Transformer models, models of different sizes necessarily differ by other hyperparameters such as depth, d_model, attention heads, etc. I think what you meant is that a family is a mapping from (model size, num tokens) to model checkpoints.
Also, I really liked the title of the paper.
Questions For Authors: ~My main question is whether the fitting of the scaling law is valid, given the concerns I raised under “Experimental Designs Or Analyses” above.~
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the deep read, care for details and supportive stance.
The main concern raised by the review had to do with the estimation of scaling law parameters (and why some were reported as negative). As discussed below, this was (fortunately) a mistake in our presentation rather than the estimation procedure itself.
**Estimation of scaling parameters:**
Great catch! Indeed, revisiting the graph, the caption does not mention that we have the positivity constraint. Following past work, we fit (and plot) log E, not E, to ensure positivity. We will clarify this in our revised version of the paper.
Similarly, we will clarify that when we fit with L-BFGS (which we found to be less stable, as stated in the “Fit” section), we search the parameters using the code from Muennighoff (2023) (which improves over Hoffmann’s).
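For concreteness, here is a minimal sketch of such a constrained fit on synthetic data (a simplification, not our exact code: it fits log A, log B, log E to enforce positivity, uses squared error on log-losses instead of the Huber objective, and an illustrative initialization grid):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic (N, D, loss) observations from a known instance of form (1):
# L(N, D) = E + A * N**-alpha + B * D**-beta
rng = np.random.default_rng(0)
N = rng.uniform(1e7, 1e9, 64)
D = rng.uniform(1e9, 1e11, 64)

def predict(p, N, D):
    log_A, log_B, log_E, alpha, beta = p
    # Fitting log A, log B, log E (not A, B, E) enforces positivity.
    return np.exp(log_E) + np.exp(log_A) * N**-alpha + np.exp(log_B) * D**-beta

true_params = [np.log(400.0), np.log(2000.0), np.log(1.7), 0.34, 0.28]
L = predict(true_params, N, D)

def objective(p):
    # Squared error on log-losses (a simplification of the Huber
    # objective used in prior fits).
    return np.mean((np.log(predict(p, N, D)) - np.log(L)) ** 2)

# A small grid of initializations guards against L-BFGS converging
# to suboptimal local minima.
best = None
for c0 in (3.0, 6.0, 9.0):
    for e0 in (-1.0, 0.0, 1.0):
        res = minimize(objective, x0=[c0, c0, e0, 0.5, 0.5],
                       method="L-BFGS-B",
                       bounds=[(-5, 15)] * 3 + [(0.0, 2.0)] * 2)
        if best is None or res.fun < best.fun:
            best = res

E_hat = np.exp(best.x[2])  # positive by construction
```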
We also discuss other mentioned comments.
**Parametric form has limitations:**
As the reviewer states, “(1) is popular in the literature”; and as we state in the paper, this is our reason to build upon it for this meta-study. Based on preliminary experiments, we expect that many of our findings would be similar under other functional forms.
Note that, while previous work focuses on identifying new functional forms for scaling laws, we here analyze how to fit these laws by evaluating generalization on held-out models. Consequently, our results report the extent to which the scaling laws generalize under different settings and allow others to know what to expect.
Besiroglu (2024) notes issues with the fitting reported in the original paper, but those are related to rounding and other issues that are not relevant to our case, especially given our repetitions across settings, which do not share the same issues.
**Pretraining loss vs. downstream task:**
You are correct that various other kinds of quantities can be predicted with scaling laws. We believe this would be an excellent topic for a follow-up paper!
**Comparisons to observational scaling law papers:**
We are happy to add more discussion of the relationship of this work to those! We value their contribution to the field (as can be seen by our citing them in various contexts). We wish to emphasize that the papers have different goals: Ruan focuses on predicting scaling across datasets, and Polo proposes a new kind of scaling law fit across multiple models together; both assume fixed data for estimating a scaling law. Here we focus specifically on identifying *what data to collect*---a question that could also be asked about observational laws in future work. Our work does not propose a new scaling law; rather, it discusses how to fit scaling laws when creating new models, how to use the available information efficiently, etc. Possibly, information from previous models could help there as well (though it is unclear, as the distinctions in A/B tests might be smaller than those between the full models they focus on).
Model size:
Thank you, we will clarify based on your suggestions.
Title:
Thanks, that’s reassuring!
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my concern regarding parameter fitting - if the issue of negative E was a missing log factor, then I think the paper passes the bar for publication, and I will update my evaluation accordingly.
Regarding the comparison to Ruan and Polo - I was asking for a _dataset_ comparison: how does your compilation of model family evaluations differ from theirs?
---
Reply to Comment 1.1.1:
Comment: Regarding data, the main difference is that their data uses benchmark scores (i.e., not the kind of data scaling laws usually rely on, but the kind evaluation uses). This has many benefits (e.g., high data availability for other models), but also disadvantages (e.g., scaling law and learning-trajectory research cannot use it). Our data includes losses, downstream scores, or both, and is collected along the training run rather than just at the end. Note that the end of training also has interesting but different characteristics, such as instruction tuning, data changes in the final next-token-prediction batches, and hyperparameter changes; these are behaviors we try to isolate and their data sees as a whole. Both are valuable, but for different scientific questions. When available, we also match these results with the actual released checkpoints to allow further research of this kind.
Self-Supervised Learning of Intertwined Content and Positional Features for Object Detection | Accept (poster) | Summary: Post-rebuttal
After reading the comments by reviewers v6Kv and mPP8, I agree that contrastive learning for dense prediction can offer advantages over autoencoder-based methods, such as more efficient training without heavy fine-tuning. I also agree that the proposed method introduces some technical novelty beyond DropPos. For these reasons, I have updated my rating to weak accept.
---
This paper enhances dense representation learning in contrastive frameworks by 1) encoding patch positions as relative locations within the entire image instead of renormalizing them within each cropped image, and 2) reconstructing positional information to preserve spatial structure. As a result, the proposed method outperforms or is on par with previous dense contrastive learning methods such as MuGS, FLSL, and LOCA.
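A minimal sketch of point 1), patch positions of a crop expressed in the full image's [0, 1] frame rather than renormalized per crop (the helper name and box convention here are illustrative, not the paper's actual implementation):

```python
import numpy as np

def crop_patch_positions(box, n_patches):
    """Patch-center coordinates of a crop, expressed in the FULL image's
    [0, 1] x [0, 1] frame (rather than renormalized to the crop itself)."""
    x0, y0, x1, y1 = box
    xs = x0 + (np.arange(n_patches) + 0.5) / n_patches * (x1 - x0)
    ys = y0 + (np.arange(n_patches) + 0.5) / n_patches * (y1 - y0)
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    return np.stack([gx, gy], axis=-1)  # shape (n_patches, n_patches, 2)

# A crop covering the lower-right quadrant keeps all coordinates in
# [0.5, 1], preserving its scale and location within the original image;
# per-crop renormalization would map them back to [0, 1] and lose this.
pos = crop_patch_positions((0.5, 0.5, 1.0, 1.0), n_patches=4)
```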
Claims And Evidence: I'll combine this section with "Methods and Evaluation Criteria."
Methods And Evaluation Criteria: I have several concerns regarding the motivation, method, and evaluation.
A. Why use contrastive learning for dense representation? Many works since MIM/MAE have shown that pixel-level objectives learn better dense representations than contrastive learning. For instance, Table 4 of the MAE (CVPR 2022) paper shows that ViT-B trained with MAE on IN-1k achieves 50.3 and 44.9 box and mask AP, respectively, outperforming this paper’s 49.2 and 43.8, as reported in Table 1. What advantage does this method have over MAE and its numerous follow-ups, including those combining MAE with contrastive learning?
B. How does this method perform on tasks beyond object detection and segmentation? It is well known that dense contrastive learning involves a trade-off between high-level tasks (e.g., classification) and low-level tasks (e.g., detection). I suspect this method may degrade classification performance. Could you report linear probing accuracy on IN-1k to verify this?
C. The technical novelty is limited. The idea of position-aware cropping has been explored since DenseCL, with various methods adopting their own variants, including LOCA, as mentioned in this paper. Using global positional embeddings may serve as an additional trick for ViT-based contrastive learning, potentially encoding the scale of cropped images. However, its precise effect is not well justified. For instance, it introduces a distribution shift in positional encodings between training and testing, which could harm image-level prediction, as full [0-1,0-1] encodings are always used at test time, potentially reducing scale invariance in classification.
D. Position reconstruction has already been widely explored. This includes methods such as DropPos, as mentioned in the paper. Additionally, MP3 [1] is an example of a missing related work.
E. Why not explore contrastive learning on multi-object images? IN-1k is mostly object-centric, but region-aware contrastive learning would be more impactful for complex, multi-object images. LAION would be a practical example, but early academic papers [2,3] have demonstrated this on smaller datasets like COCO. Notably, region-aware contrastive learning can also improve image classification by preventing representation collapse, benefiting not just detection and segmentation.
[1] Position Prediction as an Effective Pretraining Strategy. ICML 2022.\
[2] Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations, and Dataset Biases. NeurIPS 2020.\
[3] Object-aware Contrastive Learning for Debiased Scene Representation. NeurIPS 2021.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Major concerns were discussed above. Here are some minor comments:
- DropPos is from NeurIPS 2023, not 2024, meaning it was released more than a year ago.
- The DINOv2 results are misleading since the model was designed for larger-scale datasets. However, this paper reimplemented it on IN-1k, leading to significantly lower performance compared to other methods. It would be better to report both the original DINOv2 and the reimplemented version while clearly stating that the original DINOv2 performs better due to its larger training dataset.
Supplementary Material: All checked.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: There is a vast amount of work on (dense) contrastive learning, so I understand that the paper cannot cover all of them. However, it should at least cite [1], as it is highly relevant to the position prediction proposed in this paper.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: All my concerns have been discussed above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1. Why use contrastive learning for dense representation? ...**
The importance of this study can be explained from practical and exploratory perspectives.
Practical Perspective: Although the CVPR2022 MAE results demonstrate high object detection performance, they incur extremely high computational costs during both pre-training and fine-tuning (specifically, 1600 epochs for pre-training and 100 epochs for object detection fine-tuning). In contrast, in our experiments (and those of related studies), pre-training is conducted in 350-700 epochs and fine-tuning requires only 12 epochs; under these conditions, MAE yields inferior results. In other words, our approach achieves satisfactory performance while significantly reducing computational cost.
Exploratory Perspective: Our results offer new insights into the challenge of determining what information should be extracted from images for dense prediction tasks. They also show that even within the framework of contrastive learning, these tasks are learnable, and there may be further room for improvement. We expect that our work will serve as a stepping stone for future discoveries.
**Q2. How does this method perform on tasks beyond object detection and segmentation? ...**
As you correctly pointed out, our method, like existing approaches, also shows relatively low performance on classification tasks. The linear probing accuracy on ImageNet-1K is 76.8% (evaluated using ViT-B/16 with DINOv2’s linear evaluation protocol). However, we would like to emphasize that we do not consider this a disadvantage. Regardless of the relative "level" of the tasks (if anything, we consider object detection to be a higher-level task, as it involves both classification and geometric estimation), the two tasks are inherently different to some extent—for instance, in the way positional information is utilized. Therefore, we view this trade-off as a natural consequence.
**Q3. & Q4. The technical novelty is limited. ...; Position reconstruction has already been widely explored. ...**
Thank you for your comment. We respectfully contend that our proposed method is genuinely novel and should not be regarded as merely a variant of existing approaches. While it shares the common objective of embedding positional information into feature representations, its approach differs significantly from methods such as DropPos, MP3, and LOCA.
Moreover, because simple position prediction tasks tend to overfit, DropPos uses a high masking rate (94%) and MP3 removes all positional tokens. Our method avoids these measures, demonstrating both its effectiveness and novelty. Although there may be some adverse effects on image-level prediction (classification), this is not our primary target; as reported in the table below for few-shot classification following the settings in MSN[1], the performance drop in classification is smaller than that of other methods with similar objectives.
Finally, regarding the concern about the justification of our method’s effectiveness, our results on dense prediction tasks—supported by comprehensive evaluations (in Table 1), ablation studies (in Table 2), and attention analyses (Figure 6)—clearly show that our approach outperforms conventional methods. We are prepared to conduct additional experiments if necessary and appreciate the comment regarding MP3, which we plan to address in the revised manuscript.
| Method | 1 Img/class | 2 Img/class | 5 Img/class | 1% |
|---|--|---|----|------|
| MAE | 6.0±0.2 | 9.6±0.3 | 17.5±0.2 | 28.3 |
| LOCA | 31.9±0.2 | 40.3±0.4 | 49.7±0.1 | 56.9 |
| Ours | 39.2±0.3 | 48.3±0.4 | 56.9±0.1 | 63.0 |
Note: Unlike MSN, we report linear probing results for MAE without partial fine-tuning.
[1] Assran, Mahmoud, et al. "Masked siamese networks for label-efficient learning." ECCV, 2022.
**Q5. Why not explore contrastive learning on multi-object images?...**
In our study, we primarily focused on learning feature representations that benefit dense prediction tasks. However, as you noted, SSL on multi-object/scene images has also garnered significant interest in the community. To explore this direction, we conducted similar experiments using the Objects365 dataset—a dataset of a manageable size—instead of ImageNet-1K. Without any hyperparameter tuning, we achieved 48.7 mAP on the object detection task, which is nearly equivalent to the 49.1 mAP obtained with ImageNet-1K. This demonstrates that our approach performs comparably on more challenging, multi-object images. We plan to further evaluate our method across a wider range of tasks beyond dense prediction in future work.
**To minor comments.**
We will revise the reference for DropPos to correctly cite NeurIPS 2023. We will also note the DINOv2 results by reporting the performance from its original model trained on larger datasets and highlighting the difference in training conditions and expected performance. | Summary: The paper proposes a novel self-supervised learning (SSL) framework tailored specifically for object detection (OD) and instance segmentation (IS). The key idea is to integrate positional information more effectively by introducing a learnable positional encoding field that is aligned with the image cropping process. In addition, the method employs a dual masking strategy where both the content and positional embeddings of image patches are masked and then predicted. The authors report improvements on COCO for both detection ($AP_{Box}$) and instance segmentation ($AP_{Mask}$), and they also evaluate on ADE20K for semantic segmentation.
## Update after rebuttal
I appreciate the authors’ efforts in preparing the rebuttal and addressing the major concerns I raised. While I had suggested experimenting with a larger backbone to facilitate a more direct comparison with a state-of-the-art method, I recognize that this concern is relatively minor. Given the authors’ thorough clarifications and convincing responses, I have decided to adjust my overall assessment to Weak Accept.
Claims And Evidence: Claims:
- By aligning the positional encoding with the cropping process, the method preserves crucial spatial cues that are lost in conventional approaches.
- Simultaneously masking and predicting both content and positional embeddings leads to improved feature representations, especially beneficial for OD and IS tasks.
- The proposed approach achieves competitive or superior performance on COCO compared to existing methods.
Evidence:
- Experimental results on COCO indicate improvements in AP_Box and AP_Mask compared to several baselines.
- Ablation studies demonstrate that both the cropping-aligned positional encoding and the dual masking strategy contribute to performance gains.
However, the direct comparison with LOCA - which also focuses on incorporating positional information - is limited. Further experiments (e.g., using a ViT-Large/16 backbone or few-shot settings) would strengthen the evidence for the method’s claimed advantages.
Methods And Evaluation Criteria: The proposed method is well-motivated for OD and IS because these tasks require precise localization and instance-level discrimination, which rely heavily on accurate positional information. The evaluation uses standard benchmarks - COCO for OD/IS and ADE20K for semantic segmentation - and includes extensive ablation studies to assess different design choices.
Concerns:
- While semantic segmentation is evaluated, the paper emphasizes OD and IS as the primary target tasks; the authors should clarify why they separate these tasks given that semantic segmentation also requires fine positional accuracy.
- To better emphasize OD and IS, additional datasets specific to each task could be included.
Theoretical Claims: There are no formal theoretical proofs provided in the paper. The contribution is primarily empirical, with the rationale for the positional encoding strategy based on intuitive arguments and validated through experiments.
Experimental Designs Or Analyses: The experimental design includes:
- Pre-training on ImageNet-1K using a ViT-based architecture.
- Evaluation on COCO for OD/IS and ADE20K for semantic segmentation.
- Detailed ablation studies analyzing the effects of different hyperparameters and design choices (e.g., mask sampling strategies, resolution of the positional field).
Concerns:
- The current experiments, while solid, could benefit from closer alignment with the settings used in LOCA. For instance, experiments with a stronger backbone like ViT-Large/16 and evaluations under few-shot scenarios would provide more comprehensive evidence for the method’s effectiveness on OD/IS.
Supplementary Material: I have reviewed all sections of the supplementary file.
Relation To Broader Scientific Literature: The paper builds on a rich body of SSL work including methods like DINO, iBOT, and DINOv2, and it specifically addresses limitations in previous approaches that attempt to integrate positional information. Although the idea of leveraging positional cues is not entirely new, the paper’s approach of using a cropped positional field and dual masking is presented as an incremental yet meaningful improvement. However, since LOCA is the most direct competitor in this area, a more rigorous experimental comparison is needed to highlight the distinct advantages of the proposed method.
Essential References Not Discussed: The paper cites most of the relevant works in SSL and dense prediction tasks.
Other Strengths And Weaknesses: Strengths:
- Provides comprehensive ablation studies that detail the contribution of each component.
- Experimental results on COCO demonstrate improvements in both detection and instance segmentation metrics.
Weaknesses:
- Figure 2 could benefit from additional annotations or breakdowns to clarify the process and key components of the proposed method.
- The lack of provided code hinders reproducibility and a deeper understanding of the implementation details.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Have you experimented with stronger backbones such as ViT-Large/16? It would be valuable to see experiments that use a higher-capacity model to evaluate how your approach scales. Positive results in this setting would not only solidify your claims about the method’s effectiveness for dense prediction tasks but also offer a more direct comparison to competitors like LOCA, which have explored similar configurations.
2. Have you explored the performance of your method under limited-dataset conditions for OD and IS? By evaluating your approach in few-shot settings - where only a limited amount of training data is available - you could further substantiate the claim that your method learns robust representations that are effective even with minimal supervision. For example, if your method can achieve competitive performance with a few-shot setup similar to that reported for LOCA, it would significantly reinforce your argument regarding its suitability for real-world applications.
3. Could you elaborate on why you chose to focus specifically on object detection (OD) and instance segmentation (IS) rather than addressing semantic segmentation (SS) in a unified manner? The current motivation appears to emphasize locating objects rather than handling instance-level prediction uniquely. This raises concerns about whether the method is truly specialized for OD and IS as opposed to SS. A more detailed explanation of the distinct challenges and requirements of OD and IS would help justify your targeted approach and clarify the specific advantages your method offers for these tasks.
4. Have you considered evaluating your method on additional datasets dedicated specifically to OD or IS? Although COCO is a widely recognized benchmark and provides a strong basis for evaluation, incorporating experiments on another dataset would help demonstrate that the improvements observed are not unique to COCO.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1. Have you experimented with stronger backbones such as ViT-Large/16?...**
Due to insufficient computational resources, we have not been able to conduct such large-scale experiments comprehensively; however, preliminary results confirm the scalability of our approach. Nonetheless, we believe that the performance of our proposed method has been sufficiently demonstrated by the results reported in the paper.
**Q2. Have you explored the performance of your method under limited-dataset conditions for OD and IS?...**
We thank the reviewer for the suggestion. To further evaluate our method's performance under limited training data conditions, we conducted fine-tuning experiments on OD and IS. The results are shown in the table below. Here, the labels 1/32, 1/8, etc. indicate that only 1/32, 1/8, etc. of the original dataset (randomly selected) was used for training, following the experimental settings of LOCA. These results confirm the superiority of our method even under limited-dataset conditions.
| Method | 1/32 AP$^\text{Box}$ | 1/32 AP$^\text{Mask}$ | 1/8 AP$^\text{Box}$ | 1/8 AP$^\text{Mask}$ | 1/2 AP$^\text{Box}$ | 1/2 AP$^\text{Mask}$ | 1 AP$^\text{Box}$ | 1 AP$^\text{Mask}$ |
|--------|----------------------|-----------------------|---------------------|----------------------|---------------------|----------------------|-------------------|--------------------|
| MAE | 31.0 | 28.8 | 33.1 | 31.1 | 42.6 | 39.0 | 48.1 | 43.2 |
| LOCA | 30.4 | 28.2 | 33.0 | 30.6 | 42.3 | 38.5 | 48.3 | 43.0 |
| Ours | 33.0 | 30.8 | 32.8 | 30.6 | 43.3 | 38.9 | 49.2 | 43.8 |
**Q3. Could you elaborate on why you chose to focus specifically on OD/IS rather than addressing SS in a unified manner?**
As you correctly noted, both OD/IS and SS rely on pixel-level class and positional information, yet their requirements for positional precision differ. In OD/IS, it is crucial to accurately distinguish individual instances—even when they are adjacent or overlapping within the same class. Consequently, OD/IS tasks demand the ability to extract and integrate detailed positional information with class cues—precisely the capability our proposed method is designed to achieve. In contrast, SS primarily benefits from positional information that ensures the continuity and consistency of regions corresponding to the same class. Thus, the fine-grained, class-integrated positional cues essential for OD/IS are not only unnecessary for SS but, given the constraints of representational capacity, may even be counterproductive.
**Q4. Have you considered evaluating your method on additional datasets dedicated specifically to OD or IS?**
We conducted experiments on LVIS (with ViT-B/16, and 12 fine-tuning epochs) and observed a similar tendency to that on COCO, as shown in the table below.
| Method | AP$^\text{Box}$ | AP$^\text{Mask}$ |
|--------|-----------------|------------------|
| MAE | 30.1 | 28.7 |
| LOCA | 29.6 | 29.6 |
| Ours | 30.6 | 29.6 |
**To other weaknesses**
We appreciate the feedback. We will revise Figure 2 to include clearer annotations and breakdowns to better highlight the key components of our method. Additionally, we plan to release the full codebase to ensure reproducibility and facilitate further exploration by the community. | Summary: The paper proposes and investigates a novel positional encoding method and an extension to DINOv2 loss that incorporates positional masking for better SSL pretraining of Object Detection and Instance Segmentation Vision Transformer backbones. The method achieved competitive performance on COCO and ADE20K. Ablation studies show the effectiveness of the proposed components, positional sampling, and positional masking. The study also investigates a statistical ground for its scale distribution for the positional sampling part and empirical support for its positional masking strategy and mixed content-position prediction.
## update after rebuttal
This is an interesting work to me. I like that they incorporate relative positional information in the pretraining by using their proposed positional encoding. The experiments display some questionable points, as noted by the reviewers, but are overall promising. So it comes down to whether we should accept reasonably projected results or ask for apples-to-apples comparisons, and I'll leave that to you.
The comparison between masking strategies is not significant. However, I probably should not just focus on the numbers. This work displays a complete path from identifying the cause of the attention artifacts to solving it by designing a masking strategy. How much the attention artifacts affect the final outcome is not a burning question that hinders acceptance.
I am not concerned about the motivation. The question of why contrastive SSL for segmentation is reasonable and worth asking, but not very constructive if taken too seriously; pushed to its limit, it would ask why use SSL at all, since we can always fine-tune.
Therefore I deem this work to be worth displaying to the community for its progress in processing position information with SSL for segmentation.
Claims And Evidence: This work claims that feature representation integrating content and positional information is important for OD and IS tasks. By introducing new cropping-aware positional encoding and position-masking during training, the ViT models studied can learn such features better. The claim is well-supported by comparative studies to other recent works in image SSL and ablation studies of the proposed components.
Methods And Evaluation Criteria: Yes. The paper proposes novel positional embedding and modified DINOv2 loss, adding positional prediction to improve self-supervised learning for Object Detection and Instance Segmentation. The datasets of choice are COCO and ADE20K, which are commonly tested datasets for OD and IS, respectively. The evaluation metrics are box/mask average precision and mean intersection over union for COCO and ADE20K, respectively. It makes sense.
Theoretical Claims: The paper is highly empirical with minimal theoretical claims. However, in Section 4.3, there is a claim that relative object scale in object detection follows a beta distribution, with no further explanation. The ablation study shows a connection between the two by displaying a better result. If the authors intend this as an empirical observation, it would be better to state so explicitly.
Experimental Designs Or Analyses: Yes. The work follows the evaluation protocol of previous works. For OD on COCO, they use the protocol from DropPos and framework from ViTDet while removing windowed attention and RoPE. IS on ADE20K uses the protocol from LOCA and Segmenter’s linear decoder for minimal adaptation. Methods are compared under the same pipeline, and the results are strong under standard metrics: average precision for COCO and mean intersection over union for ADE20K.
Supplementary Material: Yes. I reviewed Supplementary Section A and Section C. Visualizing the last layer of attention shows the model’s ability to distinguish between the foreground and background under spatial proximity and to distinguish object instances, which stands out from models trained with comparing methods and aligns with their focus on OD and IS.
In Section C, the cause of the column attention artifact under the box positional embedding mask remains unclear.
Relation To Broader Scientific Literature: This work extends traditional discrete positional encoding to a normalized vector space and uses it to incorporate relative positional/scale information of image patches in the embedding process. The work demonstrated stronger performance than traditional sin/cos positional encoding on OD and IS tasks.
The authors discussed how the position inference target connects to DropPos and LOCA. This work combines it with pixel inference and proposes a novel positional encoding to achieve better performance. Also, the contrastive learning target is based on iBOT and DINO, which are strong image self-supervised learning methods. This work expands SSL for OD and IS by adding augmentation in positional encoding and combining positional inference with pixel inference.
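A minimal sketch of the idea of evaluating a sin/cos positional encoding at continuous, normalized coordinates (so a crop's patches can be encoded at their positions within the original image). This is my illustrative reconstruction, not the paper's exact scheme; the function name, dimensionality, and frequency choice are assumptions.

```python
import numpy as np

def sincos_pe(coords, dim=8):
    # Evaluate a 2-D sin/cos positional encoding at continuous,
    # normalized coordinates in [0, 1] instead of integer grid indices.
    # Hypothetical sketch: dim/4 dyadic frequencies per axis.
    freqs = 2.0 ** np.arange(dim // 4)
    parts = []
    for axis in np.atleast_2d(coords).T:  # x column, then y column
        ang = np.outer(axis, freqs) * np.pi
        parts += [np.sin(ang), np.cos(ang)]
    return np.concatenate(parts, axis=-1)

# The encoding depends only on the normalized position in the original
# image, not on the crop's own grid size.
pe = sincos_pe(np.array([[0.25, 0.5]]))
```

Because positions are normalized, the same point yields the same code regardless of how the image was cropped, which is what lets relative positional/scale information survive the augmentation pipeline.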
Essential References Not Discussed: No
Other Strengths And Weaknesses: One weakness of this work is the unclear cause of the stronger vertical artifact, compared to horizontal, in the attention map when box-based masks are used.
Other Comments Or Suggestions: Though this work is positioned for improving IS and OD performance, displaying ImageNet classification results could make this work more valuable to readers. A line plot of different ratios between position masking and content masking v.s. performance is another thing that interests me, potentially others.
L362 - Table Table 3a: delete one “Table”
Figure 3 is dangling with no reference to it in the main text.
L643 - Inconsistent usage of Figure and Fig.
Questions For Authors: 1. Is the choice of scale factor distribution based on intuition or mathematical grounds?
2. Why is horizontal attention artifact absent or weaker than vertical ones? Does it imply that the vertical position is harder to predict for images in the pretraining dataset?
3. Does an imbalanced content vs position image ratio improve the results?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **W1. One weakness of this work is the unclear cause of the stronger vertical artifact, compared to horizontal, in the attention map when box-based masks are used.**
Two potential explanations may account for the artifact being more pronounced vertically than horizontally: the statistical bias inherent in the image content and the effect of the horizontal flip (hflip) applied during training. To investigate these factors, we conducted two experiments. Note that although typical contrastive learning applies hflip independently to each crop, our method applies it uniformly to the entire input image to maintain consistent spatial relationships across views.
First, to assess the impact of image content, we rotated all training images 90° counterclockwise and then trained the model under otherwise identical conditions (with hflip applied as usual). As a result, the artifact also rotated 90°, appearing horizontally rather than vertically when we fed test images in their original orientation.
Next, we augmented the data by randomly applying a vertical flip (vflip) at a 1:1 ratio alongside hflip. We expected this to mitigate or randomize any directional artifacts; however, the horizontal artifact persisted.
These findings suggest that the predominance of the vertical artifact is primarily driven by the intrinsic characteristics of the image content. It appears that ImageNet—and natural images in general—exhibit different statistical properties along the vertical and horizontal axes. Further investigation into this phenomenon is warranted for future work.
We provide a figure at the following URL illustrating the effect of each transformation (Hflip, Rot90, hflip/vflip) on the resulting attention maps for reference.
https://drive.google.com/file/d/1uezT10nsOwK6KtWqvBSN-so6B6DdiRUe/view?usp=drive_link
**Q1. Though this work is positioned for improving IS and OD performance, displaying ImageNet classification results could make this work more valuable to readers.**
Thank you for your suggestion. We will include the ImageNet classification results in the revised manuscript. In summary, our method's performance is slightly lower than that of DINOv2* (our reproduction on ImageNet-1k), yet it outperforms existing SSL methods designed for dense prediction. We report the linear probing performance below, following the protocol used in DINOv2.
| IM1K; Linear Probing | ViT-B/16 | ViT-S/16 |
|----------------------|----------|----------|
| DINOv2* | 78.2 | 74.4 |
| Ours | 76.8 | 71.6 |
**Q2. Is the choice of scale factor distribution based on intuition or on mathematical grounds?**
We chose it empirically for the intuition of a higher position precision requirement for small/medium regions. Please refer to our response to ***Reviewer v6Kv Q1.*** for further discussion.
**Q3. Does an imbalanced content vs position image ratio improve the results?**
Thank you for your interest. We found that our method is robust to the content vs. position masking ratio. Please refer details in the table in our response to ***Reviewer v6Kv Q2.***.
**To other comments**
Thank you for pointing out these typos. We will fix the duplicated “Table,” add the missing reference to Figure 3, and standardize the usage of “Figure” throughout the text.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response! My follow-up comments are listed below:
0. I have a new concern about Table 3.b and Table 4.a. The difference between the proposed method and the comparison method is the same, but the paper claims that Table 3.b shows resolution invariance while Table 4.a shows the advantage of cross-mask over box-mask. Only one of the two claims should hold. The marginal improvement on ViT-S with cross-wise masking undermines the necessity of using cross-wise masking, and it is also counter-intuitive that the model did not suffer a lot from the attention artifacts displayed in Figure 7. Do you have the numbers for ViT-B to justify the need for cross-wise positional masking?
1. This is a great study on the potential cause of attention artifacts. The fact that simple augmentation (vertical + horizontal flipping) cannot eliminate the artifact adds value to the proposed cross-wise masking strategy.
2. Though classification is not the focus of this work, the IN1K result slightly weakens the proposed method's capacity, given that the proposed loss function resembles the DINOv2 loss.
3. The table on the task distribution ratio is clear. Having them all provides empirical ground for the 50/50 choice.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment regarding the comparative results in Table 3(b) and Table 4(a). Although both tables yield identical results, our interpretation appears contradictory, which we had not previously noticed.
In summary, there remain measurable differences between the two methods, and higher values are preferable; hence, we continue to assert the superiority of the cross-mask. Furthermore, the reduction in attention artifacts—which, while difficult to fully capture in averages, is clearly advantageous—supports our claim.
It is important to note that the aim of Table 3(b) was to demonstrate that our approach is not overly sensitive to hyperparameters, as resolution can be varied continuously. In contrast, Table 4(a) evaluates the relative merits of two masking strategies. If higher performance were observed only at a specific resolution, it would cast significant doubt on the method's overall validity.
That said, we believe that higher resolution generally yields better results, which aligns with intuition. (From a computational standpoint, it is sensible to limit the resolution to a practical range—which is why we adopted a 50×50 resolution in our experiments.)
In the next revision of the paper, we will clarify our interpretation of the results in Table 3(b) to ensure our intent is clearly conveyed. Although we currently do not have results for ViT-B with cross-wise masking, we will include them in the next version. | Summary: This work extends the teacher-student SSL approach of DINO v2 with an additional task with the goal to improve the dense prediction capability of the trained model. Specifically, during training the student network is either tasked with the alignment of masked positional encoding as well as the standard alignment of masked out content views. To this end, the authors do not apply the positional with respect to just the cropped image part, but with respect to the position of the crop in the original image.
---
update after rebuttal: I think the authors should have referenced the groundwork done by DropPos in a more prominent manner, as it explains the motivation behind the global mask very well. Nevertheless, the idea to sample from a virtual larger image is an interesting contribution.
Claims And Evidence: The main claim of this work is that the proposed sampling of the positional encoding and the additional task of masking and predicting positional encoding vectors between a student and a teacher view improves the dense downstream performance of the model. The experiments suggest that both these additions to the DINO V2 method are beneficial.
Methods And Evaluation Criteria: The method is pretrained on ImageNet1k and its dense prediction performance is evaluated on COCO and ADE20K. A standard setup to compare with other SSL methods. Ablations studies focus on the object detection and instance segmentation performance on COCO.
Theoretical Claims: no theoretical claims.
Experimental Designs Or Analyses: Yes. Experiments and ablations are valid. Besides pretraining in a comparable setup to other methods, the finetuning in both object detection as well as segmentation downstream tasks is performed with established pipelines.
Supplementary Material: Yes. All.
Relation To Broader Scientific Literature: The focus on the role of the positional encoding itself and its utilization during pretraining are valid contributions to further improve contrastive self-supervised learning when it comes to downstream tasks that require dense prediction.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths:
- The method is simple and the authors provide a good motivation for their approach. To build upon the pipeline of DINO v2 makes a lot of sense and the authors describe the additions well. The paper is quite easy to follow.
- The improvements on the downstream evaluations on COCO over the DINO v2 baseline are significant.
- The authors provide a lot of insightful evaluations, ablation studies and qualitative examples.
Weaknesses:
- The method improves object detection on COCO, but the improvements in segmentation are limited. The fact that the position sampling on its own, even without the additional training task shows significant gains is interesting. The importance of the sampling strategy (constant, uniform, beta) is significant, but not explained or discussed extensively. Might the main benefit of the method be simply its adjustment to challenging smaller objects in the COCO dataset?
Other Comments Or Suggestions: None.
Questions For Authors: Why do you use a 50/50 split between content and positional encoding prediction? Given that training with just positional prediction does not work well and the model still needs to learn content based feature extraction with a sufficient number of training samples.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q1. The method improves object detection on COCO, but the improvements in segmentation are limited. The fact that the position sampling on its own, even without the additional training task shows significant gains is interesting. The importance of the sampling strategy (constant, uniform, beta) is significant, but not explained or discussed extensively. Might the main benefit of the method be simply its adjustment to challenging smaller objects in the COCO dataset?**
As you correctly pointed out, the improvement on ADE20K is limited compared to object detection tasks. We believe this is because ADE20K is primarily a pixel-level classification task that does not require explicit spatial reasoning, and thus benefits less from relative positional encoding. (We included ADE20K to enable fair comparison with prior work.) In contrast, object detection tasks on COCO require more precise instance-level localization, which our method is explicitly designed to address.
Thank you also for your valuable comment regarding the sampling strategy. To simulate the long-tail distribution of object sizes in COCO, we introduced a Beta distribution. Since small objects are frequent and more difficult to detect, we considered this distribution more realistic for the training setting. In future work, we plan to analyze the impact of different sampling distributions in a more systematic manner, and to explore their generalization to other tasks and datasets.
We also appreciate your question on whether the improvement mainly stems from better handling of small objects. Indeed, we believe the position sampling is particularly effective for small objects. However, our method also improves attention diversity and instance discrimination capabilities (see Figures 5 and 6). These improvements are not limited to small objects but contribute to better object-level reasoning across a wide range of object sizes.
**Q2. Why do you use a 50/50 split between content and positional encoding prediction? ...**
Thank you for pointing this out. As shown in Table 4b, using position prediction alone leads to a significant drop in performance, whereas content prediction alone yields reasonably strong results. Nonetheless, we found that a 50/50 split between content and position masking achieves the best overall performance (see the table below for more details).
We hypothesize that the effectiveness of this balanced setup stems from the fact that the position prediction task, while insufficient on its own, encourages the model to explicitly encode spatial relationships, which synergizes with content learning. The 50% allocation provides a sufficient signal for content feature extraction while still allowing position prediction to serve as a meaningful auxiliary task. We agree that this trade-off warrants further exploration, and we plan to investigate more dynamic or adaptive scheduling strategies in future work.
| Content vs. Pos. | 100/0 | 75/25 | 50/50 | 25/75 | 0/100 |
|------------------|-------|-------|-------|-------|--------|
| **AP$^\text{box}$** | 43.4 | 44.2 | 44.8 | 44.2 | 21.7 |
| **AP$^\text{mask}$** | 38.9 | 39.5 | 39.8 | 39.5 | 20.7 | | null | null | null | null | null | null |
ASRC-SNN: Adaptive Skip Recurrent Connection Spiking Neural Network | Reject | Summary: This research considers neurons and recurrent structures as an integrated system and systematically analyzes gradient propagation along the temporal dimension, uncovering a difficult gradient vanishing problem. To tackle this challenge, the study proposes innovative architectural modifications that enhance the network's ability to maintain and propagate gradients over extended temporal sequences. Specifically, the introduction of ASRC significantly mitigates the gradient vanishing issue, allowing the network to better capture long-term dependencies in sequential data.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: S
1. well written
2. The SRC, as an alternative to the vanilla recurrent structure, can effectively alleviate the gradient vanishing problem, which is crucial for the stability and training effect of the model when dealing with time series data
Weakness
1. The ASRC model does not outperform existing methods such as PMSN [1] and TC-LIF [2] on the PS-MNIST dataset. This suggests that the ASRC approach may face challenges in handling certain types of spatiotemporal data, indicating a need for further optimization to enhance its generalizability across diverse tasks.
2. ASRC raises the model's computational complexity, increasing training time and escalating computational resource demands.
[1] PMSN: A Parallel Multi-compartment Spiking Neuron for Multi-scale Temporal Processing. Arxiv 2024.
[2]Tc-lif: A two-compartment spiking neuron model for long-term sequential modelling. AAAI 2024.
Other Comments Or Suggestions: N/A
Questions For Authors: See weakness
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. We also hope the reviewer will take note of other strengths of the paper, such as the positive points mentioned by Reviewers 3VEv, keA4, and 9KzL. Below, we provide point-by-point responses to the weaknesses.
>Weakness 1: The ASRC model does not outperform existing methods such as PMSN [1] and TC-LIF [2] on the PS-MNIST dataset. This suggests that the ASRC approach may face challenges in handling certain types of spatiotemporal data, indicating a need for further optimization to enhance its generalizability across diverse tasks.
Response 1: I. We would like to clarify that, as shown in Table 1 of this paper, our method outperforms TC-LIF. We acknowledge that it does not surpass PMSN on the PS-MNIST dataset. However, we want to emphasize that there are many works on neurons in this field, and our approach is based on a new perspective that focuses on the synergistic role between recurrent structures and neurons.
II. We would like to clarify that this paper focuses on long-term sequence modeling. While extending it to other spatiotemporal tasks is valuable, it is beyond the scope of this paper.
>Weakness 2: The ASRC raise the model's computational complexity and increase training time and escalating computational resources.
Response 2: Below, we present the computational overhead of our models on PS-MNIST, the most complex dataset in our study, as shown in the two tables. The increase in memory consumption with larger values of $T_{\lambda}/\lambda$ is significant, while the increase in training time is relatively marginal. The training time of ASRC-SNN is approximately 17% longer than SRC-SNN. When $T_{\lambda}$ and $\lambda$ are close, the memory consumption of ASRC-SNN is slightly higher than SRC-SNN. Considering the trade-off between computational overhead and performance, we recommend selecting relatively small values of $T_{\lambda}/\lambda$ whenever possible. In this case, selecting $T_{\lambda} = 11$ and $\lambda = 8$ results in good model performance, while the additional computational overhead remains acceptable compared to the vanilla RSNN.
Table 1. computational metrics of ASRC-SNN on PS-MNIST
| $T_{\lambda}$| Memory (GB) | Training time(hours) |Accuracy (%) |
| :-----------: | :-----------: | :-----------: | :-----------: |
| 11 | 5.12 | 32.86 |95.15|
| 21 | 8.99 | 32.96 |95.23|
| 31 | 10.21 | 32.90 |95.22|
| 41 | 13.67 | 33.61 |95.36|
| 51 | 15.20 | 33.43 |95.40|
Table 2. computational metrics of SRC-SNN on PS-MNIST
| $\lambda$ | Memory (GB) | Training time(hours) |Accuracy (%) |
| :-----------: | :-----------: | :-----------: | :-----------: |
| 1(vanilla) | 2.43 | 27.93 |84.59|
| 2 | 2.67 | 28.23 |90.65|
| 8 | 4.11 | 28.10 |93.83|
| 16 | 6.39 | 28.20 |94.48|
| 24 | 8.21 | 28.36 |94.44|
---
Rebuttal Comment 1.1:
Comment: I appreciate your response and extra experiments. Most of the concerns have been addressed. But ASRC is still not as good as PMSN, so I will not change my score.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your acknowledgment that most concerns have been addressed, and we recognize PMSN as a strong contribution to the field. However, we highlight that direct comparison between the two works may not be fully equitable, as our research objectives are not entirely identical.
PMSN primarily aims at improving the spiking neuron model and emphasizes feedforward architectures for multi-scale temporal tasks, with a strong focus on computational efficiency. In contrast, our work focuses on recurrent spiking neural networks (RSNNs), exploring the interplay between RNN structures and spiking neurons. In this sense, we believe the two approaches are complementary rather than directly competing.
Beyond its contributions to SNNs, our ASRC method represents an architectural innovation for RNNs. As noted in our conclusion: "The essence of ASRC lies in learning a discrete position along the temporal dimension, with the potential to extend this method to learning a discrete position in both time and space." We believe this paradigm may offer broader methodological implications and inspire new research directions. | Summary: This paper proposes an Adaptive Skip Recurrent Connection (ASRC) framework for Spiking Neural Networks (SNNs) to address gradient vanishing in long-term temporal modeling. By unifying the analysis of neurons and recurrent structures, the authors identify gradient propagation challenges and introduce SRC (fixed skip connections) and its adaptive variant, ASRC (learnable skip spans via temperature-scaled Softmax). Experiments on four benchmarks demonstrate ASRC-SNN’s superior performance and robustness compared to vanilla RSNNs and SRC-SNN.
Claims And Evidence: Most of the claims have corresponding evidence. e.g. the claim "SRC improves performance over vanilla RSNNs". Evidence on Table 1 shows SRC-SNN outperforms most of the prior SNNs.
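A rough sketch of how a temperature-scaled softmax over candidate skip spans can be annealed from a soft mixture toward a near-discrete choice, as the summary describes. The logit values and candidate spans here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def softmax(logits, temperature):
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Learnable logits over candidate skip spans 1..4 (values illustrative).
logits = np.array([0.2, 1.5, 0.3, 0.1])

soft = softmax(logits, temperature=5.0)    # high T: near-uniform mixture
hard = softmax(logits, temperature=0.05)   # low T: near one-hot choice

# With weights w over spans, the skip state could then be read as a
# soft mixture of past hidden states: h_skip = sum_k w[k] * h[t-(k+1)].
```

Annealing the temperature during training lets the network explore many spans early on and commit to a single discrete span at convergence.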
Methods And Evaluation Criteria: The methodology and evaluation make sense. But longer temporal sequence tasks could be used to demonstrate the superiority of ASRC.
Theoretical Claims: Relevant theoretical analysis was checked, but no further Lemma or Theorem insight was provided.
Experimental Designs Or Analyses: The experimental datasets are valid, but the work lacks additional insightful experiments.
Supplementary Material: All the appendices have been reviewed.
Relation To Broader Scientific Literature: see below
Essential References Not Discussed: There seems to be corresponding literature.
Other Strengths And Weaknesses: Strengths:
The integration of adaptive skip connections into SNNs is innovative. The dynamic adjustment of skip spans through temperature annealing offers a fresh perspective on addressing gradient vanishing. The experiments cover multiple datasets and ablation studies. The proposed ASRC could enhance SNNs' applicability to complex temporal tasks.
---
Weaknesses:
1. The temporal sequence length could be marked below each dataset to make it more intuitive.
2. The accuracy is not better than PMSN on the PS-MNIST dataset, even though PMSN's network is not recurrent.
3. Additional experiments incorporating other neuron models into the ASRC architecture could prove its effectiveness and generality. The authors claim that PLIF and GLIF have no effect on ASRC, which should be illustrated with specific data.
4. This work lacks deeper theoretical insight or additional ablation experiments that explain the mechanism more essentially. It could explore the performance relationship between the decay term $\alpha$ of LIF and $\lambda$/$T_{\lambda}$ in SRC/ASRC; there may be some natural pattern or trend, because LIF suffers from temporal gradient vanishing, different values of $\alpha$ influence how fast the gradient in the temporal dimension disappears, and $\lambda$ may be how this problem is solved.
5. The increased GPU memory and training time of LIF with SRC and with ASRC could be reported correspondingly as a reference.
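The $\alpha$-versus-$\lambda$ relationship raised in point 4 can be illustrated with rough arithmetic. This is my own illustrative sketch, with arbitrary values for `alpha`, `T`, and `lam`, not numbers from the paper.

```python
# In a LIF neuron the membrane potential decays by a factor alpha per
# step, so a gradient flowing back through T steps shrinks roughly like
# alpha**T. A skip recurrent connection of span lam shortens the
# temporal path to about T / lam hops, giving alpha**(T / lam) instead.
alpha, T, lam = 0.9, 200, 8   # arbitrary illustrative values

vanilla_gain = alpha ** T        # effectively vanished
skip_gain = alpha ** (T / lam)   # still within a trainable range
```

Under these numbers the vanilla path factor is on the order of 1e-9 while the skip path retains a factor near 0.07, which is the intuition behind SRC mitigating gradient vanishing.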
Other Comments Or Suggestions: More ablation experiments or other datasets could be used to validate ASRC more broadly. See weaknesses.
Questions For Authors: See weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer 9KzL for their review. Below we provide point-by-point responses to the weaknesses.
>Weakness 1: The length of the temporal can be marked below the dataset to make it more intuitive.
Response 1: We thank the reviewer for the helpful suggestion and will implement them in our final manuscript.
>Weakness 2: The accuracy is not better than PMSN PS-MNIST datasets, and the network is also not recurrent.
Response 2: Yes, you're right. However, we clarify that there are many works on neurons in this field, and our approach is based on a new perspective that considers the synergistic role between recurrent structures and neurons.
>Weakness 3: It could be more additional experiment by incoporating other neuron on ASRC architecture to prove effectiveness and generality. The authors claim that PLIF and GLIF have no effect on ASRC, which can be illustrated by the specific data.
Response 3: Considering that if the neurons differ too much from LIF neurons, the gradient propagation in the time dimension would need to be reanalyzed and would likely lead to different conclusions than those presented in this paper, we have only considered PLIF and GLIF neurons.
Below, we present the neuron ablation experiment results for SRC-SNN and ASRC-SNN. First, consider Table 2. When replacing LIF with PLIF in SRC-SNN, performance improves on PS-MNIST but decreases on SSC. The skip span $\lambda$ is set to 12 for PS-MNIST and 3 for SSC. On PS-MNIST, where the skip span is larger, an appropriate membrane potential decay factor can better regulate the flow of information transmitted through the membrane potential within the skip span, especially between the first skip connection linked to the current time step, leading to improved performance. In SSC, where the skip span is smaller, controlling membrane potential transmission is likely less critical than in larger spans, making a slight performance drop reasonable. GLIF introduces additional complexity, significantly complicating the backpropagation topology in the temporal dimension of SRC-SNN. Learning proper temporal dependencies is challenging, so replacing LIF with GLIF in SRC-SNN leads to a performance drop.
In ASRC-SNN, replacing LIF with PLIF does not improve performance, possibly because simultaneously learning both an optimal match between the skip span and the membrane potential factor while ensuring global optimization is challenging. Similarly, the complex gating mechanism of GLIF introduces more learnable parameters, making it even harder for the model to learn a good match between the skip span and GLIF parameters, which explains the significant performance degradation when using GLIF. Additionally, we note that replacing LIF with GLIF increases the training time on PS-MNIST by approximately 60%.
Table 1. Ablation study on neurons in ASRC-SNN
| neuron| Accuracy on PS-MNIST(%) | Accuracy on SSC(%) |
| :-----------: | :-----------: | :-----------: |
| LIF | 95.40 | 81.93 |
| PLIF | 95.16 | 81.93 |
| GLIF | 94.19 | 80.44 |
Table 2. Ablation study on neurons in SRC-SNN
| neuron| Accuracy on PS-MNIST(%) | Accuracy on SSC(%) |
| :-----------: | :-----------: | :-----------: |
| LIF | 94.78 | 81.83 |
| PLIF | 95.11 | 81.57 |
| GLIF | 93.98 | 81.00 |
>Weakness 4: This work lacks deeper theoretical insight or additional ablation experiments to explain the mechanism more fundamentally. It could explore the performance relationship between the decay term $\alpha$ of LIF and $\lambda/T_{\lambda}$ in SRC/ASRC; there may be some natural pattern or trend, because LIF suffers from the temporal gradient vanishing problem, and different values of $\alpha$ could influence how fast the gradient in the temporal dimension disappears and how $\lambda$ addresses this problem.
Response 4: We thank the reviewer for the helpful suggestions. I. We are training our model on sequential CIFAR (timesteps = 1024), which takes a lot of time. We will provide the training results later. Additionally, we found that ASRC-SNN is more effective than SRC-SNN under sparse connectivity. Please refer to "Response 4" to Reviewer 3VEv.
II. Exploring the performance relationship between the decay term $\alpha$ of LIF and $\lambda/T_{\lambda}$ in SRC/ASRC requires extensive additional experimentation. We will include these detailed analyses in our final manuscript.
>Weakness 5: The increased GPU memory and training time of LIF with SRC and with ASRC could be reported as a reference.
Response 5: Please refer to "Response 2" to wx2U.
---
Rebuttal Comment 1.1:
Comment: I appreciate your response and extra experiments. Most of the concerns have been addressed. But I still have some concerns.
The intuition is that the ASRC architecture using more complex neurons than LIF should improve performance. But the results do not show this, which suggests that ASRC is not a general method and that its performance relies on the empirical hyperparameters $\lambda$ and $T_{\lambda}$.
If the authors can show that ASRC or SRC is a general method framework, I will improve the score. Maybe you can try to apply TC-LIF/CLIF/PMSN or other neurons with ASRC or SRC compared to the naive recurrent architecture with the same neuron.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your feedback. We truly appreciate your constructive suggestion regarding the generality of the ASRC/SRC framework.
As shown in Tables 1 and 2, both SRC and ASRC consistently achieve higher accuracy than the standard RSNN across different spiking neuron models (PLIF, GLIF, CLIF, TC-LIF) on two datasets (PS-MNIST and SSC). These results suggest that our method is not limited to a specific neuron type and can serve as a general recurrent structure for a wide range of spiking neurons.
Finally, we provide the performance of our models on the complex sequential CIFAR dataset (with 1024 timesteps). As shown in Tables 3 and 4, the results further demonstrate that SRC-SNN improves over RSNN, and that ASRC-SNN has an even stronger ability to model long-term dependencies and shows better robustness compared to SRC-SNN.
Table 1. Evaluating the Generality of ASRC and SRC Across Spiking Neuron Models on PS-MNIST
| neuron| Accuracy of RSNN(%) | Accuracy of SRC(%) | Accuracy of ASRC(%) |
| :-----------: | :-----------: | :-----------: | :-----------: |
| PLIF | 86.6 | 95.11 |**95.16**|
| GLIF | 91.68 | 93.98 |**94.19**|
| CLIF | 88.58 | 92.57 |**94.62**|
| TC-LIF | 95.23 | 95.67 |**95.84**|
Table 2. Evaluating the Generality of ASRC and SRC Across Spiking Neuron Models on SSC
| neuron| Accuracy of RSNN(%) | Accuracy of SRC(%) | Accuracy of ASRC(%) |
| :-----------: | :-----------: | :-----------: | :-----------: |
| PLIF |81.30 | 81.57 |**81.93** |
| GLIF | 79.47 | **81.00** |80.44|
| CLIF | 73.09 | 81.93 |**82.02**|
| TC-LIF | 69.16 | 75.13 |**77.37**|
Table 3. ASRC-SNN performance on sequential CIFAR
| $T_{\lambda}$| Accuracy(%) |
| :-----------: | :-----------: |
|21 | 71.51 |
| 41 | **71.88** |
Table 4. SRC-SNN performance on sequential CIFAR
| $\lambda$ | Accuracy(%) |
| :-----------: | :-----------: |
| 1(vanilla) | 59.86 |
| 12 | **67.60** |
| 24 | 64.62 |
| 36 | 62.78 | | Summary: The paper introduces ASRC-SNN, a spiking neural network architecture that incorporates adaptive skip recurrent connections to improve long-term temporal modeling. It identifies and addresses the gradient vanishing problem in recurrent spiking neural networks (RSNNs), which occurs when gradients propagate over long time spans. The authors propose replacing the standard recurrent connections with skip recurrent connections (SRC) and further extend this idea with adaptive skip recurrent connections (ASRC), where the model learns the skip span dynamically using a temperature-scaled softmax kernel. The model is evaluated on sequential MNIST, permuted sequential MNIST, Google Speech Commands, and Spiking Google Speech Commands datasets. Experimental results show that SRC improves over standard RSNNs, while ASRC further improves over SRC by allowing different layers to learn appropriate skip spans independently. The results suggest that ASRC-SNN is more robust and better at capturing long-term dependencies compared to existing methods.
## update after rebuttal
Thanks to the authors for the detailed and thoughtful response. I appreciate the clarifications around computational overhead, the robustness to hyperparameters, and the discussion on potential local minima for skip spans. It’s also good to hear that larger-scale experiments are underway.
That being said, my overall view of the paper remains the same. While the additional explanations are helpful, they don't fully address the bigger concerns I had — especially the lack of experiments on more challenging datasets and the fairly limited theoretical grounding. The efficiency analysis is still missing, and the experimental gains, while promising, aren’t strong enough to outweigh the added complexity in my opinion. So I’m keeping my original score of Weak Reject.
Claims And Evidence: The paper claims that ASRC-SNN solves the gradient vanishing problem in RSNNs and improves performance on long-term temporal tasks. The experiments provide reasonable evidence that ASRC-SNN performs better than baseline RSNNs, especially in datasets with long-term dependencies like PS-MNIST and speech recognition tasks. The claim that SRC mitigates gradient vanishing is plausible, as skipping over time steps can help maintain gradients, but the explanation lacks rigorous mathematical justification. The claim that ASRC is superior to SRC is supported by empirical results, but the advantage is marginal in some cases, making it unclear whether the complexity of learning adaptive skips is always justified. The paper does not analyze potential drawbacks, such as the additional computational cost of learning skip spans dynamically, and does not compare against other methods that might also alleviate gradient issues, such as gated spiking architectures. The evidence supports the claims to some extent, but the lack of theoretical analysis and broader comparisons weakens the argument.
Methods And Evaluation Criteria: The method of introducing skip recurrent connections is reasonable for addressing the gradient vanishing problem, and the evaluation criteria, mainly accuracy on temporal classification benchmarks, are standard in the field. The chosen datasets are relevant but mostly small-scale, meaning the real-world scalability of the method remains uncertain. The experiments focus only on accuracy, without discussing training efficiency, memory consumption, or computational overhead introduced by ASRC. It would be more convincing to analyze how much additional cost ASRC incurs and whether the improvement justifies it. The model is also not tested on more complex spiking datasets, such as neuromorphic event-based vision tasks, which would better showcase its advantages over simpler architectures.
Theoretical Claims: The paper provides some mathematical expressions to describe gradient propagation and the effects of skip recurrent connections, but it does not offer a formal proof that ASRC-SNN systematically mitigates the gradient vanishing problem. The key derivations related to gradient propagation are mostly heuristic, and the claim that SRC prevents vanishing gradients is stated without proving that it consistently avoids exponential decay in all cases. The theoretical basis for using the softmax-based adaptive skip mechanism is also weak; while the authors reference softmax annealing behavior, they do not prove that the model converges to an optimal skip span. A stronger theoretical argument, possibly including convergence guarantees or an analysis of how skip spans interact with different time scales, would improve the paper.
Experimental Designs Or Analyses: The experiments compare ASRC-SNN against several spiking models on relevant datasets, but the evaluation has some weaknesses. The authors do not provide runtime analysis or efficiency comparisons, which are crucial for spiking networks that often aim for energy efficiency. The reported improvements in accuracy are sometimes small, and there is no statistical significance analysis to show whether ASRC consistently outperforms SRC. The choice of datasets is somewhat limited to standard benchmarks, and the results do not explore whether ASRC-SNN generalizes well to more challenging real-world tasks. Ablation studies show the effect of skip coefficients and softmax kernel behavior, but there is no deeper analysis of why certain choices work better in specific settings. A comparison with alternative recurrent SNN architectures, such as spiking GRUs or LSTMs, would have made the evaluation stronger.
Supplementary Material: The appendix provides hyperparameter settings and additional experimental details, but there are no major theoretical supplements. The provided tables help clarify the impact of skip coefficients, but the discussion of computational efficiency is missing. The training configuration details are useful but do not include a discussion on how sensitive the model is to hyperparameter tuning. No implementation details are provided to assess how easy it would be to reproduce the results.
Relation To Broader Scientific Literature: The paper is well-situated within the literature on recurrent SNNs and gradient propagation issues. It references prior work on improving spiking neuron models and incorporating recurrent structures, but the discussion is mostly limited to direct competitors. There is little connection to broader machine learning literature, such as meta-learning approaches for temporal dependencies or energy-based models that could also provide alternatives to gradient-based training. The paper does not discuss potential trade-offs compared to alternative recurrent architectures, such as gated SNNs, which have been explored in recent work.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: The paper presents an interesting and well-motivated idea that could help mitigate gradient issues in recurrent SNNs. The introduction of skip connections and adaptive learning of skip spans is an intuitive extension, and the empirical results suggest it is a promising approach. However, the paper lacks a strong theoretical foundation, and the experimental results, while positive, do not clearly demonstrate that ASRC is necessary in all cases. The increase in model complexity from learning adaptive skips is not justified by efficiency analysis, and the small accuracy improvements on some tasks raise questions about whether the method is worth the added computational cost. The writing is clear overall, but some sections could be more precise in distinguishing between empirical observations and theoretical claims.
Other Comments Or Suggestions: The paper would benefit from a more rigorous analysis of computational efficiency, including training time and memory usage comparisons. A theoretical discussion of when ASRC is expected to work better than simple skip connections would improve the argument. The authors should also explore whether ASRC generalizes well to more complex tasks beyond the datasets tested here. Finally, the discussion should include potential downsides of the approach, such as its sensitivity to hyperparameters.
Questions For Authors: 1. What is the computational overhead of learning adaptive skip spans compared to using fixed skips or standard recurrent connections? A runtime comparison would clarify whether ASRC is practical for large-scale problems.
2. How does ASRC-SNN perform on larger, real-world spiking datasets such as event-based vision tasks? Would the method still be effective when applied to more complex sequences?
3. How sensitive is ASRC-SNN to hyperparameter tuning? Does the softmax temperature decay require careful selection, or is the method robust across different datasets?
4. How does ASRC compare to alternative approaches such as spiking GRUs or gated recurrent SNNs, which also attempt to mitigate gradient vanishing? Would incorporating gating mechanisms alongside skip connections improve performance further?
5. Is there a risk that learned skip spans converge to suboptimal solutions? How does the model ensure that it selects useful temporal dependencies rather than defaulting to local minima?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's thorough and insightful comments. Below we provide point-by-point responses to the Questions.
>Questions 1: What is the computational overhead of learning adaptive skip spans compared to using fixed skips or standard recurrent connections? A runtime comparison would clarify whether ASRC is practical for large-scale problems.
Response 1: Please refer to "Response 2" to Reviewer wx2U.
>Questions 2: How does ASRC-SNN perform on larger, real-world spiking datasets such as event-based vision tasks? Would the method still be effective when applied to more complex sequences?
Response 2:
I. This is a valuable direction to comprehensively assess the generalizability of our approach. However, to the best of our knowledge, the mainstream architectures used for event-based vision benchmarks, such as CIFAR10-DVS and DVS-Gesture, are not recurrent. This suggests that we may need to explore suitable new recurrent architectures. Additionally, our study focuses on long-term temporal modeling, whereas the timesteps for CIFAR10-DVS and DVS-Gesture are typically set to no more than 20. In summary, this is beyond the scope of this paper.
II. We are training our model on sequential CIFAR (timesteps = 1024), which takes a lot of time. We will provide the training results later.
>Questions 3: How sensitive is ASRC-SNN to hyperparameter tuning? Does the softmax temperature decay require careful selection, or is the method robust across different datasets?
Response 3: I. In our final setup, the learning rate of the softmax kernel is set to 100 times the global learning rate. Under this setting, ASRC-SNN is robust to the choice of the softmax temperature decay. Additionally, we find that setting the softmax kernel's learning rate to 1× or 10× the global learning rate requires careful decay rate adjustment.
II. We did not carefully select the softmax temperature decay. As mentioned in the last part of Section 3.4.1 of this paper: "In our experiments, the exponential decay factor is set to 0.96." Fine-tuning the decay precisely across different datasets could further improve ASRC-SNN.
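The annealing behavior described in this response (uniform initial weights over candidate skip spans, with the temperature multiplied by the decay factor 0.96 after each epoch, as stated in Section 3.4.1) can be sketched as follows. The kernel size and the mocked logit update are illustrative assumptions, not the paper's code:

```python
import numpy as np

def softmax_kernel(logits, temperature):
    """Weights over candidate skip spans; lower temperature -> sharper distribution."""
    z = (logits - logits.max()) / temperature   # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

T_lambda = 12                      # number of candidate skip spans (illustrative)
logits = np.zeros(T_lambda)        # equal initial weights, no manual bias
temperature, decay = 1.0, 0.96     # decay factor from Section 3.4.1

w_init = softmax_kernel(logits, temperature)   # uniform: 1/T_lambda each

# Mock a learned preference for span index 3, then anneal for 50 epochs.
logits[3] = 0.5
for _ in range(50):
    temperature *= decay           # sharpen the distribution once per epoch
w_late = softmax_kernel(logits, temperature)
```

After annealing, the small learned preference dominates the kernel, which matches the description that the distribution is only slightly sharpened per epoch rather than forced toward a span.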
>Questions 4: How does ASRC compare to alternative approaches such as spiking GRUs or gated recurrent SNNs, which also attempt to mitigate gradient vanishing? Would incorporating gating mechanisms alongside skip connections improve performance further?
Response 4: I. The gating mechanism can create temporal shortcuts that help prevent gradient vanishing [1]. However, we believe these shortcuts are difficult to observe and inherently uncertain. SRC can create fixed temporal shortcuts, while ASRC allows for flexible adjustment of the shortcut span.
II. Since this is not the core focus of this work and spiking GRUs do not have publicly available code, we have not explored this in SNNs. We will release our code after this paper is accepted, and we believe it is easy to extend.
[1] Dampfhoffer M, Mesquida T, Valentian A, et al. Investigating current-based and gating approaches for accurate and energy-efficient spiking recurrent neural networks[C]//International Conference on Artificial Neural Networks. Cham: Springer Nature Switzerland, 2022: 359-370.
>Questions 5: Is there a risk that learned skip spans converge to suboptimal solutions? How does the model ensure that it selects useful temporal dependencies rather than defaulting to local minima?
Response 5: Thanks to the reviewer for the insightful questions. We've also given these issues some thought. First, we emphasize that the softmax kernel initially assigns equal weights to skip connections of different spans, with no manual bias. During training within an epoch, we do not guide the distribution of the softmax kernel’s weights. Only after completing an epoch do we slightly sharpen this distribution by reducing the temperature parameter. Therefore, these issues can be seen as related to whether certain parameters in BP-based neural networks will converge to local optima. Of course, we acknowledge that the convergence of the softmax kernel parameters is quite unique. This question may require knowledge from other fields, such as non-convex optimization theory, to answer. We can only state that our experimental results show that the convergence of the skip spans is good. Finally, we point out that the clear dynamic changes in the softmax kernel during training might provide material for research in other fields. | Summary: This paper proposes the Skip Recurrent Connection (SRC) as a replacement for the vanilla recurrent structure and also proposes the Adaptive Skip Recurrent Connection (ASRC), a method that can learn the skip span of skip recurrent connection in each layer of the network.
Claims And Evidence: 1. This paper has a profound intention, which is "other works overlooking the importance of analyzing neurons and recurrent structures as an integrated framework."
2. The introduction is well-written and can vividly describe the problem being solved.
Methods And Evaluation Criteria: 1. Eqs. 1-3 are confusing. Substituting Eq. 1 into Eq. 3 yields $U^{l}[t]=\alpha U^{l}[t-1]-V_{th} S^{l}[t]+I^l[t]$, which is not consistent with Eq. 4. Please check the symbol definitions carefully.
2. Theoretical analyses are intuitive and easy to understand but lack rigorous theoretical proof.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: 1. Experiments proved the effectiveness of SRC but not its generalization. For example, experiments lacking event-drive datasets, DVS-Gesture, CIFAR-DVS, and so on.
2. SRC is simple and effective. Can each timestep have a different $T_{\lambda}$?
3. This paper lacks an ablation experiment where the performance of SRC increases with the increase of timesteps.
4. In terms of power consumption considerations, since each neuron remains dense for different timesteps of links, the power consumption remains the same as before. So, can Neuron's connections for different timesteps be sparse?
5. This paper lacks ablation experiments on different neurons with SRC. You may not simply describe "and the results show no performance improvement with these substitutions." The reasons behind it should be analyzed.
Supplementary Material: Not applicable.
Relation To Broader Scientific Literature: Not applicable.
Essential References Not Discussed: Not applicable.
Other Strengths And Weaknesses: Not applicable.
Other Comments Or Suggestions: Not applicable.
Questions For Authors: Not applicable.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's comments, especially some valuable questions raised. We also hope the reviewer will take note of other strengths of the paper, such as the positive points mentioned by Reviewer keA4 and Reviewer 9KzL. Below, we provide point-by-point responses to the questions.
>Questions 1: Experiments proved the effectiveness of SRC but not its generalization. For example, experiments lacking event-drive datasets, DVS-Gesture, CIFAR-DVS, and so on.
Response 1: Please refer to "Response 2" to Reviewer keA4.
>Questions 2: SRC is simple and effective. Can each timestep have a different $T_{\lambda}$?
Response 2: Is the reviewer suggesting that different time steps adapt to different skip spans? If so, this would be a further extension of ASRC-SNN. We would like to point out that this extension introduces additional parameters: $T_{\lambda} \times \text{number of layers} \times \text{timesteps}$, whereas ASRC-SNN has $T_{\lambda} \times \text{number of layers}$ additional parameters compared to SRC-SNN. We will incorporate this experiment in our final manuscript as a reference. Additionally, we will release our code after the paper is accepted, and we believe it is easy to extend.
>Questions 3: This paper lacks an ablation experiment where the performance of SRC increases with the increase of timesteps.
Response 3: We clarify that the timesteps for the datasets used in our paper are: GSC (101), SSC (250), SMNIST (784), and PS-MNIST (784). We will later provide experimental results of our models on sequential CIFAR with 1024 timesteps.
>Questions 4: In terms of power consumption considerations, since each neuron remains dense for different timesteps of links, the power consumption remains the same as before. So, can Neuron's connections for different timesteps be sparse?
Response 4:
Thank you for raising this great question! The experiments we conducted further demonstrate the advantage of ASRC-SNN. Below, we present our experimental results. Across different sparsity levels, ASRC-SNN consistently outperforms SRC-SNN. This advantage is particularly evident on the PS-MNIST dataset, which has complex temporal dependencies—where increasing sparsity further amplifies ASRC-SNN’s superiority over SRC-SNN. A possible reason is that sparse connectivity demands stronger temporal modeling capabilities, and ASRC-SNN is better suited to handle this challenge. Additionally, we observed that as sparsity increases, our model’s performance on SSC does not degrade significantly, which may be related to the tendency to overfit easily on this dataset.
Table 1. The performance of ASRC-SNN under different sparsity rates
| Sparsity rate| Accuracy on PS-MNIST(%) | Accuracy on SSC(%) |
| :-----------: | :-----------: | :-----------: |
| 0.00 | 95.40 | 81.93 |
| 0.25 | 95.04 | 81.83 |
| 0.50 | 93.84 | 81.53 |
| 0.75 | 90.21 | 80.51 |
Table 2. The performance of SRC-SNN under different sparsity rates
| Sparsity rate| Accuracy on PS-MNIST(%) | Accuracy on SSC(%) |
| :-----------: | :-----------: | :-----------: |
| 0.00 | 94.78 | 81.83 |
| 0.25 | 93.41 | 81.53 |
| 0.50 | 91.25 | 80.68 |
| 0.75 | 86.34 | 80.18 |
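The rebuttal reports results only at the level of a sparsity rate; one plausible implementation, which we sketch here as an assumption on our part (a fixed random mask applied to the recurrent weight matrix), is:

```python
import numpy as np

def sparsify(W, sparsity, rng):
    """Zero out a random `sparsity` fraction of recurrent weights.

    The mask would typically be sampled once and held fixed during training.
    """
    mask = rng.random(W.shape) >= sparsity
    return W * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 128))          # dense recurrent weights (illustrative size)
W_sparse, mask = sparsify(W, 0.75, rng)  # matches the highest rate in Table 2
```

Under such masking, roughly 75% of the recurrent connections are removed, which is why the rebuttal argues that the surviving connections must carry stronger temporal modeling capacity.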
>Questions 5: This paper lacks ablation experiments on different neurons with SRC. You may not simply describe "and the results show no performance improvement with these substitutions." The reasons behind it should be analyzed.
Response 5: Please refer to "response 3" to Reviewer 9KzL.
---
Rebuttal Comment 1.1:
Comment: The author has addressed most of my concerns, I raised my score. | null | null | null | null | null | null |
Optimal Transfer Learning for Missing Not-at-Random Matrix Completion | Accept (poster) | Summary: This paper studies the problem of matrix completion with missing not-at-random mechanisms, where the observation pattern is row/column-wise. Under such a missing/observation family, the authors establish the minimax lower bound for entrywise estimation error. With side information, the authors propose a computationally efficient estimation framework which achieves the optimal rate under mild assumptions. The effectiveness of the proposed method is demonstrated on both simulated data and real-world genomic and metabolic datasets.
Claims And Evidence: The claims are supported by rigorous theoretical investigations and numerical experiments.
Methods And Evaluation Criteria: The theoretical results are developed mostly in $max$ (entrywise $L_\infty$) norm. It would be interesting to see the discussion in other forms (in an average sense, like Frobenius norm or entrywise $L_1$ norm). In numerical experiments, such metrics are also interesting and informative to see (RMSE, MAE, etc).
Theoretical Claims: I didn't go through the detailed proofs, but the ideas and discussions presented in the main body are sound to me.
Experimental Designs Or Analyses: If I understand it correctly, there is no real missingness in the real-world experiments. Are the missing entries synthetic? It would enhance the significance of this work if the experiments were conducted on datasets where the primary interest is the recovery of entries.
Supplementary Material: No
Relation To Broader Scientific Literature: The presented method works effectiveness for biological findings.
Essential References Not Discussed: No
Other Strengths And Weaknesses: No.
Other Comments Or Suggestions: line 1422, "do poorly in max errow"
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback, and for highlighting that our claims are supported by rigorous theoretical investigations and numerical experiments. We will fix typos in the revision, and we address all other comments below.
## Our methods perform best under new evaluation metrics (MAE and RMSE).
As the Reviewer suggests, we perform new experiments to compare our active and passive sampling methods on Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE), as well as the previously presented metrics of Max Squared Error and Mean Squared Error (MSE). For ease of comparison, we report the results of Fig. 2 (gene expression transfer) and Fig. 3 (metabolic transfer) again in the tables below. The MAE/RMSE numbers are new.
For the gene expression transfer problem (Figure 2), our methods continue to outperform baselines with $p_{\textup{Row}} = p_{\textup{Col}} = 0.5$.
For gene expression data, the errors are:
| Method | MSE | Max Squared Error | MAE | RMSE |
|--------|-------|-----------|-----|------|
| Passive (Ours) | **0.004385** | **0.300035** | **0.044493** | **0.055198** |
| Active (Ours) | *0.018225* | *0.372105* | *0.103285* | *0.114654* |
| LLL22 | 0.151792 | 0.626293 | 0.343497 | 0.389449 |
| BC22 | 0.570254 | 1.000000 | 0.678862 | 0.754897 |
Next, we perform the same experiment for the metabolic transfer problem (Figure 3).
| Method | MSE | Max Squared Error | MAE | RMSE |
|--------|-------|-----------|-----|------|
| Passive (Ours) | *0.000217* | 1.292995 | *0.000934* | *0.014638* |
| Active (Ours) | **0.000024** | **0.294249** | **0.000669** | **0.004883** |
| LLL22 | 0.000360 | *0.651176* | 0.006931 | 0.018147 |
| BC22 | 0.003790 | 1.000000 | 0.021086 | 0.055543 |
We will include the suggested error metrics (MAE and RMSE) in the revision.
## Our theoretical upper bounds on Max Squared Error also imply upper bounds on Frobenius error, RMSE, and MAE.
The max-squared error bounds (Theorem 2.6 and Theorem 2.9) immediately imply bounds on Frobenius error, because $\frac{1}{mn} \Vert \hat Q - Q \Vert_{F}^2 \leq \Vert \hat Q - Q \Vert_{max}^2$.
Similarly, both the RMSE and the MAE are upper-bounded by the Max Absolute Error, which can be bounded via Theorems 2.6 and 2.9.
We will include this discussion in the revision.
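A quick numerical illustration of these inequalities on synthetic matrices (illustrative only, not the paper's data or code):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 30
Q = rng.normal(size=(m, n))                 # ground-truth matrix (synthetic)
Q_hat = Q + 0.1 * rng.normal(size=(m, n))   # noisy estimate

E = Q_hat - Q
max_abs = np.max(np.abs(E))                 # entrywise L_inf (max absolute) error
mse = np.sum(E**2) / (m * n)                # (1/mn) * ||E||_F^2
rmse = np.sqrt(mse)
mae = np.mean(np.abs(E))

# Max-error bounds dominate the averaged metrics, as stated above.
assert mse <= max_abs**2                    # Frobenius bound from max squared error
assert rmse <= max_abs
assert mae <= max_abs
```

Each averaged metric is a mean over quantities that are pointwise at most the maximum, so the max-norm bounds of Theorems 2.6 and 2.9 transfer directly.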
## Our experiments are conducted on datasets where the primary interest is the recovery of entries.
Our real-world experiments (Section 3.1) on gene expression (Fig. 2) and metabolic (Fig. 3) data are on datasets where the primary interest is recovery of missing entries (Parnell et al. 2013, King et al. 2016, Jalan et al. 2024).
## We mask matrix entries to compare estimated vs. ground-truth values.
In each experiment, we mask entries of $Q$ for which the ground-truth is known, so that we can compare the estimated $\hat Q_{ij}$ to the ground truth value $Q_{ij}$. We mask according to the active and passive sampling frameworks (lines 302-320, left).
There are many entries of $P, Q$ for which the ground truth is unknown. Without knowing the ground truth, we cannot report estimation error. Therefore we do not perform estimation experiments on missing entries for which ground-truth values are not known. | Summary: This paper studies transfer learning for matrix completion in a Missing Not-at-Random (MNAR) setting, which is motivated by biological problems. The problem is challenging because entire rows and columns of the target matrix are are missing, making direct estimation impossible. This paper introduces a source matrix. This paper provides lower bounds for estimation error in both active and passive sampling settings. The proposed model is an efficient model that is minimax-optimal in the active setting and rate-optimal in the passive setting.
Claims And Evidence: 1. The proposed estimator is minimax-optimal for the active setting, while in real-world applications, exact optimization of sample selection is impractical.
2. The transfer learning framework effectively corrects for the missingness structure in MNAR settings. However, the distribution shift model is restrictive.
Methods And Evaluation Criteria: 1. The paper defines active and passive sampling settings, but the evaluation assumes clean singular value decomposition (SVD) features can always be extracted, which is unrealistic in noisy biological data.
2. Active sampling relies on an idealized G-optimal design, which assumes one can perfectly select the most informative rows/columns, an unrealistic assumption in biological experiments.
Theoretical Claims: 1. Minimax lower bounds for MNAR matrix completion are presented, but the practical significance of these bounds is unclear.
2. The paper proves optimality under restrictive assumptions, while real-world missingness mechanisms are more complex.
Experimental Designs Or Analyses: 1. The paper evaluates on only two datasets, which is insufficient for demonstrating broad applicability.
2. The paper does not evaluate how well the method generalizes across different datasets or varying missingness patterns.
Supplementary Material: The attached material is the code.
Relation To Broader Scientific Literature: This paper studies transfer learning for matrix completion in a Missing Not-at-Random (MNAR) setting, which is motivated by biological problems.
Essential References Not Discussed: Graph-based and VAE-based matrix completion methods should be discussed.
Other Strengths And Weaknesses: Strengths:
1. The problem of MNAR matrix completion with transfer learning is relevant in biological applications.
2. The theoretical analysis of estimation error bounds contributes to the understanding of MNAR matrix recovery.
Weaknesses:
1. The method is largely a combination of existing techniques.
2. Real-world applicability is questionable; from my perspective, the active sampling strategy is difficult to implement practically.
3. The assumption of structured linear feature shifts between the source and target matrices is restrictive.
Other Comments Or Suggestions: 1. Discuss practical limitations of active sampling and whether it can be applied in real experimental settings.
2. Provide more details on implementation choices, such as hyperparameter tuning and computational complexity.
Questions For Authors: 1. What happens if the structured transfer assumption (Definition 1.2) does not hold?
2. Can the active sampling method be realistically implemented in biological experiments?
3. How robust is the method to different missingness mechanisms?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback, and for highlighting the relevance of our problem (MNAR matrix completion with transfer learning) to biological applications. We address their comments below.
## Our minimax results guide algorithm design.
The practical significance of our minimax lower bounds for MNAR matrix completion is that *no method* can achieve a better estimation error than ours. So, the error guarantees in Theorems 2.6 and 2.9 are unimprovable in our setting.
## Exact selection of rows and columns (in active sampling) *is realistic* for the biological settings we study.
The choice of exact rows and columns to query matches experimental design constraints in multiple settings (lines 9-16, right). Our model is designed to capture these settings. These include (i) metabolite balancing experiments (Christensen and Nielsen, 2000), (ii) gene expression microarrays (Hu et al. 2021), (iii) marker selection for single-cell RNA sequencing (Vargo and Gilbert, 2020) and (iv) patient selection for companion diagnostics (Huber et al. 2022). In Section 3.1, we study precisely the first two settings listed in the introduction: metabolic (Fig. 3) and gene expression (Fig. 2) data.
Some settings, such as electronic health records (Zhou et al. 2023), have different experimental design constraints (lines 46-49, right). Our model would not apply to these settings.
In the revision, we will include additional details on experimental protocols for metabolite balancing and gene expression microarray experiments, as suggested.
## Our MNAR model captures the missingness structure of the biological datasets we study.
Our passive sampling model (lines 97-109, left) matches the row/column missingness structure present in gene expression data (Fig. 1, and lines 16-24, right) and metabolic network data (lines 330-342, left). We focus on these settings in our real-world experiments (Section 3.1).
For other kinds of data, other missingness structures may be present (lines 46-49, right). Our methods can be applied to *any* missingness structure, but we analyze the specific MNAR settings of the paper due to their biological importance.
If the Reviewer has particular missingness structures in mind, we are happy to discuss the applicability of our methods to those settings.
## The distribution shift model is realistic for the datasets we study.
The matrix transfer model (Definition 1.2) is commonly used in the biological literature, such as in Genome-Wide Association Studies (see McGrath et al. 2024 and references therein). Our real-world experiments (Section 3.1) further validate that it is appropriate for gene expression (Fig. 2) and metabolic (Fig. 3) transfer problems.
No model applies perfectly to real-world data. It would be interesting to study other models of distribution shift, as we have stated (lines 437-439, right).
## Our methods can be applied without any structured transfer assumptions.
Even if Definition 1.2 does not hold, our method can be applied. However, we only analyze its theoretical properties under Definition 1.2.
## Our methods *can* use noisy SVD estimates.
We do not assume clean SVD features for either the source matrix $P$ or target matrix $Q$. The source matrix, which follows Assumption 2.5, can be noisily observed in an MCAR or MNAR fashion (lines 167-172, right). The target matrix is observed with additive noise and has missing rows and columns for both active and passive sampling (lines 79-109, left).
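To make the role of noisy SVD features concrete, here is a small self-contained numpy sketch of the two-stage estimator described above (SVD features from a noisy source, then least squares on observed target rows). The linear shift model, noise level, and all variable names are our own illustrative assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 50, 30, 3

# Hypothetical low-rank source P and a linearly feature-shifted target Q.
U_true = rng.normal(size=(n, r))
V_true = rng.normal(size=(d, r))
P = U_true @ V_true.T
Q = (U_true @ rng.normal(size=(r, r))) @ V_true.T   # rows of Q stay in span(U_true)

# Stage 1: learn row features from a *noisy* view of the source via SVD.
P_noisy = P + 0.01 * rng.normal(size=P.shape)
U_hat = np.linalg.svd(P_noisy, full_matrices=False)[0][:, :r]

# Stage 2: least squares in span(U_hat) using only the observed rows of Q,
# then impute every row of Q -- including rows that were never observed.
observed = rng.random(n) < 0.5
coef, *_ = np.linalg.lstsq(U_hat[observed], Q[observed], rcond=None)
Q_hat = U_hat @ coef

mse = float(np.mean((Q_hat - Q) ** 2))   # small despite the noisy SVD features
```

Even with additive noise on the source, the estimated subspace is close enough for the least-squares step to recover the unobserved rows accurately.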
## Our active sampler *does not* assume idealized G-optimal design.
We compute an $\epsilon$-approximate G-optimal design (Definition 2.3) with convex optimization. Specifically, we use the Frank-Wolfe algorithm, which runs in polynomial time (Lattimore and Szepesvari 2020). Setting $\epsilon = 0.05$ is sufficient (Theorem 2.6). A perfect $G$-optimal design is not needed.
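As background on this routine, a minimal numpy sketch of a Frank-Wolfe iteration for approximate G-optimal design is below (our own illustrative code, not the authors' implementation; the function name and the damped step size are assumptions). By the Kiefer-Wolfowitz theorem, the optimal value of $\max_i x_i^\top A(\lambda)^{-1} x_i$ equals the dimension $d$:

```python
import numpy as np

def g_optimal_fw(X, iters=200):
    """Frank-Wolfe for an approximate G-optimal design over the rows of X (k x d)."""
    k, d = X.shape
    lam = np.full(k, 1.0 / k)           # start from the uniform design
    g = np.full(k, np.inf)
    for t in range(iters):
        A = (X * lam[:, None]).T @ X    # information matrix sum_i lam_i x_i x_i^T
        g = np.einsum("ij,jk,ik->i", X, np.linalg.inv(A), X)  # x_i^T A^{-1} x_i
        step = 1.0 / (t + 2)            # damped step keeps lam > 0, so A stays invertible
        lam *= 1.0 - step
        lam[int(np.argmax(g))] += step  # move mass toward the worst-covered row
    return lam, float(g.max())

lam, gmax = g_optimal_fw(np.random.default_rng(1).normal(size=(10, 3)))
# gmax decreases toward d = 3 as the design approaches G-optimality
```

Each iteration only needs one matrix inverse and an argmax, which is why an $\epsilon$-approximate design is cheap to compute.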
## We will add graph-based and VAE-based references in the revision.
We will discuss VAE-based methods and graph-based methods in the revision, including but not limited to the not-MIWAE method of Ipsen et al. 2020 (see discussion with Reviewer XH4Z), as well as [1-2] for VAE, and Jalan et al. (NeurIPS 2024) and [3-4] for graphs.
[1] Ghalebikesabi, Sahra, et al. "Deep generative missingness pattern-set mixture models." International conference on artificial intelligence and statistics. PMLR, 2021.
[2] Cai, Hongmin, et al. "Realize generative yet complete latent representation for incomplete multi-view learning." IEEE Transactions on Pattern Analysis and Machine Intelligence 46.5 (2023): 3637-3652.
[3] Wang, Yao, et al. "Matrix Completion with Graph Information: A Provable Nonconvex Optimization Approach." arXiv preprint arXiv:2502.08536 (2025).
[4] Zhan, Tong, et al. "Collective Matrix Completion via Graph Extraction." IEEE Signal Processing Letters (2024).
## All algorithms are polynomial-time.
We use well-studied polynomial-time algorithms (SVD, least-squares, and Frank-Wolfe).
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes, and Figure 2 shows that this method outperforms other methods as well.
Theoretical Claims: The theoretical claims appear to be correct.
Experimental Designs Or Analyses: Yes, the experiments appear to be sound and valid.
Supplementary Material: Only briefly looked at the supplementary material, which appeared to be correct. It consists mostly of the proofs for theorems in the main paper.
Relation To Broader Scientific Literature: The problem analyzed in this paper is motivated by problems that arise in biology. The authors emphasize its relation to tasks such as various key biology and medical problems, for which this paper may be useful.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: Strengths:
* The paper is comprehensive in its analysis of an interesting problem and studies two different settings, the passive and active sampling settings. As expected, the active sampling setting is easier. Minimax lower bounds and generic error bounds are given for both settings. Further, the solution is computationally efficient.
* This paper generalizes previous results by allowing for any kind of distribution shift rather than only a rotational shift.
* Empirically, there are strong results. The results appear to be complete and well-described.
Weaknesses:
* To improve readability, it would be better to first introduce the estimation framework, and then separately discuss active and passive learning settings.
Other Comments Or Suggestions: See above.
Questions For Authors: n/a
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback, and for highlighting that our paper has strong empirical results, and is comprehensive in its analysis of an interesting problem. We address their comments below.
## We will reorganize the writing to first introduce the estimation framework, and then separately discuss the active and passive sampling settings.
Our estimation framework (Section 2.2) involves first learning features from the source matrix via SVD, and then estimating the target matrix via least-squares. This estimation framework applies to both the active sampling (lines 85-96, left) and passive sampling (lines 97-109, left) settings. To improve readability, we will reorganize the problem setup in the revision to first introduce the estimation framework, and then discuss both the active and passive sampling settings. We thank the reviewer for this valuable suggestion. | Summary: The authors study matrix completion in the MNAR setting under transfer learning. They establish minimax bounds for entry-wise estimation of target values under both active and passive sampling settings. Additionally, they propose a computationally efficient minimax-optimal estimator—leveraging the tensorization of G-optimal designs for active sampling—and a rate-optimal estimator for passive sampling. Experiments on two simulated datasets and two real-world datasets demonstrate the effectiveness of their method compared to two baseline approaches.
Claims And Evidence: The theoretical results are well established and proved.
Methods And Evaluation Criteria: The paper reports Mean Squared Error (MSE) for tasks involving gene expression microarrays and metabolic modeling. Max Squared Error (MSE_max) might be commonly used in these domains due to its sensitivity to extreme deviations, which can be biologically significant. It might be helpful for the authors to adopt other evaluation metrics (e.g., Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE)) for a more comprehensive assessment of model performance.
Theoretical Claims: Yes, I mainly check the theorem mentioned in the main paper.
Experimental Designs Or Analyses: In the experimental design, it would be valuable to include results for a non-transfer setting. Specifically, evaluating state-of-the-art imputation methods when trained solely on Q would provide a useful baseline. If the proposed method outperforms these non-transfer baselines, it would further highlight the significance of studying the transfer setting and demonstrate its practical advantages. I encourage the authors to consider this comparison to strengthen their empirical evaluation.
Supplementary Material: I mainly review Appendix B, additional experiments and details.
Relation To Broader Scientific Literature: I think this paper helps establish a theoretical and practical approach for matrix completion under the transfer setting.
Essential References Not Discussed: I think it would be best if the authors discussed more about missing-data imputation methods under MNAR in the literature.
Other Strengths And Weaknesses: Strengths: The paper is well written and presents a sound theoretical framework.
Weaknesses: The baseline methods in the experimental section are somewhat limited. While I acknowledge that few methods specifically address matrix completion in the transfer setting, it would be beneficial to incorporate baseline methods from the MNAR imputation literature—such as not-MIWAE [1]—to establish a stronger point of comparison. Evaluating these methods using only data from Q would help demonstrate the best possible performance without transfer, thereby further emphasizing the significance of the transfer setting.
[1] Ipsen, N.B., Mattei, P., & Frellsen, J. (2020). not-MIWAE: Deep Generative Modelling with Missing not at Random Data. ArXiv, abs/2006.12871.
Other Comments Or Suggestions: None
Questions For Authors: Refer to Experimental Designs Or Analyses about the evaluation criterion.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback, and for highlighting that our paper is well written and presents a sound theoretical framework. We address their comments below.
## The manuscript contains results for a non-transfer MNAR baseline (BC22).
The method of BC22 (IEEE Transactions on Information Theory, 2022) is an MNAR imputation method (lines 302-305, left). The manuscript includes this method for all experiments.
## Our methods outperform not-MIWAE on MAE, RMSE, MSE, and Max Error.
As the Reviewer suggests, we perform new experiments to compare our methods against the not-MIWAE method of Ipsen et al. 2020. Below, we present the comparison of our active and passive sampling methods on Max Squared Error, Mean Squared Error (MSE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE). For ease of comparison, we report the results of Fig. 2 (gene expression transfer) and Fig. 3 (metabolic transfer) again in the tables below. The MAE/RMSE numbers, and the results of not-MIWAE, are new.
For the gene expression transfer problem (Figure 2), our methods out-perform not-MIWAE with $p_{\textup{Row}} = p_{\textup{Col}} = 0.5$. We train not-MIWAE until convergence, with the latent dimension equal to the true matrix rank of $Q$, and a batch size of 32.
For gene expression data, the errors are:
| Method | MSE | Max Squared Error | MAE | RMSE |
|--------|-------|-----------|-----|------|
| Passive (Ours) | **0.004385** | **0.300035** | **0.044493** | **0.055198** |
| Active (Ours) | *0.018225* | *0.372105* | *0.103285* | *0.114654* |
| LLL22 | 0.151792 | 0.626293 | 0.343497 | 0.389449 |
| BC22 | 0.570254 | 1.000000 | 0.678862 | 0.754897 |
| not-MIWAE | 0.207850 | 1.000000 | 0.415913 | 0.455765 |
Next, we perform the same experiment for the metabolic transfer problem (Figure 3).
| Method | MSE | Max Squared Error | MAE | RMSE |
|--------|-------|-----------|-----|------|
| Passive (Ours) | *0.000217* | 1.292995 | *0.000934* | *0.014638* |
| Active (Ours) | **0.000024** | **0.294249** | **0.000669** | **0.004883** |
| LLL22 | 0.000360 | *0.651176* | 0.006931 | 0.018147 |
| BC22 | 0.003790 | 1.000000 | 0.021086 | 0.055543 |
| not-MIWAE | 0.006666 | 1.000000 | 0.030307 | 0.076841 |
As Reviewer XH4Z suggests, our methods may perform better because not-MIWAE is a non-transfer baseline. This further emphasizes the significance of the transfer setting, which our methods capture, as well as the method of LLL22.
We will include both the suggested baseline (not-MIWAE), and the suggested error metrics (MAE and RMSE) in the revision.
## Our methods out-perform others under new evaluation metrics (MAE and RMSE).
See the above table. We will include these new metrics in the revision to give a more comprehensive assessment of model performance.
As the reviewer correctly notes, our main focus is on Max Squared Error, which is a commonly used evaluation metric due to its sensitivity to extreme deviations; this is biologically significant. | null | null | null | null | null | null |
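For reference, the four metrics reported in the tables above can be computed as follows (an illustrative helper with hypothetical names; note that RMSE is just the square root of MSE, while Max Squared Error is driven by the single worst entry):

```python
import numpy as np

def error_metrics(q_hat, q):
    """MSE, Max Squared Error, MAE, and RMSE over the evaluated entries."""
    err = np.asarray(q_hat, dtype=float) - np.asarray(q, dtype=float)
    mse = float(np.mean(err ** 2))
    return {
        "MSE": mse,
        "MaxSqErr": float(np.max(err ** 2)),   # sensitive to extreme deviations
        "MAE": float(np.mean(np.abs(err))),
        "RMSE": float(np.sqrt(mse)),
    }

m = error_metrics([0.1, -0.2, 0.0], [0.0, 0.0, 0.0])
```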
Subgoal-Guided Policy Heuristic Search with Learned Subgoals | Accept (poster) | Summary: The paper proposes a new approach to training policies for search. The authors leverage subgoal guidance and this way they show how to get the training signal even from unsolved episodes. By experiments in 4 environments, they show that the proposed approach improves sample efficiency of the training procedure.
## Update after rebuttal
Thank you for the answer. Although I'm still on the fence about this paper, I will raise my rating to 3. That said, I still encourage the authors to make the proposed adjustments to strengthen the paper.
> Indeed, if the problems are on the easier end, then our method will not offer gains in terms of running time. [...] These results will be added to the paper.
I cannot give credit for results that are not shown. If that is indeed the case, then please include evaluations on harder instances in the paper. Otherwise, the paper will have limited impact, even if published. The results shown currently are somewhat misleading, as they rely only on the expansions metric, which is not handled equally across methods; taking other metrics, such as running time, into account makes the advantage rather small.
> Why We Prefer to Not Use the Trick of Filtering Out Actions
No, I am proposing a very simple baseline: fix k=2, $\varepsilon=0$, i=0, and j=$\infty$. To be honest, I expect that running such an experiment would take less time than writing the justification for not running it. All you have to do is to add `action_probs = action_probs * (action_probs >= torch.topk(action_probs, k).values.min())` after `action_probs = pi(s)` in the implementation of the baseline.
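For concreteness, the full masking step could be written as follows (a dependency-free numpy stand-in for the torch one-liner above; the renormalization is my own addition so the masked probabilities remain a distribution, matching $\varepsilon = 0$):

```python
import numpy as np

def topk_mask(action_probs, k=2):
    """Zero out all but the top-k action probabilities, then renormalize."""
    probs = np.asarray(action_probs, dtype=float)
    threshold = np.sort(probs)[-k]                 # k-th largest probability
    masked = np.where(probs >= threshold, probs, 0.0)
    return masked / masked.sum()

p = topk_mask([0.5, 0.3, 0.15, 0.05], k=2)         # mass concentrates on the top 2 actions
```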
> is orthogonal to our subgoal-based policies
Yes, I agree. However, this is not exactly about improving speed. The point is to understand why your approach is beneficial. One advantage is that using the high-level policy for additional weighting makes the policy more focused on actions that matter. The question is whether this is the main benefit? If you show that naively focusing the search on top actions is still inferior to your approach, it would significantly strengthen your results.
Claims And Evidence: The main claim of the paper is that the proposed training pipeline improves sample efficiency of the training. More specifically -- fewer node expansions are required until the search algorithm can solve all instances. I am mostly convinced by that. However, a few concerns remain:
1. A single expansion of LevinTS/PHS requires one policy call and 0-k heuristic calls. On the other hand, a single expansion of your approach requires k VQ-VAE calls, 2k policy calls, and 0-k heuristic calls. This is a considerable difference, especially given that the VQ-VAE model likely has considerably more parameters than the policies. Please discuss the difference in expansion cost and make it explicit in the paper. What is the magnitude of that difference in practice?
2. Given that your method requires training 3 policy-like models instead of a single one for baselines, is it not the case that it trains faster simply because in practice it receives more training updates? Please discuss that.
3. The cost of expansion is highly dependent on the number of generated subgoals. However, I found no specification of such hyperparameters (except few in appendices J,K). It makes a huge difference whether you use 4 or 40 subgoals. Please discuss that.
4. While I understand the proposed method, I am missing an intuition why it works. The key step lies in extracting the training signal from non-solution data. The pipeline you proposed uses those to train the subgoal generator and low-level policy. This data is sufficient to train a goal-reaching low-level policy. The subgoals learned this way are not targeted toward the solution, hence the generator learns to output _any_ valid subgoals. While it makes sense that it should considerably improve the sample efficiency of the subgoal search, I don't see immediate reasons for why we should expect it to outperform plain low-level search. As far as I understand, the main role of subgoals in your pipeline is that they provide additional lookahead-driven weighting for actions. What would happen if you augmented the baselines by "simulating" this subgoal guidance (e.g. during expansion, for each action make a single few-step policy rollout, treat the final state as a subgoal and the product of policy predictions on the path as subgoal probabilities)? Would it be sufficient to close the gap between methods?
5. There is also another difference: subgoals can be seen as a way to select more promising action from the full set. How would the performance of low-level methods change if you limited the number of actions used for each expansion to only top 2 according to the policy (or use weighting [0.45, 0.45, 0.05, 0.05] if you wish to preserve completeness)?
6. Were the key hyperparameters of each method tuned separately for each domain? From my experience, the weight of 1.5 for A* seems quite high, did you check other values as well?
I'm happy to accept the paper if my concerns are addressed.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: There are no theoretical claims that need to be checked.
Experimental Designs Or Analyses: Yes, I checked all experiments in the main text.
Supplementary Material: I skimmed the whole appendix and read more carefully appendices F, G, H, J, L.
Relation To Broader Scientific Literature: The proposed approach is mostly a combination of existing tools (LevinTS/PHS search, HIPS-inspired subgoal model, Louvain algorithm for subgoal discovery), but the proposed combination and additional training details are novel to the best of my knowledge.
Essential References Not Discussed: I'm not aware of any essential references that have to be added.
Other Strengths And Weaknesses: While the proposed pipeline mostly rely on existing parts, the proposed combination is novel and as such can be a good contribution.
Other Comments Or Suggestions: - The running title is missing.
- typos: (90L "subgoals"), (74R "represent"), (238L "a hierarchy"), (251R "using")
- Figure 1.c.ii: there is a problem with indexing, all sequences have the same indices.
- Why using the Louvain algorithm is helpful? Is it much better than sampling subgoal pairs from unsolved trajectories in the same way as from solved trajectories?
Questions For Authors: Please answer my questions from the Claims and Evidence section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your helpful comments and feedback.
**Concern 1**
Each node generation for LevinTS/PHS requires a single policy call (plus a heuristic call for PHS). Each node generation for our approach requires K VQ-VAE calls to generate the K subgoal observations, K policy calls for the low-level policy (over each of the K generated subgoal target observations), and a single policy call for the high-level policy mixture. We make the K calls faster by batching them. Due to batching, the slowdown is not linear with the size of K (see **Concern 3**).
Consider the results of the following experiment on a much more difficult version of BoulderDash, following the same procedure from **Section 4.3**. We also increased the network size used in PHS*($\pi$) (denoted LARGE) to match the number of parameters used in PHS*($\pi^{SG}$) to show that simply using more parameters does not help.
PHS*($\pi^{SG}$) learned how to solve the difficult set of BoulderDash within 6.25 hours. The baselines did not learn to solve the problems even when granted 4x more expansions and 2x more time. We also report the number of nodes generated per second of each model. We only have a factor of 2 slowdown while using 4 times more parameters and K subgoals.
|Algorithm | Total Loss (Expansions) | Total Time (Hours) | Training Instances Solved | Nodes Generated per Second |
|---|---:|---:|---:|--:|
| WA* | 606,000,000 | 15.58 | 16 |1,872.31 |
| LevinTS($\pi$) | 636,718,638 | 16.69 | 16 | 1,828.51 |
| PHS*($\pi$) | 599,849,865 | 15.61 | 16 | 1,765.00 |
| PHS*($\pi$) LARGE | 344,512,985 | 33.04 | 1 | 496.45 |
| PHS*($\pi^{SG}$) [Ours] | 85,470,564 | 6.25 | 10,000 | 791.69 |
We then tested the resulting models on a separate test set of 100 problems to match the procedure done for Table 1.
| Algorithm | Solved | Expansions |
| --- | --: | --: |
| WA* | 14 | 272,872.28 |
| LevinTS($\pi$) | 14 | 280,407.36 |
| PHS*($\pi$) | 14 | 288,045.64 |
| PHS*($\pi$) LARGE | 4 | 110,805.75 |
| PHS*($\pi^{SG}$) [Ours] | 100 | 1,291.37 |
**Concern 2**
Yes, we equip the learning agent with the ability of learning from experiences that previous methods could not use for learning. We update our method more often because we learn how to reach the subgoals the clustering algorithm generates. The baselines cannot learn from such experience because they do not learn and exploit subgoals.
**Concern 3**
We make a single call of batch size K for a faster query. The time information was indeed missing in our ablation, which we include below. Due to batching, the number of generations per second does not decrease linearly with the number of subgoals.
| Codebook Size (Number of Subgoals) | Nodes Generated per Second |
| --- | ---: |
| 1 | 266.85 |
| 4 | 261.74 |
| 16 | 208.44 |
**Concern 4**
A key difference between our approach and the baselines is that we can learn from problems we have yet to solve. As an example from BoulderDash, if many diamonds are required to solve a problem, it may be difficult to find a solution, but sampled paths may contain the agent collecting a single diamond. The low-level policy learns to collect diamonds even before the method can solve any problems (see Figure 3 of the paper).
While we find the suggestion of performing roll-outs to simulate subgoal guidance interesting, we don’t see how it would be competitive with our method. Our method learns helpful subgoals because it uses the information contained in the CLOSED list of the search, which stores the entire search tree. The roll-out approach contains a single path. While the CLOSED list contains helpful information such as how to collect a key and open a door, it is unlikely a roll-out will capture this type of information.
**Concern 5**
We could apply the suggestion of using the top-k (e.g., k = 2) actions during search with the baselines in two moments: (1) during training and (2) during testing.
- **Training**: Once the neural model is randomly initialized, the policy is nearly uniform. So choosing the top-k would be an arbitrary choice, as we would choose the two actions that happen to have a slightly higher probability at initialization.
- **Testing**: Once these models manage to solve all training instances, their policies almost deterministically choose their actions. So choosing k > 1 would increase the number of expansions because we would be forcing the algorithm to explore when it can almost deterministically solve the problem.
**Concern 6**
We did not tune hyperparameters specifically for each domain for our method. For any hyperparameter specific to HIPS-e, we used their listed ones. For WA*, we followed LevinTS and PHS*, which used a weight of 1.5. WA* tends to be quite robust to the weight value as it learns its heuristic function.
---
Rebuttal Comment 1.1:
Comment: I acknowledge the answers. However, I'm still somewhat concerned about using the number of expansions as the main budget metric. I acknowledge that with batching the running time can be made only ~2x slower than the baselines (actually the baselines can be batched as well, but with a bit more effort). However, if we adjust the main results by that factor, the difference between $\pi^{SG}$ and the baselines seems to disappear.
Ad concern 5: As the training progresses, the models get better. At that point, it makes sense to expand only a limited number of best actions according to the policy estimates, and not to waste the budget on worse moves. From my experience, such a trick usually considerably improves efficiency, is easy to add, and is in line with how the proposed method works, so it makes the comparison fairer.
If the search action selection is not important during either training or testing, why is search used at all?
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to respond. We appreciate the opportunity to address your concerns.
**The Trick of Filtering Out Actions Can be Used with Our Approach**
We failed to clarify in our initial response that the trick of filtering out actions the reviewer suggested is orthogonal to our subgoal-based policies and therefore can potentially be used to speed up the search of our approach too.
The filtering-out trick the reviewer suggests will increase the probability of the top $k$ actions and reduce the probability of the remaining actions. What the high-level policy $\pi^\text{hi}$ does is increase/decrease the weight of a low-level policy, not of an individual action as the trick does. Instead of increasing the probability values to a predefined value, our approach relies on an optimization process that minimizes the Levin loss—an upper bound on the size of the tree. As a result, the high-level policy $\pi^\text{hi}$ has its weights adjusted to minimize the size of the tree. This is in contrast with the more aggressive approach of setting the top $k$ actions to a predefined value.
Once $\pi^\text{hi}$ is combined with the low-level policies as shown in Equation 4 of the paper, we obtain a probability distribution like any other. So there might be a point during training that our approach could also benefit from increasing the probability of the top $k$ actions, when the high and low-level policies are not as sharp as they will be at the end of training. We hope this clarifies this point, and it explains why our experiments are fair. They would be unfair if we only used the top-$k$ trick with the baselines and not with our approach.
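To illustrate this point: once the high-level weights are combined with the subgoal-conditioned low-level policies, the result is an ordinary action distribution. The sketch below is a plausible form of such a mixture (we do not reproduce the paper's Equation 4 here; the shapes and the simple weighted sum are our own assumptions):

```python
import numpy as np

def mixture_policy(pi_hi, pi_lo):
    """Hypothetical mixture: weight each subgoal-conditioned low-level action
    distribution by its high-level subgoal weight, then renormalize."""
    probs = np.asarray(pi_hi) @ np.asarray(pi_lo)  # (K,) @ (K, A) -> (A,)
    return probs / probs.sum()

# Two subgoals, two actions: the mixture is an ordinary action distribution,
# so a top-k filter could in principle be applied to it as well.
p = mixture_policy([0.7, 0.3], [[0.9, 0.1], [0.2, 0.8]])
```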
**Why We Prefer to Not Use the Trick of Filtering Out Actions**
In our original response we explained that we cannot use the trick of picking $k$ actions and maximizing their probabilities while keeping the other actions at a minimum $\epsilon$ in the beginning and at the end of training. In the early stages of learning, the probability of all actions is similar and the decision of which actions to maximize will be arbitrary; at the later stages of learning the policy is nearly deterministic, so maximizing $k > 1$ actions will hamper performance. Can the trick work in between these two extremes? Yes, it can. However, you need to define the following to make it work:
1. Choose the number of actions $k$ to maximize.
2. Choose the value $\epsilon$.
3. Choose the learning iteration $i$ in which to start using the trick (we cannot use it in the beginning of training and we cannot use it at the end of training).
4. Choose the learning iteration $j > i$ in which to stop using the trick.
If we sweep over different values of $k$, $\epsilon$, $i$, and $j$, we will possibly find values that will speed up learning. However, by the time we finish the sweep, the algorithms that do not use the trick will likely have finished training because they do not need to set any of these hyperparameters.
Importantly, LevinTS and PHS* are trained to minimize the Levin loss, an upper bound on the number of nodes they expand to solve a problem. The training process we use is principled because we are learning to minimize the search tree of these algorithms. The trick of filtering out actions would make us lose this important property.
Since the trick requires us to set a number of hyperparameters and we would lose the property that makes us excited about LevinTS and PHS, we prefer not to use it in our experiments.
**Running Time**
Indeed, if the problems are on the easier end, then our method will not offer gains in terms of running time. However, note that we showed in our initial response that, for harder problems, such as BoulderDash with more keys and diamonds, the difference between our method and the baselines is so large that even if we allow the baselines to use more than twice as much time, they would only solve a fraction of the problems our method can solve. These results will be added to the paper.
Regarding batching, all search algorithms we evaluated were implemented with batched best-first search. That is, instead of popping one node from OPEN, we pop $k=32$ so that their children can be evaluated in a batch. This is the standard implementation for search algorithms learning a policy and/or heuristic function, and all algorithms already benefit from batching. | Summary: The paper proposes a new approach for utilizing policy tree search by the inclusion of learned sub-goals. The subgoals are learned online, as the tree search expands during a Bootstrap process while attempting to solve problems. One key innovation is the utilization of failed solution trees as data as well (similar to many modern RL algorithms like HER), by partitioning the graph with the Louvain algorithm. The policy search is hierarchical, with the subgoal policy guided by a weighted sample of the high-level subgoal and the low-level subgoal policy. This forms the evaluation function for policy tree search. The subgoal generator uses a VQVAE (Vector-Quantized Variational Autoencoder). One key distinction is that the search still operates using the problem's operator set, while the low-level policy guides the search to reach the subgoals. Thus, sub-goal representations that do not map to any high-level state are less likely to cause problems.
The authors then perform an empirical analysis on 4 domains using the new evaluation function and perform an ablation that showcases the strengths of exploiting failed attempts.
Claims And Evidence: The claims are supported by convincing evidence. Their empirical evaluation showcases that their approach can reduce the computational effort in problems on several domains compared to SOTA baselines.
Methods And Evaluation Criteria: Using subgoals to guide search in sparse, binary reward type problems has been a well-known and well-researched topic. The application to policy search in an online fashion and in the hierarchical setting is reasonable.
Theoretical Claims: The theoretical claims that their method enjoys the same guarantees that PHS* provides (Sec 3.5) seems okay to me.
Experimental Designs Or Analyses: The choice of domains is good, baselines are suitable and evaluation metrics are reasonable for the most part although some additional evaluation metrics could be added.
Supplementary Material: I glanced through the supplement.
Relation To Broader Scientific Literature: This work advances the state-of-art in policy guided search algorithms by introducing a method of learning and utilizing learned subgoals online while also exploiting failed attempts.
Essential References Not Discussed: Not applicable
Other Strengths And Weaknesses: 1. I think one weakness is the lack of other evaluation metrics, such as time in seconds on the x-axis. It is quite important to add such results, especially because learning the subgoals involves added computational effort.
This would also solve the problem of "incomparable" comparisons with HIPS-e: even if node expansions are not directly comparable, time certainly is.
2. Another issue is that the analysis of failures is a bit lacking. I would have preferred a more detailed analysis of why Sokoban seems to perform worse using this approach. Subgoals needing to be undone is pretty common among many problems (e.g., Sussman's anomaly), so if your approach cannot be expected to work here, then I would question its overall utility. Please expand on why your approach is outperformed by the vanilla methods on Sokoban. It might also make sense to run a quick experiment on Blocksworld (the domain in Sussman's anomaly) and analyse whether similar observations can be made.
Other Comments Or Suggestions: N/A
Questions For Authors: Overall I have a very positive view of this paper. My negative score is due to the lack of clarity regarding W2. I hope the authors can appropriately resolve my questions in weaknesses before I can revise my score.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your helpful comments and feedback.
**Weakness 1**
We understand the reviewer's concern with running time, as some of the gains we have in terms of expansions disappear when solving easier problems. To address this, we ran experiments on a much more difficult version of BoulderDash to highlight that the difference in inference speed is not an issue when the problems are too difficult for the baselines. We also increased the network size used in PHS*($\pi$) (denoted LARGE) to match the number of parameters used in PHS*($\pi^{SG}$) to show that simply using more parameters does not help.
Our results are summarized in the tables below. Our proposed approach, PHS*($\pi^{SG}$), learned how to solve the difficult set of BoulderDash within 6.25 hours. WA*, PHS*($\pi$), and PHS*($\pi$) LARGE did not learn to solve the problems even when granted 4x more expansions and 2x more time. Increasing the size of the neural network of PHS*($\pi$) does not help either.
|Algorithm | Total Loss (Expansions) | Total Time (Hours) | Training Instances Solved | Nodes Generated per Second |
|---|---:|---:|---:|--:|
| WA* | 606,000,000 | 15.58 | 16 |1,872.31 |
| LevinTS($\pi$) | 636,718,638 | 16.69 | 16 | 1,828.51 |
| PHS*($\pi$) | 599,849,865 | 15.61 | 16 | 1,765.00 |
| PHS*($\pi$) LARGE | 344,512,985 | 33.04 | 1 | 496.45 |
| PHS*($\pi^{SG}$) [Ours] | 85,470,564 | 6.25 | 10,000 | 791.69 |
We then tested the resulting models on a separate test set of 100 problems to match the procedure done for Table 1. A maximum budget of 512,000 was given for each problem instance. PHS*($\pi^{SG}$) can solve all 100 problems with a bit more than 1000 expansions. The baselines can solve a small fraction of the 100 problems even when granted 100x more expansions.
| Algorithm | Solved | Expansions |
| --- | --: | --: |
| WA* | 14 | 272,872.28 |
| LevinTS($\pi$) | 14 | 280,407.36 |
| PHS*($\pi$) | 14 | 288,045.64 |
| PHS*($\pi$) LARGE | 4 | 110,805.75 |
| PHS*($\pi^{SG}$) [Ours] | 100 | 1,291.37 |
While we can provide the runtime cost of HIPS-e, it would still not be meaningful. We used the open source HIPS-e implementation from the authors, which is implemented in Python. Our experiments and all other baselines are implemented in C++. If we were to compare running time, we would show our approach being substantially faster than HIPS-e, but due to the mismatch in programming languages, this comparison is not meaningful. Note that our comparison to HIPS-e is still meaningful in the sense that while our approach can learn to solve instances of BoulderDash, HIPS-e fails to do so.
In summary, while our proposed approach might not be faster on easier problems, due to the cost of querying more expensive neural models, it can allow us to solve problems we would not be able to solve with the baseline systems considered in our experiments.
**Weakness 2**
This is a great point and we are happy we have the chance to address it in our rebuttal. To avoid confusion with the argument that will follow, let us start by saying that our Sokoban results are by no means weak. The method was able to learn how to solve all training instances and performed best in terms of expansions on the test instances. We just did not see the clear advantage in our favor that we see in the other domains. We conjecture that the reason our method performs worse in Sokoban in comparison to the other three problems is due to the training data the clustering algorithm generates. While clustering finds important structures in the other three problems, it did not find helpful structures in Sokoban. For example, in BoulderDash, once the agent unlocks a door, it opens a region of the state space that was not available before. This is the type of structure the clustering algorithm might be capturing. With that, the structure of the underlying state space could be offering subgoals that are helpful in solving these problems. Sokoban might not have such a helpful underlying structure. Note that it is not trivial to verify our conjecture because we do not attempt to reconstruct the subgoals; we use them to condition the policies.
We see the Sokoban results as a demonstration that our approach is robust. Even if the clustering algorithm does not find helpful structures in the state space, we are still able to learn how to solve the problems.
To illustrate that our approach can undo reached subgoals when needed, we collected statistics of when we need to remove boxes from the goal in Sokoban. For the solution paths our method finds, we tracked how often a box was pushed onto a goal and then undone (for example, to shuffle around for another box to get through). The table below shows the results: in 67% (37+17+9+4) of the problems solved, our system had to move at least one box from a goal location.
| Number of times a solved box was undone | Percentage of paths found |
| --- | ---: |
| 0 | 33% |
| 1 | 37% |
| 2 | 17% |
| 3 | 9% |
| 4 | 4%|
---
Rebuttal Comment 1.1:
Comment: Thanks for your response and additional results. This has resolved my concerns and I have increased my score by 1 point.
---
Reply to Comment 1.1.1:
Comment: We would like to again thank all the reviewers for their feedback. Also, we would like to thank reviewers **yx2F** and **ocyY** for updating their reviews as a result of our discussions.
We will be available to answer questions in the next few hours before OpenReview closes, in case reviewers **WUJG** and **nFhe** have any follow-up questions. If we miss the discussion period, we kindly ask them to update their reviews with eventual questions, as their questions will help us improve our work.
In addition to the results on the more difficult BoulderDash problems, we also completed a run of PHS* with a flat policy $\pi$ and with our subgoal-based policy $\pi^\text{SG}$ on more difficult CraftWorld problems, where the agent needs to craft more complex items. The table below summarizes the results.
**CraftWorld**
|Algorithm | Total Loss (Expansions) | Total Time (Hours) | Training Instances Solved | Nodes Generated per Second |
|---|---:|---:|---:|--:|
| PHS*($\pi$) | 373,496,109 | 18.35 | 80 | 855.58 |
| PHS*($\pi^\text{SG}$) [Ours] | 123,729,550 | 16.12 | 10,000 | 451.62 |
Similarly to what we observed in the more difficult problems of BoulderDash, the baseline fails to learn a policy after more than 18 hours of computation, while our system is able to learn an effective policy in 16 hours of computation. We will complete these experiments with all the baselines and include them in the paper.
We hope that the results on the more difficult BoulderDash and CraftWorld instances will clear the concerns the reviewers had related to the running time of our method and the difficulty of the problems we used in our experiments. | Summary: The authors propose a new way to generate policies for deterministic tree search algorithms. Their method to do so is by learning to generate sub-goals using a VQVAE where the subgoals are recognized , learning to reach these subgoals with a low level policy and a high level policy over subgoals.
Claims And Evidence: The experimental results are over 4 domains and look promising. But the domains look very similar and not very large. I wonder how challenging they are, and whether the proposed approach is scalable to harder domains and problems.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No theoretical claims in the paper.
Experimental Designs Or Analyses: The experimental results in Figure 2 looks sound as far as I can tell.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper seems to consider related scientific literature appropriately.
Essential References Not Discussed: None that I know of.
Other Strengths And Weaknesses: The motivation is good and I think deterministic tree search problems are very common in real world.
The improvements over previous works are substantial and novel.
The paper is a bit complicated and hard to follow, with many parts; some text simplification or simpler diagrams of specific components, like the Louvain algorithm or the search part, would have helped with understanding.
Even though the overall algorithm seems pretty complex, the experiments are done on problems that look pretty small and easy, so either an explanation of how hard these problems are, or tackling problems that look harder, would have been a great help in assessing the quality of the work.
Other Comments Or Suggestions: Perhaps you can consider optimization problems as another test-bed (for example TSP and extensions). There are numerous benchmarks there and it should be easy to see if the proposed approach is "competitive".
Questions For Authors: 1. How complex are the domains you've experimented on?
2. How well will the method work on much bigger problems?
3. Do the learned sub-tasks in the experiments have some semantic meaning?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your helpful comments and feedback.
**Question 1.**
The environment domains chosen are common amongst related works. In terms of complexity, Sokoban is PSPACE-complete [1], and CraftWorld and Box-World are NP-hard [2]. BoulderDash requires collecting multiple diamonds before reaching the exit, which is equivalent to the NP-hard Hamiltonian path problem between the diamonds and the exit of the puzzle. The domains have important differences that make them a good benchmark for search algorithms. Both Sokoban and Box-World have deadlocks (pushing a box into a corner for Sokoban and using a key on the wrong box for Box-World). BoulderDash has a unique property with its dirt elements: the agent removes a dirt tile when moving over it. This can greatly increase the size of the state space, and the agent needs to learn that dirt elements can be ignored.
We ran additional experiments on a much larger version of BoulderDash. As before, we use the bootstrap process to train through search WA*, PHS*($\pi$), and our method PHS*($\pi^{SG}$) over 10,000 problems where an initial budget of 4000 is used. The neural networks used for the policy/heuristics in WA* and PHS*($\pi$) have ~2.5M parameters, whereas the total number of parameters for all policies/subgoal generators in PHS*($\pi^{SG}$) is ~10M. We also tried increasing the network size used in PHS*($\pi$) (denoted LARGE) to match the number of parameters used in PHS*($\pi^{SG}$) to show that simply using more parameters does not help.
Our results are summarized in the tables below. Our proposed approach, PHS*($\pi^{SG}$), learned how to solve the difficult set of BoulderDash within 6.25 hours. WA*, PHS*($\pi$), and PHS*($\pi$) LARGE did not learn to solve the problems even when granted 4x more expansions and 2x more time. Increasing the size of the neural network of PHS*($\pi$) does not help either.
|Algorithm | Total Loss (Expansions) | Total Time (Hours) | Training Instances Solved | Nodes Generated per Second |
|---|---:|---:|---:|--:|
| WA* | 606,000,000 | 15.58 | 16 |1,872.31 |
| LevinTS($\pi$) | 636,718,638 | 16.69 | 16 | 1,828.51 |
| PHS*($\pi$) | 599,849,865 | 15.61 | 16 | 1,765.00 |
| PHS*($\pi$) LARGE | 344,512,985 | 33.04 | 1 | 496.45 |
| PHS*($\pi^{SG}$) [Ours] | 85,470,564 | 6.25 | 10,000 | 791.69 |
We then tested the resulting models on a separate test set of 100 problems to match the procedure done for Table 1. A maximum budget of 512,000 was given for each problem instance. PHS*($\pi^{SG}$) can solve all 100 problems with a bit more than 1000 expansions. The baselines can solve a small fraction of the 100 problems even when granted 100x more expansions.
| Algorithm | Solved | Expansions |
| --- | --: | --: |
| WA* | 14 | 272,872.28 |
| LevinTS($\pi$) | 14 | 280,407.36 |
| PHS*($\pi$) | 14 | 288,045.64 |
| PHS*($\pi$) LARGE | 4 | 110,805.75 |
| PHS*($\pi^{SG}$) [Ours] | 100 | 1,291.37 |
**Question 3.**
We are happy to give more details in the paper, but as part of our implementation details, we do not use the fully grounded reconstructed observation from the VQVAE decoder. This makes it difficult to gain insights into the potential semantic meaning of the subgoals produced. HIPS [3], which uses the fully reconstructed observations as its subgoal targets, does provide visualizations of the subgoals generated. We do think this would be an interesting research direction: to see if any semantic meaning can be learned and/or used in downstream tasks.
[1] Culberson, J. Sokoban is PSPACE-complete. IEICE Technical Report, 1997
[2] Viglietta, Giovanni. "Gaming is a hard job, but someone has to do it!." Theory of Computing Systems 54 (2014): 595-621.
[3] Kujanpää, Kalle, Joni Pajarinen, and Alexander Ilin. "Hierarchical imitation learning with vector quantized models." International Conference on Machine Learning. PMLR, 2023. | Summary: - The paper proposes an algorithmic framework for best-first search intended to solve deterministic search problems. The primary contributions of the paper are algorithmic and empirical. The main algorithmic idea is to learn control knowledge to speedup search. Subgoals here refer to state abstractions represented by a VQVAE, with the size of the subgoal "space" parameterized by the size of the VQVAE codebook. The resulting control knowledge takes the form of a pair of policies, one generating subgoals and one conditioned on them. These can now be used to guide the search towards states more likely to lead to solutions (based on previously solved problem instances).
- The training algorithm consists of alternating between generating search control knowledge with the current policy and updating the policies and heuristic function in a loop. Key implementation choices include using the Louvain clustering algorithm to generate training data for the subgoal generator (VQVAE) as well as scaling training data for the VQVAE subgoal generator by augmenting the supervised solution sub-trajectories with additional data generated via a clustering-based approach.
- Results on four deterministic search domains show that the proposed algorithm is more sample efficient than the baselines (which do not use the learned control knowledge) without any deterioration in solution quality.
---
## update after rebuttal
- I thank the authors for their detailed responses to the reviewers. These have addressed my main concerns about the paper. I'm now more positively inclined towards the paper and have revised my score upwards.
Claims And Evidence: - The claims are reasonably well supported.
Methods And Evaluation Criteria: - Table 1 shows nearly all baselines and the proposed methods solve all test problem instances. This suggests that the problem domains might be a bit too "easy". It is currently unclear how the method might perform on harder problems.
- The experimental section is somewhat thin. Since search quality is not shown to be improved, it would have been good to see a conclusive empirical demonstration of search speedup. This does not happen in the paper. While number of node expansions is a reasonable metric, I'd suggest a more detailed investigation into runtimes and computational overhead is warranted.
Theoretical Claims: - The paper does not focus on theoretical aspects. The properties of the search seem fine to me.
Experimental Designs Or Analyses: - Yes. The experimental setup described in Section 4 and the appendices seems fine to me.
Supplementary Material: - Yes. I reviewed the appendices.
Relation To Broader Scientific Literature: - The main algorithmic contribution is a good combination of ideas from prior work (Louvain clustering, VQVAE-based subgoal generation, k-step data generation as shown in kSubS and AdaSubS).
Essential References Not Discussed: - None that I can spot. The paper's related work section needs some improvement though to make it more complete.
Other Strengths And Weaknesses: Strengths
- The paper tackles an interesting and important problem. The ability to learn search control knowledge for improving search quality and speed, without the need for manual domain engineering, has large utility.
- The overall approach is intuitively clear.
- The paper seems to be an interesting and novel combination of prior ideas (Louvain clustering, VQVAE as subgoal generator, best-first search, differentiable policies).
- The experimental results show that the proposed algorithm needs fewer node expansions to find solutions of similar quality.
Weaknesses
- The paper mentions its approach in detail but does not offer much motivation or justification for its algorithmic choices. Examples below.
- What's a formal definition of a subgoal as used in the paper?
- Why restrict to deterministic problems? What doesn't work if there's randomness or noise in the transition function?
- What's the intuition for choosing the particular form of $\pi^{SG}$ described in Equation 4? What other forms were considered? Why was this one preferred over those?
- Considering the large body of work on learning search control knowledge to improve search quality and speed [1,2,3,4], the experimental section needs improvement. It is unclear to me if the proposed method is better than existing techniques. Details follow.
- All considered domains are solved at test time by all methods including baselines without any improvement in solution quality. This suggests all the domains are "easy" for the baselines. How might the method do in harder domains? The choice of domains needs to be improved in order to answer these questions.
- The proposed method is an algorithmic framework, with many hyperparameters and choices. Appendix J demonstrates the potential impact of a single poorly chosen hyper-parameter. In my opinion, the paper needs a much more careful empirical investigation with key insights discussed in the main paper.
- Reporting results in terms of node expansions is not wrong but does mask the overhead of using the control knowledge during search. Does the proposed algorithm actually run faster than the simpler baselines (LevinTS, PHS*)? Please discuss and / or report time-based results. If solution quality was better, this would be less of a concern.
- [1] Combining Online and Offline Knowledge in UCT (Silver 2007)
- [2] HC-Search: Learning Heuristics and Cost Functions for Structured Prediction (Doppa 2013)
- [3] Guided search for task and motion plans using learned heuristics (Chitnis 2016)
- [4] Learning Simulation Control in General Game-Playing Agents (Finnsson 2010)
- The related work section could be significantly improved. For example, the kSubS paper contains a more complete discussion of related work. Specifically, the discussion on goal-conditioned RL could be improved with a discussion of hierarchical RL methods (e.g., MAXQ [1], ALISP, options, etc.) and state abstraction [1,2] in particular, given the Louvain algorithm and the k-step training mechanism.
- [1] Dietterich 1999: https://proceedings.neurips.cc/paper/1999/hash/e5a4d6bf330f23a8707bb0d6001dfbe8-Abstract.html
- [2] Andre 2002 ; https://cdn.aaai.org/AAAI/2002/AAAI02-019.pdf
- Overall, due to the lack of improvement in solution quality and limited empirical evaluation, I'm not sure if the paper represents an actual algorithmic improvement over existing methods. While an interesting combination of existing ideas, the overall novelty seems low. As a result, I'm inclined to reject. That said, I do like the overall approach and don't see any major technical issues with the paper, so I'm happy to revise my score upwards based on author feedback.
Other Comments Or Suggestions: - None at this time.
Questions For Authors: - (Q1) Based on Table 1, it seems nearly all baselines and proposed methods solve all test problem instances. Might this suggest that the selected domains are too "easy"? What might be more challenging domains and how might the results change on these domains?
- (Q2) Is LevinTS($\pi^{SG}$) actually faster than LevinTS in wall-clock time? Please provide details about actual runtime at test time for all the methods considered. I'm looking to better understand how much search speedup is achieved by using the VQVAE and hierarchical policy.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your helpful comments and feedback.
**Question 1**
Thank you for your suggestion to run on more difficult instances. We ran additional experiments on a much larger version of BoulderDash, following the same procedure from **Section 4.3**. We also tried increasing the network size used in PHS*($\pi$) (denoted LARGE) to match the number of parameters used in PHS*($\pi^{SG}$) to show that simply using more parameters does not help.
Our results are summarized in the tables below. Our proposed approach, PHS*($\pi^{SG}$), learned how to solve the difficult set of BoulderDash within 6.25 hours. The baselines did not learn to solve the problems even when granted 4x more expansions and 2x more time.
|Algorithm | Total Loss (Expansions) | Total Time (Hours) | Training Instances Solved | Nodes Generated per Second |
|---|---:|---:|---:|--:|
| WA* | 606,000,000 | 15.58 | 16 |1,872.31 |
| LevinTS($\pi$) | 636,718,638 | 16.69 | 16 | 1,828.51 |
| PHS*($\pi$) | 599,849,865 | 15.61 | 16 | 1,765.00 |
| PHS*($\pi$) LARGE | 344,512,985 | 33.04 | 1 | 496.45 |
| PHS*($\pi^{SG}$) [Ours] | 85,470,564 | 6.25 | 10,000 | 791.69 |
We then tested the resulting models on a separate test set of 100 problems to match the procedure done for Table 1.
| Algorithm | Solved | Expansions |
| --- | --: | --: |
| WA* | 14 | 272,872.28 |
| LevinTS($\pi$) | 14 | 280,407.36 |
| PHS*($\pi$) | 14 | 288,045.64 |
| PHS*($\pi$) LARGE | 4 | 110,805.75 |
| PHS*($\pi^{SG}$) [Ours] | 100 | 1,291.37 |
**Question 2**
The results on the more difficult instances of BoulderDash we presented above highlight that PHS*($\pi^{SG}$) can be substantially faster than the baselines, to the point where PHS*($\pi^{SG}$) learns how to solve the problems while the baselines do not. Next, we present the runtime statistics for the methods listed in Table 1, on the easier problem instances, for a representative example; the tables for the other domains are similar and were omitted for space.
What the table below shows is that, from a practical perspective, all algorithms tested, once they learn a policy and/or heuristic, are able to solve the test problems within a reasonable amount of time. The key difference between the method we propose and the baselines is the time that it might take them to learn the policy and heuristic. As demonstrated in the more difficult BoulderDash instances, the difference between our approach and baselines can be the same as solving all instances versus solving none of the instances.
| Algorithm | Solved | Expansions | Length | Total Time (Seconds) |
| --- | ---: | ---: | ---: | ---: |
| | | **BoulderDash** | | |
| WA* (1.5) | 100 | 1,193.60 | 51.44 | 31.93 |
| LevinTS($\pi$) | 100 | 61.33 | 52.90 | 4.82 |
| PHS*($\pi$) | 100 | 53.65 | 52.74 | 5.60 |
| LevinTS($\pi^{SG}$) | 100 | 65.48 | 53.30 | 16.18 |
| PHS*($\pi^{SG}$) | 100 | 53.34 | 52.68 | 10.89 |
The following addresses your points from the Weaknesses section.
- **Formal definition of a subgoal**: Subgoals are states from the underlying state space that the search attempts to achieve. We will clarify this in the paper.
- **Restricting to deterministic problems**: Deterministic problems are an important and active area of research. There is also a body of work which looks at how transformations can be applied to non-classical planning problems (e.g., non-determinism/incomplete information) from the classical planning setting we consider in our work [5]
- **Is the proposed method better than existing techniques**: WA*, LevinTS, and PHS* are the current state-of-the-art methods for the problems we consider. [1] and [4] consider two-player games, and Orseau & Lelis showed that PUCT performs quite poorly on this type of problem. [3] solves MDPs, whereas we consider the classical planning setting. [2] requires a dataset of structured inputs/outputs, whereas we focus on needle-in-a-haystack problems. The suggested references, [1-4], are only related to our work in a broad sense.
- **Intuition for Equation (4)**: We provide the intuition of Equation 4 with an example. Consider the case where one subgoal could be "go to the door on left", while another could be "go to the door on the right". Mixing these two subgoals would result in a uniform (and ineffective) policy. The high-level policy decides which subgoal to attain next by providing a weight to the probability distribution given by the low-level policies. This way, the high-level policy decides whether the agent will go right or left.
- **Baselines do not use the learned control knowledge**: All baselines learn a policy and/or heuristic. In particular, LevinTS/PHS* use the same Bootstrap pipeline that our method uses.
[5] Geffner, Hector. "Non-classical planning with a classical planner: The power of transformations." European Workshop on Logics in Artificial Intelligence. Cham: Springer International Publishing, 2014. | null | null | null | null | null | null |
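To make the Equation 4 intuition above concrete, the subgoal-weighted policy can be sketched as a convex mixture of subgoal-conditioned action distributions, weighted by the high-level policy (a minimal sketch; all function names and probability values are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def mixture_policy(state, subgoals, high_level, low_level):
    """Return pi_SG(. | state): the low-level action distributions,
    one per subgoal, combined with weights from the high-level policy."""
    weights = high_level(state, subgoals)                     # shape: (num_subgoals,)
    action_dists = np.stack([low_level(state, g) for g in subgoals])
    return weights @ action_dists                             # shape: (num_actions,)

# Toy example: two subgoals ("left door" vs "right door"). Because the
# high-level policy strongly prefers the left door, the mixture stays
# peaked instead of collapsing to a uniform (and ineffective) policy.
subgoals = ["left_door", "right_door"]
high = lambda s, gs: np.array([0.9, 0.1])
low = lambda s, g: (np.array([0.8, 0.1, 0.1]) if g == "left_door"
                    else np.array([0.1, 0.1, 0.8]))

dist = mixture_policy(None, subgoals, high, low)  # -> [0.73, 0.10, 0.17]
```

An unweighted average of the two subgoal policies would give roughly equal mass to "go left" and "go right"; the high-level weights are what keep the evaluation function decisive.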
Stochastic Deep Restoration Priors for Imaging Inverse Problems | Accept (poster) | Summary: This paper introduces stochastic deep restoration priors (ShaRP), a framework leveraging an ensemble of pre-trained restoration models as priors for solving imaging inverse problems. The authors claim it minimizes a regularizer based on degraded-observation likelihoods, generalizing denoiser-based methods like RED and SNORE. Experiments on MRI reconstruction and super-resolution demonstrate SOTA performance. The main contributions include theoretical convergence guarantees, adaptation to self-supervised training without fully sampled data, and improved handling of structured artifacts.
Claims And Evidence: The claim that restoration priors outperform Gaussian denoisers is supported by experiments showing higher PSNR/SSIM on MRI and SISR tasks. Theoretical justification links ShaRP to a sound regularizer, and the convergence analysis under idealized assumptions provides a partial foundation. Empirical gains over diffusion models are clear.
Methods And Evaluation Criteria: ShaRP builds on PnP/RED but modifies via restoration operator ensembles. Evaluation uses standard datasets, i.e., fastMRI and ImageNet. However, for MRI, validation on uniform/random sampling subsets is reasonable, but real-world non-Cartesian sampling is not tested. SISR results focus on synthetic blur kernels, but robustness to natural distortions (e.g., motion blur) is unclear.
Theoretical Claims: In the Appendix, Theorem 1 is correct, and Theorem 2 relies on idealized assumptions: Lipschitz continuity and bounded bias. While practical convergence is shown empirically, the gap between theory and practice needs discussion.
Experimental Designs Or Analyses: 1. MRI experiments are thorough, but no ablation study quantifies how ShaRP’s gains scale with ensemble size b.
2. SISR uses only two Gaussian blurs, which is not enough.
3. Perceptual quality requires human evaluation.
Supplementary Material: I have reviewed all the content of the supplement.
Relation To Broader Scientific Literature: ShaRP aligns with recent trends in learning-based priors, e.g., denoisers and diffusion, but extends them via restoration ensembles. Comparisons to PnP/RED and diffusion methods (DPS, DDS) are suitable.
Essential References Not Discussed: The two papers shown below handle similar tasks:
[1] Osmosis: Rgbd diffusion prior for underwater image restoration. ECCV, 2024.
[2] DreamClean: Restoring Clean Image Using Deep Diffusion Prior. ICLR, 2024.
Other Strengths And Weaknesses: Strengths:
1. Theoretically grounded regularization using MMSE restoration ensembles.
2. Practical advantages in self-supervised MRI where prior denoiser training requires full data.
3. Clear empirical gains over denoiser-/diffusion-based baselines.
Weaknesses:
1. Limited diversity in tested degradations.
2. Computational costs of stochastic restoration steps are not analyzed.
Other Comments Or Suggestions: 1. Technical terms (e.g., "restoration operator") are inconsistently defined.
2. The supplement should provide experiments with other degradation types.
Questions For Authors: 1. How does ShaRP’s performance/complexity scale with ensemble size?
2. Could ShaRP adapt to settings without even partially sampled data?
3. How did inexact restoration operators impact convergence in practice? Does the bias term remain stable?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback.
>1. *For concerns in Methods And Evaluation Criteria:*
Prompted by your comment, we ran additional experiments that we will include in the supplementary material of the paper. The new setting considers non-Cartesian sampling for CS-MRI, using a 2D Gaussian sampling mask. Importantly, the restoration network trained for the 8× uniform Cartesian sampling was applied directly without retraining or adaptation. Our results, summarized below, provide additional evidence of ShaRP's robustness to diverse sampling patterns, highlighting its strong generalization ability.
| Method | PSNR | SSIM |
|------------|-------|-------|
| PnP-ADMM | 31.33 | 0.917 |
| DPS | 32.01 | 0.904 |
| DDS | 33.19 | 0.924 |
| DRP | 32.70 | 0.921 |
| ShaRP | 34.01 | 0.942 |
It would certainly be interesting to explore additional natural distortions in SISR tasks, such as motion blur, in future work. We will mention this in the revised manuscript.
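For readers unfamiliar with this sampling pattern, a variable-density 2D Gaussian mask of the kind used in the experiment above can be drawn by thresholding against a Gaussian probability map centered on k-space. The function below is a hypothetical sketch (the function name, parameter names, and acceleration bookkeeping are our own illustrative assumptions, not the paper's code):

```python
import numpy as np

def gaussian_mask(shape, accel=8, sigma_frac=0.15, seed=0):
    """Variable-density random mask: keep roughly 1/accel of k-space
    points, with sampling probability concentrated near the center."""
    rng = np.random.default_rng(seed)
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    sig2 = (sigma_frac * min(h, w)) ** 2
    pdf = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sig2))
    pdf *= (h * w / accel) / pdf.sum()  # scale to the sampling budget
    return rng.random(shape) < np.clip(pdf, 0, 1)

mask = gaussian_mask((128, 128))
```

With these defaults, about 1/8 of k-space locations are retained, mimicking an 8× acceleration factor.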
>2. *For concerns in Theoretical Claims:*
Thanks for your suggestion. We will include an expanded discussion of the two technical assumptions. Lipschitz continuity is a standard and widely used assumption in optimization, needed for establishing convergence rates of gradient-based algorithms (see, for example, Section 1.2.2 in [1]). It is satisfied by a broad class of objective functions, including the least-squares data-fidelity term used in our experiments, as well as neural networks (e.g., Appendix B in [2]). Boundedness is a mild assumption, since it always holds for images with bounded pixel values in [0, 255].
[1] Nesterov, Introductory Lectures on Convex Optimization, 2004
[2] Hurault et al. Gradient step denoiser for convergent plug-and-play. ICLR, 2022.
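For context, the role such a Lipschitz assumption plays in convergence-rate arguments is captured by the standard descent lemma (stated here in generic notation, not the paper's symbols):

```latex
\|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\|
\quad\Longrightarrow\quad
f(y) \le f(x) + \langle \nabla f(x),\, y - x \rangle + \frac{L}{2}\,\|y - x\|^{2},
```

so a gradient step with step size $\gamma \le 1/L$ is guaranteed to decrease $f$.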
>3. *..., but no ablation study quantifies how ShaRP’s gains scale with ensemble size b.*
Section E.2 of the supplementary material presents an ablation study quantifying performance gains with varying ensemble size (b). Computationally, ShaRP remains efficient because it uses only one restoration model from the ensemble at each inference step, thus having comparable computational complexity to single-model methods. The revised manuscript will ensure that this study is clearly referenced in the main manuscript for visibility.
>4. *SISR uses only two Gaussian blurs, which is not enough.*
Thanks for your suggestion. Prompted by your comment, we ran an additional SISR experiment using a new Gaussian kernel (σ = 1.0). As shown in the table below, performance is consistent with the other two kernels in the manuscript. Note how ShaRP still provides excellent performance, suggesting that it would generalize to further Gaussian blurs.
| Method | PSNR | SSIM | LPIPS | FID |
|-----------|-------|-------|--------|--------|
| DPIR | 28.45 | 0.854 | 0.247 | 82.90 |
| DDRM | 27.26 | 0.803 | 0.209 | 44.77 |
| DiffPIR | 28.37 | 0.841 | 0.215 | 40.59 |
| DRP | 28.43 | 0.853 | 0.236 | 75.29 |
| ShaRP | 28.70 | 0.858 | 0.226 | 69.75 |
>5. *Perceptual quality requires human evaluation.*
Prompted by your comment, we introduced FID and LPIPS as perceptual metrics. Please refer to the rebuttal comment (7) to Reviewer DuT7 for the table. The revised manuscript will mention that true perceptual quality requires human evaluation.
>6. *The two papers shown below handle similar tasks: [1] Osmosis, ECCV, 2024. [2] DreamClean, ICLR, 2024.*
We will cite these papers in the revised manuscript.
>7 *Technical terms (e.g., "restoration operator") are inconsistently defined.*
We will review the use of “restoration operator” across the paper to make sure it is consistent. In our paper, a restoration operator refers to an operator that computes the MMSE solution of Eq. (3).
>8. *Supplement should provide experiments with other degradation types.*
We will add the results reported in our response to your Comment 1 to the supplementary material.
>9. *How does ShaRP’s performance/complexity scale with ensemble size?*
Please refer to our response to your Comment 2.
>10. *Could ShaRP adapt to settings without even partially sampled data?*
We are not entirely sure what the reviewer means by “without even partially sampled data.” If the reviewer is referring to the setting where there are no measurements at all, we can still run our method, but it was not conceived for this setting. Indeed, it would be very interesting to extend our methodology to “unconditional generation” or “sampling”, i.e., generation of images from the prior specified using a set of restoration operators. This will be a great direction for future work.
>11 *How did inexact restoration operators impact convergence in practice? Does the bias term remain stable?*
See Figure 7 in Section C.3, which shows stable convergence even with inexact (self-supervised) MMSE restoration operators. Overall, we did not observe any stability issues with ShaRP.
---
Rebuttal Comment 1.1:
Comment: I appreciate the answers given and keep my score unchanged.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer w5bu,
thanks for reading our rebuttal. Please let us know if there is anything we could do to make you consider increasing your rating. | Summary: This paper introduces ShaRP, a stochastic regularization for linear inverse problems of the form y=Ax+e. This regularization relies on (approximated) MMSE restoration machines R(*) for problems of the form s=Hx+n, where H is randomly chosen. By injecting the gradient of this regularization within the iterative recovery algorithm, and choosing a different H at each iteration, ShaRP leads to improved recovery results in terms of PSNR, SSIM (and LPIPS for SISR tests).
Claims And Evidence: The claims made throughout the paper are well-supported.
Methods And Evaluation Criteria: It is unclear what ShaRP is after - is it a better MAP estimate? Or perhaps an MMSE one? Clearly, this is not a posterior sampler. This question is critical, because the tables in the results section suggest that we are after an MMSE solution with the best PSNR, and the visual results (Figures 2 and 3) support this belief, as they tend to be somewhat blurry.
In any case, a directly trained MMSE solver should be included in the comparisons, as it is nothing but a special case of the R operator that has been learned.
Also, as this work compares various methods that strive for different goals (e.g., DPS and DDNM as posterior samplers), it must be accompanied by a perceptual quality evaluation.
Theoretical Claims: The theoretical part of this paper is solid and beautiful.
Experimental Designs Or Analyses: Several comments above suggest that the comparisons are not complete and not fair:
- FID/KID results are missing - does ShaRP aim for high perceptual quality? If not, say so, but then, what is it aiming for?
- MMSE restoration is missing
Supplementary Material: OK
Relation To Broader Scientific Literature: OK in general
Essential References Not Discussed: A reference to RED-Diff is very much missing. See here: https://openreview.net/forum?id=1YO4EE3SPB
Other Strengths And Weaknesses: Strengths:
- Beautiful theoretical work
- Lovely idea that extends beyond RED and DRP.
Weaknesses:
- The paper is hard to follow in Section 3 - the order of the presentation is flawed, as it starts by referring to Algorithm 1, never defines h(*), then jumps to discussing the gradient of f(*) - it is nearly impossible to follow.
- A clear definition of the objective of ShaRP is missing, and this could give better context to the results obtained.
Other Comments Or Suggestions: In Equation 3, why not have sigma be random as well? In this case you get closer to the RED-Diff mentioned above.
Equation 6 is very hard to follow, especially since you do not show the chosen h(*) beforehand. It would be helpful to show how the actual h(*) is generalized to lead to (6).
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback.
> 1. *It is unclear what ShaRP is after - is it a better MAP estimate? or perhaps an MMSE one? Clearly, this is not a posterior sampler. This question is critical, ..., as they tend to be somewhat blurry.*
This is a great point. ShaRP is not designed to produce classical MAP or MMSE estimates, nor is it a posterior sampler. Instead, it optimizes a novel objective function (Eq. (2) using h(x) in Eq. (6)) that balances data fidelity with an implicit regularization learned from an ensemble of priors. While the resulting reconstructions may exhibit characteristics similar to MMSE solutions (e.g., in terms of PSNR and visual smoothness), this is a consequence of the learned objective rather than an explicit design goal. We acknowledge that exploring sampling-based extensions could potentially enhance perceptual performance, and we will include this as a direction for future work.
> 2. *In any case, a directly trained MMSE solver should be included ...*
Thank you for the suggestion. For 4× CS-MRI, we added E2E-VarNet as a new baseline; it is a dedicated MMSE solver with a data-fidelity constraint that achieves higher PSNR but is task-specific (i.e., [*] needs to be retrained for each task), effectively representing an upper bound on MSE performance.
| Method | PSNR (σ=0.05) | SSIM (σ=0.05) | PSNR (σ=0.10) | SSIM (σ=0.10) | PSNR (σ=0.15) | SSIM (σ=0.15) |
|--------------|---------------|---------------|---------------|---------------|---------------|---------------|
| Zero-filled | 26.93 | 0.848 | 26.92 | 0.847 | 26.90 | 0.848 |
| TV | 31.17 | 0.923 | 31.08 | 0.921 | 30.91 | 0.915 |
| PnP-FISTA | 35.88 | 0.938 | 31.14 | 0.894 | 30.32 | 0.846 |
| PnP-ADMM | 35.76 | 0.941 | 32.36 | 0.878 | 30.66 | 0.838 |
| DRP | 35.52 | 0.936 | 32.32 | 0.914 | 30.57 | 0.901 |
| DPS | 32.62 | 0.888 | 31.39 | 0.870 | 30.29 | 0.856 |
| DDS | 35.21 | 0.937 | 35.03 | 0.935 | 34.51 | 0.925 |
| ShaRP | 37.59 | 0.963 | 35.81 | 0.951 | 34.92 | 0.942 |
| E2E-VarNet [*] | 38.10 | 0.971 | 36.80 | 0.967 | 35.79 | 0.954 |
> 3. *Also, as this work compares various different methods ... perceptual quality evaluation.*
See our answer to Comment 4 below.
> 4. *Several comments above suggest that the comparisons are not complete and not fair: FID/KID results are missing - does ShaRP aim for high perceptual quality? If not, say so, but then, what is it aiming for? MMSE restoration is missing.*
We clarify that ShaRP primarily aims for high reconstruction accuracy rather than explicitly targeting perceptual quality. Nevertheless, we agree perceptual quality metrics provide valuable insights (in particular regarding the Perception Distortion tradeoff [3]). Hence, we have included FID alongside PSNR, SSIM, and LPIPS to offer a balanced assessment:
| Method | PSNR | SSIM | LPIPS | FID |
|----------|-------|-------|--------|--------|
| DPIR | 27.90 | 0.803 | 0.314 | 89.18 |
| DPS | 24.50 | 0.657 | 0.403 | 50.33 |
| DiffPIR | 28.59 | 0.834 | 0.172 | 46.12 |
| DRP | 28.24 | 0.836 | 0.235 | 64.23 |
| ShaRP | 29.28 | 0.872 | 0.209 | 58.79 |
[3] Y. Blau., and T. Michaeli. "The perception-distortion tradeoff." CVPR, 2018.
> 5. *A reference to RED-Diff is very much missing.*
We will cite RED-Diff in the revised manuscript.
> 6. *The paper is hard to follow in Section 3 - the order of the presentation is flawed ...*
This is valuable feedback. We will implement the following changes: We will begin by clearly defining all necessary functions and terms, with a specific definition of h(). Following this, we will introduce Algorithm 1. We believe this revised structure will significantly improve the clarity and flow of Section 3.
> 7. *A clear definition of the obje3ctive or ShaRP is missing ...*
The objective function minimized by ShaRP is the composite function consisting of a data fidelity term g and a novel regularizer h specified in eq. (6). See also our response to Comment 1.
> 8. *In equation 3, why not have sigma be random as well? ...*
While ShaRP generates random noise at each iteration, it does so using a fixed sigma. Using a random sigma is an interesting idea that we could explore in the future.
> 9. *Equation 6 is very hard to follow, and esspecially so since you do not show the chosen h() beforehand. It would be helpfull to show how the actual h() is generalized to lead to (6).*
We will revise the paper to better explain Equation (6).
---
Rebuttal Comment 1.1:
Comment: I appreciate the answers given and change my grade to 4-accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for reading our comments and raising the score. We will include your valuable feedback into the revised manuscript. | Summary: The paper presents Stochastic Deep Restoration Priors (ShaRP), a framework for imaging inverse problems that leverages an ensemble of pre-trained deep restoration models. ShaRP uses a stochastic gradient descent approach where, in each iteration, a random degradation operator (with added Gaussian noise) is applied to simulate degradations. It minimizes a composite objective combining a data fidelity term with a regularizer derived from MMSE restoration operator score functions.
Claims And Evidence: While the paper presents theoretical derivations and experimental results to support its contributions, some claims are not fully substantiated by clear and convincing evidence. For example, the claim that ShaRP can robustly handle diverse inverse problems without retraining—especially under self-supervised conditions from incomplete measurements—is supported by only a limited set of experiments and comparisons. Additionally, the theoretical guarantees regarding convergence and the robustness of the proposed regularizer would benefit from more extensive empirical validation and ablation studies.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria appear to be appropriate for the targeted imaging inverse problems. The selection of tasks like MRI reconstruction and single image super-resolution provides a relevant and practical basis for evaluation.
Theoretical Claims: The proofs assume a level of smoothness and differentiability for the restoration operators that may not always hold for complex deep networks.
Certain Lipschitz continuity and boundedness conditions are assumed without full justification.
The convergence analysis is presented under idealized conditions, and the impact of real-world deviations from these assumptions is not fully explored.
Experimental Designs Or Analyses: The experimental design was examined for MRI reconstruction and single-image super-resolution tasks using standard benchmarks and evaluation metrics, with comparisons made against established baselines. However, there are several issues: details on dataset splits, noise levels, and degradation patterns are sometimes insufficient, which could affect reproducibility; baseline methods may not be equally tuned, potentially affecting fairness; and the analysis lacks extensive ablation and sensitivity studies to clearly disentangle the contributions of the ensemble approach and MMSE-based regularizer.
Supplementary Material: I reviewed the supplementary material. I examined the detailed derivations of the theoretical proofs (including convergence analysis and assumptions), the extended experimental results and ablation studies, and additional implementation details regarding network architectures and hyperparameter settings.
Relation To Broader Scientific Literature: It builds upon the extensive literature on plug-and-play priors and regularization by denoising by moving beyond single Gaussian denoisers to an ensemble of restoration models, echoing ideas seen in recent works on deep restoration priors and stochastic denoising regularization. Additionally, it relates closely to diffusion model approaches that leverage score functions for sampling, while providing a novel MMSE-based regularizer that bridges the gap between traditional optimization methods and deep learning.
Essential References Not Discussed: While the paper cites a broad range of works on plug‐and‐play priors, denoising-based regularization, and diffusion models, it omits a few related works that could provide additional context for its contributions. For example, "Deep Image Prior" demonstrates that the inherent structure of an untrained convolutional network can serve as a powerful image prior, which is relevant to understanding unsupervised restoration strategies. Additionally, more recent self-supervised approaches such as Noise2Void and Self2Self that learn priors from corrupted or incomplete data could further clarify the paper’s context in scenarios without fully sampled measurements.
Other Strengths And Weaknesses: The paper is original in its integration of multiple pre-trained restoration models to serve as image priors, moving beyond the traditional reliance on single Gaussian denoisers. This ensemble approach, coupled with a novel MMSE-based regularizer, is a contribution that addresses diverse inverse problems more flexibly. However, some weaknesses include idealized assumptions in the theoretical proofs and a need for more extensive ablation studies and sensitivity analyses to fully validate the empirical findings.
Other Comments Or Suggestions: A discussion on the practical computational cost of using an ensemble of restoration models, especially compared to single-model baselines, would be beneficial.
Questions For Authors: Could you provide a more detailed description of the assumptions underlying your convergence proofs, particularly regarding the smoothness and Lipschitz continuity of the restoration operators?
Have you conducted ablation studies to isolate the impact of the ensemble approach versus the MMSE-based regularizer component?
What is the computational overhead of using an ensemble of restoration models compared to single-model approaches?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback.
> Response to concerns in Claims and Evidence
1. Generalization over configurations: We wanted to highlight that the supplementary material contains evidence supporting the robustness of ShaRP across diverse inverse problems without retraining, particularly under self-supervised conditions from incomplete measurements. Specifically:
As detailed in Section C, we conducted experiments in the CS-MRI setting evaluating ShaRP's performance under a range of challenging conditions, including: (1) Different undersampling rates (4x, 6x), (2) Varied mask types (Cartesian uniform, Cartesian random, and 2D Gaussian), (3) Multiple noise levels (0.005, 0.1, 0.15).
These results demonstrate ShaRP's ability to handle diverse inverse problems without retraining, even with self-supervised learning from incomplete data.
2. Convergence Stability and Robustness: Section C.3 of the supplementary material presents an analysis of ShaRP's convergence stability and robustness. We evaluated this using both supervised and self-supervised MMSE restoration priors, providing empirical support for our theoretical guarantees.
Prompted by the reviewer comment, the revised paper will include:
(1) A summary table that consolidates the key ablation results across different undersampling rates, mask types, and noise levels. This will provide a clearer and more accessible overview.
(2) Add a brief discussion in the main text highlighting these empirical findings and explicitly linking them to the claim of robustness without retraining.
We believe these additions will strengthen our paper by highlighting extensive evidence provided in the paper.
> Response to concerns in Theoretical Claims
Prompted by the reviewer the revised paper will better highlight the following:
1. Lipschitz continuity and boundedness are needed only for Proposition 1. They are not needed for our main results—Theorem 1 and Theorem 2.
2. Lipschitz continuity is a standard and widely-used assumption in optimization for establishing convergence rates of gradient-based algorithms (see, for example, Section 1.2.2 in [1]). It is satisfied by a broad class of objective functions, including the least-squares data-fidelity term used in our experiments. The smoothness of MMSE restoration operators is well-known and has been extensively discussed in literature (see, for example, [2]).
3. Boundedness is a mild assumption, since it is always true for images that have bounded pixel values [0, 255].
4. Figure 7 in Section C.3 shows empirically convergence of our method in a real-world setting.
[1] Nesterov, Introductory Lectures on Convex Optimization, 2004
[2] Gribonval and Machart, Reconciling “priors” & “priors” without prejudice?, NeurIPS 2013.
> Response to concerns in Experimental Designs Or Analyses
1. Prompted by your comment, we will include a table summarizing all the relevant information about the experiment to guarantee reproducible results. We will additionally release our code upon acceptance, which should further simplify reproducibility.
2. Section E.2 of the supplementary material shows clear performance gains as ensemble size increases.
> Respond to concerns in Essential References Not Discussed
Thank you for pointing to relevant works such as "Deep Image Prior," "Noise2Void," and "Self2Self." We will cite and discuss these works in our revised manuscript.
> Response to concerns in Other Strengths And Weaknesses
We appreciate the reviewer's assessment of our paper's originality and contributions. Following your recommendations, the revised manuscript will include summary tables pointing to the relevant results and will improve the discussion on the assumptions (see response to your Comment 2).
> Response to concerns in Other Comments Or Suggestions
The revised manuscript will explicitly mention that the computational cost of running ShaRP is comparable to that of single-model approaches. This is due to the stochastic nature of our algorithm, which uses a *single* restoration operator in each iteration.
> Response to concerns For Authors
1. The smoothness of h associated with the MMSE restoration operators is well-known (see for example [1] or Appendix B of [2]). The smoothness can also be seen in the derivation presented in Appendix A of our paper. This is a direct consequence of the smoothness of p(s | H) in eq. (5), since it is a Gaussian convolved with the prior p(x). Note also that since our restoration operator is implemented as a neural network with smooth activation functions such as ELU, it is inherently Lipschitz continuous [2].
2. (1) The detailed results of ablation studies isolating the effect of the ensemble are available in Section E.2, which is in the supplementary material. (2) There is no computational overhead due to our approach since only one model from the ensemble is utilized for each restoration instance. We will explicitly state this in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: Based on the authors' substantive response and alignment with peer reviews, I upgrade my score to Accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for reading our comments and raising the score; we will include your valuable feedback to improve the manuscript. | Summary: The paper develops an plug-and-play imaging restoration algorithm that can use MMSE estimators trained to solve an arbitrary inverse problem involving linear forward operators and white gaussian noise (not just denoising), to solve a target linear inverse problem. That is, one can, for example, use a network trained to deblur and denoise images to solve a compressive sensing restoration task. The authors show the algorithm corresponds to taking biased stochastic gradients wrt the true MMSE loss. The proposed method is tested on compressive MRI and and single image super-resolution and is shown to generally outperform existing methods.
Claims And Evidence: Claims are supported with evidence.
Methods And Evaluation Criteria: The methods and eval make sense
Theoretical Claims: The theory seems correct, though Section 4.3 could use further explanation.
Experimental Designs Or Analyses: Evaluation criteria are appropriate
Supplementary Material: Supp includes 16 pages of proofs, additional results, and ablations. I skimmed it, but did not check it carefully.
Relation To Broader Scientific Literature: The paper presents a thorough overview of the existing literature.
The paper differentiates itself from related self-supervised methods by stating "Ambient DMs seek to sample from px using DMs trained directly on undersampled measurements. Thus, during inference Ambient DMs assume access to the image prior px, while ShaRP only assumes access to the ensemble of likelihoods of multiple degraded observations."
However, because p_x was learned directly from undersampled measurements, the assumptions on Ambient GAN and related methods don't seem any more restrictive than those on the current method (which I would argue has also implicitly learned p_x).
"We introduce a novel regularization concept for inverse problems that encourages solutions that produce degraded versions closely resembling real degraded images." This seems conceptually similar to the equivariant imaging concept. Can the authors comment on the relationship between the two works? Is there a relationship between Theorem 2 and equivariant imaging?
Essential References Not Discussed: May be worth discussing the following paper:
Bansal, Arpit, Eitan Borgnia, Hong-Min Chu, Jie Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, and Tom Goldstein. "Cold diffusion: Inverting arbitrary image transforms without noise." Advances in Neural Information Processing Systems 36 (2023): 41259-41282.
Other Strengths And Weaknesses: This is a well-written paper that studies an important problem. The proposed method is reasonably novel and apparently effective.
Other Comments Or Suggestions: It might be better to rewrite (1) and (2) with complex-valued forward models, given the focus on MRI.
At least in the context of an ensemble of blur-then-deblur operations, ShaRP can be interpreted as masking then restoring a portion of k-space: ShaRP could be viewed as an ensemble of masked (denoising) autoencoders.
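This masked-autoencoder reading can be made concrete with a toy k-space round trip. The sketch below is purely illustrative and not the paper's operators: a random k-space mask plays the degradation, and a zero-filled inverse FFT stands in for the learned restoration network.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))      # stand-in for a ground-truth image

kspace = np.fft.fft2(img)                # full k-space
keep = rng.random(img.shape) < 0.25      # retain ~25% of the coefficients
masked = kspace * keep                   # "mask" a portion of k-space

# A trained MMSE restoration network would impute the missing coefficients;
# the zero-filled inverse FFT below is the crudest possible stand-in.
zero_fill = np.fft.ifft2(masked).real
```

In this view, each ensemble member masks a different portion of k-space, and the restoration operator plays the role of the decoder filling in what was masked.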
A subscript A in $\nabla g(x)$ in Algorithm 1 (i.e., $\nabla g_A(x)$) might be useful to remind readers where the forward model comes into play in the reconstruction algorithm. (Or just some additional annotation that $g(x)$ is the data fidelity term.)
What is s' in (12)? Are the losses ell_sup and ell_self wrt \bar{x}? The notation in this section could use more annotation.
Questions For Authors: In the MRI restoration task, the trained restoration algorithms are solving problems very similar to the target application; both are solving subsampled MRI, just with different masks. Is the proposed method still effective when the restoration algorithms are solving a very different task? E.g., can I solve compressive MRI by regularizing with an algorithm trained to perform image deblurring or inpainting?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank the reviewer for feedback and thoughtful comments on our work.
> 1. *The paper differentiates itself from related self-supervised methods by stating "Ambient DMs seek to sample from px using DMs trained directly on undersampled measurements. Thus, during inference Ambient DMs assume access to the image prior px, while ShaRP only assumes access to the ensemble of likelihoods of multiple degraded observations." However, because p_x was learned directly from undersampled measurements, the assumptions on Ambient GAN and related methods don't seem any more restrictive than those on the current method (which I would argue has also implicitly learned p_x).*
Indeed, both Ambient DMs and ShaRP implicitly learn about p(x), and we will revise that section of the paper to make this point clear.
> 2. *We introduce a novel regularization concept for inverse problems that encourages solutions that produce degraded versions closely resembling real degraded images." This seems conceptually similar to the equivariant imaging concept. Can the authors comment on the relationship between the two works. Is there a relationship between Theorem 2 and equivariant imaging?*
This is a very interesting perspective. Our approach can indeed be seen as finding a fixed point that exhibits equivariance under multiple degradations. The revised paper will include this nice interpretation suggested by the reviewer.
> 3. *May be worth discussing the following paper: Bansal, Arpit, Eitan Borgnia, Hong-Min Chu, Jie Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, and Tom Goldstein. "Cold diffusion: Inverting arbitrary image transforms without noise." Advances in Neural Information Processing Systems 36 (2023): 41259-41282.*
We will cite and discuss Cold Diffusion in the revised version. Both Cold Diffusion and Ambient Diffusion (mentioned above) point toward a promising direction for extending our method to sampling, which we will highlight as a potential avenue for future work.
> 4. *It might be better to rewrite (1) and (2) with complex-valued forward models, given the focus on MRI.*
We will fix this per your suggestion.
> 5. *At least in the context of an ensemble of blur-then-deblur operations, ShaRP can be interpreted as masking then restoring a portion of k-space: ShaRP could be viewed as an ensemble of masked (denoising) autoencoders.*
Indeed, when considering an ensemble of blur-then-deblur operations, ShaRP can be seen as masking and subsequently restoring portions of k-space—akin to the mechanism of masked (denoising) autoencoders. In this interpretation, MMSE restoration networks effectively reconstruct the missing null-space signals. We will incorporate this connection into the discussion section of our revised manuscript.
> 6. *A subscript A in $\nabla g(x)$ in Algorithm 1 (i.e., $\nabla g_A(x)$) might be useful to remind readers where the forward model comes into play in the reconstruction algorithm. (Or just some additional annotation that g(x) is the data fidelity term.)*
We will modify the notation accordingly.
> 7. *What is s' in (12)? Are the losses ell_sup and ell_self wrt \bar{x}? The notation in this section could use more annotation.*
In Eq. (12), s’ refers to an independently subsampled measurement, defined as s’ = P’M with P’ denoting a separate sampling pattern. We will revise the notation and add clarifying annotations in the revised version.
> 8. *In the MRI restoration task, the trained restoration algorithms are solving problems very similar to the target application; both are solving subsampled MRI, just with different masks. Is the proposed method still effective when the restoration algorithms are solving a very different task? E.g., can I solve compressive MRI by regularizing with an algorithm trained to perform image deblurring or inpainting?*
Section E.1 of our paper explores the scenario suggested by the reviewer by applying a pre-trained super-resolution (SR) model to the compressive sensing MRI problem. Despite the task mismatch, the SR prior still outperformed the Gaussian denoiser prior. We will edit the manuscript to ensure that it is clear this experiment was included. | null | null | null | null | null | null |